ID | Name |
---|---|
T1588.001 | Malware |
T1588.002 | Tool |
T1588.003 | Code Signing Certificates |
T1588.004 | Digital Certificates |
T1588.005 | Exploits |
T1588.006 | Vulnerabilities |
T1588.007 | Artificial Intelligence |
Adversaries may obtain access to generative artificial intelligence tools, such as large language models (LLMs), to aid various techniques during targeting. These tools may be used to inform, bolster, and enable a variety of malicious tasks including conducting Reconnaissance, creating basic scripts, assisting social engineering, and even developing payloads.[1]
For example, by utilizing a publicly available LLM, an adversary is essentially outsourcing or automating certain tasks to the tool. Using AI, the adversary may draft and generate content in a variety of written languages to be used in Phishing/Phishing for Information campaigns. The same publicly available tool may further enable vulnerability or other offensive research supporting Develop Capabilities. AI tools may also automate technical tasks by generating, refining, or otherwise enhancing malicious scripts and payloads (e.g., Obfuscated Files or Information).[2]
ID | Mitigation | Description |
---|---|---|
M1056 | Pre-compromise | This technique cannot be easily mitigated with preventive controls since it is based on behaviors performed outside of the scope of enterprise defenses and controls. |