| ID | Name |
|---|---|
| T1588.001 | Malware |
| T1588.002 | Tool |
| T1588.003 | Code Signing Certificates |
| T1588.004 | Digital Certificates |
| T1588.005 | Exploits |
| T1588.006 | Vulnerabilities |
| T1588.007 | Artificial Intelligence |
Adversaries may obtain access to generative artificial intelligence tools, such as large language models (LLMs), to aid various techniques during targeting. These tools may be used to inform, bolster, and enable a variety of malicious tasks, including conducting Reconnaissance, creating basic scripts, assisting social engineering, and even developing payloads.[1]
For example, by utilizing a publicly available LLM, an adversary is essentially outsourcing or automating certain tasks to the tool. Using AI, the adversary may draft and generate content in a variety of written languages for use in Phishing/Phishing for Information campaigns. The same publicly available tool may further enable vulnerability or other offensive research supporting Develop Capabilities. AI tools may also automate technical tasks by generating, refining, or otherwise enhancing (e.g., via Obfuscated Files or Information) malicious scripts and payloads.[2] Finally, AI-generated text, images, audio, and video may be used for fraud, Impersonation, and other malicious activities.[3][4][5]
| ID | Name | Description |
|---|---|---|
| C0063 | 2025 Poland Wiper Attacks | During the 2025 Poland Wiper Attacks, the adversaries generated a custom script with an LLM.[6] |
| C0062 | Anthropic AI-orchestrated Campaign | During the Anthropic AI-orchestrated Campaign, the adversary obtained access to Claude Code to support cyber intrusion operations.[7] |
| G0007 | APT28 | APT28 has deployed LAMEHUG, which can query an LLM to generate and return commands for post-compromise activity on targeted systems.[8] |
| G1052 | Contagious Interview | Contagious Interview appears to have used AI to generate images and content to facilitate its campaigns.[9] |
| S9039 | LazyWiper | LazyWiper is believed to have been generated by a large language model (LLM) due to the nonsensical comments in its code.[6] |
| ID | Mitigation | Description |
|---|---|---|
| M1056 | Pre-compromise | This technique cannot be easily mitigated with preventive controls since it is based on behaviors performed outside the scope of enterprise defenses and controls. |
| ID | Name | Analytic ID | Analytic Description |
|---|---|---|---|
| DET0842 | Detection of Artificial Intelligence | AN1974 | Much of this activity will take place outside the visibility of the target organization, making detection of this behavior difficult. Detection efforts may be focused on behaviors relating to the potential use of generative artificial intelligence (e.g., Phishing, Phishing for Information). |