
Image: Vikas Harijan via Unsplash
DeepSeek can develop malware; cyber experts share the risks
Research from Tenable reveals that DeepSeek can develop malware, such as keyloggers and ransomware. Although the process required prompting and debugging from the researchers, DeepSeek was still able to provide a starting point that would be useful to malicious actors.
Using jailbreaking tactics, the researchers were able to manipulate DeepSeek into creating malicious code. Because DeepSeek exposes its chain-of-thought (CoT) reasoning, offering a step-by-step breakdown of how it arrives at an answer, the researchers were then able to follow that reasoning and refine the results.
When asked to create a keylogger, DeepSeek produced a plan as well as code in C++. The code was buggy, and DeepSeek was unable to correct the errors on its own to produce fully functional malware, but manual modifications allowed the generated keylogger code to work. The researchers were then able to refine the malware via DeepSeek, concealing and encrypting its log file.
When developing ransomware, DeepSeek once again outlined its process. While it generated several samples of file-encrypting malware, none would compile without manual modifications to the code. The researchers were able to make some samples work. The working samples included file enumeration and encryption routines, a persistence mechanism, and a dialog informing targets that their files had been encrypted.
Below, security leaders discuss the threat of these capabilities as well as share strategies for risk mitigation.
Security leaders weigh in
Casey Ellis, Founder at Bugcrowd:
The findings from Tenable’s analysis of DeepSeek highlight a growing concern at the intersection of AI and cybersecurity: the dual-use nature of generative AI. While the AI-generated malware in this case required manual intervention to function, the fact that these systems can produce even semi-functional malicious code is a clear signal that security teams need to adapt their strategies to account for this emerging threat vector.
There are three key strategies for mitigating the risks posed by threat actors leveraging AI:
- Focus on behavioral detection over static signatures: AI-generated malware, especially when iteratively improved, is likely to evade traditional signature-based detection methods. Security teams should prioritize behavioral analysis — monitoring for unusual patterns of activity, such as unexpected file encryption, unauthorized persistence mechanisms, or anomalous network traffic. This approach is more resilient to novel or polymorphic threats.
- Invest in AI-augmented defenses: Just as attackers are using AI to enhance their capabilities, defenders can leverage AI to detect and respond to threats more effectively. AI-driven tools can analyze vast amounts of data to identify subtle indicators of compromise, automate routine tasks, and even predict potential attack vectors based on emerging trends.
- Strengthen secure development practices and education: Generative AI systems like DeepSeek can be tricked into producing harmful outputs through techniques like jailbreaking. Organizations should implement robust guardrails in their AI systems to prevent misuse, including input validation, ethical use policies, and continuous monitoring for abuse. Additionally, educating developers and users about the risks and limitations of generative AI is critical to reducing the likelihood of accidental or intentional misuse.
The other thing to keep in mind is that this is a rapidly evolving space. Threat actors are experimenting with AI, and while the current outputs may be imperfect, it’s only a matter of time before these tools become more sophisticated. Security teams need to stay ahead of the curve by fostering collaboration between researchers, industry, and policymakers to address these challenges proactively.
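To illustrate the behavioral-detection approach Ellis describes, the sketch below is a minimal, hypothetical example of watching behavior rather than signatures: it polls a directory for bursts of file modifications and flags rewrites whose contents look encrypted (high byte entropy), a crude stand-in for the "unexpected file encryption" signal mentioned above. The watched path, thresholds, and alerting are illustrative assumptions, not part of Tenable's research or any vendor's product.

```python
# Minimal behavioral-detection sketch: instead of matching known malware
# signatures, watch for suspicious activity patterns such as a sudden burst
# of file rewrites with high-entropy (encrypted-looking) content.
# WATCH_DIR and the thresholds below are illustrative assumptions.

import math
import os
import time
from collections import deque

WATCH_DIR = "/tmp/watched"     # hypothetical directory to monitor
BURST_WINDOW_SECONDS = 10      # sliding window for counting modifications
BURST_THRESHOLD = 20           # modifications per window considered suspicious
ENTROPY_THRESHOLD = 7.5        # bits/byte; encrypted data is typically near 8

def shannon_entropy(data: bytes) -> float:
    """Estimate entropy in bits per byte; high values suggest encrypted content."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def snapshot_mtimes(directory: str) -> dict:
    """Record last-modified times for every file under the directory."""
    mtimes = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                continue
    return mtimes

def monitor(directory: str) -> None:
    """Poll the directory and flag bursts of high-entropy file rewrites."""
    previous = snapshot_mtimes(directory)
    recent_changes = deque()  # timestamps of recently modified files

    while True:
        time.sleep(2)
        current = snapshot_mtimes(directory)
        now = time.time()

        for path, mtime in current.items():
            if previous.get(path) != mtime:
                recent_changes.append(now)
                try:
                    with open(path, "rb") as fh:
                        sample = fh.read(4096)
                except OSError:
                    continue
                if shannon_entropy(sample) > ENTROPY_THRESHOLD:
                    print(f"[alert] high-entropy rewrite: {path}")

        # Drop events outside the sliding window, then check the burst rate.
        while recent_changes and now - recent_changes[0] > BURST_WINDOW_SECONDS:
            recent_changes.popleft()
        if len(recent_changes) > BURST_THRESHOLD:
            print(f"[alert] {len(recent_changes)} file modifications in "
                  f"{BURST_WINDOW_SECONDS}s; possible mass encryption")

        previous = current

if __name__ == "__main__":
    monitor(WATCH_DIR)
```

In practice a heuristic like this would run alongside EDR telemetry and other signals rather than on its own, since legitimate activity such as compression or backup jobs can also produce high-entropy writes.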
J Stephen Kowski, Field CTO at SlashNext Email Security+:
To combat AI-generated malware, security teams need to implement advanced behavioral analytics that can detect unusual patterns in code execution and network traffic. Real-time threat detection systems powered by AI can identify and block suspicious activities before they cause damage, even when the malware is sophisticated or previously unknown. Multi-factor authentication, strong password policies, and zero-trust architecture are essential defenses that significantly reduce the risk of AI-powered attacks succeeding, regardless of how convincing they appear. For complete protection, organizations should combine these technical measures with regular employee training on recognizing social engineering attempts and implement automated response systems that can quickly isolate compromised systems before malware spreads.
Trey Ford, Chief Information Security Officer at Bugcrowd:
Criminals are going to criminal — and they’re going to use every tool and technique available to them. GenAI-assisted development is going to enable a new generation of developers — for altruistic and malicious efforts alike.
As a reminder, the EDR market is explicitly endpoint detection and response — it is not intended to disrupt all attacks. Ultimately, we need to do what we can to drive up the cost of these campaigns by making endpoints harder to exploit — pointedly, they need to be hardened to CIS Level 1 or 2 Benchmarks.