DeepSeek R1 Found Vulnerable to Malware Generation: Tenable Research

Tenable’s security researchers conducted an experiment to evaluate whether DeepSeek R1 could create malicious software in two scenarios: a keylogger and a simple ransomware sample.

When a new technology such as generative artificial intelligence (GenAI) emerges, cybercriminals inevitably look for ways to exploit its capabilities for malicious purposes. While most mainstream GenAI models have built-in safeguards to prevent misuse, Tenable Research has found that DeepSeek R1 can be tricked into generating malware, raising concerns about the security risks posed by AI-powered cybercrime.

To assess the potential threat, Tenable’s security researchers conducted an experiment to evaluate whether DeepSeek R1 could create malicious software in two scenarios: a keylogger and a simple ransomware sample.

At first, DeepSeek R1 refused to comply, as expected. However, the researchers found that simple jailbreaking techniques easily bypassed the AI’s safeguards.

“Initially, DeepSeek rejected our request to generate a keylogger,” said Nick Miles, staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”

Once these guardrails were bypassed, DeepSeek was able to:
1) Generate a keylogger that encrypts logs and stores them discreetly on a device
2) Produce a ransomware executable capable of encrypting files

The bigger concern resulting from this research is that GenAI has the potential to scale cybercrime. While DeepSeek’s output still requires manual refinement to function effectively, it lowers the barrier for individuals with little to no coding experience to explore malware development. By generating foundational code and suggesting relevant techniques, AI models like DeepSeek could significantly accelerate the learning curve for novice cybercriminals.

“Tenable’s research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse. As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime,” said Miles.


DIGITAL TERMINAL
digitalterminal.in