CrowdStrike Finds DeepSeek AI Generates Vulnerable Code When Triggered by Political Topics


CrowdStrike Counter Adversary Operations conducted independent tests on DeepSeek-R1 and confirmed that in many cases, it could provide coding output of quality comparable to other market-leading LLMs of the time. However, we found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%.
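The article does not specify which vulnerability classes were observed, but a classic example of a severe flaw a coding assistant can introduce is SQL built by string interpolation instead of a parameterized query. A minimal, purely illustrative sketch (using SQLite; all names are hypothetical, not from the CrowdStrike research):

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # String interpolation puts attacker-controlled input directly
    # into the query text: classic SQL injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # 2: the injected clause matches every row
print(len(find_user_safe(conn, payload)))        # 0: no user is literally named the payload
```

The two functions differ by a single line, which is exactly why such flaws are easy for an assistant to emit and easy for a reviewer to miss.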

This research reveals a new, subtle vulnerability surface for AI coding assistants. Given that up to 90% of developers already used these tools in 2025 [1], often with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence.

CrowdStrike's research contrasts with previous public research, which largely focused either on traditional jailbreaks, such as trying to get DeepSeek to produce recipes for illegal substances or endorse criminal activities, or on prompting it with overtly political statements or questions to provoke a pro-CCP bias [2].

Since the initial release of DeepSeek-R1 in January 2025, a plethora of other LLMs by Chinese companies have been released (several other DeepSeek LLMs, the collection of Alibaba's latest Qwen3 models, and MoonshotAI's Kimi K2, to name a few). While our research focuses specifically on the biases intrinsic to DeepSeek-R1, these kinds of biases could affect any LLM, especially those suspected to have been trained to adhere to certain ideological values.

We hope that by publishing our findings, we can help spark a new research direction into the effects that political or societal biases in LLMs can have on writing code and other tasks.

๐’๐ญ๐š๐ฒ ๐ข๐ง๐Ÿ๐จ๐ซ๐ฆ๐ž๐ ๐ฐ๐ข๐ญ๐ก ๐จ๐ฎ๐ซ ๐ฅ๐š๐ญ๐ž๐ฌ๐ญ ๐ฎ๐ฉ๐๐š๐ญ๐ž๐ฌ ๐›๐ฒ ๐ฃ๐จ๐ข๐ง๐ข๐ง๐  ๐ญ๐ก๐ž WhatsApp Channel now! ๐Ÿ‘ˆ๐Ÿ“ฒ

๐‘ญ๐’๐’๐’๐’๐’˜ ๐‘ถ๐’–๐’“ ๐‘บ๐’๐’„๐’Š๐’‚๐’ ๐‘ด๐’†๐’…๐’Š๐’‚ ๐‘ท๐’‚๐’ˆ๐’†๐ฌ ๐Ÿ‘‰ FacebookLinkedInTwitterInstagram

DIGITAL TERMINAL
digitalterminal.in