

CrowdStrike Counter Adversary Operations conducted independent tests on DeepSeek-R1 and confirmed that in many cases, it could provide coding output of quality comparable to other market-leading LLMs of the time. However, we found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%.
This research reveals a new, subtle vulnerability surface for AI coding assistants. Given that up to 90% of developers already used these tools in 2025,1 often with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence.
CrowdStrike's research contrasts with previous public research, which largely focused on either traditional jailbreaks, like trying to get DeepSeek to produce recipes for illegal substances or endorse criminal activities, or on prompting it with overtly political statements or questions to provoke it to respond with a pro-CCP bias.2
Since the initial release of DeepSeek-R1 in January 2025, a plethora of other LLMs by Chinese companies has been released (several other DeepSeek LLMs, the collection of Alibaba's latest Qwen3 models, and MoonshotAI's Kimi K2, to name a few). While our research specifically focuses on the biases intrinsic to DeepSeek-R1, these kinds of biases could affect any LLM, especially those suspected to have been trained to adhere to certain ideological values.
We hope that by publishing our findings, we can help spark a new research direction into the effects that political or societal biases in LLMs can have on code generation and other tasks.