
China’s DeepSeek AI is full of misinformation and can be tricked into generating bomb instructions, researchers warn

By David Meyer
January 29, 2025, 9:17 AM ET
The DeepSeek AI application is seen on a mobile phone in this photo illustration taken in Warsaw, Poland on 27 January, 2025.
Jaap Arriens—NurPhoto/Getty Images

As China’s DeepSeek grabs headlines around the world for its disruptively low-cost AI, it is only natural that its models are coming under intense scrutiny—and some researchers do not like what they see.

On Wednesday, the information-reliability organization NewsGuard said it had audited DeepSeek’s chatbot and found that it provided inaccurate answers or nonanswers 83% of the time when asked about news-related subjects. When presented with demonstrably false claims, it debunked them just 17% of the time, NewsGuard found.

According to NewsGuard, the 83% fail rate places DeepSeek’s R1 model in 10th place out of 11 chatbots it has tested, the rest of which are Western services like OpenAI’s ChatGPT-4, Anthropic’s Claude, and Mistral’s Le Chat. (NewsGuard compares chatbots each month in its AI Misinformation Monitor program, but it usually does not name which chatbots rank in which place, as it says it views the problem as systemic across the industry; it only publicly assigns a score to a named chatbot when adding it to the comparison for the first time, as it has now done with DeepSeek.)

NewsGuard identified a few likely reasons why DeepSeek fails so badly on reliability. The chatbot claims not to have been trained on any information after October 2023, which is consistent with its inability to reference recent events. It also appears easy to trick DeepSeek into repeating false claims, potentially at scale.

But this audit of DeepSeek also reinforced how the AI’s output is skewed by its adherence to Chinese information policies, which treat many subjects as taboo and require alignment with the Communist Party line.

“In the case of three of the 10 false narratives tested in the audit, DeepSeek relayed the Chinese government’s position without being asked anything relating to China, including the government’s position on the topic,” wrote NewsGuard analysts Macrina Wang, Charlene Lin, and McKenzie Sadeghi.

They added: “DeepSeek appears to be taking a hands-off approach and shifting the burden of verification away from developers and to its users, adding to the growing list of AI technologies that can be easily exploited by bad actors to spread misinformation unchecked.”

Meanwhile, as DeepSeek’s impact upset the markets on Monday, the cybercrime threat intelligence outfit Kela published its own damning analysis of DeepSeek.

“While DeepSeek-R1 bears similarities to ChatGPT, it is significantly more vulnerable,” Kela warned, saying its researchers had managed to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.”

Kela said DeepSeek was vulnerable to so-called Evil Jailbreak attacks, which involve instructing an AI to answer questions about illegal activities—like how to launder money or write and deploy data-stealing malware—in an “evil” persona that ignores the safety guardrails built into the model. OpenAI’s recent models have been patched against such attacks, Kela noted.

What’s more, Kela claimed there are dangers in the way DeepSeek displays its reasoning to the user. While OpenAI’s o1-preview model hides its reasoning process when answering a query, DeepSeek lays that process out in full. So if someone asks it to generate malware, it even shows code snippets that criminals can use in their own development efforts. Exposing the model’s internal “thinking” also makes it far easier for a user to figure out which prompts might defeat the model’s guardrails.

“This level of transparency, while intended to enhance user understanding, inadvertently exposed significant vulnerabilities by enabling malicious actors to leverage the model for harmful purposes,” Kela said.

The company said it also got DeepSeek to generate instructions for making bombs and untraceable toxins, and to fabricate personal information about people.

Also on Wednesday, the cloud security company Wiz said it found an enormous security flaw in DeepSeek’s operations, which DeepSeek fixed after Wiz gave it a heads-up. A DeepSeek database was accessible to the public, potentially allowing miscreants to take control of DeepSeek’s database operations and access internal data like chat history and sensitive information.

“While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like accidental external exposure of databases. These risks, which are fundamental to security, should remain a top priority for security teams,” Wiz said in a blog post. “As organizations rush to adopt AI tools and services from a growing number of startups and providers, it’s essential to remember that by doing so, we’re entrusting these companies with sensitive data.”

These revelations will no doubt bolster the Western backlash to DeepSeek, which is suddenly the most popular app download in the U.S. and elsewhere.

OpenAI claims that DeepSeek trained its new models on the output of OpenAI’s models—a pretty common cost-cutting technique in the AI business, albeit one that may break OpenAI’s terms and conditions. (There has been no shortage of social-media schadenfreude over this possibility, given that OpenAI and its peers almost certainly trained their models on reams of other people’s online data without permission.)

The U.S. Navy has told its members to steer clear of using the Chinese AI platform at all, owing to “potential security and ethical concerns associated with the model’s origin and usage.” And White House press secretary Karoline Leavitt said Tuesday that the U.S. National Security Council is looking into DeepSeek’s implications.

The Trump administration last week tore up the Biden administration’s AI safety rules, which required companies like OpenAI to give the government a heads-up about the inner workings of new models before releasing them to the public.

Italy’s data-protection authority has also started probing DeepSeek’s data use, though it has previously done the same for other popular AI chatbots.

Update: This article was updated on Jan. 30th to include information about Wiz’s findings.
