DeepSeek used OpenAI’s model to train its competitor using ‘distillation,’ White House AI czar says

By Beatrice Nolan, Tech Reporter
January 29, 2025 at 3:26 PM UTC
David Sacks, U.S. President Donald Trump's AI and crypto czar. Anna Moneymaker/Getty Images
  • David Sacks says OpenAI has evidence that Chinese company DeepSeek used a technique called “distillation” to build a rival model.

OpenAI has evidence that China’s DeepSeek used OpenAI’s models to train its own rival model, David Sacks, Donald Trump’s newly appointed AI and crypto czar, has said.

In an interview with Fox News, Sacks said there was “substantial evidence” that DeepSeek had used a technique known as “distillation” to build its AI models.

Sacks did not go into detail about the evidence OpenAI had.

“I think one of the things you’re going to see over the next few months is our leading AI companies taking steps to try and prevent distillation,” he said. “That would definitely slow down some of these copycat models.” 

What is distillation?

In machine learning, distillation is a process in which outputs from a large, pre-trained “teacher” model are used to train another, usually smaller, “student” model to exhibit similar capabilities.

The technique is common in the field and often used when companies want to deploy a model on devices with limited resources, like mobile phones.
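For readers unfamiliar with the technique, the sketch below illustrates the general idea only, not anything specific to OpenAI or DeepSeek: a small student network is trained to match the temperature-softened output distribution of a larger, frozen teacher network. The model sizes, temperature, and loss weighting are arbitrary assumptions chosen for illustration.

```python
# Minimal knowledge-distillation sketch (PyTorch). Illustrative only:
# model sizes, temperature, and loss weights are assumptions, not a
# description of any particular company's training pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 2.0   # softens the teacher's probability distribution
alpha = 0.5         # balance between distillation loss and ordinary loss

teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the large, pre-trained model is frozen

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(inputs, labels):
    with torch.no_grad():              # the teacher only provides targets
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)

    # The student learns to match the teacher's softened outputs...
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # ...while still being trained on the ordinary labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One training step on a random batch of 32 inputs with 10 classes.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))
```

The smaller student never sees the teacher’s weights or training data; it only learns from the teacher’s outputs, which is why distillation can be attempted against a model that is accessible solely through its responses.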

However, Bloomberg reported that OpenAI and Microsoft are probing whether DeepSeek used the technique to help train its rival reasoning model—something that would breach OpenAI’s terms of service.

An OpenAI spokesperson told Fortune that it was aware that China-based companies and others were “constantly trying to distill the models of leading U.S. AI companies.”

“As the leading builder of AI, we engage in countermeasures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology,” the spokesperson said.

OpenAI said it has technical measures in place to prevent and detect these sorts of attempts and has worked with Microsoft to jointly identify attempts to distill models, revoking access to accounts attempting to do so.

Microsoft representatives did not immediately respond to Fortune’s request for comment. OpenAI declined to provide details when asked about evidence of distillation by DeepSeek.

DeepSeek R1 causes shock waves across AI industry

Rumors that DeepSeek had used distillation to build its models have been circulating since the company released its powerful large language model, V3, in December.

Those rumors only intensified in the wake of DeepSeek’s release of its R1 model last week.

R1, an enhanced version of V3, uses reinforcement learning to improve performance on math, logic, and reasoning questions. It takes longer to respond, employing a step-by-step “chain of thought” process to evaluate different strategies before producing an answer.

Before DeepSeek released R1, very few models had been released that could perform this kind of reasoning.

The best-known and most capable of these reasoning models is OpenAI’s o1 model, which debuted a preview version in September and a more capable, full version in December. DeepSeek may have used examples of questions and answers from o1 to help train its own R1 model.

While OpenAI hid the actual chain of thought reasoning from users in its o1 model, the step-by-step nature of the o1 responses could be enough to help train a rival reasoning model.
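As a purely hypothetical illustration of that point, collected question-and-answer pairs whose answers contain step-by-step reasoning could be packaged as supervised fine-tuning data for another model. The example below is an assumption about how such a dataset might be formatted; the questions, answers, and file layout are invented for illustration and do not describe what any company actually did.

```python
# Hypothetical sketch: turning collected prompt/response pairs that contain
# step-by-step reasoning into a supervised fine-tuning dataset. All data and
# the file format here are illustrative assumptions.
import json

collected = [
    {
        "question": "A train travels 60 km in 45 minutes. What is its speed in km/h?",
        "response": "Step 1: 45 minutes is 0.75 hours. "
                    "Step 2: speed = 60 / 0.75 = 80. Answer: 80 km/h.",
    },
    # ... many more pairs gathered from a stronger model's responses
]

with open("distillation_sft.jsonl", "w") as f:
    for example in collected:
        # Each record becomes one training example: the smaller model learns
        # to reproduce the step-by-step style of the collected responses.
        record = {
            "messages": [
                {"role": "user", "content": example["question"]},
                {"role": "assistant", "content": example["response"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```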

OpenAI could try various ways to gather evidence of distillation, but it could be difficult for the AI lab to prove exactly what DeepSeek has done without access to internal company data.
