DeepSeek has given open-source AI a big lift. But is it safe for companies to use?

Use of open-source AI models like DeepSeek's R1 creates additional cybersecurity concerns.
Sheldon Cooper/SOPA Images/LightRocket via Getty Images

A generative AI model from a startup based in China has created huge buzz in recent weeks for being potentially groundbreaking. DeepSeek-R1, an open-source model developed by the Hangzhou-based DeepSeek, drastically reduced the cost of using generative AI for businesses, with no loss in performance compared to established market leaders like OpenAI.

But national security officials have called the new model risky to use due to its origin and vulnerabilities in the software that could lead to data leaks. Last week, Texas Gov. Greg Abbott went so far as to ban state employees from using DeepSeek and other applications that depend on Chinese-linked AI.

Despite the uncertainty, many businesses are still flocking to DeepSeek-R1 and other new open-source models in search of cheaper, more efficient tools that can rival the tech giants. Cybersecurity experts have some advice for those companies seeking to experiment with open-source models: Take a deep breath, tread carefully, and expect more rapid change in the artificial intelligence landscape.

For U.S. businesses, particularly those involving critical infrastructure, basic cybersecurity precautions like ensuring data fed into AI models doesn’t end up on Chinese-owned servers seem obvious. But companies must also take a number of other considerations into account, all made more complicated by the fact that AI technology is constantly changing and safety measures are not yet well established.

“It’s still the Wild West,” said Michael Malone, CEO and founder of Lumifi Cybersecurity, a startup that provides cybersecurity to corporate customers.

The most heavily regulated businesses, like banks, are an exception. Under the law, they must tightly limit who and what can access any AI system they deploy, and how it is secured. That could mean writing rules into their systems that ensure an open-source model being evaluated for a potential product can only touch test data, never real customer data. Another precaution is to simply ban the internal use of non-U.S. models, which lets heavily regulated businesses still experiment with open-source AI, though from a smaller pool of potential sources.
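What such a rule might look like in miniature: the Python sketch below gates data access on a deployment environment so a model pipeline can only ever read synthetic test files. The environment names, file path, and `DEPLOY_ENV` variable are all hypothetical; a real bank would enforce this with IAM roles, network segmentation, and a data-access proxy, not application code alone.

```python
import os

# Illustrative only: environments where an open-source model may run.
APPROVED_ENVS = {"sandbox", "test"}

def dataset_for_model(env: str) -> str:
    """Return a data path the model is allowed to read in this environment."""
    if env not in APPROVED_ENVS:
        # Fail closed: production data never reaches the experimental model.
        raise PermissionError(f"open-source model blocked from env {env!r}")
    return "/data/synthetic/test_customers.csv"  # synthetic records only

if __name__ == "__main__":
    env = os.environ.get("DEPLOY_ENV", "sandbox")  # hypothetical flag
    print("Model restricted to:", dataset_for_model(env))
```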

“This is an awesome moment for (chief information security officers) to take charge and say: ‘I need the budget around cyber that I’ve been asking for. I need the tools and the people to make it happen,’” said Malone.

DeepSeek’s R1 isn’t the first AI model to be open-source (the term for software whose source code is freely available and that anyone can contribute to). In 2023, Facebook-parent Meta released its now widely used LLaMA language model as open source.

Open-source AI is often favored over closed models because community-focused development has been shown to boost innovation, while increasing access to the latest technological advances and cutting costs.

However, the open philosophy behind such AI models only goes so far. Both Meta and DeepSeek chose not to release the training data that went into their models.

Malone explained that when it comes to using open-source models, businesses should “aim small and miss small.” Using open source is seen as innovative and fast-moving, but it adds risk unless businesses have deeply studied the types and sources of the models they’re using. The solution is to start with smaller projects built on open-source AI. Going big raises the risk that the project becomes outdated or opens the door to unexpected problems when new technology arrives.

For example, switching from a subscription-based AI model to an open-source one could change the legal liability a business faces. Developers typically offer open-source software under licenses that carry no guarantees, in the spirit of a collective research project for public use rather than a product with a warranty. A business using open-source AI models could be liable if sensitive customer data is leaked or hackers exploit vulnerabilities in the software.

DeepSeek made its R1 model available under an MIT license, which essentially means it comes “as is.” The permissive open-source license frees DeepSeek of liability for any damages stemming from how users work with the model, and requires only that users preserve the copyright and license notice when redistributing it.

The open-source R1 model and multiple distilled variants are available for free on the AI-hosting platform Hugging Face and can be downloaded directly with some technical know-how. DeepSeek also offers a paid version through an API, hosted on DeepSeek's own servers.
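For teams with that know-how, pulling one of the smaller distilled R1 variants onto their own hardware takes only a few lines with Hugging Face's transformers library. The sketch below assumes the library is installed and that the machine has the disk space and memory for the weights; the model ID shown is one of the R1 distillations DeepSeek published.

```python
# pip install transformers torch
# Weights are downloaded from Hugging Face and cached locally; once cached,
# inference runs entirely on your own hardware, not DeepSeek's servers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small R1 distillation

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the MIT license in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```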

“When you’re using open source the responsibility is on you,” said Lior Div, co-founder and CEO of the AI cybersecurity firm 7AI. “Now it is the problem of how are we going to use it and how are we going to implement it?”

The questions to ask grow exponentially when it comes to open-source models like DeepSeek’s R1 that come from China. Among them: What type of data was used to train the model? What is the environment the model is running in? And how can guardrails be created to prevent sensitive data from being funneled through nations like China? 
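There is no single answer to that last question, but one common guardrail pattern is an egress filter: scrub obvious identifiers from prompts before they leave the company network, whatever hosted service sits on the other end. The sketch below is a deliberately minimal, hypothetical illustration; real deployments layer regexes like these with network controls and data-loss-prevention tooling.

```python
import re

# Hypothetical redact-before-send filter for prompts bound for an
# externally hosted model. Regexes alone are not sufficient in production.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Customer jane@example.com (SSN 123-45-6789) asked about card fees."
    print(redact(raw))  # only the scrubbed text crosses the network boundary
```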

To start, leaders must have expertise inside their organizations that can add appropriate guardrails to open-source models, if they allow their use at all. They must also ask themselves whether they can trust the information the model produces.

It’s the same problem for any kind of generative AI, open source or otherwise. The model will always produce an answer, regardless of whether the information is grounded in reality, Div warned.

Andrew Stiefel, senior product marketing manager at open-source security company Endor Labs, offered a partial defense of DeepSeek’s technology. There is little evidence that any data will be shared with the Chinese government if a company hosts the R1 models itself rather than using DeepSeek’s hosted version, he said in a statement.

“Let’s acknowledge two things: Open source AI models are not inherently risky; many argue they’re safer to use than proprietary models because they’re built in the open. But there are valid questions about whether DeepSeek models are safe to use because the maintainers are in China,” Stiefel said.
