Bill Gates likens the rise of A.I. to nuclear weapons: ‘It’s not as grim as some people think’

By Eleanor Pringle, Reporter


Bill Gates, cochair of the Bill and Melinda Gates Foundation, at the EEI 2023 event in Austin, Texas, on Monday, June 12. Gates says estimates of how good or bad A.I. will be are overdramatic.
Jordan Vonderhaar—Bloomberg/Getty Images

Microsoft cofounder Bill Gates is doubling down on his support for artificial intelligence—dismissing fears the technology could destroy humanity and take over the world.

The billionaire philanthropist has long been a cautious advocate of the technology, having worked with Sam Altman’s OpenAI since 2016. Microsoft has since poured $13 billion into the ChatGPT maker.

In a blog post on his website Gates Notes, the founder of the Bill & Melinda Gates Foundation likened large language models (LLMs) like ChatGPT and Google’s Bard to other disruptive innovations like cars and nuclear weapons.

Both had filled the public with fear, Gates pointed out, and yet, through a series of guardrails, had been wrestled into a usable form.

“We didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road,” wrote Gates—who just helped mint another A.I. unicorn.

“Although the world’s nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up.”

Using nuclear weapons as an example, Gates suggested regulators should look to history for a blueprint on how to handle the development of chatbots.

He explained: “For example, it will have a big impact on education, but so did handheld calculators a few decades ago, and, more recently, allowing computers in the classroom. We can learn from what’s worked in the past.”

The result, Gates believes, is that “the future of A.I. is not as grim as some people think or as rosy as others think.”

It’s this opinion that has previously incurred the wrath of fellow tech titan and Tesla CEO Elon Musk. In a response to one of Gates’ earlier essays on the power of LLMs, Musk lashed out: “I remember the early meetings with Gates. His understanding of A.I. was limited. Still is.”

At the time, Musk was one of the early signatories of an open letter calling for a six-month pause on the development of anything more advanced than OpenAI’s GPT-4 chatbot. The Twitter owner has since launched his own A.I. company, xAI.

A.I. should be turned on itself

Gates also believes that the problems created by A.I. can be combated by the technology itself.

Take deepfakes: realistic but digitally altered videos and images produced with a form of A.I. known as deep learning. Gates believes they have the power to undermine elections and democracy, as well as to inflict a “horrific emotional impact” on individual victims.

Gates, reportedly worth $134 billion, said he is “hopeful,” however, because A.I. can not only create deepfakes but also identify them. Intel, he highlighted, has developed a deepfake detector, while the U.S. government’s Defense Advanced Research Projects Agency is working on technology to identify whether video or audio has been manipulated.

The 67-year-old is also “guardedly optimistic” that the security industry will be able to combat more advanced hackers by turning the technology to its own advantage.

“A.I. can be used for good purposes as well as bad ones,” he wrote—pointing out that government and private-sector security teams need to have access to the most up-to-date technology in order to combat such attacks.

Gates made a veiled dig at the Musk-backed development pause for this reason, writing: “This is also why we should not try to temporarily keep people from implementing new developments in A.I., as some have proposed.

“Cybercriminals won’t stop making new tools. Nor will people who want to use A.I. to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.”

Governments need to step up

Gates also addressed two of the major concerns from the public: job losses and changes to education.

On job losses—Goldman Sachs has predicted A.I. could affect as many as 300 million jobs—Gates squarely placed the responsibility on governments and businesses: “They’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.”

Across the board—from deepfakes to children using ChatGPT to do their homework—Gates told policymakers they need to “be equipped to have informed, thoughtful dialogue with their constituents,” as well as to establish how closely they will work with other countries on legislation.

Lastly, Gates had advice for the public: engage.

“It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks,” he concluded.
