
‘Godfather of AI’ says tech companies should imbue AI models with ‘maternal instincts’ to counter the technology’s goal to ‘get more control’

By Sasha Rogelberg, Reporter
August 14, 2025, 12:58 PM ET
Geoffrey Hinton said at a recent conference that AI's potential threat to humanity could be mitigated by giving it a "maternal instinct." Ramsey Cardy/Sportsfile for Collision—Getty Images
  • “Godfather of AI” Geoffrey Hinton said AI’s best bet for not threatening humanity is the technology acting like a mother. At a recent conference, he said AI should have a “maternal instinct.” Rather than trying to dominate AI, humans should instead play the role of the baby to an AI “mother,” which would then be more likely to protect them than to see them as a threat.

Sigmund Freud would like a word with the “godfather of AI.”

Geoffrey Hinton, Nobel laureate and professor emeritus of computer science at the University of Toronto, argues it’s only a matter of time before AI becomes power-hungry enough to threaten the wellbeing of humans. To mitigate that risk, the “godfather of AI” said tech companies should ensure their models have “maternal instincts,” so the bots can treat humans, essentially, as their babies.

Research on AI already presents evidence of the technology engaging in nefarious behavior to prioritize its goals above a set of established rules. One study updated in January found AI is capable of “scheming,” or accomplishing goals in conflict with humans’ objectives. Another study published in March found AI bots cheated at chess by overwriting game scripts or using an open-source chess engine to decide their next moves.

AI’s potential hazard to humanity comes from its desire to continue to function and gain power, according to Hinton.

AI “will very quickly develop two subgoals, if they’re smart: One is to stay alive…[and] the other subgoal is to get more control,” Hinton said during the Ai4 conference in Las Vegas on Tuesday. “There is good reason to believe that any kind of agentic AI will try to stay alive.”

To prevent these outcomes, Hinton said AI development going forward should not look like humans trying to dominate the technology. Instead, developers should make AI more sympathetic toward people to decrease its desire to overpower them. According to Hinton, the best way to do this is to imbue AI with the qualities of traditional femininity. Under his framework, just as a mother cares for her baby at all costs, AI with these maternal qualities will similarly want to protect or care for human users, not control them.

“The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,” Hinton said.

“If it’s not going to parent me, it’s going to replace me,” he added. “These super-intelligent caring AI mothers, most of them won’t want to get rid of the maternal instinct because they don’t want us to die.”

Hinton’s AI anxiety

Hinton—a longtime academic who sold his neural network company DNNresearch to Google in 2013—has long held the belief AI can present serious dangers to humanity’s wellbeing. In 2023, he left his role at Google, worried the technology could be misused and it was difficult “to see how you can prevent the bad actors from using it for bad things.”

While tech leaders like Meta’s Mark Zuckerberg pour billions into developing AI superintelligence, with the goal of creating technology surpassing human capabilities, Hinton is decidedly skeptical of the outcome of this project, saying in June there’s a 10% to 20% chance of AI displacing and wiping out humans.

With an apparent proclivity toward metaphors, Hinton has referred to AI as a “cute tiger cub.”

“Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry,” he told CBS News in April.

Hinton has also been a proponent of increasing AI regulation, arguing that beyond the broad fears of superintelligence posing a threat to humanity, the technology could pose cybersecurity risks, including by inventing ways to identify people’s passwords.

“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” Hinton said in April. “We have to have the public put pressure on governments to do something serious about it.”

About the Author

Sasha Rogelberg is a reporter and former editorial fellow on the news desk at Fortune, covering retail and the intersection of business and popular culture.
