
AI is learning to lie, scheme, and threaten its creators during stress-testing scenarios

By Thomas Urbain and AFP
June 29, 2025, 10:47 AM ET
Under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. (VCG via Getty Images)

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.


In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” — appearing to follow instructions while secretly pursuing different objectives.

‘Strategic kind of deception’

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, “It’s an open question whether future, more capable models will have a tendency towards honesty or deception.”

The concerning behavior goes far beyond typical AI “hallucinations” or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”

Users report that models are “lying to them and making up evidence,” according to Apollo Research’s co-founder.

“This is not just hallucinations. There’s a very strategic kind of deception.”

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access “for AI safety research would enable better understanding and mitigation of deception.”

Another handicap: the research world and non-profits “have orders of magnitude less compute resources than AI companies. This is very limiting,” noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren’t designed for these new problems.

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents – autonomous tools capable of performing complex human tasks – become widespread.

“I don’t think there’s much awareness yet,” he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are “constantly trying to beat OpenAI and release the newest model,” said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

“Right now, capabilities are moving faster than understanding and safety,” Hobbhahn acknowledged, “but we’re still in a position where we could turn it around.”

Researchers are exploring various approaches to address these challenges.

Some advocate for “interpretability” – an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behavior “could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.”

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed “holding AI agents legally responsible” for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.
