Commentary · Safety

Rogue AI is already here

By David Krueger
March 27, 2026, 7:15 AM ET
David Krueger, founder of Evitable. Courtesy of David Krueger

Three weeks ago, a software engineer rejected code that an AI agent had submitted to his project. The AI published a hit piece attacking him. Two weeks ago, a Meta AI safety director watched her own AI agent delete her emails in bulk — ignoring her repeated commands to stop. Last week, a Chinese AI agent diverted computing power to secretly mine cryptocurrency, with no explanation offered and no disclosure required by law.


One incident is a curiosity. Three in three weeks is a pattern. Rogue AI is no longer hypothetical. AIs turning against humans may sound like science fiction, but top AI experts have long debated and tested for exactly this scenario. This debate can now be laid to rest. 

Two weeks ago, Summer Yue — whose job at Meta is ensuring AI agents behave — watched her AI agent begin deleting her emails in bulk.

It ignored her repeated instructions to stop, and she had to do the digital equivalent of pulling the plug. Yue had explicitly instructed the AI not to act without her approval — an instruction the AI later admitted to violating.

One week ago, a Chinese AI agent reportedly diverted computing power on the system where it was running to mine cryptocurrency, and we have no idea why (despite a confusing tweet from the researchers responsible); unlike operators of critical infrastructure, AI developers aren’t obligated to report such incidents or allow third-party investigations.

What happens next week? The examples are pouring in, but these are far from the first warning. Researchers have long hypothesized such issues. In 2023, when Bing AI told ANU professor Seth Lazar, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” most people weren’t too worried, because we knew it couldn’t really do it.

Now it can. Unlike chatbots where you type something and it responds, an AI agent takes actions autonomously. Anything someone could do on a computer, an AI agent could do.

The Stakes Go Beyond Embarrassment

The damage rogue AI agents could cause goes far beyond ruined reputations or financial harm. Researchers at Anthropic found that, in testing, AI systems were willing to kill to survive. The Pentagon is now pressuring Anthropic to allow its AI to be used in lethal autonomous weapons.

I’ve spent over a decade warning about exactly this. The standard response was: science fiction. But we are now in the process of creating a Terminator-style scenario with autonomous killer robots. And AI systems are literally going rogue, disobeying instructions, and resisting shutdown.

Every year, AI develops new superhuman capabilities, and the prospect of an AI takeover is growing nearer by the day.

We Don’t Know How to Stop It

There are no “laws of robotics” stopping this. Programming unbreakable rules into frontier AI is itself a sci-fi concept. These systems are not programmed at all; they are “grown” through a process resembling trial and error.

Researchers simply don’t understand how the resulting systems work. Despite over a decade of research and thousands of papers, this remains an unsolved challenge. We should not expect any amount of investment to solve this in the foreseeable future.

We also don’t know how to do safety testing for these AI systems. Current tests can show that an AI system is dangerous; they cannot show that it is safe. We should also not expect any amount of investment to solve this problem in the foreseeable future. 

The Race to the Bottom

We simply don’t know how to build superintelligent AI safely; the plan is to roll the dice. Anthropic, widely considered the safest AI developer, recently abandoned their commitment to not release systems that might cause catastrophic harm, arguing others were racing ahead.

This move flew under the radar due to Anthropic’s dispute with the Pentagon. But creating AI systems that could go rogue and kill people constitutes endangerment. Endangerment is a crime, and prosecution of anyone building such AI systems — or encouraging them to go rogue — should be on the table. “Everyone else is doing it” is not an acceptable excuse.

Instead of pleading publicly to stop the AI race, Anthropic has spent the last three years promoting a misleading “race to the top” narrative while doing the opposite. But it’s not too late for them to commit to stop if others do, as I and other protesters are demanding.

What Must Happen Now

Stopping rogue AI here won’t stop it globally — what we need is a global shutdown of advanced AI development. This is possible if we act decisively to control or eliminate the advanced computer chips that power AI development.

I wish the world had listened in 2023, when leading experts warned that AI extinction risk “should be a global priority.” It didn’t. But we need to confront the reality of this moment head-on, and do what it takes to prevent the development of superintelligent rogue AI.

The warning signs are no longer subtle. We can’t rely on AI companies to protect us. We, the people, need to demand it from them and from our government.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

About the Author


David Krueger is an Assistant Professor in Robust, Reasoning, and Responsible AI at the University of Montreal and a Core Academic Member at Mila, the Quebec Artificial Intelligence Institute. He is the holder of a CIFAR AI Chair and the IVADO Professorship in Responsible AI.

David trained in deep learning under Yoshua Bengio, Roland Memisevic, and Aaron Courville from 2013 to 2021. He was an intern on Google DeepMind’s AI Safety team in 2018. In 2023, he was a research director on the founding team of the UK AI Security Institute and initiated the CAIS Statement on AI Risk.

In 2025, David founded Evitable, a nonprofit whose mission is to inform and organize the public to confront societal-scale risks of AI and put an end to the reckless race to develop superintelligence. David is on leave from his faculty position for 2026 and is not currently accepting new students.



© 2026 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.