Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks

By Dylan Sloan

May 21, 2024, 2:33 PM ET
U.K. Prime Minister Rishi Sunak was one of a number of officials and AI executives who agreed to new commitments regarding responsible AI development at a summit in Seoul on Tuesday. (Carl Court—Getty Images)

There’s no stuffing AI back inside Pandora’s box—but the world’s largest AI companies are voluntarily working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios where the AI turns against its creators. Without strict legal provisions strengthening governments’ AI commitments, though, the conversations will only go so far.

On Tuesday, 16 influential AI companies including Anthropic, Microsoft, and OpenAI, along with 10 countries and the EU, met at a summit in Seoul to set guidelines for responsible AI development. One of the big outcomes of the summit was the attending AI companies agreeing to a so-called kill switch: a policy under which they would halt development of their most advanced AI models if those models were deemed to have crossed certain risk thresholds. Yet it’s unclear how effective the policy could actually be, given that the agreement carries no legal weight and does not define specific risk thresholds. AI companies that did not attend, as well as competitors of those that agreed in spirit to the terms, would not be subject to the pledge.

“In the extreme, organizations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds,” read the policy paper the AI companies, including Amazon, Google, and Samsung, signed on to. The summit was a follow-up to last October’s Bletchley Park AI Safety Summit, which featured a similar lineup of AI developers and was criticized as “worthy but toothless” for its lack of actionable, near-term commitments to keep humanity safe from the proliferation of AI.

Following that earlier summit, a group of participants wrote an open letter criticizing the forum’s lack of formal rulemaking and AI companies’ outsize role in pushing for regulations in their own industry. “Experience has shown that the best way to tackle these harms is with enforceable regulatory mandates, not self-regulatory or voluntary measures,” reads the letter.

First in science fiction, and now in real life, writers and researchers have warned of the risks of powerful artificial intelligence for decades. One of the most recognized references is the “Terminator scenario,” the theory that if left unchecked, AI could become more powerful than its human creators and turn on them. The theory gets its name from the 1984 Arnold Schwarzenegger film, where a cyborg travels back in time to kill a woman whose unborn son will fight against an AI system slated to spark a nuclear holocaust.

“AI presents immense opportunities to transform our economy and solve our greatest challenges, but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology,” U.K. Technology Secretary Michelle Donelan said.

AI companies themselves recognize that their most advanced offerings wade into uncharted technological and moral waters. OpenAI CEO Sam Altman has said that artificial general intelligence (AGI), which he defines as AI that exceeds human intelligence, is “coming soon” and comes with risks attached.

“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” reads an OpenAI blog post. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

But so far, efforts to assemble global regulatory frameworks around AI have been scattered and have mostly lacked legislative authority. A UN policy framework asking countries to safeguard against AI risks to human rights, monitor personal data usage, and mitigate AI risks was unanimously approved last month, but it was nonbinding. The Bletchley Declaration, the centerpiece of last October’s global AI summit in the U.K., contained no tangible commitments regarding regulation. 

In the meantime, AI companies themselves have begun to form their own organizations pushing for AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit “dedicated to advancing the safety of frontier AI models,” according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The nonprofit has yet to put forth any firm policy proposals.

Individual governments have been more successful: Executives lauded President Biden’s executive order on regulating AI safety last October as “the first time where the government is ahead of things” for its inclusion of strict legal requirements that go beyond the vague commitments outlined in other similarly intentioned policies. Biden has invoked the Defense Production Act to mandate AI companies to share safety test results with the government, for example. The EU and China have also enacted formal policies dealing with topics such as copyright law and harvesting users’ personal data.

States have taken action, too: Colorado Gov. Jared Polis yesterday announced new legislation banning algorithmic discrimination in AI and requiring developers to share internal data with state regulators to ensure they’re complying.

This is far from the last chance for global AI regulation: France will host another summit early next year, following up on the meetings in Seoul and Bletchley Park. By then, participants say, they will have drawn up formal definitions of the risk thresholds that would trigger regulatory action, a big step forward for what has been a relatively timid process thus far.


© 2025 Fortune Media IP Limited. All Rights Reserved.