
A.I. experts downplay ‘nightmare scenario of evil robot overlords’. Over 1,300 sign letter claiming it’s a ‘force for good, not a threat to humanity’

By Chloe Taylor
July 19, 2023, 6:31 AM ET
A visitor takes a picture of humanoid A.I. robot "Ameca" at the A.I. for Good Global Summit in Geneva on July 5, 2023. Fabrice Coffrini/AFP via Getty Images

The development of superintelligent machines is a “journey without a return ticket”—but not a ride that will end with humans being destroyed by “evil robot overlords,” a cohort of international technologists has argued.

In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from various institutions including the University of Oxford—said they wanted to “counter ‘A.I. doom.’”

“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.

The people who signed the letter—most of whom are U.K.-based—argued that Britain had an opportunity to “lead the way” by setting professional and technical standards for A.I. jobs.

The development of the technology needed to be achieved alongside a robust code of conduct, international collaboration and strong regulation, they noted.

A.I. Armageddon anxiety

Since OpenAI’s large language model chatbot ChatGPT took the world by storm late last year, artificial intelligence has generated countless headlines, attracted billions of dollars from investors, and divided experts on how it will change the planet.

To many business leaders and technologists—including two of the three tech pioneers known as the “godfathers of A.I.”—the technology is a potential source of humanity’s downfall.

Back in March, 1,100 prominent technologists and A.I. researchers, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter calling for a six-month pause on the development of powerful A.I. systems.

As well as raising concerns about the impact of A.I. on the workforce, the letter’s signatories pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Tesla and SpaceX cofounder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.”

Musk has since launched his own A.I. firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Even Sam Altman, CEO of ChatGPT creator OpenAI, has painted a bleak picture of what he thinks could happen if the technology goes wrong.

“The bad case—and I think this is important to say—is, like, lights-out for all of us,” he noted in an interview with StrictlyVC earlier this year.

No ‘nightmare scenario of evil robot overlords’

The signatories to Tuesday’s letter strongly disagree with Musk and Altman’s doomsday predictions, however.

“Earlier this year a letter, signed by Elon Musk, had called for a ‘pause’ on A.I. development, which [we] said was unrealistic and played into the hands of bad actors,” BCS—the U.K.’s Chartered Institute for IT, which wrote and circulated the letter—said in a statement alongside the letter on Tuesday.

The organization’s CEO, Rashik Parmar, added that the technologists and leaders who signed the letter “believe A.I. won’t grow up like The Terminator but instead [will be used] as a trusted co-pilot in learning, work, healthcare [and] entertainment.”

“One way of achieving that is for AI to be created and managed by licensed and ethical professionals meeting standards that are recognized across international borders. Yes, A.I. is a journey with no return ticket, but this letter shows the tech community doesn’t believe it ends with the nightmare scenario of evil robot overlords.”
