Newsletters · Eye on AI

Top A.I. companies are getting serious about A.I. safety, and concern about ‘extremely bad’ A.I. risks is growing

By Jeremy Kahn, Editor, AI
May 26, 2023, 2:22 PM ET
Photo credit: Yamada HITOSHI/Gamma-Rapho via Getty Images

Hello and welcome to May’s special monthly edition of Eye on A.I.

The idea that increasingly capable and general-purpose artificial intelligence software could pose extreme risks, including the extermination of the entire human species, is controversial. Many A.I. experts believe such risks are outlandish and the danger so vanishingly remote as to not warrant consideration. Some of these same people see the emphasis on existential risks by a number of prominent technologists, including many who are working to build advanced A.I. systems themselves, as a cynical ploy: one intended both to hype the capabilities of their current A.I. systems and to distract regulators and the public from the real and concrete risks that already exist with today’s A.I. software.

And just to be clear, these real-world harms are numerous and serious: They include the reinforcement and amplification of existing systemic, societal biases, including racism and sexism, as well as an A.I. software development cycle that often depends on data taken without consent or regard for copyright, the use of underpaid contractors in the developing world to label data, and a fundamental lack of transparency into how A.I. software is created and what its strengths and weaknesses are. Other risks include the large carbon footprint of many of today’s generative A.I. models and the tendency of companies to use automation to eliminate jobs and pay workers less.

But, having said that, concerns about existential risk are becoming harder to ignore. A 2022 survey of researchers working at the cutting edge of A.I. technology in some of the most prominent A.I. labs revealed that about half of these researchers now think there is a greater than 10% chance that A.I.’s impact will be “extremely bad” and could include human extinction. (It is notable that a quarter of researchers still thought the chance of this happening was zero.) Geoff Hinton, the deep learning pioneer who recently stepped down from a role at Google so he could be freer to speak out about what he sees as the dangers of increasingly powerful A.I., has said models such as GPT-4 and PaLM 2 have shifted his thinking and that he now believes we might stumble into inventing dangerous superintelligence anytime in the next two decades.

There are some signs that a grassroots movement is building around fears of A.I.’s existential risks. Some students picketed OpenAI CEO Sam Altman’s talk at University College London earlier this week, calling on OpenAI to abandon its pursuit of artificial general intelligence—the kind of general-purpose A.I. that could perform any cognitive task as well as a person—until scientists figure out how to ensure such systems are safe. The protestors pointed to the contradiction in Altman’s own position: he has warned that the downside risk from AGI could mean “lights out for all of us,” yet he continues to pursue ever more advanced A.I. Protestors have also picketed outside the London headquarters of Google DeepMind in the past week.

I am not sure who is right here. But I think that if there’s a nonzero chance of human extinction or other severely negative outcomes from advanced A.I., it is worthwhile having at least a few smart people thinking about how to prevent that from happening. It is interesting to see some of the top A.I. labs starting to collaborate on frameworks and protocols for A.I. safety. Yesterday, a group of researchers from Google DeepMind, OpenAI, Anthropic, and several nonprofit think tanks and organizations interested in A.I. safety published a paper detailing one possible framework and testing regime. The paper is important because the ideas in it could wind up forming the basis for an industry-wide effort and could guide regulators. This is especially true if a national or international agency specifically aimed at governing foundation models, the kinds of multipurpose A.I. systems underpinning the generative A.I. boom, comes into being. OpenAI’s Altman has called for the creation of such an agency, as have other A.I. experts, and this week Microsoft put its weight behind the idea too.

“If you are going to have any kind of safety standards that govern ‘is this A.I. system safe to deploy?’ then you’re going to need tools for looking at that AI system and working out: What are its risks? What can it do? What can it not do? Where does it go wrong?” Toby Shevlane, a researcher at Google DeepMind, who is the lead author on the new paper, tells me.

In the paper, the researchers called for testing to be conducted both by the companies and labs developing advanced A.I. and by outside, independent auditors and risk assessors. “There are a number of benefits to having external [evaluators] perform the evaluation in addition to the internal staff,” Shevlane says, citing accountability and the vetting of safety claims made by the model creators. The researchers suggested that while internal safety processes might be sufficient to govern the training of powerful A.I. models, regulators, other labs, and the scientific community as a whole should be informed of the results of these internal risk assessments. Then, before a model can be set loose in the world, external experts and auditors should have a role in assessing and testing the model for safety, with the results also reported to a regulatory agency, other labs, and the broader scientific community. Finally, once a model has been deployed, there should be continued monitoring of the model, with a system for flagging and reporting worrying incidents, similar to the system currently used to spot “adverse events” with medicines that have been approved for use.
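
The reporting flow the researchers propose can be sketched in code. This is an illustrative outline only: the three stages and their audiences paraphrase the description above, while the data structure, names, and lookup function are hypothetical, not drawn from the paper itself.

```python
# Hypothetical sketch of the proposed evaluation-and-reporting lifecycle:
# internal evals during training, external audits before deployment, and
# continuous incident monitoring afterward. All names are illustrative.

REPORTING_FLOW = [
    {"stage": "training",
     "evaluated_by": "internal safety processes",
     "results_shared_with": ["regulators", "other labs", "scientific community"]},
    {"stage": "pre-deployment",
     "evaluated_by": "external experts and auditors",
     "results_shared_with": ["regulators", "other labs", "scientific community"]},
    {"stage": "post-deployment",
     # modeled on "adverse event" reporting for approved medicines
     "evaluated_by": "continuous monitoring and incident flagging",
     "results_shared_with": ["regulators"]},
]

def audiences_for(stage: str) -> list:
    """Return who should see evaluation results at a given stage."""
    for step in REPORTING_FLOW:
        if step["stage"] == stage:
            return step["results_shared_with"]
    raise ValueError(f"unknown stage: {stage}")
```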

The researchers identified nine A.I. capabilities that could pose significant risks and for which models should be assessed. Several of these, such as the ability to conduct cyberattacks and to deceive people into believing false information or into thinking they are interacting with a person rather than a machine, are basically already present in today’s large language models. Today’s models also have some nascent capabilities in other areas the researchers identified as concerning, such as the ability to persuade and manipulate people into taking specific actions and the ability to engage in long-term planning, including setting sub-goals. Other dangerous capabilities the researchers highlighted include the ability to plan and execute political strategies, the ability to gain access to weapons, and the capacity to build other A.I. systems. Finally, they warned of A.I. systems that might develop situational awareness—including possibly understanding when they are being tested, allowing them to deceive evaluators—and the capacity to self-perpetuate and self-replicate.
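
The nine risk areas read naturally as a screening checklist. As a hedged sketch (the capability names paraphrase the list above; the evaluator callable, scores, and threshold are entirely hypothetical and not specified by the paper):

```python
# Screen a model against the nine capability areas before release.
# The evaluate callable and the 0.5 threshold are illustrative
# placeholders, not anything prescribed in the paper.

DANGEROUS_CAPABILITIES = [
    "cyberattacks",
    "deception",
    "persuasion and manipulation",
    "long-term planning and sub-goal setting",
    "political strategy",
    "weapons acquisition",
    "building other A.I. systems",
    "situational awareness",
    "self-perpetuation and self-replication",
]

def screen_model(evaluate, threshold: float = 0.5) -> dict:
    """Score each capability and flag any at or above the threshold."""
    scores = {cap: evaluate(cap) for cap in DANGEROUS_CAPABILITIES}
    flagged = {cap: s for cap, s in scores.items() if s >= threshold}
    return {"scores": scores, "flagged": flagged, "safe_to_deploy": not flagged}

# Usage with a stub evaluator that rates every capability as low risk:
result = screen_model(lambda capability: 0.1)
```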

The researchers said those training and testing powerful A.I. systems should take careful security measures, including possibly training and testing the models in isolated environments where they have no ability to interact with wider computer networks, or where their access to other software tools can be carefully monitored and controlled. The paper also said that labs should develop ways to rapidly cut off a model’s access to networks and shut it down should it start to exhibit worrying behavior.
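
Those containment measures amount to a sandbox policy plus a shutdown trigger. A minimal sketch, with the field names and helper function invented for illustration rather than taken from the paper:

```python
# Hypothetical sandbox policy for training and testing a powerful model.

SANDBOX_POLICY = {
    "network_access": False,      # isolate the model from wider networks
    "tool_access": "monitored",   # software-tool use only under observation
    "kill_switch_armed": True,    # a rapid cutoff-and-shutdown path exists
}

def should_shut_down(policy: dict, worrying_behavior: bool) -> bool:
    """Trigger the rapid-shutdown path when worrying behavior appears."""
    return bool(policy.get("kill_switch_armed")) and worrying_behavior
```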

In many ways, the paper is less interesting for these specifics than for what its mere existence says about the communication and coordination between cutting-edge A.I. labs regarding shared standards for the responsible development of the technology. Competitive pressures are making the sharing of information on the models these tech companies are releasing increasingly fraught. (OpenAI famously refused to publish even basic information about GPT-4 for what it said were largely competitive reasons, and Google has also said it will be less open going forward about exactly how it builds its cutting-edge A.I. models.) In this environment, it is good to see that tech companies are still willing to come together and try to develop some shared standards on A.I. safety. How easy it will be for such coordination to continue, absent a government-sponsored process, remains to be seen. Existing laws may also make it more difficult. In a white paper released earlier this week, Google’s president of global affairs, Kent Walker, called for a provision that would give tech companies safe harbor to discuss A.I. safety measures without falling afoul of antitrust laws. That is probably a sensible measure.

Of course, the most sensible thing might be for the companies to follow the protestors’ advice, and abandon efforts to develop more powerful A.I. systems until we actually understand enough about how to control them to be sure they can be developed safely. But having a shared framework for thinking about extreme risks and some standard safety protocols is better than continuing to race headlong into the future without those things.

With that, here are a few more items of A.I. news from the past week:

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

A.I. IN THE NEWS

OpenAI’s Altman threatens to pull out of Europe, then pulls back that threat. The OpenAI CEO told reporters in London that the company would pull out of Europe if it could not find a way to comply with the European Union’s new A.I. Act. The draft of the act, which is currently approaching finalization, includes a requirement that those developing general-purpose foundation models, such as OpenAI, comply with other European laws, such as the bloc’s strict data privacy rules. It also requires them to list any copyrighted material they’ve used in training A.I. models. Both requirements may be difficult for OpenAI and other tech companies to meet given the way large A.I. models are currently developed. But today Altman said on Twitter that OpenAI is “excited to continue to operate here and of course have no plans to leave.”

White House announces new A.I. roadmap, calls for public comment on A.I. safety, offers advice for educators. The Biden administration on Tuesday rolled out new efforts focused on A.I., including an updated federal roadmap for A.I. research and development. It also released a Department of Education report on the risks and opportunities the fast-moving technology presents for education. The White House also issued a request for public input on "how to manage A.I. risks and harness A.I. opportunities." Individuals and organizations are asked to submit comments by July 7. You can read more from the White House press release here.

Adobe adds generative A.I. capabilities to Photoshop. Adobe is introducing its A.I.-powered image generator Firefly into Photoshop, enabling users to edit photos more quickly and easily, CNN reported. The tool allows users to add or remove elements from images using a simple text prompt, while automatically matching the lighting and style of the existing image. Firefly was trained on Adobe's own stock images and publicly available assets, which the company hopes will help it avoid copyright issues faced by other A.I. image generator tools that use online content without licensing.

India becomes the latest country to plan A.I. regulation. India's IT minister said that the country’s new Digital India Bill will include regulations on A.I. as well as online content, tech publication The Register reported. The bill, which is set to be introduced in June, will address concerns such as users harmed by A.I. and the moderation of "fake news" on social media. The bill is likely to face opposition both domestically and from Big Tech companies and international lobby groups.

A.I. used to find new antibiotic to treat superbug bacteria. Scientists from McMaster University and MIT used A.I. to discover a new antibiotic called Abaucin, which can effectively kill the deadly bacteria Acinetobacter baumannii, The Guardian reports. Often found in hospitals and care homes, Acinetobacter baumannii is among the pathogens that are called superbugs because they have evolved resistance to most existing antibiotics. The researchers used an A.I. algorithm to screen thousands of known antibacterial molecules and find structural features that correlated strongly with the ability to kill bacteria. They then screened thousands of chemicals with unknown antibacterial properties against this model to get predictions of which ones were likely to be effective. The results pointed them to Abaucin. This breakthrough offers promising prospects for combating drug-resistant bacteria.

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication’s coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
