The Paris AI Action Summit was a fork in the road—but whether the chosen path leads to prosperity or disaster remains unclear

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

French President Emmanuel Macron and his wife Brigitte Macron welcome U.S. Vice President JD Vance and his wife Usha Vance during the AI Action Summit in Paris.
Chesnot—Getty Images

Bonjour! Greetings from Paris, where the French government is currently hosting government officials from dozens of nations for what it is calling the AI Action Summit. The Summit is the successor to two prior international gatherings, the first convened by the U.K. government and held at Bletchley Park, in England, in November 2023, and the second held by the South Korean government in Seoul in May 2024.

But it would be hard to overstate the difference in vibe between those previous two meetings and this one. The Bletchley Summit was a decidedly sober affair, with just 29 governments represented, along with top executives from the handful of AI labs at the cutting edge of the technology, such as OpenAI, Google DeepMind, and Anthropic. The conversation was dominated by what some would call AI “doomerism”—or how to head off the most catastrophic risks from powerful AI. It led to a commitment by the countries present to identify AI risks and work together to head them off. Then in Seoul, 16 leading AI companies agreed to publish frameworks for how they would seek to identify and mitigate AI safety risks, and under what circumstances they might decide not to develop models.

An extreme vibe shift

For this Summit, France has taken, shall we say, a different approach. Matt Clifford, a tech investor turned U.K. government advisor who helped plan the Bletchley Summit, said on a panel hosted by the Tony Blair Institute here on Sunday that it “was exciting to see what [the French summit] team have done, in blowing it up.”

He positioned the remark as a compliment: France has widened the aperture of the summit to take in AI’s other potential risks—around bias, inequality, and job displacement—but most importantly to highlight AI’s economic opportunities. France transformed what began as a sober, safety-focused summit series into what could best be described as an AI festival, complete with glitzy corporate side events and even a late-night dance party held amid the opulent tapestries and neo-baroque gilded mouldings of the French foreign ministry at the Quai d’Orsay. That rumbling you can barely make out beneath the thumping bass line? That would be the cognitive dissonance between the party atmosphere in Paris, along with French President Emmanuel Macron’s repeated exhortations to move “faster and faster” on AI deployment, and the fact that executives at leading AI companies are predicting human-level intelligence may arrive in two to five years—with far-ranging, disruptive consequences for society and workers everywhere.

Blowing it up

For those who care about AI’s potential catastrophic risks, an alternate meaning of Clifford’s “blowing it up” comes to mind. Once the main focus of the summit, AI safety was relegated to a small subset of discussions within a broader “Trust in AI” pillar, which was itself just one of five separate summit tracks. The word “safety” was banished from the Summit’s name in favor of the term Action—and Anne Bouverot, Macron’s special envoy for the Summit, dismissed concerns about AI’s potential existential risks as “science fiction” in her opening address. (This despite mounting empirical evidence that today’s AI models, if used as agents that carry out actions on a user’s behalf, can indeed pose a risk of loss of control—with models seeking to achieve human-assigned goals but doing so in ways the human user never intended.) Safety didn’t make an appearance in the Summit’s final communiqué either. Nor did the final declaration include any clear path forward for future international meetings to work specifically on AI risks. (India, which co-hosted the Paris Summit, said it would host the next summit in its own country, but without any promises of what it would focus on.)

The Paris Summit bitterly disappointed many who work on AI safety. Max Tegmark, the MIT physicist who is the founder and president of the Future of Life Institute, called the Summit “a tremendous missed opportunity” and the declaration’s omission of any safety steps “a recipe for disaster.” Tegmark, in an earlier interview with Fortune, said he still held out hope that world leaders would come to recognize that uncontrollable human-level AI would pose a risk to their own power, and that when they recognized this fact, they would move to regulate it. Some AI safety experts think the effort to create international agreements to address AI’s risks will have to shift to a different forum. (There are other efforts underway at the United Nations, OECD, and G7.) More than one AI safety expert told me at the Summit that it may now take some sort of “freak-out moment”—when increasingly powerful AI agents cause some sort of harm, or perhaps just demonstrate how easily they could cause harm—to actually get progress on international AI governance. Some predicted that such a moment could come in the next year as more and more companies roll out AI agents and AI model capabilities continue to advance.

The DEI Declaration

While not mentioning “safety,” the Summit’s final declaration did include some vague language about the need to ensure AI’s “diversity,” and lots of talk about “inclusive” and “sustainable” AI. The use of these trigger terms guaranteed that the Trump Administration—which sent Vice President J.D. Vance to be the official U.S. representative to the Summit—wouldn’t sign the meeting’s final declaration. This might not have been Macron’s intention, but it did allow him to credibly claim France was leading “a third way” on AI between the two opposing camps that have been leading the technology’s development, the U.S. and China. (China did sign the statement.)

And largely because the U.S. wouldn’t sign, the U.K. also decided against signing—apparently to avoid any risk of antagonizing the Trump Administration—although 61 other countries did sign. (Top execs from Google, OpenAI, and Anthropic were all present, but only one company, Hugging Face, the AI model repository and open-source AI champion, signed.) Anthropic released a statement from its CEO Dario Amodei in which he hinted at disappointment that the Summit hadn’t done more to address the looming risks of human-level artificial general intelligence. “Greater focus and urgency is needed,” Amodei said, “given the pace at which the technology is progressing. The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit.”

The Summit did create a new foundation with a $400 million endowment (and a target of $2.5 billion within five years), devoted to funding projects aimed at creating datasets and small AI models designed to serve the public interest. It also created a Coalition on Sustainable AI that includes Nvidia, IBM, and SAP, as well as French energy giant EDF, but without any clear targets or road map for what the organization will do going forward, leaving climate campaigners disappointed. Union leaders also decried the lack of concrete steps to make sure workers have a clear seat at the table for discussions of AI policy. And the creation of these new organizations was eclipsed by big announcements on AI investment: Macron’s own reveal of a 109 billion euro plan for AI investments in France and the European Union’s unveiling of a 200 billion euro plan to speed AI adoption in European industry. 

Vance Makes Trump’s AI Policy Clear

Elon Musk’s close ties to U.S. President Donald Trump and Trump’s occasional comments about AI’s potential dangers had left some in doubt about exactly where the Trump Administration would come down on AI regulation. Vance laid those doubts to rest, giving a red-meat speech in which he said U.S. AI policy would be built on four pillars: maintaining U.S. AI technology as “the gold standard”; a belief that excessive regulation could kill innovation and that “pro-growth” AI policies are required; that AI must “remain free from ideological bias, and that American AI will not be co-opted into a tool for authoritarian censorship”; and that workers will be consulted on AI policy and that the Trump Administration will “maintain a pro-worker growth path for AI,” in the belief that AI will create more jobs than it displaces. With Google CEO Sundar Pichai sitting uncomfortably on stage behind him, and OpenAI CEO Sam Altman and Anthropic’s Amodei in the audience, Vance also warned that companies calling for AI regulation were attempting to engage in regulatory capture, enshrining rules that would lock in their advantage to the detriment of competitors.

At a time when many companies have been rushing to deploy Chinese startup DeepSeek’s R1 reasoning model, Vance also used his speech to caution the countries present against partnering with Chinese companies—although he did not mention China by name. “From CCTV to 5G equipment, we’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” he said. “As some of us in this room have learned from experience, partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure.”

Chinese researchers present at the conference, meanwhile, bemoaned the emerging new cold war between Washington and Beijing, saying that it made the whole world less safe. “It’s difficult to hold a very optimistic view about cooperation between China and the U.S. on AI safety in the future,” Xiao Qian, vice dean of the AI International Governance Institute at Tsinghua University, told the audience at a side event on AI safety in Paris, my Fortune colleague Vivienne Walt reported.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

AI IN THE NEWS

Musk makes unsolicited $97 billion offer for OpenAI. Elon Musk and a group of investors have made a $97.4 billion bid to acquire the nonprofit controlling OpenAI, The Wall Street Journal reported. The offer complicates OpenAI CEO Sam Altman’s plans to transition OpenAI into a for-profit company, as well as his plans to secure $40 billion in fresh venture capital funding in a deal that could value the AI company at as much as $300 billion. If the conversion to a for-profit does not go through, OpenAI may also have to refund $6.6 billion it raised from investors last year. The move intensifies Musk’s ongoing conflict with Altman, who said OpenAI’s board would reject Musk’s offer, while Musk’s investor group said it remains committed to securing control over OpenAI’s assets.

Ilya Sutskever’s startup could be valued at $20 billion in new funding deal. OpenAI cofounder and former chief scientist Ilya Sutskever’s new AI startup, Safe Superintelligence Inc. (SSI), is in discussions to raise funding at a valuation of at least $20 billion, Reuters reported. That would be a big bump up from the $5 billion valuation it secured in September 2024, when it raised $1 billion from investors including Sequoia Capital and Andreessen Horowitz. SSI aims to develop advanced AI systems aligned with human interests. Although it has yet to release a product or even an AI model, the company has attracted substantial investor interest, largely due to Sutskever’s reputation as an AI researcher.

European defense AI company Helsing and French AI darling Mistral team up. Helsing, a European defense technology company that has built AI systems focused on sensor fusion and intelligence analysis, has announced a strategic partnership with Mistral, the French AI company that has become a national champion for its home country’s AI potential. Their collaboration will focus on creating Vision-Language-Action models that will enable defense platforms to comprehend their environment, communicate naturally with soldiers, and facilitate faster, more reliable decision-making in complex scenarios, Helsing said in a press release.

Leading chatbots distort and mislead on current events, BBC study finds. The British broadcaster has found that leading AI chatbots often produce distorted information about news events. More than half of the AI-generated answers provided by OpenAI’s ChatGPT, Microsoft Copilot, Google’s Gemini, and Perplexity were judged to have “significant issues,” according to expert BBC journalists who fact-checked them. About 20% of the answers had errors about numbers, dates or statements. About 13% of quotes sourced to the BBC were either altered or did not exist in the original source articles.

EYE ON AI RESEARCH

New benchmarks on AI’s sustainability impact. One of the more interesting initiatives to come out of the Paris AI Action Summit is an attempt to create a system for rating the environmental impact of AI models. The benchmarking and rating effort is being backed by Salesforce, Hugging Face, AI company Cohere, and researchers at Carnegie Mellon University. Called the AI Energy Score, it will provide standardized ratings of between 1 and 5 stars for how models perform on a variety of different tasks, similar to how household appliances are graded on their energy efficiency. The initial testing looked at 166 AI models. Some of the best so far for text generation include Microsoft’s small language model Phi, as well as Google’s small model Gemma. Salesforce’s small models also got high marks. Meanwhile, Meta’s popular Llama models scored a middling three stars. You can look at the rankings here and read more about the standard here.
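To make the idea of such a rating concrete, here is a minimal, hypothetical sketch of how an energy-to-stars mapping could work: models are ranked by measured energy use on a given task and assigned stars by quintile, with the most efficient fifth getting five stars. The function name, the quintile rule, and the sample figures are illustrative assumptions, not the AI Energy Score project’s actual methodology.

```python
def energy_stars(energy_by_model: dict[str, float]) -> dict[str, int]:
    """Map each model's measured energy use (e.g., Wh per 1,000 queries) to 1-5 stars."""
    # Rank models from most to least efficient (lowest energy first).
    ranked = sorted(energy_by_model, key=energy_by_model.get)
    n = len(ranked)
    stars = {}
    for i, model in enumerate(ranked):
        quintile = i * 5 // n        # 0 (most efficient fifth) .. 4 (least efficient fifth)
        stars[model] = 5 - quintile  # 5 stars for the most efficient quintile
    return stars


if __name__ == "__main__":
    # Made-up measurements for a single text-generation task, for illustration only.
    sample = {
        "model_a": 12.0,
        "model_b": 45.0,
        "model_c": 30.0,
        "model_d": 80.0,
        "model_e": 20.0,
    }
    print(energy_stars(sample))
    # {'model_a': 5, 'model_e': 4, 'model_c': 3, 'model_b': 2, 'model_d': 1}
```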

FORTUNE ON AI

Businesses using AI face a future with more regulation. Preparing for it now is critical, experts say—by Christian Vasquez

Workday debuts AI agents, with CEO saying they’ll ‘peacefully coexist’ with humans rather than replace them—by Sharon Goldman

AI’s English focus puts many countries at a disadvantage. A new EU project aims to fix that for 32 languages—by David Meyer

LinkedIn cofounder Reid Hoffman, Hugging Face CEO Clement Delangue sign open letter calling for AI ‘public goods’—by Jeremy Kahn

AI CALENDAR

March 3-6: MWC, Barcelona

March 7-15: SXSW, Austin

March 10-13: Human [X] conference, Las Vegas

March 17-20: Nvidia GTC, San Jose

April 9-11: Google Cloud Next, Las Vegas

May 6-7: Fortune Brainstorm AI London. Apply to attend here.

BRAIN FOOD

A new way to consult labor on AI and work? There was a fascinating exchange yesterday on a panel about AI and the future of work at the Paris AI Action Summit. Gilbert Houngbo, director-general of the International Labour Organization (ILO), and Christy Hoffman, general secretary of UNI Global Union, both said that they wanted workers “to have a seat at the table” when decisions were being made about how to deploy AI technologies. Both insisted they were not interested in blocking the use of AI—in fact, many workers would like to use AI in their work, and both said there were many ways in which AI might make workers’ lives better—but argued that organized labor had a role to play in ensuring AI software was used to augment humans, not replace them. They also said that workers were eager to be trained in how to use AI technology and that if AI changed the nature of roles, organized labor could help ease that transition.

This all sounds reasonable, but it received some interesting pushback from Guillaume Faury, the CEO of aerospace giant Airbus. Faury said that AI was moving so fast as a technology that it was very difficult to consult labor, because the current way of consulting labor was primarily through contract negotiations that happened only once every several years and then essentially locked many conditions in place until the next negotiation cycle. There was no way management could commit to particular practices or strategies around AI, he argued, because the technology was moving so fast that management itself had little insight into events more than a few months ahead. Asking management to bind itself to a particular AI deployment strategy in a contract it could then not touch for several years would be potentially suicidal for the business, he said. Management needed broad discretion to act at the speed of the technology.

This got me thinking that perhaps both sides had a point, and that new ideas for how labor and management can work together collaboratively, outside the contract process, would make sense. I’m just not sure what that mechanism would be. Perhaps AI can help?

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.