7 key takeaways from Fortune’s Brainstorm AI conference in London

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

Hugging Face cofounder and chief scientist Thomas Wolf at Fortune Brainstorm AI London.
FORTUNE

Hello and welcome to Eye on AI…In this edition, Trump fires the head of the U.S. Copyright Office days after it releases a report suggesting tech companies may not be able to claim the use of copyrighted works for AI training without consent is “fair use”…Saudi Arabia launches a new AI company…don’t let AI run your vending machines…but it could help run your democracy.

I spent last week at Fortune Brainstorm AI London. The two-day conference brought together an impressive set of executives from companies big and small, from the U.S., U.K., and Europe, as well as British lawmakers, civil society leaders, and academics, to talk about AI and its impact on business and society. I want to share some key takeaways.

Regulation and innovation are not in opposition. Regulation, in fact, can speed AI adoption, because companies and consumers buying AI products gain assurance about how the technology is governed and know where liability lies if anything goes wrong. That was a key message from a panel I moderated on governing AI that included Lord Tim Clement-Jones and Lord Chris Holmes, both members of the U.K. House of Lords who have introduced private members’ bills to try to put some mandated guardrails around how AI is deployed in the U.K.

Standards can be a path forward on international AI governance. The Trump administration’s anti-regulation stance, and its threat to punish other countries that enact laws it perceives as unfairly targeting U.S. technology companies, combined with U.S.-China geopolitical tensions, make nation-state-level agreement on international governance for AI difficult. But Lord Clement-Jones suggested that industry standard-setting around AI—particularly by the businesses using AI software, as opposed to just the AI companies developing it—could accomplish much of what one would want in an international governance regime, without necessarily requiring government-level conventions.

But standards need to be technical, not just policy-oriented. That was the message from Navrina Singh, the founder and CEO of Credo AI, which helps companies implement compliance mechanisms around responsible AI. She faulted current standards, such as ISO 42001—a voluntary certification for AI systems—as being too focused on high-level policies and not enough on technical requirements and testing methodologies.

AI competition is rapidly shifting to the application layer. Highly capable base AI models are increasingly commoditized, veteran tech analyst Benedict Evans said during his presentation. To gain an advantage, AI companies will need to build better applications on top of those models for specific use cases, create deeper integrations with other software, and pay more attention to the design of the interfaces through which users access AI software.

We all need better, more individualized benchmarks. I moderated a fireside chat with Thomas Wolf, the cofounder and chief scientist at Hugging Face, the company best known for offering a large repository of open-source AI models. Wolf noted that so many AI models now score at the top of public benchmarks that it is difficult to tell which is actually superior for a given use case. Plus, for a variety of reasons, these tests are increasingly poor proxies for how the models will perform on real-world tasks. Wolf said companies purchasing AI systems need to build their own individualized benchmarks for the specific tasks they are seeking to automate.
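In practice, the kind of task-specific benchmark Wolf describes can be very simple: a set of labeled examples drawn from the actual task you want to automate, scored against the model's outputs. Here is a minimal, illustrative sketch in Python; `call_model` is a hypothetical stand-in for whatever model API a company actually uses, and the example cases are invented.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g., via an API client).
    # This toy version just keyword-matches so the sketch is runnable.
    return "REFUND" if "refund" in prompt.lower() else "OTHER"

# Labeled cases drawn from the specific task you want to automate:
# (input the model will see, the answer you expect)
cases = [
    ("Customer asks for a refund on a damaged item", "REFUND"),
    ("Customer wants to change a delivery address", "OTHER"),
]

# Score the model on *your* cases, not a public leaderboard.
correct = sum(call_model(prompt) == label for prompt, label in cases)
print(f"accuracy: {correct}/{len(cases)}")
```

The point is less the harness than the data: a few dozen well-chosen examples from a company's own workflow can reveal differences between models that saturated public benchmarks no longer show.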

Embodied AI is making rapid advances and represents the next big frontier for AI. Wolf explained why Hugging Face had recently acquired humanoid robotics company Pollen Robotics. He said it was partly because of what AI has done for robotics lately—making it far easier to interact with robots in natural language and creating “foundation models for robotics” that make it far easier to get robots to do useful things without laborious mapping and programming. But he also said the acquisition was about what he thinks robotics will do for AI—helping to solve remaining grand challenges around “world models” (AI models that understand the physics of the world and have some grasp of cause and effect), common sense reasoning, and maybe even self-awareness.

Wolf was not the only speaker who was enthusiastic about embodied AI. Alex Kendall, the cofounder and CEO of Wayve, which makes software for self-driving cars, also said that world models were starting to enable robots that could take safe actions even when encountering situations they hadn’t seen during training. Meanwhile, Tye Brady, the chief technology officer of Amazon Robotics, showed off the capabilities of Amazon’s new Vulcan robot (unveiled last week), which has the ability to reason about how best to pick up an object from a storage bin through touch and feel, not just computer vision.

Employers need to focus on bringing workers along. A theme of several discussions, both on the mainstage and in breakout sessions, was how companies can make their workforces “AI ready.” The key, many said, was to frame the introduction of AI around what it would do to empower workers and make their professional lives better, and not just as a productivity tool. “If you’re not investing in skills development and you’re not invested in learning, I think you are going to leave your workforce behind,” Karalee Close, global talent and organization lead at IT services and management consulting firm Accenture, said.

Jason Warner, the cofounder and CEO of Poolside, which makes AI tools for software engineering, talked about using AI to mentor early-career software developers. The AI could help them get up to speed on a company’s code base, tutor them in the tradeoffs that had gone into building existing software, and assist them in writing new code. Implemented correctly, he suggested, such tools are something workers will readily embrace.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

If you missed Fortune Brainstorm AI London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore? You can learn more about that event here.

AI IN THE NEWS

Trump fires head of U.S. Copyright Office days after it releases report critical of AI companies’ use of copyrighted material. The Trump administration fired Shira Perlmutter, the Register of Copyrights and head of the U.S. Copyright Office. Her dismissal came just days after that office took the unusual step of releasing a pre-publication report on AI and copyright suggesting that tech companies training AI models on copyrighted material without consent would, in many cases, be unlikely to qualify for “fair use.” Although no reason was given for Perlmutter’s firing, Rep. Joe Morelle, the senior Democrat on the Committee on House Administration, suggested on social media that the timing was not a coincidence. You can read more from Fortune’s Bea Nolan here and Politico here.

Microsoft and OpenAI renegotiating their partnership. That’s according to the Financial Times, which says the renegotiation of the partnership has held up OpenAI’s plans to reconfigure its corporate structure. OpenAI wants to convert from being a non-profit foundation that controls a for-profit subsidiary, in which investors are given a right to a share of profits up to a certain cap, into a public benefit corporation that would issue traditional equity. The restructuring is seen as essential for allowing OpenAI, currently valued at $260 billion, to continue to raise additional capital, perhaps through an eventual initial public offering. Microsoft, according to the FT, which cited multiple unnamed sources it said were familiar with the negotiations, has been offering to surrender some equity in exchange for the right to continued access to OpenAI’s technology beyond 2030, when the current technology-sharing agreement between the two companies sunsets.

Saudi Arabia launches state-backed AI company Humain. Saudi Arabia has launched a new AI company called Humain, chaired by the country’s Crown Prince Mohammed bin Salman and backed by its $940 billion sovereign wealth fund, the Public Investment Fund. The company aims to develop Arabic-language AI models and also build AI infrastructure. Humain could be viewed as an effort by Saudi Arabia to catch up with its regional rival, the United Arab Emirates, which has launched two similar companies with global ambitions, G42 and MGX. The company’s launch came the day before U.S. President Donald Trump visited the Kingdom accompanied by a number of prominent tech executives, including Elon Musk, OpenAI’s Sam Altman, and Amazon’s Andy Jassy. Read more from the Financial Times here.

Perplexity reportedly close to raising $500 million at a $14 billion valuation. The startup, which aims to rival Google by offering genAI search tools, is close to finalizing a new venture capital investment that would value it at $14 billion, Bloomberg News reports. The $500 million funding round is being led by venture capital firm Accel.

Police are using AI software that tracks people’s body shape and clothes to get around bans on facial recognition. That’s according to a story in MIT Tech Review. According to the article, U.S. police departments are increasingly using an AI tool called Track from software company Veritone that can identify people in recorded video based on their body shape and clothing, as a way to get around state-level bans on law enforcement use of facial recognition software. But, just as with facial recognition tools, Track raises civil liberties concerns, according to the American Civil Liberties Union.

EYE ON AI RESEARCH

Lessons on running AI agents from a vending machine simulator. Hat tip here to The Atlantic’s Nick Thompson who highlighted this research paper—which is actually from February—on his daily LinkedIn vlog earlier this week. The founders of Andon Labs, a little outfit that develops tests to evaluate advanced AI models, tested a bunch of AI models on how well they could run a simulated vending machine business. They then compared the results with human testers. The results hold lessons for any company hoping to deploy AI agents.

Two of the AI models—Anthropic’s Claude 3.5 Sonnet and OpenAI’s o3-mini—performed better than human testers on average. In fact, Claude 3.5 Sonnet produced, on average, more than twice the profit that the humans did. But all of the AI models performed with much higher variance than people did. All of the AI models sometimes made terrible errors or got caught in strange loops from which they could not recover. The worst performance of even the best model, Claude 3.5 Sonnet, was less than 25% of its average performance. Humans, on the other hand, were far more reliable, even if they were also less profitable. What’s more, the reasons the models went awry had nothing to do with how much context or information they had. In fact, sometimes giving the models more instructions and information seemed to degrade their performance, the researchers found.
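The mean-versus-variance tradeoff here is easy to miss, so a tiny sketch may help. The numbers below are hypothetical, not Andon Labs' data; they just illustrate how an agent can beat humans on average while its worst run collapses to a fraction of its mean, the pattern the study describes.

```python
from statistics import mean

# Hypothetical per-run profits: the AI agent wins on average,
# but one run collapses badly; the humans are steady but modest.
ai_profits    = [520, 480, 610, 90, 550]
human_profits = [210, 230, 190, 220, 205]

for name, runs in [("AI agent", ai_profits), ("humans", human_profits)]:
    avg, worst = mean(runs), min(runs)
    # worst/mean shows downside risk that the average alone hides
    print(f"{name}: mean={avg:.0f}, worst={worst}, worst/mean={worst/avg:.0%}")
```

With these invented numbers the AI's worst run is 20% of its mean while the humans' is around 90%, which is why a higher average alone is a poor basis for removing the human from the loop.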

The research suggests these models are probably not yet suitable to use for fully automating tasks (i.e. with no human in the loop) for anything where failure has significant consequences. You can read Andon Labs’ full paper here.

FORTUNE ON AI

British Airways used AI to cut flight delays, but long waits still outpace pre-pandemic levels —by Prarthana Prakash

An Apple exec suggested AI chatbots were eroding Google’s search business, sending Alphabet shares plummeting. The truth is more complicated —by Jeremy Kahn

CEOs say that just a fraction of AI initiatives are actually delivering the return on investment they expected —by Sara Braun

Digital marketing used to be about clicks, but the rise of ChatGPT means it’s ‘now all about winning the mentions’ —by Stuart Dyos

Mastercard exec says AI agents helping you make your next purchase could be key to solving online shopping’s $750 million fraud problem —by Marco Quiroz-Gutierrez

AI CALENDAR

May 19-22: Microsoft Build, Seattle

May 20-21: Google IO, Mountain View, Calif.

May 20-23: Computex, Taipei

June 9-13: WWDC, Cupertino, Calif.

July 13-19: International Conference on Machine Learning (ICML), Vancouver

July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

BRAIN FOOD

An experiment in AI-mediated democracy. A lot gets written about how corrosive AI is to democracy. But there are ways AI can enhance democracy, including by helping to run what are essentially caucuses—sometimes called citizens’ assemblies—at scale. A big trial of this kind of digital democracy, called Waves, is happening in the U.K., led by the nonprofit public policy organization and think tank Demos and backed by €1 million in funding from Google. It is working with two local government councils—the London Borough of Camden, which is looking at how it handles adult social care, and South Staffordshire, near Birmingham, which is looking to create a new master plan for local development. You can learn more about the project on Demos’ site here.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.