Big AI’s ‘reverse acqui-hire’ deals get more scrutiny in the U.K. and U.S.

Satya Nadella, chief executive officer of Microsoft Corp., speaks during the company event on AI technologies in Jakarta, Indonesia, on Tuesday, April 30, 2024.
Dimas Ardian—Bloomberg/Getty Images

When Microsoft dropped its observer seat on OpenAI’s board last week—and Apple abandoned reported plans to gain such a seat—the uptick in regulatory scrutiny around the AI sector was a clear culprit. And the scope of that scrutiny just widened further.

The U.K.’s antitrust regulator, the Competition and Markets Authority, gave notice yesterday that it’s in the early stages of probing Microsoft’s March hiring of key staff from AI startup Inflection, a deal that also brought the startup $650 million in licensing fees.

The hired crew included Inflection cofounder Mustafa Suleyman, who now runs Microsoft’s in-house AI efforts. This seemed to be a case of Microsoft trying to avoid becoming over-reliant on OpenAI, into which it has invested $13 billion for a profit share. But authorities also want to check that the nature of the deal (a so-called reverse acqui-hire, as a more traditional acqui-hire would involve buying the company) wasn’t a tactic to sidestep antitrust rules that might be more clearly triggered by a straightforward acquisition.

The CMA’s initial probe formally begins today, and the watchdog will decide by Sept. 11 whether to press on with a proper merger investigation.

Meanwhile, Reuters reports that the U.S. Federal Trade Commission—which has already been nosing around the Microsoft-Inflection deal since early June—is also asking questions about Amazon’s deal with AI startup Adept late last month.

This was a very similar arrangement, with CEO David Luan and other key Adept players joining Amazon, which paid Adept to license its technology. One might also see a parallel between Microsoft’s earlier OpenAI investment and Amazon’s earlier $4 billion investment in OpenAI rival Anthropic.

Apart from the difference in dynamics stemming from the fact that OpenAI is so far a much bigger name than Anthropic, there does seem to be a playbook here, and it’s no surprise that U.S. and U.K. (and EU) regulators would like to know if rules are being skirted. U.S. lawmakers are certainly upset about the trend. “A few companies control a major portion of the market, and just concentrate—rather than on innovation—trying to buy out everybody else’s talent,” complained Sen. Ron Wyden (D-Ore.) last week.

Incidentally, there are also a couple more interesting news tidbits out there about AI regulation.

First, King Charles III announced today that the U.K.’s new Labour government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”—a big shift from the previous Conservative government’s hands-off approach.

Second, with the EU now having officially published its own AI Act—its first rules will begin to apply from February next year—European privacy regulators said yesterday that they would be the right ones to enforce the new law in many cases. “I strongly believe that [data protection authorities] are suitable for this role because of their full independence and deep understanding of the risks of AI for fundamental rights, based on their existing experience,” said Irene Loizidou Nicolaidou, deputy chair of the watchdogs’ umbrella body, the European Data Protection Board, in a statement.

Of course, AI has been a big theme at Fortune’s Brainstorm Tech conference in Park City, Utah, this week, as my colleague Jeremy Kahn wrote yesterday. I was particularly intrigued by Jeremy’s chat with Google chief scientist Jeff Dean, who warned against people overplaying AI’s role in his company’s rising carbon emissions.

“There’s been a lot of focus on the increasing energy usage of AI, and from a very small base that usage is definitely increasing,” Dean said. “But I think people often conflate that with overall data center usage—of which AI is a very small portion right now but growing fast—and then attribute the growth rate of AI-based computing to the overall data center usage.”

A fair point, for now. But I think the criticisms of AI’s massive hunger for energy will remain valid until companies like Google and Microsoft can prove that rolling it out doesn’t mean deviating from their emissions-reduction goals.

A few more articles based on yesterday’s Brainstorm Tech action:

How trust and safety leaders at top tech companies are approaching the security threat of AI: ‘Trust but verify’

Why Grindr’s CEO believes ‘synthetic employees’ are about to unleash a brutal talent war for tech startups

Salesforce’s AI chief says the company uses its Einstein products internally: ‘We like to drink our own martinis’

How VCs from Alphabet’s CapitalG to Norwest are coping with a dead IPO landscape: ‘We’re not here to time the market’ 

Sequoia’s Roelof Botha says Silicon Valley’s legendary VC firm will not take a political point of view on the election

Rent the Runway cofounder Jennifer Fleiss on why cofounder relationships are critical for mental wellness in the startup game

Tech talent and killer powder: The recipe that startups say is fueling the rise of Utah’s Silicon Slopes

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

Trump knocks TSMC. Former and possibly future President Donald Trump said in remarks published yesterday that Taiwan should pay the U.S., which is “no different than an insurance company,” for defense. “They did take about 100% of our chip business,” he complained to Bloomberg Businessweek, adding that Taiwan was far away from the U.S. and very close to China, which sees it as a renegade province. The report on his stance knocked the shares of TSMC, the Taiwanese contract chipmaker that is crucial to the tech industry, down 5%.

TikTok antitrust trouble. TikTok operator ByteDance has lost its legal bid to avoid being designated as a “gatekeeper” under the EU’s new antitrust law, the Digital Markets Act, Reuters reports. ByteDance can still appeal the decision by the EU’s General Court, but it seems it will have to abide by rules around things like providing interoperability with other services, and not processing users’ data for targeted advertising without their express consent.

Anthropic + Menlo Ventures. Menlo Ventures has a new $100 million fund for AI startups that use Anthropic’s Claude AI models, Bloomberg reports. Anthropic, which counts Menlo among its investors, will provide model-use credits as well as networking opportunities. While it won’t get any stakes in the startups, it will of course promote use of its models. Side note: Android users finally have their own Claude app now.

SIGNIFICANT FIGURES

45%

—The injury rate suffered by Amazon’s warehouse workers during Prime Day 2019, according to a Sen. Bernie Sanders-penned Senate committee report. “Prime Day and the holiday season are characterized by extremely high volume and intense pressure to work long hours and ignore safety guidelines,” the report states. Amazon responded to the allegations by saying “the safety and health of our employees is and always will be our top priority.”

IN CASE YOU MISSED IT

Exclusive: Google is backing a Danish startup ‘brewing’ CO2 that can clean up one of the most polluting industries in the world, by Prarthana Prakash

Andreessen Horowitz founders the latest to stake Trump, as tech money piles into his coffers, by Bloomberg

Elon Musk’s potential $180 million donation to Trump—who hates EVs—is a stunning risk to Tesla, by Eva Roytburg

Musk is moving SpaceX and X to Texas, by Bloomberg

Nvidia’s market cap will soar to $50 trillion—yes, trillion—says early investor in Amazon and Tesla, by Sasha Rogelberg

Cybersecurity giant Kaspersky to shutter all US operations after the government banned its software nationwide, by the Associated Press

To get a discount from this mattress company, you have to negotiate with its AI, by Marco Quiroz-Gutierrez

BEFORE YOU GO

Bitcoin’s not-inventor could face charges. Craig Wright tried to convince the world he was Bitcoin’s pseudonymous inventor Satoshi Nakamoto, but crypto firms took him to court in the U.K. and he lost resoundingly earlier this year. Now, the judge who ruled that Wright “extensively and repeatedly” lied to his court, by producing bogus documents and testimony, said he’s asking the Crown Prosecution Service to figure out if the Australian computer scientist should face criminal perjury and forgery charges. “In advancing his false claim to be Satoshi through multiple legal actions, Dr Wright committed ‘a most serious abuse’ of the process of the courts of the U.K., Norway and the USA,” wrote James Mellor in a ruling yesterday, according to the Guardian.

This is the web version of Fortune Tech, a daily newsletter breaking down the biggest players and stories shaping the future. Sign up to get it delivered free to your inbox.