Apple-watchers will have noted the company’s recent progression from “We’ve deployed machine learning for years and are too sui generis to talk about ‘AI’” to “We have a lot of AI stuff coming soon, honest!” But with Apple being way behind in the generative AI race it has now deigned to join, how will it catch up?
According to a Bloomberg report this morning, the answer lies in partnerships—at least, for what the publication calls the “heavy lifting of generative AI,” as opposed to the on-device AI action that would presumably/hopefully power a smarter Siri. It seems Apple has considered using OpenAI’s models for image generation and the like, but “active negotiations” with Gemini proprietor Google are underway.
Of course, Apple and Google have already had a close partnership for nearly two decades, centered on making Google the default search engine in Safari. That bond is now fraying under pressure from antitrust regulators in the U.S. (where the Justice Department has sued Google for paying Apple and others billions of dollars to make it the search default) and in Europe (where the new Digital Markets Act has forced Apple to prompt iOS Safari users to choose which search engine they want as their default).
As Bloomberg notes, a new AI deal could “help make up for” the falling value of Google and Apple’s search agreement—but it could also increase antitrust scrutiny for both companies. After all, the smartphone operating system market is essentially divided between the two companies, so having the same AI garnish on both Android and iOS could raise questions.
What if Apple opts for OpenAI instead? Here, Microsoft enters the equation. On the one hand, Microsoft’s patronage of OpenAI is also under antitrust scrutiny in both the U.S. and Europe, so an OpenAI-Apple deal could be quite useful in demonstrating that Sam Altman’s company is not effectively a Microsoft unit. But can you imagine if the deal were to extend to macOS? Then the whole desktop operating system market would be powered by the same large language model.
Perhaps a more durable solution would be for Apple to partner with an AI company like Anthropic or Mistral, where the potential antitrust pitfalls aren’t quite so predictable. Or maybe Apple can get its act together on developing its own models into a competitive option.
Incidentally, while we’re on the subject, there’s a new report out that says OpenAI’s models have developed prejudices against speakers of African American English, “exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement… Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death.”
Apple won’t be using AI for such life-changing (or life-ending) decisions, but still, my advice would be to tread really carefully when deciding which models to use, and how to use them. More news below.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
NEWSWORTHY
Meta drug probe. Federal prosecutors are investigating whether Meta’s social media platforms have been facilitating and profiting from the illicit drug trade, according to the Wall Street Journal, which reports that the Food and Drug Administration has been helping the investigation. The Facebook and Instagram parent says it works “to find and remove this content from our services.” Meanwhile, Turkey’s competition authority has hit Meta with an interim measure to stop it from sharing data between Instagram and the Insta-linked Threads microblogging platform—and is also fining Meta $148,000 daily for giving users an allegedly insufficient and opaque notification about that data-sharing.
SpaceX spy satellites. Elon Musk’s SpaceX is reportedly building a constellation of spy satellites for U.S. intelligence. According to Reuters, the deal between SpaceX’s Starshield unit and the National Reconnaissance Office involves hundreds of satellites and was signed back in 2021. In other Musk news, xAI has, as promised, open-sourced its Grok AI model, which you can find here—as The Verge reports, this version of Grok dates back to last October and hasn’t been fine-tuned for any particular purpose.
Lynch trial begins. A mere dozen years after HP took an $8.8 billion writedown on its 2011 purchase of the British enterprise software firm Autonomy, former Autonomy CEO Mike Lynch finally goes on trial today for what U.S. prosecutors describe as “the largest fraud” in Silicon Valley’s history. As the Financial Times reports, Lynch is accused of falsifying Autonomy’s accounts in the run-up to the deal; former Autonomy finance chief Stephen Chamberlain is also on trial, and former CFO Sushovan Hussain recently served years behind bars over the affair.
ON OUR FEED
“Given the novel nature of these technologies and commercial arrangements, we are not surprised that the FTC has expressed interest in this area. We do not believe that we have engaged in any unfair or deceptive trade practice.”
—Reddit comments on the Federal Trade Commission’s probe into its sale and licensing of user data for the training of Google’s AI models.
IN CASE YOU MISSED IT
Analysis: Amazon sellers say their businesses are facing an extinction event—they might not be wrong, by Jason Del Rey
What billionaire Frank McCourt saw online changed his life and sent him on a Big Tech crusade, by Paolo Confino
Apple’s $330 billion swoon since the New Year, vague AI plans make it a Coca-Cola-style value stock in the eyes of the Street, by Bloomberg
Tesla settles discrimination case with ex-employee who alleges Elon Musk’s EV maker allowed him to be subjected to racial epithets, by the Associated Press
Hertz’s electric vehicle and CEO about-face is the latest twist after a COVID bankruptcy filing and a deep relationship with Carl Icahn, by Steve Mollman
BEFORE YOU GO
India backs off on AI. India’s government is dropping its plan to force “significant” tech companies to ask for permission before releasing new AI models in the country. According to TechCrunch, the government is now only advising companies to tell users about the potential unreliability of under-tested models. The earlier advisory, from the start of this month, had caused outrage in the tech industry.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.