Who’s getting the better deal in Microsoft’s $10 billion tie-up with ChatGPT creator OpenAI?
Hello everyone! A belated Happy New Year! I’ve been off writing the next cover story of Fortune magazine. You can read it online here. Also look for it on newsstands (and in your mailboxes for print subscribers) soon. It’s on, you guessed it, OpenAI and ChatGPT.
There’s so much to unpack about ChatGPT—and about Microsoft’s expanded “multi-year, multi-billion dollar” partnership with OpenAI, which was formally announced by both companies yesterday.
But here’s a question I’ve been thinking about: There’s an old saying in business that if you don’t know who the fool is in a transaction, it’s you. (Actually, the old saying is usually expressed in far less polite language, but Eye on A.I. aims to be a family publication.) In the Microsoft-OpenAI deal, who is the fool?
The answer depends a lot on how valuable you think ChatGPT and the other generative A.I. technologies in OpenAI’s portfolio, such as the text-to-image generator DALL-E 2, really are—and on how close you think OpenAI is to achieving its stated mission of artificial general intelligence (AGI), which the company defines as autonomous systems capable of outperforming humans at most economically valuable work.
First, a few details about the deal: According to sources familiar with the deal who spoke to us and documents seen by Fortune, Microsoft is investing $10 billion in OpenAI and the transaction values the company at close to $29 billion. In exchange, Microsoft is getting the right to 75% of OpenAI’s profits until it earns back this $10 billion plus an additional $3 billion it has already invested in the company–$1 billion in 2019, and another $2 billion which it quietly put into OpenAI in 2021. After that, Microsoft will be entitled to a further 49% of OpenAI’s profits until it earns a profit of $92 billion. At that point, Microsoft’s shares in OpenAI will revert to OpenAI’s non-profit foundation.
This structure alone is highly unusual. And there’s a lot about this deal that we don’t know: For instance, how much cold, hard cash is Microsoft actually shipping to OpenAI? Microsoft said that as part of the partnership it is investing more in building A.I. supercomputing clusters in its Azure datacenters. It is possible that much of what OpenAI is actually getting out of the deal is the right to use these supercomputing clusters at little to no cost, and that the $10 billion is largely a “payment-in-kind” for computing resources. We also don’t know over how many years Microsoft is delivering this $10 billion to OpenAI. In addition, it’s unclear whether Microsoft has to pay OpenAI anything in licensing fees or royalties to use its technology across its suite of products. If Microsoft integrates ChatGPT into Bing, as it is reportedly planning, will OpenAI get a tiny cut of every search? I doubt this is how the deal works, but the fact is that we don’t know.
OK, with all that said, who is getting the better bargain here? The terms certainly seem extremely favorable to Microsoft. Even if it were paying the entire $10 billion all in cash and all in one year (which Microsoft seems to be indicating is not the case), this would represent just 15% of its $63 billion in free cash flow over the past 12 months. That’s not a lot of money to pay for a technology that is likely to deliver an array of advantages to Microsoft, not least the first real shot it’s ever had at undermining Google’s dominance in search.
A quick back-of-the-envelope calculation: Search generated $150 billion in revenue for Google in 2021, and the company has about 90% of the global search market, compared to just 3% for Microsoft’s Bing. By my calculations, if incorporating ChatGPT into Bing allows Microsoft to increase its market share to even just 10%, the additional revenue alone would more than cover the $10 billion Microsoft is investing in OpenAI as part of the expanded partnership. (Big caveat here: it’s not clear if a company can generate nearly as much revenue from a chat-based search interface, since returning a single, coherent answer in response to a question makes it far less likely that someone will click through on any links, such as the ads that Google includes alongside its search results.)
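For readers who want to check the arithmetic, here is a minimal sketch of that calculation in Python. The revenue and market-share figures come from the paragraph above; the 10% Bing share is purely the hypothetical scenario, not a forecast.

```python
# Back-of-the-envelope search market math, using the figures cited above.
google_search_revenue = 150e9   # Google's 2021 search revenue, USD
google_share = 0.90             # Google's approximate global search share
bing_share_now = 0.03           # Bing's current approximate share
bing_share_hypothetical = 0.10  # hypothetical share after a ChatGPT boost

# If $150B represents 90% of the market, the implied total market is ~$167B.
total_market = google_search_revenue / google_share

bing_revenue_now = total_market * bing_share_now
bing_revenue_later = total_market * bing_share_hypothetical
extra_revenue = bing_revenue_later - bing_revenue_now

print(f"Implied total search market: ${total_market / 1e9:.0f}B")
print(f"Extra annual Bing revenue at 10% share: ${extra_revenue / 1e9:.1f}B")
```

On these assumptions the extra annual revenue comes to roughly $11.7 billion, which is why even a modest share gain would cover the $10 billion investment.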
There are a lot of other benefits here for Microsoft too. Being able to offer OpenAI’s models to its Azure cloud customers makes Microsoft’s cloud potentially more attractive. Wall Street analysts estimate that Azure brought Microsoft about $35 billion in revenue last year. (Microsoft does not break out its cloud revenue or profits in its financial statements.) So even a small boost here could be worth about $1 billion in topline growth. Then there are all the potential advantages Microsoft will gain by incorporating OpenAI’s technology into everything from its Office software suite to its Xbox gaming console.
Now, some of these benefits might get eroded quickly by competition. Google looks poised to roll out its own advanced chatbot, LaMDA, and text-to-image generative A.I. systems. It is likely to experiment with an enhanced chat interface for its search engine too. Google’s sister company DeepMind also has a chatbot, called Sparrow, that it now says it plans to release in a beta test this year. It has also experimented with A.I. systems that can learn to perform a large number of different tasks, and which could form the basis of future digital assistants.
Meanwhile, a host of other companies are building their own large language models (LLMs), the technology on which ChatGPT is based. These include Cohere AI, Anthropic (which was formed by a breakaway team from OpenAI and now has a ChatGPT competitor out called Claude), Hugging Face, and Stability AI (which also helped create Stable Diffusion, a text-to-image generator that competes with OpenAI’s DALL-E 2 system). So Microsoft might not gain an edge for long, and whatever edge it does achieve might end up being worth less than I am estimating above. Still, $13 billion seems like a reasonable amount for a company of Microsoft’s size and wealth to pay for the chance to gain the benefits I’ve outlined here.
What’s more, Microsoft will essentially own OpenAI for what will likely be a long time, because of its right to a share of the company’s profits until OpenAI has handed over $105 billion to Microsoft (that’s the $13 billion Microsoft has invested to date plus the $92 billion in capped profits it’s entitled to, according to the outline of the deal Fortune has seen). Documents seen by Fortune show that last year OpenAI was projected to lose $544 million. And while OpenAI is also projecting its revenues will ramp up dramatically, from less than $30 million last year to more than $1 billion by 2024, there’s no clear indication of exactly when OpenAI will turn a profit, or how big that profit is likely to be. Remember, even mighty Apple only makes about $100 billion in net income per year right now.
Let’s say OpenAI can manage to turn profitable in 2024 and has a net margin of 35%—which is not unreasonable for a software business—then it would make $350 million in net income, of which Microsoft would be entitled to 75%, or about $262 million. My back-of-the-envelope calculation is that even assuming OpenAI can then double its net profit every year going forward (all of these seem to me like hugely optimistic assumptions), it will still be a decade until Microsoft is paid off and OpenAI becomes largely independent again.
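That payback timeline can be sketched as a short loop. This is purely illustrative, built on the deal terms reported above (a 75% profit share until Microsoft recoups its $13 billion, then 49% until a further $92 billion) and on the same optimistic assumptions: profitability in 2024 at a 35% margin on $1 billion of revenue, with profit doubling every year.

```python
# Sketch of Microsoft's payback timeline under the assumptions in the text.
target = 105e9        # $13B invested plus the $92B capped-profit entitlement
profit = 0.35 * 1e9   # 35% margin on $1B of projected 2024 revenue
paid = 0.0
year = 2024

while True:
    # 75% share until the first $13B is recouped, then 49% thereafter.
    share = 0.75 if paid < 13e9 else 0.49
    paid += share * profit
    if paid >= target:
        break  # Microsoft is fully paid off during this year
    profit *= 2        # assume net profit doubles every year
    year += 1

print(f"Microsoft paid off around {year}")
```

Even with profits doubling annually, the loop doesn’t cross $105 billion until the early 2030s, which is why I say Microsoft effectively owns OpenAI for about a decade.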
Now, you might ask: what is the downside to OpenAI of essentially being owned by Microsoft for most of the next decade? Isn’t Microsoft just giving OpenAI the computing resources it needs to do the exact same stuff it would be doing anyway? Doesn’t the partnership with Microsoft allow OpenAI to grow rapidly without having to invest much in a sales or marketing team? Well, yes—but maybe at a significant cost in other respects. Former OpenAI employees I spoke to have said that even Microsoft’s initial investment back in 2019 pushed the company to be more focused on creating commercial products. It helped cement a focus on large language models to the detriment of other research avenues. Now, that’s fine if you think LLMs are the path to AGI. But if they are not—and quite a lot of people think they are at best only part of what’s needed—then it’s quite possible OpenAI gets distracted trying to build products for Microsoft and misses out on the next big A.I. breakthrough. Microsoft might not really care if OpenAI ever achieves AGI, as long as what OpenAI produces is commercially useful to Microsoft. But for OpenAI, that outcome would be a failure.
The only way this ends up being a bad deal for Microsoft is if OpenAI is able to use Microsoft’s supercomputing infrastructure to actually achieve its goal of AGI, and do so ahead of any competitor. In that case, the benefits of the technology are so great that OpenAI might really be able to pay Microsoft $105 billion in profits quickly, leaving Microsoft with a tidy profit on its $13 billion investment, but without any ownership in what would be one of the most powerful technological advances in human history. In that scenario, Microsoft will have settled far too cheaply. Interestingly, this could also end up being a bad deal for Microsoft if one of OpenAI’s competitors, maybe one less focused on rolling out commercial products, gets to AGI first.
But, if I had to guess, I would think that in a decade’s time, Microsoft CEO Satya Nadella will wind up looking very smart for having cut this deal—and that we will still be waiting for AGI.
And with that, here’s the rest of this week’s A.I. news.
January 25: This story has been updated to include a link to Fortune’s February/March cover story on OpenAI and ChatGPT.
A.I. IN THE NEWS
Google plans to “recalibrate” risk tolerance as it seeks to counter the threat from Microsoft-OpenAI.
That’s according to a story in the New York Times, which obtained internal Google documents about the company’s planned response to OpenAI’s ChatGPT and OpenAI’s expanded relationship with Google’s rival Microsoft. The potential threat to Google’s search dominance has even led Google co-founders Larry Page and Sergey Brin to re-engage with Google’s business after several years of leaving most decisions to CEO Sundar Pichai, the paper reported. It said the company now plans to debut more than 20 A.I.-enabled products this year and also demonstrate a version of its search engine with chatbot features.
Kenyan contractors paid $2 an hour and exposed to psychological trauma building ChatGPT.
OpenAI contracted with a data labeling firm called Sama that hired contractors in the developing world to provide data used to ensure ChatGPT and other A.I. systems OpenAI was building weren’t trained on toxic language, pornography, and descriptions of graphic violence and child sexual abuse, a Time investigation found. While the $2 per hour that the labelers were paid is slightly more than the Kenyan minimum wage, several of the workers told Time they were traumatized by what they were exposed to on the job and were provided with inadequate mental health support. OpenAI said in a statement that, “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”
Getty sues Stability AI over copyright infringement.
Photo agency Getty Images has filed suit against Stability AI, the company that helped create Stable Diffusion, alleging it unlawfully scraped copyrighted images from Getty’s website to train the popular text-to-image generator. Tech publication The Register says that Getty has, in the past, licensed its images to software companies that want to use them in A.I. training datasets. But Stability did not pay for such a license. Stability did not immediately comment on the lawsuit. The company is already facing a separate class action lawsuit from artists who claim Stability also violated their copyright in training Stable Diffusion.
Madison Square Garden uses facial recognition to bar lawyers and critics from venues.
MSG Entertainment, the company that owns the Manhattan sports and concert venue, as well as Radio City Music Hall and The Beacon Theater, has begun using facial recognition technology to bar anyone who works for a law firm that is involved in litigation against the company from attending events at the venues. The ban has produced lawsuits from some of those excluded, resulting in a partial victory for at least one lawyer, with a judge ruling that if he had a valid ticket, he could still attend some events. Now New York lawmakers are considering enacting legislation that would prevent companies from using facial recognition technology in this way, The Associated Press reported.
EYE ON A.I. TALENT
OpenAI has hired Shane Gu to be a researcher on the reinforcement learning team that helped create ChatGPT, Gu tweeted. He had been a researcher at Google Brain previously.
Fraser Kelton, the head of product at OpenAI, announced on Twitter that he is leaving the company to become a blogger and investor in other A.I. companies.
EYE ON A.I. RESEARCH
Which is the best chatbot? Engineers at the data labeling colossus Scale AI conducted an experiment to see how ChatGPT compared to Claude, a new chatbot that A.I. research company Anthropic debuted recently. They found that Claude held up well against ChatGPT and that Claude “feels not only safer but more fun than ChatGPT,” with writing that struck them as more natural than many of ChatGPT’s responses. But they also found that Claude’s ability to write computer code was not as good as ChatGPT’s, and that Claude was no better than ChatGPT at tasks such as calculation and reasoning—which is to say, not all that good. You can read the full blog post from the Scale team here.
FORTUNE ON A.I.
What will ChatGPT do to the value of expertise? There have been a bunch of headlines recently about ChatGPT being able to get a passing grade on MBA exams, bar exams, and even medical licensing board exams. All of which probably says a lot more about how poorly these exams actually test what a person needs to know in order to do those various jobs well than it does about how much of an imminent threat A.I. poses to those professions. But Vinod Khosla, the billionaire Sun Microsystems co-founder turned venture capitalist who is also an early investor in OpenAI, recently suggested that the advent of ChatGPT and systems like it would be the death knell of experts. In the future, he argued, experience would count for little. Instead, what would matter would be the ability to ask the most creative and insightful questions, which could then be answered by an all-knowing chatbot like ChatGPT.
But I have to say, I’m skeptical. And so are others. Industry analyst Benedict Evans said on Twitter: “Machine learning doesn’t automate experts - it gives you infinite interns. That probably applies to generative models as well.” And MIT professor David Autor, a leading researcher on the impact of technological change on the economy, told The Atlantic’s Annie Lowrey in a story this past week, “In many ways, AI will help people use expertise better. It means that we’ll specialize more.”
I think Evans and Autor are correct. A.I. will wind up doing a lot of the relatively simple stuff. But we’ll still need human experts to handle more specialized and complicated cases. In fact, I would argue that the advent of A.I. might increase the value of elite levels of expertise—while at the same time cheapening or even eliminating the value of lower-level expertise. For instance, if you want to beat a traffic ticket, it might be ok to use a chatbot lawyer. But if you are facing a murder charge, you’re still going to want the best human defense counsel you can find. It might be that this same defense lawyer now uses an A.I. “copilot” to do a portion of her job, or to handle some lower-level, less serious cases, but she is liable to be able to charge even more for her hours in court.
One issue this presents, though, is how humans will acquire the experience needed to become experts in their fields. In most human endeavors, people learn by doing—but if A.I. handles a lot of routine work, there may be far fewer low-stakes situations in which people entering a field can build up that experience. (Maybe we will need more simulators and game environments for training to compensate.)
What do you think?