The legal premise of Elon Musk’s OpenAI lawsuit is weak. But the questions it raises are not

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

Elon Musk has sued OpenAI, alleging it has broken promises made to him when the organization was founded.
Grzegorz Wajda—SOPA Images/LightRocket via Getty Images

Hello and welcome to Eye on AI.

The most interesting AI news of the past several days was undoubtedly Elon Musk’s lawsuit against OpenAI. Musk contends that OpenAI—and specifically Sam Altman and Greg Brockman, who cofounded the organization with Musk—violated its founding agreement and charter.

A central thrust of Musk’s suit seems self-evident. OpenAI was founded as a nonprofit lab that pledged to keep superpowerful “artificial general intelligence”—defined as AI software that could perform most economically valuable cognitive tasks as well as or better than a person—out of corporate hands. Any AGI it created was supposed to be “for the good of humanity.” OpenAI initially committed to publishing all of its research and open-sourcing all of its AI models. And for a while, it did exactly that.

Fast forward to today: OpenAI operates a for-profit arm valued at $80 billion in a recent funding round and is largely in the orbit of and highly dependent on a single giant tech corporation, Microsoft. It no longer publishes critical details of its most powerful models or gives them away for free. Instead, these models are available only to paying customers through a closed API. That OpenAI no longer resembles anything like the organization it was set up to be seems indisputable.

But whether Musk can successfully turn what amounts to a charge of hypocrisy into a winning court case is an entirely different matter. Remember, frustrated with what he saw as OpenAI’s inability to catch up with Google’s DeepMind, Musk proposed in 2018 that he bring OpenAI under his own direct personal control. When the rest of OpenAI’s nonprofit board and the OpenAI staff refused to go along with that plan, Musk resigned from the board and reneged on a pledge to deliver $1 billion in funding to the nonprofit lab. That withdrawal of support prompted Altman to seek commercial backers for the lab.

Most of the changes Musk objects to were instituted by Altman and approved by OpenAI’s board after Musk departed. Update: After this newsletter initially went to press, OpenAI released a blog post containing a number of emails it said had been sent between Musk and Altman, Brockman, and OpenAI co-founder and chief scientist Ilya Sutskever between 2015 and 2018. The emails seemed to indicate that Musk was not only aware of, but had approved, OpenAI’s plan to build commercial products to help raise more money to pursue its mission of building AGI “in the interests of humanity.” One email thread also seemed to indicate that Musk endorsed the idea that the lab might have to become increasingly secretive about its work, both for commercial reasons and to prevent malicious actors from using its powerful AI for nefarious or unsafe purposes.

Musk is now trying to claim that a loose set of discussions he had with Altman and Brockman when first setting up OpenAI—many of which are apparently not fully documented—constituted a “Founding Agreement” that should have taken precedence over the later decisions of OpenAI’s leadership and board.

As many legal scholars have pointed out, this is a highly unusual case, and likely a weak one. It’s not at all clear the court will agree that the discussions Musk had with Altman and Brockman, which his lawsuit argues constituted a binding contract, should actually be interpreted as such. Andrew Stoltmann, a securities lawyer and adjunct professor at Northwestern University law school, told Bloomberg that these kinds of discussions are called “illusory promises” and generally are not legally enforceable. Noah Feldman, a Harvard University legal scholar who has advised OpenAI rival Anthropic, told the New York Times the supposed contract contains “a hole you can drive a truck through” and that much of the language in OpenAI’s charter is also vague enough that OpenAI can easily argue it is adhering to it. What’s more, if there is no contract, then Musk probably doesn’t have standing to sue and his case could well be thrown out of court. Even if OpenAI’s board has violated its own charter, in many states only the state attorney general can bring a legal action over such a matter.

If this is such a weak case, why bring it at all? Well, Musk loves to pick a fight and he has a history of getting entangled in contentious lawsuits, and sometimes prevailing. As one of the world’s richest people, he can afford to roll the legal dice. And the billionaire has made no secret that he feels betrayed by Altman. Musk is also running a rival AI startup, xAI, and has a rival chatbot to ChatGPT, Grok. So if his lawsuit can take out a rival, or at least distract them and cost them some cash that they otherwise might be spending to out-compete him, why not?

It’s also likely he’s hoping that the case won’t get thrown out of court quickly and will proceed to discovery. That process could surface all kinds of emails and text messages from Altman and Brockman that would likely enter the public record and could prove embarrassing to the OpenAI execs. That hope, in fact, may be the entire point of the lawsuit.

Another interesting aspect of Musk’s suit is his claim that OpenAI’s GPT-4 model is itself AGI. Under the terms of OpenAI’s charter, OpenAI’s nonprofit board has the sole discretion to determine when AGI has been achieved. But Musk claims the board has failed in its duty to do so. Also, any system constituting AGI is not supposed to be commercialized by Microsoft under the terms of OpenAI’s strategic partnership with the tech giant. But Musk contends that OpenAI has given Microsoft AGI by sharing GPT-4 with it.

Few people agree with Musk’s contention that GPT-4 is AGI. I certainly don’t. But the suit does perhaps helpfully focus attention on how fraught and ill-defined a concept AGI is. Scientists can’t agree on what human intelligence is. So defining artificial general intelligence is tricky. In a recent paper, DeepMind researchers tried to present AGI as not a single thing, but a spectrum of capabilities, and argued that there might be “levels of AGI” depending on how good an AI system is at each of these different capabilities.

That said, OpenAI’s charter does offer a specific definition: software that can do most economically valuable cognitive tasks as well as people. Yet even this raises more questions than it answers: What constitutes “most”? Which people are we talking about? An average person, or an expert in a particular field? And by what benchmark do we judge whether the AI can match humans at a particular cognitive task? Right now, GPT-4 seems to score better than most human test takers on a number of professional exams, such as the bar exam and medical licensing exams, as well as on tough software coding challenges.

But while these tests are designed to assess professional knowledge, it’s pretty clear they are imperfect proxies. A lawyer can pass the bar and still not be a great lawyer. GPT-4 can pass a medical licensing exam, and yet a doctor who scored less well might be much better at diagnosing and treating patients. This brings us to the “economically valuable” part of OpenAI’s AGI definition. Right now, it’s clear that GPT-4 can assist a lot of knowledge workers with many tasks. But it cannot really do the entire job of most workers.

At the same time, it’s also evident that even today’s most powerful AI software performs a lot worse than the average human at many critical tasks. One of the most important of these is the ability to tell fact from fiction. The visual understanding of the most powerful AI models is also still much weaker than that of most humans. Today’s AI systems don’t seem to have a firm grasp of physics, despite training on vast video libraries. They struggle to sort causation from correlation and to understand compositionality—roughly, how a whole relates to its parts, and how those parts combine to give the whole a particular meaning. Children tend to grasp most of these things much better than today’s most advanced AI.

If the only outcome of Musk’s lawsuit is to force us towards a better definition of intelligence and AGI, and better benchmarks for assessing both, it may well have been worth it.

There’s plenty more AI news to discuss below.

But first, do you want to learn more about how your company can harness the power of generative AI to supercharge your workforce and your bottom line, while also navigating regulation and avoiding the technology’s many pitfalls? Of course you do! So come and join me and a fantastic lineup of thinkers and doers from the worlds of technology, big business, government, entertainment, and more at Fortune’s first-ever Brainstorm AI conference in London on April 15 and 16. Our confirmed speakers include investor and entrepreneur Ian Hogarth, who also chairs the U.K. AI Safety Institute; Jaime Teevan, the chief scientist and technical fellow at Microsoft; Zoubin Ghahramani, vice president of research at Google DeepMind; Sachin Dev Duggal, founder of Builder.ai; Paula Goldman, the chief ethical and humane use officer at Salesforce; Balbir Bakshi, the chief risk officer at the London Stock Exchange; Connor Leahy, the founder and CEO of Conjecture; and many more. You can register your interest in attending here: brainstormAI@fortune.com (and if you mention you are an Eye on AI reader, you may qualify for a discount).

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction: A news item in last week’s edition (Feb. 27) misidentified Nat Friedman as GitHub’s CEO. He is the former CEO.

AI IN THE NEWS

Anthropic unveils new family of powerful AI models. The San Francisco AI startup, founded by researchers who broke away from OpenAI, debuted its latest set of AI models on Monday. The Claude 3 family of models is, like Google’s Gemini and OpenAI’s GPT-4, multimodal, meaning the models can take in images as well as text. The models come in three sizes, called Haiku, Sonnet, and Opus, with Opus being the most powerful but also the most expensive. According to benchmark tests Anthropic published in the blog post announcing the new models, Opus performed better than GPT-4 or Gemini 1.0 Ultra at coding, math, and tests of graduate-level reasoning. The models have a 200,000-token context window, meaning they can ingest an entire book’s worth of data at once. And, according to Anthropic’s blog post, they are fast, with near-instantaneous responses in some use cases, such as customer support chats. Amazon, which has formed a strategic partnership with Anthropic and pledged up to $4 billion to the startup, immediately made the Claude 3 Sonnet model, which is the middle tier in terms of capability and cost, generally available to customers on its AWS cloud platform. This helps put AWS back in the generative AI game, matching capabilities that Microsoft has been offering its Azure cloud customers with GPT-4, and that Google Cloud has offered through Gemini, in particular Gemini 1.0 Pro.
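For readers who want to kick the tires, here is a minimal sketch of what calling Claude 3 Sonnet through the AWS Bedrock runtime API looks like in Python with the boto3 library. It assumes Bedrock model access has already been enabled in your AWS account; the region and exact model identifier shown are assumptions and may differ from what AWS exposes to you.

    # Minimal sketch: querying Claude 3 Sonnet via AWS Bedrock with boto3.
    # Assumptions: Bedrock access is enabled on your account, and the region and
    # model ID below match what AWS offers you (they may differ).
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    request_body = {
        "anthropic_version": "bedrock-2023-05-31",  # Anthropic's message-API version on Bedrock
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": "In one sentence, what is a context window?"}
        ],
    }

    response = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed Claude 3 Sonnet model ID
        body=json.dumps(request_body),
    )

    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])  # the model's text reply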

Microsoft asks court to dismiss portions of New York Times lawsuit. Microsoft filed its response to the newspaper’s copyright infringement lawsuit against it and its partner OpenAI earlier this week. According to the New York Times, the tech giant argued that chatbots such as ChatGPT did not pose a commercial threat to the Times’ news reporting or business model. The company also cited a famous court ruling in a case in which movie studios had sued Sony over its Betamax VCR, arguing that the product helped people violate copyright. In that case, which went all the way to the Supreme Court, the justices ruled that VCR manufacturers were not liable for contributory copyright infringement since the VCR had legitimate uses besides violating copyright. But the Times’ lawyer told the newspaper that the VCR precedent shouldn’t apply, since copying movies was not a key part of manufacturing a VCR, while today’s large language models depend on making copies of the Times’ content as part of the training process. OpenAI made similar arguments in its own response to the newspaper’s lawsuit, which it filed last week.

Snowflake invests in Mistral, signs multiyear partnership. Cloud data platform Snowflake has announced a partnership with French open-source AI company Mistral, VentureBeat reports. Snowflake plans to offer customers Mistral’s LLMs alongside Snowflake Cortex, its own fully managed LLM and vector database service. Snowflake’s venture capital arm is making an investment in Mistral, but the amount of that investment was not disclosed. Snowflake is also making an investment in Andrew Ng’s computer vision AI company, Landing AI.

Chinese local governments are offering compute vouchers to GPU-poor startups. With U.S. export restrictions and high worldwide demand combining to severely restrict Chinese companies’ access to the graphics processing units (GPUs) needed to train and run AI software, at least 17 Chinese city governments have begun offering local startups vouchers to help them purchase GPU time in AI data centers, the Financial Times reported. The paper said the vouchers were worth $140,000 to $280,000. It said that China’s biggest internet companies, such as Alibaba, Tencent, and ByteDance, had gobbled up much of the country’s supply of Nvidia GPUs, leaving smaller AI companies desperate to get a hold of the computing resources they need. It also said the Chinese national government was considering subsidizing AI companies to use domestically made chips and also was considering standing up state-run data centers to help alleviate the supply crunch.

India asks AI companies to seek approval before debuting new models. India has mandated that tech firms obtain government approval before releasing AI models that could be deemed “unreliable,” Reuters reports. The advisory, issued by India’s IT ministry, came after Google angered top officials when its Gemini chatbot produced responses implying that Indian Prime Minister Narendra Modi was implementing “fascist” policies. The advisory also told AI companies to ensure their software does not compromise the integrity of India’s elections. The country will vote in national elections this spring. The advisory comes amid a global wave of similar AI regulation.

EYE ON AI RESEARCH

Video games created on the fly. That’s what a new AI model from Google DeepMind researchers called Genie can do. The model can take a short text description of a game, a photo, or even a small sketch of what a game screen should look like and then generate a simple two-dimensional, arcade-style game from that input. What makes Genie different from previous models that could also generate video games is that Genie was trained only on video footage—about 30,000 hours’ worth—of people playing video games. The data was unlabeled and was not paired in any way with the gamers’ control actions on a keyboard or joystick, MIT Tech Review explains. What’s also remarkable is that Genie generates each frame of the video game on the fly, as the user plays the game. While this is slow—Tech Review notes that the frame rate of a Genie game is only one frame per second, compared to 30 frames per second for most video games—it opens up the possibility of creating endless simulated worlds on the fly. This could have important uses for simulating real-world scenarios and then using those simulations to train AI agents to perform actions in the real world. Currently, the games Genie generates are simple. But given the pace of AI progress, they may not stay that way for long.

FORTUNE ON AI

Sergey Brin, who ‘kind of came out of retirement’ to work on AI, says Google ‘definitely messed up’ with Gemini’s racial image generation problem —by Marco Quiroz-Gutierrez

Marc Andreessen says OpenAI is the ‘security equivalent of swiss cheese’ and a tempting target for Chinese espionage —by Christiaan Hetzner

All the tech layoffs are because AI is like ‘corporate Ozempic’—it trims the fat and you keep the fact you’re using it a secret, says marketing guru Scott Galloway —by Paolo Confino

Investing in the AI founder —by John Kell

AI CALENDAR

March 11-15: SXSW artificial intelligence track in Austin

March 18-21: Nvidia GTC AI conference in San Jose, Calif.

April 15-16: Fortune Brainstorm AI London (Register here.)

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

EYE ON AI NUMBERS

88.5%

That’s the percentage of human coders GPT-4 could best in a hacking competition, according to researchers from New York University. The researchers tested the OpenAI model, along with others from Anthropic, Google, and Mistral, on challenges used in real “capture the flag” hacking contests. GPT-4 scored particularly well compared to the others. It’s an indication that LLMs and chatbots can potentially help hackers attack networks in the real world, a development cybersecurity experts have been sounding the alarm about. You can read the research paper on arxiv.org here.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.