Microsoft, freed from relying on OpenAI, joins the race for ‘superintelligence’—and AI chief Mustafa Suleyman wants to ensure it serves humanity

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

Microsoft AI CEO Mustafa Suleyman. Stephen Brashear/Getty Images

When Mustafa Suleyman joined Microsoft in March 2024 to lead the company’s new consumer AI unit—home to products like Copilot—there were clear limits to what he could do.

Because of Microsoft’s landmark deal with OpenAI, the company was barred from pursuing its own AGI research. The agreement even capped how large a model Microsoft could train, restricting the company from building systems beyond a certain computing threshold. (This limit was measured in FLOPs, or floating-point operations: the total number of mathematical calculations performed while training a model, which serves as a rough approximation of the cumulative computing power used to build it.)
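To get a rough sense of what a cap on cumulative training compute means in practice, researchers often use the back-of-the-envelope heuristic from the scaling-laws literature that training a dense model costs about six FLOPs per parameter per training token. The sketch below uses that rule of thumb with purely illustrative numbers; neither the heuristic nor the figures come from the Microsoft-OpenAI agreement itself.

```python
# Back-of-the-envelope training-compute estimate, using the common
# scaling-laws heuristic C ≈ 6 * N * D (FLOPs ≈ 6 x parameters x tokens).
# The model size and token count below are hypothetical, for illustration
# only; they are not Microsoft's or OpenAI's actual numbers.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total floating-point operations to train a dense model."""
    return 6 * params * tokens

# Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens.
n_params = 70e9
n_tokens = 2e12
flops = training_flops(n_params, n_tokens)
print(f"~{flops:.2e} FLOPs")  # prints: ~8.40e+23 FLOPs

# A contractual compute cap like the one described would set a ceiling on
# this cumulative total, constraining how large a model could be trained.
```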

“For a company of our scale, that’s a big limitation,” Suleyman told Fortune.           

That’s all changing now: Suleyman announced the formation of the new MAI Superintelligence Team on Thursday. Led by Suleyman and part of the broader Microsoft AI business, the team will work towards “Humanist Superintelligence (HSI),” which Suleyman defined in a blog post as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally.”

Microsoft is just the latest company to rebrand its advanced AI efforts as a drive towards “superintelligence”—the idea of artificial intelligence systems that would potentially be smarter than all of humanity combined. But for now, it’s better marketing than science: no such systems currently exist, and scientists debate whether superintelligence is even achievable with current AI methods.

That has not stopped companies, however, from announcing superintelligence as a goal and setting up teams branded accordingly. Most notably, Meta rebranded its AI efforts as Meta Superintelligence Labs in June 2025. OpenAI CEO Sam Altman has written that his company has already figured out how to build artificial general intelligence, or AGI—the idea of an AI system as capable as an individual human at most cognitive tasks—and that, even though it has yet to release an AI model meeting that initial goal, it has begun to look beyond AGI to superintelligence.

Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist, cofounded an AI startup called Safe Superintelligence that is also dedicated to creating this hypothetical superpowerful AI and making sure it remains controllable. He had previously led a similar effort within OpenAI. AI company Anthropic also has a team dedicated to researching how to control a hypothetical future superintelligence.

Microsoft’s framing of its own new superintelligence drive as “humanist superintelligence” is a deliberate effort to contrast it with the more technological goals of rivals like OpenAI and Meta. “We reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavor to improve our lives and future prospects,” Suleyman wrote in the blog post. “We also reject binaries of boom and doom; we’re in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right.”

For the last year or so, Microsoft AI has been on a journey to establish an AI “self-sufficiency effort,” Suleyman told Fortune, while also seeking to extend its OpenAI partnership through 2030 so that it continues to get early access to OpenAI’s best models and IP.

Now, he explained, “we have a best-of-both environment, where we’re free to pursue our own superintelligence and also work closely with them.”

That new self-sufficiency has required significant investments in AI chips for the team to train its models, though Suleyman declined to comment on the size of the team’s GPU stash. But most of all, he said, the effort is about “making sure we have a culture in the team that is focused on developing the absolute frontier [of AI research].” It will take several years before the company is fully on that path, he acknowledged, but he said it’s a “key priority” for Microsoft.

Karén Simonyan will serve as the chief scientist of the new Humanist Superintelligence team. Simonyan joined Microsoft in the same March 2024 deal that brought Suleyman and a number of other key researchers from Inflection, the AI startup Suleyman cofounded, to the company. The team also includes several researchers that Microsoft had already poached from Google DeepMind, Meta, OpenAI, and Anthropic.

The new superintelligence effort, with its focus on keeping humanity at the forefront, does not mean the company won’t be innovating quickly, Suleyman insisted, even as he admitted that developing a “humanist” superintelligence would always involve being cautious about capabilities that are “not ready for prime time.”

When asked how his views align with those of AI leaders in the Trump Administration, such as AI and crypto ‘czar’ David Sacks, who are pushing for no-holds-barred AI acceleration and less regulation, Suleyman said that, in many ways, Sacks is correct.

“David’s totally right, we should accelerate, it’s critical for America, it’s critical for the West in general,” he said. However, he added, AI developers can push the envelope while also understanding potential risks like misinformation, social manipulation, and autonomous systems that act outside of human intent.

“We should be going as fast as possible within the constraints of making sure it doesn’t harm us,” he said.