Fortune Brainstorm AI showed cautious optimism, but companies are growing skeptical about hyped-up promises

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

Amazon SVP and head of AI Rohit Prasad.
Stuart Isett/Fortune

Hello and welcome to Eye on AI! In today’s edition: OpenAI finally releases Sora and considers dropping the “AGI” clause in its Microsoft contract; Cerebras claims a reasoning breakthrough; and AI causes open source problems.

I’m writing from the St. Regis Hotel in San Francisco, where I am attending my first Brainstorm AI conference since joining Fortune in March. 

It’s an interesting moment in generative AI, which burst into public view with the launch of OpenAI’s ChatGPT in November 2022. The technology is no longer the shiny new thing sparkling with possibilities that had tech-watchers oohing and ahhing at 2023 conferences; 2024 has proven to be a more nuanced AI era. Companies using AI are moving beyond FOMO to ROI: they are working on actually getting AI projects into production and questioning how quickly the seeds of generative AI can realistically bear fruit. Meanwhile, the startups and Big Tech companies developing AI models, tools, and infrastructure are fielding hard questions about the progress they are making, the environmental impacts of their work, and whether the billions invested in generative AI will ultimately pay off.

I felt this vibe shift as I chatted with tech leaders at Brainstorm AI over coffee, lunch, and dinner. There is cautious optimism about the potential of generative AI, but a sense of realism reigns: The companies that have gone all-in on generative AI now need to see results, and they are becoming more skeptical of hyped-up promises and predictions.

That was reflected in the on-stage conversations with AI leaders, as well. Here are some of the key themes and issues covered on Day 1: 

Visa’s head of tech wants AI companies to focus less on pitch decks and more on code: As investors and adopters weather the wave of AI hype, Rajat Taneja, Visa’s head of technology, is asking founders to spend more time on their product than their pitch decks. He said that in a highly regulated sector, it’s key to cut through the buzz surrounding generative AI. His advice to emerging AI companies: “Move away from PowerPoint and go to code.” 

Stability AI’s new CEO, hired six months ago, says business is growing by “triple digits” and the company no longer carries debt: “We have now a clean balance sheet, no debt, nothing,” said Stability CEO Prem Akkaraju, who joined the company about six months ago. Once an AI industry darling as the creator of Stable Diffusion, a popular image generation tool, Stability AI had been roiled by chaotic management that drove investors away, led to the departure of its founding CEO, and left the company considering a sale.

Amazon’s top AI exec says industry concerns that LLMs have hit a “wall” are overblown, and says Jeff Bezos is “very involved” in AI efforts: Amazon’s head of AI brushed off concerns that AI foundation models have “hit a wall,” with new releases improving less over past versions than they once did. “Every time we come close to a wall, there’s a new dimension,” said Amazon SVP Rohit Prasad, who oversees the tech giant’s Artificial General Intelligence division. He was responding to a question about the current debate in AI circles over whether large language models are still improving as much as they used to, arguing that AI developers have repeatedly found ways to overcome technical barriers.

Health care executives are banking on AI to unburden doctors, make processes smoother, and save time: At Brainstorm AI, health care executives discussed the ways they expect the technology will transform the field. In their minds, this can mean freeing doctors from burdensome tasks, simplifying processes, and saving time, with the hope of reducing human error and potentially lowering costs for consumers. “Anything we’re doing with AI that makes our care and clinical professionals’ jobs easy is my favorite,” said Tilak Mandadi, executive vice president of ventures and chief digital, data, analytics and technology officer at CVS Health. Mandadi pointed to the company’s AI-based case preparation, which saves its health care professionals substantial time that they can then spend with patients.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

The rest of today’s Eye on AI was written by David Meyer.

AI IN THE NEWS

OpenAI finally releases Sora. Ten months after unveiling its Sora text-to-video generative AI tool, and a couple weeks after it was leaked by activist artists, OpenAI has officially released the thing—or rather, a version called Sora Turbo that is “significantly faster” than the version shown off in February. As OpenAI explains in a blog post, ChatGPT Pro users can now use the “standalone” product to generate full-HD videos that are up to 20 seconds long, based on text prompts or pre-existing images and videos (mere Plus users have to live with lower-res videos). The company acknowledges that Sora sometimes produces “unrealistic physics” and struggles with sustained complex actions, but claims releasing it now gives “society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.” The service is not available in Europe yet, perhaps because EU law would force OpenAI to publicly summarize Sora’s copyright-protected training material.

OpenAI looks to drop AGI provision. OpenAI’s deal with its biggest backer, Microsoft, would cut off Microsoft’s access to OpenAI’s technology once the smaller company achieves so-called “artificial general intelligence,” defined as a “highly autonomous system that outperforms humans at most economically valuable work.” Now OpenAI wants to scrap that provision, the Financial Times reports. CEO Sam Altman, who has breathlessly hyped AGI in the past, has recently been claiming it will emerge “sooner than most people in the world think and it will matter much less.” If OpenAI is planning to claim an AGI breakthrough anytime soon, dropping the AGI clause in its Microsoft contract may help ensure future funding from its deep-pocketed benefactor.

Cerebras claims reasoning breakthrough. Cerebras, the company that claims to have the world’s fastest AI inference chip, made a couple of big announcements at the NeurIPS conference today. Perhaps the most intriguing is the Cerebras Planning and Optimization (CePO) reasoning framework, which apparently allows Meta’s Llama models to “reason.” CEO Andrew Feldman told Fortune that CePO can make Llama’s 70 billion-parameter version perform better than its 405 billion-parameter version “and in many cases better than GPT-4,” based on benchmarks like MMLU-Pro (math) and GPQA (science and reasoning). “This is the only reasoning model that runs in real time,” he said. Cerebras also collaborated with Sandia National Laboratories to show it could train a trillion-parameter AI model on a single-chip system; training at this scale usually requires thousands of GPUs.

FORTUNE ON AI

Generative AI can’t shake its reliability problem. Some say ‘neurosymbolic AI’ is the answer —by David Meyer

Nvidia slapped with a Chinese antitrust probe as Beijing collects ‘bargaining chips’ ahead of Trump’s return —by Lionel Lim

OpenAI’s nightmare: What David Sacks as AI Czar (and Elon Musk as wingman) could mean for Sam Altman’s $157 billion startup —by Sharon Goldman

AI boom means Europe’s universities are becoming the new Harvard and Stanford for finding tech talent —by Ryan Hogg

Visa’s head of tech wants AI companies to focus less on pitch decks and more on code —by Jenn Brice

AI CALENDAR

Dec. 11-12: The AI Summit, New York 

Jan. 7-10: CES, Las Vegas

Jan. 16-18: DLD Conference, Munich

Jan. 20-25: World Economic Forum, Davos, Switzerland

March 3-6: MWC, Barcelona

March 7-15: SXSW, Austin

March 17-20: Nvidia GTC, San Jose

April 9-11: Google Cloud Next, Las Vegas

BRAIN FOOD

AI causes open source problems. There’s been plenty of lively debate about “open source” AI and its implications, but how about the implications of AI for open-source projects, which rely on the contributions of external bug hunters?

Seth Larson, who works on the security team for the Python Software Foundation, wrote a blog post last week decrying how people are using generative AI to send his team “extremely low-quality, spammy, and LLM-hallucinated security reports.” As is so often the way with AI-generated content, the reports look legit at first, meaning they waste time. “If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects. This is a very concerning trend,” Larson wrote.

As The Register notes, the Curl open-source data-transfer project is also having trouble with “AI slop” submitted by people an AI may have fooled into seeing bugs that aren’t there. Or those submitting the bugs might themselves be bots. Who knows these days?

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.