It’s ‘Fast and Furious’ meets ‘Groundhog Day’ in AI

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    Sundar Pichai, the CEO of Alphabet, speaking on stage at The New York Times' Dealbook conference. This week Alphabet's Google rolled out a new Gemini 2.0 model and a host of other new AI products. But OpenAI is also rolling out new AI products at a furious pace.
    Michael M. Santiago—Getty Images

    Hello and welcome to Eye on AI. In today’s edition…Google drops Gemini 2.0 and much more; Character.ai gets hit with another lawsuit; Apple launches its ChatGPT integration; and a new study links AI-enhanced mammograms to better breast cancer detection.

    Google yesterday lit up the tech news with a slew of new AI-related releases and announcements. Among them are Gemini 2.0 (the company’s newest flagship model), a lower latency Flash model, and its sixth-generation AI chip. There’s also Deep Research, a tool that lets Gemini scour the internet and write detailed reports (similar to the new Corpora.ai tool I covered last week), and Jules, an AI-coding assistant. In the world of AI agents, Google said it’s testing AI agents based on Gemini 2.0 that can help players with video games. It also introduced Project Mariner, a prototype AI agent that can control Google Chrome.

    On one hand, that’s a deluge of AI product launches. On another, it’s an increasingly typical day in our new AI world where the pace of product rollouts can only be described as “fast and furious.” OpenAI is in the midst of 12 days of product launches and demos, and all the smaller players are releasing models, agent technologies, and other similar AI products just about every day. Aside from just the constant pace of releases, it’s starting to feel a little like Groundhog Day (the movie) in terms of the offerings—the models and products being released are all very similar, with little differentiation emerging in the market.

    Breaking out from the crowd 

    According to Google, Gemini 2.0 has slightly improved multimodal capabilities, offers stronger coding assistance, and can take actions across the web. It continues the trend of incremental improvement in AI development—not to mention chasing the dream of the AI agent that can take over tasks for us humans. Every AI developer these days is claiming “agentic” capabilities. Anthropic recently released a similar model that can control browsers, OpenAI is reportedly gearing up to release one early next year, and of course Meta is working on AI agents, too. All year, AI companies zeroed in on enterprise AI tools. And just last week, Google, OpenAI, and Amazon all released video-generating models within two days. 

    The lockstep pace in the arenas of foundational models, media generation, and personal chatbots isn’t necessarily a bad thing, but it does raise the question of how AI developers will differentiate their offerings and win over customers. How will users evaluate all the options? And how will they ever keep up amid all the new releases? Will OpenAI’s first-to-market advantage propel it in the long run, or will Google’s integrations with so much of the software people already use prove decisive? Ads for AI products are already everywhere—will this marketing sway users? 

    Impactful, niche models don’t capture consumer market share

    Of the main players in AI, Google DeepMind has differentiated itself with the pursuit of niche, scientific models like AlphaFold, which has revolutionized the understanding of proteins and can boost drug discovery, and GenCast, its advanced weather model, an improved version of which it released last week. With clear and specific use cases, these types of models could be key to proving generative AI isn’t hype. The problem is that models used by select scientists don’t help cement a company’s brand in the public imagination or lead to significant consumer market share or revenue.

    But niche models can have big, important impacts—as an item below in today’s newsletter about AI-enhanced mammograms shows. Just a few months ago, AlphaFold earned its creators the Nobel Prize. 

    It’s been a year of constant product rollouts in AI, and yet it’s taking a while for people to digest all this tech. We’ll see what next year brings—and if more people start to see real, tangible benefits from the use of these tools.

    And with that, here’s more AI news. 

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    Apple is partnering with Broadcom to develop its first AI server chip. Codenamed Baltra, the chip is expected to be ready for mass production by 2026, sources familiar with the efforts told The Information. Developing its own chips for AI could bolster the company’s Apple Intelligence and future AI offerings—and put pressure on Nvidia’s dominance over the AI chip market. In related news, Apple yesterday launched its ChatGPT integration with Siri and rolled out Apple Intelligence outside the U.S., including for users in Canada, Australia, New Zealand, Ireland, the U.K., and South Africa.

    Another lawsuit against Google-backed Character.ai alleges the company’s AI chatbots encourage kids to commit violence and self-harm. Filed this week by two Texas parents, the federal product liability lawsuit describes how the AI chatbots exposed a nine-year-old to “hypersexualized content” and told a 17-year-old self-harm “felt good.” When the teen complained about parents putting limits on screen time, the chatbot sympathized with children murdering their parents over the issue: "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.' I just have no hope for your parents," the bot allegedly wrote. The suit follows another recently filed against the company by a mother whose 14-year-old son killed himself while talking to a Character.ai chatbot after developing a monthslong relationship with it. You can read more from NPR.

    TCL releases the first commercial AI-generated films, expecting viewers to let them passively play so it can monetize with targeted ads. 404 Media got a preview of the six short films—available today on the TV manufacturer’s streaming service, TCL+, as well as YouTube—and reported on the company’s strategy to double its revenue with AI-generated films. The company believes users will watch simply because the films will be there and viewers won’t bother to put on anything else. “Data told us that our users don’t want to work that hard. Half of them don’t even change the channel,” TCL’s vice president of content services and partnerships explained to the audience at a screening event, according to 404 Media, which described the films as “bad.” I tried to watch them but could barely get through them. 

    Accenture partners with Stanford to offer GenAI learning program. The global professional services firm announced that it is launching an on-demand learning program called Generative AI Scholars, featuring its own AI experts as well as content from Stanford University's Human-Centered AI Institute (HAI) that is available through Stanford Online. The online learning program, which was announced by Accenture's Chief AI Officer Lan Guan at Fortune's Brainstorm AI conference in San Francisco, will offer training in generative AI skills to thousands of executives through Accenture's LearnVantage platform. You can read Accenture's full announcement here. (Accenture is a sponsor of Eye on AI but does not have input on our editorial content.)

    FORTUNE ON AI

    General Motors to stop funding its Cruise robotaxi business —by Jessica Mathews

    The debate over open versus closed AI models is ‘ridiculous,’ Meta executive says —by Kali Hays

    A16z’s Martin Casado says he doesn’t want to drive the AI regulation conversation anymore —by Jenn Brice

    The AI boom nets hundreds of Australian data center employees a $41,300 holiday bonus —by Lionel Lim

    AI CALENDAR

    Dec. 9-15: NeurIPS, Vancouver

    Jan. 7-10: CES, Las Vegas

    Jan. 16-18: DLD Conference, Munich

    Jan. 20-25: World Economic Forum, Davos, Switzerland

    Feb. 10-11: AI Action Summit, Paris, France

    March 3-6: MWC, Barcelona

    March 7-15: SXSW, Austin

    March 10-13: Human [X] conference, Las Vegas

    March 17-20: Nvidia GTC, San Jose

    April 9-11: Google Cloud Next, Las Vegas

    EYE ON AI NUMBERS

    21%

    That’s how much more likely women who opted to have an AI-enhanced mammogram were to have breast cancer detected compared to those who didn’t, according to a study presented recently at the annual meeting of the Radiological Society of North America (RSNA).

    The researchers noted that the higher rate of detection was consistent across all 10 clinical practices that participated in the study, which ranged from a few sites up to 64 sites at the largest practice. The overall cancer detection rate was actually about 43% higher for those who had the AI mammogram, but the researchers attribute 22 percentage points of that increase to the fact that women at a higher risk of breast cancer were more likely to opt for the AI detection capabilities. The remaining 21% of the detection increase came from the AI, the researchers said. 

    This is the first report on results from a program that allowed patients to elect for AI mammogram screening at their own cost (it was not covered by insurance). The researchers plan to do randomized controlled trials to more conclusively quantify the benefits of the technology. 

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.