Samsung declares its latest smartphone marks a new dawn for mobile AI, but the reality doesn’t quite match the hype

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    [Image: drone swarm spelling out "Galaxy AI is here"]
    Samsung pulled out all the stops, including this drone swarm show in London, to announce the new AI features of its Galaxy S24 smartphone. But the phone hasn't wowed analysts so far.
    Joe Maher—Getty Images for Samsung

    Hello and welcome to Eye on AI. 

    Samsung yesterday unveiled its new line of Galaxy smartphones, a release it hyped would usher in a new era of mobile AI. Tapping Google’s Gemini models as well as models Samsung itself created, the phones offer AI features including fairly impressive on-device translation for both calls and texts, but also a lot of what we’ve already seen in the new generative AI landscape: the ability to create a transcript from a voice memo, automatically summarize a meeting, organize notes, and so on. 

    “Samsung has announced its new range of AI-powered Galaxy S24 flagship smartphones claiming a new era for mobile is opening up. Really? Well, not quite yet,” said Forrester VP principal analyst Thomas Husson in a note shared with Eye on AI after the event. Daishin Securities analyst Park Kang-ho similarly told the Financial Times, “I don’t think the added AI features are compelling enough.”

    The most notable new feature is undoubtedly “Circle to Search,” which lets users search Google for more information about anything from within any app by simply circling it on their screen. In an on-stage demo, Google VP of Search Cathy Edwards showed how this could be used to search for a clothing item that catches your eye in an Instagram post. In another scenario, she received a text from a friend asking for thrift store recommendations in a particular location and simply circled the message to get instant search results overlaid directly within the text app—no typing or switching apps required. 

    “You probably come across things in your apps all the time that you want to learn more about when you’re immersed in a moment of discovery. It can feel disruptive to stop what you’re doing and switch to another app to learn more. So today we’re introducing a solution,” Edwards said, though it should be noted that the ability to search without switching apps doesn’t mean the search results are going to be any good. This capability is obviously a huge commerce opportunity and a potential driver for Google’s ad business, and if the results include ads, it “could quickly end up being more frustrating than efficient,” as Wired noted. 

    Either way, Circle to Search hardly seems like a killer AI app, and it’s safe to say Samsung didn’t transform the smartphone into an AI-first device with this launch. It could, however, be laying the foundation for a new, more centralized way of interacting with smartphones that doesn’t hinge on jumping from app to app. That raises the question of what this means for all the companies that exist only as apps on our phones. It also leaves unanswered the question of how much the smartphone in its current form can be reoriented to fully embrace AI and what other kinds of devices could be next.

    AI obviously took center stage at the Consumer Electronics Show (CES), which was held last week in Las Vegas and offered the first large-scale showing of AI-centered consumer hardware since OpenAI fired the AI starting gun in November 2022. One of the breakout devices of the show was the Rabbit R1, a pocket-sized virtual AI assistant. It has no screen or apps, and while it can’t fully replace our smartphones, it is meant to take over a variety of tasks we perform on them, from calling an Uber to making a dinner reservation on OpenTable. 

    While not too dissimilar from Humane’s AI Pin—the much pricier screen-less, app-less wearable for interacting with LLMs that was met with heavy skepticism—the Rabbit R1 garnered a lot of positive interest, landing on several “best of CES”-type lists. And it’s not only under-the-radar startups that are chipping away at creating entirely new classes of devices. All eyes are on Apple, which will not only release a new, likely AI-featured iPhone in nine or so months but is also moving steadily into “spatial computing” (the fancy new buzzword for what used to be called augmented and virtual reality). Apple has a proven track record of pioneering new devices, including the iPhone itself. 

    There’s no question Samsung was a bit overzealous in its declaration that its latest smartphones are a “eureka moment” for mobile AI. Maybe a future smartphone will deliver on this promise, or maybe it will require another device altogether. 

    And with that, here’s more AI news, and also, a new section—Eye on AI Numbers—that will offer a quick take on a key figure, stat, or numerical fact pertaining to AI news. It’s one of, ahem, a number of new sections we will be experimenting with in Eye on AI over the coming weeks. Let us know what you think!

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    Meta merges its two advanced AI divisions and ramps up GPU investment in an effort to catch up to rivals. Meta CEO Mark Zuckerberg announced Thursday that the company is merging its two advanced AI divisions in an effort to accelerate its push toward more general-purpose AI chatbots and assistants. Fundamental AI Research, the company's AI research lab, was founded in 2013 and has been responsible for a number of major AI advances over the years. But FAIR was not directly tied to Meta's product teams. More recently, Meta created a separate GenAI team to work on generative AI models for its products, such as its celebrity chatbot personas and its powerful open-source language model Llama 2. Now it is combining the two divisions in the hopes of catching up to OpenAI and Microsoft, which are currently perceived to be leading the AI race, and to Google, which last year merged its own two AI research labs, Google Brain and DeepMind, also in an effort to catch OpenAI. Zuckerberg said Meta will have 350,000 of Nvidia's most advanced H100 GPUs working on its AI applications by the end of the year and a total of 600,000 "H100-equivalent" chips in place. The announcement signals Zuckerberg's intentions to ensure Meta is not left behind if, as many predict, AI personal assistants become the new interface through which most people interact with computers and internet content. 

    Generative AI headlines Davos. The topic is dominating both public and private discussions among attendees at the World Economic Forum this week, currently underway in Davos, Switzerland. AI is one of the four categories at this year’s event and will be the topic of around 30 separate sessions. AI leaders including Sam Altman are in attendance, and AI-invested tech companies including Microsoft, Google, and Salesforce took over local storefronts around the event “as a show of force,” according to CNBC.

    OpenAI announces a team to build public input into its models. The company shared in a blog post the results of its recent experiment to crowdsource governance ideas for AI systems and announced a new team to continue this line of work. Called the Collective Alignment team, it will include both researchers and engineers and be tasked with building out a system for collecting and encoding public input into OpenAI’s systems. “As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity,” reads the blog post.

    Australia creates a new advisory body to regulate AI risks. That’s according to Reuters. In addition to the forthcoming new body, the government said it also plans to work with existing industry groups to introduce a range of guidelines around issues such as watermarking AI-created content. The initial guidelines will be voluntary, making Australia the latest country to attempt to regulate AI by asking nicely. 

    Correction, Jan. 18: An earlier version of this story misstated the name of Meta's AI research lab. It is the Fundamental AI Research lab. Facebook AI Research lab is its former name. That version also misattributed the development of Meta's Llama 2 large language model to FAIR. It was developed by Meta's GenAI team.

    EYE ON AI NUMBERS

    66%

    That's the percentage of executives who say they’re ambivalent or outright dissatisfied with their organization’s progress on AI and generative AI so far.

    The number comes from a recent survey of 1,400 C-suite executives in 50 markets, conducted by Boston Consulting Group. The executives cited three primary reasons for their dissatisfaction: a lack of talent and skills (62%), an unclear AI roadmap and investment priorities (47%), and an absence of strategy regarding responsible AI and GenAI (42%). Additionally, 90% of CEOs said they’re still waiting for GenAI to move past the hype.

    FORTUNE ON AI

    Google DeepMind AI software makes a breakthrough in solving geometry problems —Jeremy Kahn

    Sam Altman admits being pushed out of OpenAI was ‘wild’ and caught him ‘off guard’—but he’s done talking about it —Eleanor Pringle

    It’s time to get serious about AI hallucinations —Rachyl Jones

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.