AI’s leap from the cloud to your laptop could fix some of the technology’s weak spots

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    Intel CEO Pat Gelsinger wants to bring AI power to laptop PCs
    (Tom Williams/CQ-Roll Call, Inc via Getty Images)

    Hello and welcome to Eye on AI.

    The big AI story from this past week comes in chip form, courtesy of Intel. At its developer event in San Jose, the company unveiled its forthcoming laptop chip, code-named Meteor Lake, which it says will enable AI workloads to run natively on a laptop, including a GPT-style generative AI chatbot. It’s all part of the company’s vision for the “AI PC,” a near future where laptops will deliver personal, private, and secure AI capabilities. And with Meteor Lake arriving this December, Intel says these laptops will begin hitting store shelves next year.

    “We see the AI PC as a sea change moment in tech innovation,” Intel CEO Pat Gelsinger said during his opening keynote before assisting a colleague in demonstrations of AI PC applications live on stage. In one demo, they created a song in the style of Taylor Swift in mere seconds. In another, they showed off text-to-image generative capabilities using Stable Diffusion—all run locally on the laptop. 

    For those looking for a full deep dive on the chip specs, The Verge has a great breakdown. But we’re going to zero in on the new AI component that’s making this all possible—and the impact it could have on generative AI adoption for security-concerned users. 

    The ability to run these more complex AI applications on the laptop comes via the new Neural Processing Unit (NPU), Intel’s first-ever component dedicated to specialized AI workloads. The GPU and CPU will continue to have their roles in running AI applications too, but the NPU opens up a host of possibilities. 

    In a video offering a more technical breakdown of Meteor Lake, Intel senior principal engineer of AI software architecture Darren Crews described where each component shines. The CPU is well suited to very small workloads, while the GPU is good for large batch workloads that don’t need to run continuously. That’s because algorithms running on the CPU are limited by how much compute it can deliver efficiently, and while the GPU could technically power some of the more intensive AI workloads, doing so is a stretch for a battery-constrained device like a laptop and would drain exorbitant amounts of power. 

    The NPU, however, offers a more power-efficient way to run AI applications, Crews said. This makes it useful for those continuous, large batch workloads with higher complexity that are too intensive for the CPU and GPU, and that are becoming more and more sought-after as AI booms. Now, it’s important to be clear that this isn’t the first instance of AI running locally on a laptop, and some developers have even rigged up tools to do so with GPT-style LLMs. But it is a very real step toward doing so at scale, in a publicly available way, to meet this generative AI moment.

    Perhaps the biggest takeaway from all this is the potential impact on data security and privacy. The ability to run these AI workloads locally could allow users to forgo the cloud and keep sensitive data on the device. This isn’t to say the cloud is going anywhere, but as far as generative AI goes, it’s a shift that could prove significant. 

    A few weeks ago, when Eye on AI talked with companies across industries about why they would or wouldn’t be using ChatGPT Enterprise, concerns about data security, privacy, and compliance came up repeatedly as reasons for holding off. Executives at upskilling platform Degreed, for example, said they’d need to see transparent and measurable security practices (among other changes, like actionable insights to combat misinformation) before considering adopting the tech.

    “This is definitely a step in the right direction,” Fei Sha, VP of data science and engineering at Degreed, told Eye on AI when asked after the Intel announcement if this is the type of security improvement they’d need to see. 

    But while acknowledging that running an AI chatbot locally can provide security and privacy benefits compared to a cloud-based solution, she said it would be just as important to ensure the security and compliance of the on-premise AI chatbot itself, and she reiterated other concerns about the tech. 

    “We also need to investigate and take actions to address other concerns associated with AI chatbots, such as accuracy and reliability, lack of human touch, bias, and discrimination, lack of empathy, limited domain knowledge, difficulty in explaining decisions, misaligned user expectations, and ways for continuous improvement, etc,” she said.

    And with that, here’s the rest of this week’s AI news.


    But first…a reminder: Fortune is hosting an online event next month called “Capturing AI Benefits: How to Balance Risk and Opportunity.”

    In this virtual conversation, part of Fortune Brainstorm AI, we will discuss the risks and potential harms of AI, centering the conversation around how leaders can mitigate the potential negative effects of the technology, allowing them to capture the benefits with confidence. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    Amazon strikes a deal to invest up to $4 billion in Anthropic. The tech giant will initially invest $1.25 billion for a minority stake in Anthropic, with the option to invest up to a total of $4 billion, as Fortune’s David Meyer reported on Monday. Anthropic is the maker of chatbot Claude 2, a rival to ChatGPT and similar tools, and is already one of the most-funded AI startups, including prior backing from Google. As part of the announcement, Anthropic also said it’s expanding support for Amazon Bedrock, which will surely be a boost to AWS as the companies begin working more closely. 

    OpenAI unveils DALL-E 3 with ChatGPT integration, along with voice capabilities. This latest iteration of the company’s generative AI image model “understands significantly more nuance and detail than our previous systems,” said OpenAI on a landing page for the product, where it offers side-by-side comparisons of images DALL-E 2 and DALL-E 3 each generated from the same prompt. DALL-E 3 is currently in research preview and will be available to ChatGPT Plus and Enterprise customers in October. And in a separate announcement, the company yesterday rolled out the ability for paying ChatGPT users to prompt the LLM using photos and voice prompts, plus other voice-related capabilities.

    Microsoft rounds out the Big Tech AI copilot announcements. Following Google, Zoom, Salesforce, and others, the company this past week unveiled its own “AI companion” called Microsoft Copilot. Its rollout begins today as part of the company’s Windows 11 update, which Microsoft called one of its “most ambitious updates yet” with the introduction of over 150 new features. The copilot rollout will continue across Edge, Microsoft 365, and Bing throughout the fall, including adding support for DALL-E 3 to Bing. 

    Prominent authors team up with The Authors Guild to sue OpenAI for copyright infringement. Authors cited in the complaint include Game of Thrones author George R.R. Martin, prolific novelist Jodi Picoult, and 15 others. The lawsuit cites specific ChatGPT searches for each author and calls ChatGPT a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale,” according to the Associated Press. While it’s just the latest lawsuit of this sort against OpenAI, it’s perhaps the most specific and wide-reaching yet. 

    Amazon limits authors to self-publishing three books per day as it continues to navigate the influx of wonky generative AI-created content on its platform. That’s according to the Guardian. Amazon has been dealing with potentially dangerous AI-generated uploads (like the mushroom foraging books we wrote about a few weeks ago), removed AI-generated books falsely listed as written by a human, and most recently announced a requirement for authors to disclose if they used any generative AI tools.

    EYE ON AI RESEARCH

    The hysteria of it all. Sequoia Capital this past week published a report on generative AI, listing two of the firm’s partners as well as GPT-4 as coauthors. Given that Sequoia is a venture capital firm with investments in the space, it’s of course important to point out that the firm has a vested interest in making sure this technology booms. However, the report contains an interesting overview of the current landscape, exploring what Sequoia sees as “cracks” starting to show in the generative AI “hysteria” and what the firm got right and wrong in its original thesis about the market. 

    “These early signs of success don’t change the reality that a lot of AI companies simply do not have product-market fit or a sustainable competitive advantage, and that the overall ebullience of the AI ecosystem is unsustainable,” it reads. 

    FORTUNE ON AI

    Generative AI could be Europe’s shot at gaining a competitive edge against the U.S., Accenture’s AI chief for Europe says —Prarthana Prakash

    Mergers and acquisitions are becoming more science than art as CEOs turn to AI for answers —Andrea Guerzoni

    Researchers asked ChatGPT to rate which job skills it performs best. Its answers show what roles are most at risk for AI disruption —Paige McGlauflin and Joseph Abrams

    Morgan Stanley debuts a new tool for employees: an AI assistant to answer common investing and personal finance queries —Sheryl Estrada

    Indeed CEO: ‘AI is changing the way we find jobs and how we work. People like me should not be alone in making decisions that affect millions’ —Chris Hyams

    Cathie Wood steered clear of Arm IPO frenzy because there was ‘too much emphasis on AI’ —Chloe Taylor

    BRAINFOOD

    Emailing with Bard. Google last week unveiled Bard integrations across its various apps, and users quickly got to trying them out. This includes New York Times tech columnist Kevin Roose, who reviewed his time emailing with Bard on the latest episode of the publication’s Hard Fork podcast. The results? So hallucinatory that it’s kind of hilarious. 

    For his first test, Roose asked Bard to "analyze all of my Gmail and tell me, with reasonable certainty, what my biggest psychological issues are." Now, this prompt is obviously meant to poke at the chatbot’s capabilities (remember, Roose is the same reporter who made headlines for getting ChatGPT to declare it was in love with him). But how Bard answered and cited its sources is telling.

    Bard replied that Roose worries about the future and that this could indicate an anxiety disorder, citing an email Roose sent in which he said he was stressed about work and “afraid of failing.” But Roose had no recollection of ever saying that, so he asked Bard to show him the email. What Bard presented was not an email written by Roose, but rather an email newsletter he had received — a review of a book about Elon Musk (presumably the new biography by Walter Isaacson). As if this wasn’t already wrong enough, the newsletter didn’t even contain the quote! Only one that was loosely similar.

    “So Bard made up a quote from this email that I had received and wrongly attributed it to me. A mistake on top of a mistake,” Roose summarized on the podcast. 

    Roose went on to try more straightforward tasks, such as the travel planning use case Google presented with its announcement last week, which he said also failed. Lastly, he described asking the chatbot to pick five emails from his primary tab, draft responses in his voice, and show him the drafts. In response, Bard went to his promotions tab and wrote a “very formal, very polite” email to Nespresso thanking the company for its offer of a 25% discount. And that’s the part where I fully laughed out loud. 

    Overall, it’s worth listening to in full. This segment starts around the nine-minute mark of the show.

    This is the online version of Eye on AI, a free newsletter delivered to inboxes on Tuesdays. Sign up here.