AI’s big shift from ‘model-forward’ innovation to ‘product-forward’

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    New data shows that companies are not building models from scratch anymore, with most tapping into existing models like OpenAI's and Anthropic's, and focusing on adapting the models to their product roadmaps.
    Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images

    Hello and welcome to Eye on AI. 

    Today we’re starting with some breaking news in the world of generative AI. The U.S. FTC this afternoon announced it’s launching an inquiry into generative AI partnerships between Big Tech and startups. Specifically, the agency is investigating three multibillion-dollar deals that have shaped the AI landscape as we know it: Microsoft and OpenAI, Google and Anthropic, and Amazon and Anthropic. The FTC issued orders to all of the companies involved, seeking specifics about their agreements, the practical implications of the partnerships, analyses of the transactions’ competitive impact, and information about competition for AI inputs and resources. This investigation could have major ramifications for these companies and for the broader AI and technology landscape, and we’ll be paying close attention as it unfolds. 

    Now, let’s get into our main story, which is in some sense related to the FTC’s concerns about concentration of power in the nascent generative AI market: We’re talking about models, and more specifically, why most companies have quit building them and what this means for gaining a competitive advantage in AI. To start, here’s a staggering statistic: According to Menlo Ventures’ recent survey of more than 450 enterprise executives, almost 95% of AI spend now goes to inference (running AI models) rather than to training them.

    “People are not building models from scratch for the most part anymore,” Tim Tully, an engineer and partner at Menlo Ventures, told Eye on AI. “We see it empirically. You see it through the survey data. We see it from talking to companies. It’s plainly obvious.”

    Creating an end-to-end model from scratch is massively resource-intensive and requires deep expertise, whereas plugging into OpenAI’s or Anthropic’s APIs is about as simple as it gets. This has prompted a massive shift from an AI landscape that was “model-forward” to one that’s “product-forward,” where companies primarily tap existing models and skip right to the product roadmap. By 2027, the total value APIs contribute to AI software specifically will reach an estimated $5.4 trillion, representing 76% growth over five years, according to a report from open-source API company Kong. 
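To make the "product-forward" pattern concrete, here is a minimal sketch of what integration looks like in practice: rather than training anything, a product simply assembles a request for a hosted model. The payload shape follows the widely used chat-completions convention; the model name, system prompt, and product context below are illustrative assumptions, not any specific vendor's spec.

```python
# Sketch of the "product-forward" pattern: instead of training a model,
# an app builds a request body for a hosted model's API and POSTs it.
# Field names follow the common chat-completions convention; the model
# name and prompts here are hypothetical examples.

def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "gpt-4") -> dict:
    """Assemble the JSON body an app would send to a hosted model API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # low temperature for predictable product behavior
    }

request = build_chat_request(
    "You are a support assistant for Acme Corp.",  # hypothetical product context
    "How do I reset my password?",
)
```

The whole "integration" is a dictionary and an HTTP call, which is exactly why the barrier to shipping AI features has collapsed.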

    This shift has been incredibly lucrative for companies like OpenAI, but for everyone else, it’s a massive sea change that’s forced many companies to quickly turn the ship around. For example, Menlo portfolio company TrueEra, which provided machine learning observability capabilities for companies training models in-house, had to completely pivot its product strategy when its customers started using existing models instead of building their own, according to Tully. 

    There’s an argument to be made that this shift levels the playing field, making it possible for any company of any size to access and deploy advanced AI. After all, everyone is now just an API away from best-in-class models. But if everyone is using the same models, what will be the competitive differentiator?

    One differentiator is prompt engineering skill, which explains the mad rush of prompt engineering research and training, and the creation of high-priority prompt engineering roles fetching salaries over $300,000. But as it so often goes in AI, it really comes back to the data—particularly the proprietary data you have access to and how well you can incorporate it. 

    “It’s what kind of documents can you come up with? How well can you parse and extract data from those documents? How can you convert the unstructured data to structured?” Tully said. 
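Tully's point about converting unstructured data to structured can be illustrated with a toy extraction step. A real pipeline would use an LLM or a document parser; this regex sketch just shows the input/output shape, and the invoice wording and field names are assumptions for illustration.

```python
import re
from typing import Optional

# Toy version of "convert the unstructured data to structured": pull typed
# fields out of free-form text. The invoice format here is a made-up example.
INVOICE_PATTERN = re.compile(
    r"Invoice\s+(?P<invoice_id>[\w-]+).*?total\s+of\s+\$(?P<amount>[\d,]+(?:\.\d+)?)",
    re.IGNORECASE | re.DOTALL,
)

def extract_invoice(text: str) -> Optional[dict]:
    """Return a structured record from free-form invoice text, or None."""
    match = INVOICE_PATTERN.search(text)
    if not match:
        return None
    return {
        "invoice_id": match.group("invoice_id"),
        "amount": float(match.group("amount").replace(",", "")),
    }

record = extract_invoice(
    "Invoice INV-204 was issued on Jan 3 for a total of $1,250.00."
)
```

The value is in the structured record at the end, which can then be fed to a model, a database, or a downstream product feature.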

    A recent article in Harvard Business Review on how companies can turn generative AI into a competitive advantage boils down to the same two steps: 1) adopt publicly available tools, and 2) supercharge them with your own data. 

    Last fall, I wrote about RAG (retrieval-augmented generation), a now incredibly popular technique for getting an existing AI model to work with new information it was never trained on. RAG is currently a major driver of how companies can incorporate their own data and get the most out of their off-the-shelf models, but AI practitioners are already wondering what comes next. 
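A stripped-down illustration of the RAG pattern described above: retrieve the most relevant snippets from a private corpus, then prepend them to the prompt sent to an off-the-shelf model. Real systems use embeddings and a vector store; the word-overlap scoring and sample corpus below are stand-ins to keep the sketch self-contained.

```python
import re

# Toy RAG pipeline: retrieve relevant documents, then build a grounded prompt.
# Word-overlap scoring stands in for embedding similarity in real systems.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    query_words = set(re.findall(r"\w+", query.lower()))
    doc_words = set(re.findall(r"\w+", doc.lower()))
    return len(query_words & doc_words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def augment_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that grounds the model in the retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical proprietary corpus standing in for a company's documents.
corpus = [
    "Our refund policy allows returns within 30 days.",
    "The office cafeteria closes at 3pm on Fridays.",
    "Refunds are issued to the original payment method.",
]
prompt = augment_prompt("What is the refund policy?", corpus)
```

The augmented prompt is what gets sent to the off-the-shelf model, which is how proprietary data becomes the differentiator even when everyone uses the same model.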

    “We have models, but then what? How do you use them in increasingly interesting ways that build sophisticated applications over time?” Tully said. “I think the question that I have is, how do you evolve RAG in a way that helps applications continue to differentiate? That’ll be something to keep an eye on.”

    And with that, here’s more AI news. Today’s issue also premieres two new sections for Eye on AI—one to help you get the most out of LLMs like ChatGPT, and another to keep you up-to-date on the most important AI-related events coming up soon.

    Thanks for reading!

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    Microsoft forms a new team to build cheaper generative AI. The new team, called GenAI, brings together top AI researchers from the company and aims to develop conversational AI that requires less computing power than OpenAI’s models, according to The Information. Exploding costs have become one of the top AI-related concerns for companies, and the formation of the new team reflects Microsoft’s intent to offer AI to Office customers and app developers through Azure. Microsoft also briefly hit a $3 trillion valuation yesterday, signaling investor optimism over AI and making the software giant the second company ever to cross the $3 trillion threshold (Apple became the first last June).

    Google cuts ties with data-labeling firm Appen, links up with Hugging Face. Google tapped Appen to help train Bard, AI-powered search, and the company’s other AI products, accounting for a whopping $82.8 million of Appen's $273 million in revenue in 2023. The termination is a major hit for Appen—which has also supported AI-related data efforts at Microsoft, Apple, Meta, and Amazon—as it struggles to pivot amid the generative AI boom. “Companies are spending far more on processors from Nvidia and less on Appen,” wrote CNBC. Separately, Google and Hugging Face today announced a strategic partnership. Google Cloud will now host the startup's open-source AI models and act as the preferred destination for Hugging Face training and inference workloads.

    White House science chief signals cooperation with China on AI safety. “Steps have been taken to engage in that process. We have to try to work [with Beijing],” Arati Prabhakar told the Financial Times, stating that the countries will work together in the coming months. It’s a rare show of cooperation amid increasing tensions between the countries, including new U.S. export controls on chips aimed at preventing China from advancing in AI on the back of U.S. technologies. 

    Nearly 90% of top U.S. news outlets are now blocking AI web crawlers. That’s according to Wired and data from Originality AI. OpenAI’s GPTBot is the most widely blocked crawler overall, but the data also surfaced an interesting trend: None of the top right-wing news sites—including Fox News, the Daily Caller, and Breitbart—have blocked AI web crawlers. Researchers theorize this comes down to an ideological divide over copyright and a desire to get right-wing content included in LLMs; however, two right-wing publications told Wired this was an oversight they’ll correct.

    FORTUNE ON AI

    Mind the gap: Workers are desperate for AI upskilling, but bosses aren’t meeting their needs, by Emma Burleigh

    AI is ready to start changing health care, but people are holding it back, by Peter Vanham

    Travel companies are using AI to better customize trip itineraries, by Stephanie Cain

    AI Calendar

    Jan. 30: Microsoft and Alphabet report quarterly earnings

    Feb. 1: Meta and Amazon report earnings

    Feb. 21: Nvidia reports earnings 

    March 18 - 21: Nvidia GTC AI conference in San Jose, Calif.

    June 25 - 27: 2024 IEEE Conference on Artificial Intelligence in Singapore

    AI PROMPT SCHOOL

    Disaster preparedness. Creating a plan for what to do in case of an emergency or natural disaster has been on my to-do list for a while, especially as events like flooding become all the more frequent. But since it keeps falling by the wayside, I thought: let’s see if ChatGPT can do it for me. In short, I was impressed with what ChatGPT turned up.

    To start, I prompted the chatbot with the following:

    “Create a home safety guide with important information to know across categories like fire safety, emergency preparedness, medical emergency preparedness, and whatever else you think would be important to know. Where it would be helpful to supplement the information with a video, please do so by including a YouTube link. Be sure to include information relevant to my specific area: U.S. zip code [redacted].”

    Almost instantly, ChatGPT created a plan with categories for fire safety, emergency medical preparedness, home security, child safety, cybersecurity, and emergency/natural disaster preparedness for my local area. Each section contained a handful of bullet points and a link to a YouTube video covering basic fire safety tips, how to build an emergency kit, first aid basics, hurricane preparedness tips, and so on. ChatGPT additionally provided links to the Red Cross and my city’s office of emergency preparedness, and directed me to where I can sign up for alerts and warnings for my local area. It also provided insight into where it was getting this information, showing that it searched resources including FEMA's National Risk Index and information from the American Society of Civil Engineers (ASCE).

    If I attempted to do this on my own, it probably would’ve taken several Google searches, comparing and compiling multiple resources, and maybe some writing/editing to personalize it and put it all together into an easy-to-read format. Not only did ChatGPT—GPT-4, in this case—do all this almost instantly, but it made it incredibly easy for me to accomplish this task I've been putting off. And while I’m clearly not an expert in disaster preparedness, I’d say the guide is quite good. The tips all seem important, helpful, and relevant, and it’s written in a very straightforward way that’s easy to digest and understand. 

    In the GPT Store, I also discovered a dozen or so GPTs aimed at emergencies and emergency preparedness, including several positioned to offer guidance in real-time during emergencies. I dropped the same prompt into a few and received guides that were similar, though significantly less thorough. The original ChatGPT definitely delivered, but I can’t say the same for the custom GPTs.

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.