Hello and welcome to Eye on AI.
Yes, AI might be coming for your job. But not just yet.
With a couple of exceptions, the technology is either not yet good enough, or your company is still trying to work out how to deploy it at scale. We know this because both big surveys of Fortune 500 executives and anecdotal stories coming out of conferences, such as Davos or Fortune’s own Global Forum, tell us so. Top execs say they have been rushing to experiment with generative AI, but have been wary about putting it into production at scale because of worries about accuracy, bias, data security, and cost. Either way, your job is probably safe—for this year.
Which is why I am highly skeptical of companies that have started attributing recent rounds of layoffs to AI. The Wall Street Journal on Monday ran a story that was all too credulous of executives who have begun saying they are shedding jobs due to the technology.
That story and several like it, which have run in outlets such as Bloomberg and Axios, all cite a report from recruiting firm Challenger, Gray & Christmas saying that 4,600 jobs have been lost to AI since May, a figure Challenger said almost certainly undercounts the true losses.
But the more we poke at this AI job loss narrative, the less convincing it looks. Even when AI is part of the equation, the jobs are generally not being lost because AI is actually replacing the need for human workers.
Take UPS, which said it is planning to cut 12,000 jobs, and then warned that those jobs were unlikely to ever come back because it was starting to use AI to make pricing decisions and handle some back-office tasks. But don’t be fooled. That little AI mention was a nice way for UPS’s leaders to put lipstick on a pig in an earnings call—to make it seem like management is cutting-edge and tech-savvy.
Because without that AI razzle-dazzle, you just might conclude UPS’s management is inept. After all, the reason UPS is shedding jobs is that its business is in the tank for reasons that have nothing to do with AI. Package delivery volumes fell 7.5% in the fourth quarter. The company missed its revenue and earnings targets and just surprised Wall Street with sharply lower guidance for the year ahead too. Its stock was hammered as a result. That is why the company is shedding jobs, not because of AI.
What’s happening with AI-related losses in the tech sector is a bit more real. But here too, the dynamic is not, for the most part, the one people have long feared, namely that AI would directly substitute for human labor. Instead, tech companies are shedding jobs to free up money for expensive AI talent and the expensive GPU chips needed to train and run AI applications. This has been the case with some of the job cuts announced at Google and Spotify, for instance.
So yes, AI is leading to job losses. But not for the reasons we always worried about. This dynamic is being lost in a lot of the coverage of AI-related job losses, in part because we’ve been so primed by dire predictions over the past decade of a coming wave of white-collar job losses due to automation.
Now, there are a few cases where AI is directly taking jobs. Some of the job losses at Google came from an ad sales division where the company said AI software could now serve customers more of the time, reducing the need for human sales reps. And language learning app Duolingo has shed contractors who produced some of its content because it is now using generative AI to produce lesson content (although it was quick to point out that humans still need to check this AI-generated material).
I do think we will see, in the coming years, companies being much more cautious about hiring as AI makes existing employees more efficient. And I do think the configuration of, and career pathways for, many professions are going to change. Some people are going to lose their jobs. But the bogeyman of mass unemployment is still nowhere in sight.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Correction, Feb. 13: In an earlier version of this newsletter, in one of the news items below on the Allen Institute for Artificial Intelligence’s new LLM, the item inaccurately identified the creator of the Falcon LLM model. It is the Technology Innovation Institute in Abu Dhabi, not G42.
AI IN THE NEWS
OpenAI’s Altman looks for up to $7 trillion for Nvidia rival. Yes, you heard that right. The Wall Street Journal reported that Altman has been trying to raise the outlandish amount—more than the U.S. spent on the entirety of World War II—from sovereign wealth funds in places like the UAE. He wants to create a rival to AI chipmaker Nvidia, whose GPUs dominate the market for AI applications, but which are expensive to purchase and energy-intensive (and thus expensive) to use. The most advanced Nvidia chips have also been in short supply, holding back some companies from building large AI models. But whether Altman can actually raise that much money, which is six times more than all the U.S. corporate debt issued last year, remains to be seen.
Nvidia tries to undercut AI cloud providers’ plans to build their own AI chips. Altman is not the only one thinking about ways to sidestep Nvidia’s stranglehold on GPUs. Google already deploys its own AI chips, as do Amazon, Apple, and a few other large cloud operators. Almost all of those chips, including Nvidia’s, are actually manufactured by TSMC, the Taiwanese semiconductor giant, but the tech companies rely on firms such as Broadcom and Marvell to help with development and design. Now Nvidia is telling its biggest customers, which include Microsoft and OpenAI, that it will help them design and deliver custom silicon, leveraging its own expertise and relationship with TSMC, according to Reuters. It’s an interesting bit of strategic self-cannibalization. We will see if the hyperscalers go for it.
And while we’re talking about Nvidia, CEO Jensen Huang is also trying to interest countries in building ‘sovereign LLMs.’ The Nvidia chief said every country should build its own, government-funded large language model (LLM) as a strategic asset, so that it could be sure to preserve its own linguistic and cultural heritage, Reuters said. Huang made the remarks at the World Government Summit in Dubai. Of course, Huang would say that. Right now, it's Nvidia that stands to benefit if every country in the world has to build out a giant data center full of GPUs to train a sovereign LLM.
G42 pivots to the U.S. amid concerns about close China ties. Abu Dhabi-based AI company G42 is scaling back its business in China and has sold off investments there, in an effort to address concerns over its connections with Beijing that had led some U.S. lawmakers to propose trade restrictions on the company, Bloomberg reported. The company, led by CEO Peng Xiao, is shifting its investment focus to Western markets. G42 is also expanding its partnerships with companies such as OpenAI and AI chipmaker Cerebras Systems.
Allen Institute releases new family of open-source LLMs. The Allen Institute for Artificial Intelligence (AI2 for short) has released a family of open-source LLMs that it bills as the most open in the world. AI2 makes the claim because it is publishing not just the model and the weights, but also all of the training data, details of how the model performs if training is stopped at various points, and information on how to run the model on different training platforms. It is calling the models OLMo (for Open Language Model). Its OLMo 7B—which has 7 billion parameters, or adjustable variables—performs about as well as the similarly sized Falcon LLM from the Technology Innovation Institute in Abu Dhabi. You can read more here.
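For those who want to kick the tires, here is a minimal sketch of generating text with OLMo 7B via the Hugging Face transformers library. It assumes the weights are published under allenai/OLMo-7B on Hugging Face; depending on your transformers version, AI2's companion hf_olmo package may also be required for the model code to load.

```python
# Minimal sketch: generating text with OLMo 7B via Hugging Face transformers.
# Assumes the repo id "allenai/OLMo-7B"; some transformers versions may also
# need AI2's hf_olmo package installed for the custom model code to load.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", trust_remote_code=True)

inputs = tokenizer("Open language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```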
Biden Administration appoints head of new AI Safety Institute. The White House has named Elizabeth Kelly, who has been a top advisor to President Joe Biden on AI, to lead the new U.S. AI Safety Institute. The institute is part of the National Institute of Standards and Technology, which the White House has tasked, under Biden's AI Executive Order, with creating standards for building powerful AI systems safely and for testing and "red-teaming" these models to ensure they don't pose undue risks. You can read more on her appointment in this VentureBeat story.
EYE ON AI RESEARCH
I second that emotion. Well, you can’t do that if you can’t recognize what the emotion is. Over the years, researchers have tried to create emotion-detecting AI software, with mixed success. Now, a group of researchers from KU Leuven, in Belgium, has created a dataset cleverly named FindingEmo to help AI nail this difficult skill, which could have applications in everything from designing more empathetic chatbots, better personal assistants, and humanoid robots to, more ominously, surveillance tools. The dataset consists of more than 25,000 photos of people in complex scenes, often pictured with other people. Each person is labeled as expressing one of eight primary emotions—joy, trust, fear, surprise, sadness, disgust, anger, and anticipation—each graded on a three-level intensity scale. A holdout test set of 1,500 similarly labeled images is also being made available. You can read more about the new dataset here on the non-peer-reviewed research repository arxiv.org.
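To make the labeling scheme concrete, here is a hypothetical sketch of what a single FindingEmo-style annotation might look like, based only on the description above; the actual field names in the released dataset may well differ.

```python
# Hypothetical sketch of one FindingEmo-style annotation record, based on
# the dataset description: eight primary emotions, each graded on a
# three-level intensity scale. Real field names may differ.
from dataclasses import dataclass

EMOTIONS = (
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
)

@dataclass
class EmotionAnnotation:
    image_path: str  # one of the ~25,000 annotated photos
    emotion: str     # one of the eight primary emotions above
    intensity: int   # intensity level, 1 (mild) to 3 (strong)

record = EmotionAnnotation("scene_00001.jpg", "joy", 2)
assert record.emotion in EMOTIONS and 1 <= record.intensity <= 3
```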
FORTUNE ON AI
Ex-Salesforce Co-CEO Bret Taylor and longtime Googler Clay Bavor raised $110 million to bring AI ‘agents’ to business —by Kylie Robison
If Bill Gates could ask a time traveler anything, he’d want to know whether AI eventually doomed or helped humanity —by Orianna Rosa Royale
Google is investing €25 million in training EU’s workers on AI. Its next target: Capturing €1.2 trillion in AI-fueled growth —by Peter Vanham and Joseph Abrams
Nanotronics CFO explains using AI to produce more chips for AI: ‘We’re able to do everything from R&D to production’ —by Sheryl Estrada
Top labor economist says don’t believe the AI doom narrative—and his reason why is the ‘underpopulation crisis’ Elon Musk talks about —by Irina Ivanova
AI CALENDAR
Feb. 21: Nvidia reports earnings
March 11-15: SXSW artificial intelligence track in Austin, Texas
March 18-21: Nvidia GTC AI conference in San Jose, Calif.
April 15-16: Fortune Brainstorm AI London (Register here.)
May 7-11: International Conference on Learning Representations (ICLR) in Vienna, Austria
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
PROMPT SCHOOL
Be kind to your model. Seriously. This little prompt hack was making the rounds on social media last week after OpenAI’s head of developer relations, Logan Kilpatrick, went on the podcast hosted by Lenny Rachitsky, a former high-level Airbnb exec, and said that models perform better when people are polite in their prompts. Kilpatrick added that telling the model to “take a break” from time to time also results in better performance.
As many people pointed out, that’s crazy (though remember, the model learned these behaviors from analyzing human data, so maybe not that crazy). It’s also not really what you want out of a piece of software (wasn’t one of AI’s selling points supposed to be that it was less fickle and emotionally driven than people?).
But hey, if it gets better answers out of ChatGPT or your other favorite chatbot, why not try it?
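If you want to test the trick yourself, here is a minimal sketch using OpenAI's Python client. The model name and the polite wording are purely illustrative, and the same idea works with any chatbot API.

```python
# Minimal sketch of the politeness trick with OpenAI's Python client
# (pip install openai; reads OPENAI_API_KEY from the environment).
# The model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

plain = "Summarize the key risks in this earnings report in three bullets."
polite = (
    "Please take a deep breath, and feel free to take a break if you need one. "
    "When you're ready, could you kindly summarize the key risks in this "
    "earnings report in three bullets? Thank you!"
)

for prompt in (plain, polite):
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

Compare the two outputs yourself; the reported effect is anecdotal, so your mileage may vary.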
This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.