Wanted: AI with common sense

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

Silvio Savarese, chief scientist of Salesforce AI Research. Courtesy of Salesforce

Welcome to Eye on AI! In this edition…OpenAI announces Stargate UAE, its first OpenAI for Countries partnership…JPMorgan to lend more than $7 billion for OpenAI data center…Meta introduces program to support early-stage U.S. startups.

AI models are a paradox. 

With all the chatter about the brilliance of AI models and the potential for AI agents to tackle tasks on our behalf, it’s fascinating to remember that for all their superhuman capabilities, they sometimes lack common sense. They may be able to pass the bar exam, for example, but can’t answer some simple riddles correctly. 

Last year, Andrej Karpathy, a former OpenAI researcher and director of AI at Tesla, came up with a phrase to describe this strange phenomenon: jagged intelligence. This week, I spoke with Silvio Savarese, chief scientist at Salesforce AI Research, about jagged intelligence and what it means for enterprise companies that need to make sure AI agents are not just capable, but consistent and accurate in their responses and actions. 

AI agents, he explained, need four critical components: memory, reasoning, interactions with the real world, and a way to communicate—through voice or text, for example. While large language models (LLMs) are getting more and more powerful in the number of tasks and types of research they can do, they still can’t reason very well. That is, they don’t have much common sense. 
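As a rough, purely illustrative sketch of how those four pieces might fit together in code (the class and method names below are my own invention, not Salesforce's Agentforce API), an agent loop could look something like this in Python:

from dataclasses import dataclass, field

@dataclass
class SketchAgent:
    # Hypothetical skeleton of the four components Savarese describes.
    memory: list = field(default_factory=list)   # past steps, conversations, outcomes

    def reason(self, goal: str) -> str:
        # Placeholder for planning over the goal plus whatever is in memory.
        return f"plan for {goal!r} given {len(self.memory)} remembered steps"

    def act(self, plan: str) -> str:
        # Placeholder for interacting with the real world (tools, APIs, records).
        return f"executed: {plan}"

    def communicate(self, result: str) -> str:
        # Placeholder for the voice or text channel back to the user.
        return result

    def handle(self, goal: str) -> str:
        plan = self.reason(goal)
        result = self.act(plan)
        self.memory.append((goal, plan, result))  # remember what was done
        return self.communicate(result)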

One example Savarese noted from his Salesforce team’s research is a famous riddle: 

A man has to get a fox, a chicken, and a sack of corn across a river.
He has a rowboat, and it can only carry him and three other things.
If the fox and the chicken are left together without the man, the fox will eat the chicken.
If the chicken and the corn are left together without the man, the chicken will eat the corn.
How does the man do it in the minimum number of steps?

The answer, if you haven’t already figured it out, is that the man can take the fox, chicken, and sack of corn with him in a single trip, since the boat can carry him and three other things. 

For some reason, this is confounding to even the most advanced LLM. Testing the riddle on a ChatGPT model released last year, Savarese and his team found that the model could not come up with the right answer. Instead, it said:

1. Take the chicken across
2. Go back alone
3. Take the fox across
4. Bring the chicken back
5. Take the corn across
6. Go back alone
7. Finally, take the chicken across again
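
For what it's worth, a tiny breadth-first search over the puzzle's states, written here purely for illustration (it is not Salesforce's evaluation code), confirms that with a boat that carries the man plus three items, the shortest plan is a single crossing:

from collections import deque
from itertools import combinations

ITEMS = frozenset({"fox", "chicken", "corn"})
CAPACITY = 3  # the boat holds the man plus up to three items

def unsafe(bank):
    # A bank without the man is unsafe if the fox is left with the chicken,
    # or the chicken is left with the corn.
    return {"fox", "chicken"} <= bank or {"chicken", "corn"} <= bank

def solve():
    # State: (items still on the start bank, man's position: 0 = start, 1 = far).
    start, goal = (ITEMS, 0), (frozenset(), 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, man), plan = queue.popleft()
        if (left, man) == goal:
            return plan
        here = left if man == 0 else ITEMS - left
        for size in range(CAPACITY + 1):
            for cargo in map(frozenset, combinations(sorted(here), size)):
                new_left = left - cargo if man == 0 else left | cargo
                behind = new_left if man == 0 else ITEMS - new_left
                if unsafe(behind):
                    continue  # the man can't leave an unsafe bank behind
                state = (new_left, 1 - man)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, plan + [sorted(cargo)]))

print(solve())  # [['chicken', 'corn', 'fox']]: one crossing with all three items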

This “common sense” issue with LLMs is why, Savarese said, he doesn’t believe getting to AGI (artificial general intelligence, generally defined as when AI can match or surpass human capabilities across virtually all cognitive tasks) will be the most important metric—particularly for companies that don’t need a “genius” AI agent but desperately need a reliable one. 

“AGI is a moving target–it’s very hard to define exactly what it means,” he said. “Every time, there is some new task being introduced so they can move the finish line further ahead.” 

For large companies adopting AI agents, he proposed a better benchmark for AI capabilities, which he calls Enterprise General Intelligence (EGI). Intelligence, he explained, is not the only important metric. The other one is consistency: “For the enterprise, you need to have an agent that is very stable in performing.” Salesforce defines EGI, therefore, as AI designed for business: highly capable, consistently reliable, and built to integrate seamlessly with existing systems—even in complex scenarios.

That is far easier to establish than AGI, Savarese maintained, with the finish line measured along two axes: the model’s capability to solve complex business problems, and its consistency in doing so. “It’s not about solving STEM questions and theorems,” he said. “It’s about really addressing those critical business challenges.” 

If you want to build a useful AI agent that can assist a sales representative, for example, it needs to remember previous steps that it took. It needs to take into account previous conversations and outcomes. It also needs to remain consistent and accurate in a way that is trusted. “As we achieve both, we can achieve EGI,” he said. 
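
To make those two axes concrete, here is a hypothetical Python sketch (the scoring function, task format, and stability threshold are my own illustration, not Salesforce's published EGI methodology): run the same agent several times on a fixed set of business tasks, then report how often it gets them right and how stable its answers are across runs.

from collections import Counter

def score_agent(run_agent, tasks, trials=5):
    # run_agent(prompt) -> answer is a hypothetical callable wrapping the agent.
    # Each task is a dict with a "prompt" and an expected "answer".
    correct = stable = 0
    for task in tasks:
        answers = [run_agent(task["prompt"]) for _ in range(trials)]
        top_answer, top_count = Counter(answers).most_common(1)[0]
        # Capability axis: does the most common answer match the expected one?
        if top_answer == task["answer"]:
            correct += 1
        # Consistency axis: does the agent give the same answer on nearly every run?
        if top_count / trials >= 0.8:
            stable += 1
    n = len(tasks)
    return {"capability": correct / n, "consistency": stable / n}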

That said, for now agents are still a work in progress, he cautioned, which is why on Salesforce’s Agentforce platform—for helping companies design, build, and deploy autonomous AI agents—customers can access trust and security filters that can block agents from performing certain tasks and actions. 

But going forward, Savarese said his research team is focused on figuring out how AI models can develop more common sense at their core in the first place. After all, no company wants its AI agent to make three trips instead of one!

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

Anthropic unveils its most powerful models yet. Anthropic unveiled its latest generation of “frontier,” or cutting-edge, AI models, Claude Opus 4 and Claude Sonnet 4, during its first conference for developers on Thursday in San Francisco. The AI startup, valued at over $61 billion, said in a blog post that the new, highly anticipated Opus model is “the world’s best coding model,” and “delivers sustained performance on long-running tasks that require focused effort and thousands of steps.” AI agents powered by the new models can analyze thousands of data sources and perform complex actions. Dianne Penn, a member of Anthropic’s technical staff, told Fortune that “this is actually a very large change and leap in terms of what these AI systems can do,” particularly as the models advance from serving as “copilots,” or assistants, to “agents,” or virtual collaborators that can work autonomously on behalf of the user. 

OpenAI announces Stargate UAE, its first OpenAI for Countries partnership. OpenAI announced Stargate UAE, the first partnership under its OpenAI for Countries initiative, which was unveiled nearly three weeks ago to develop AI infrastructure around the world. The partnership, made in close coordination with the U.S. government, is intended to bolster global AI capabilities while aligning with democratic values and open markets. Stargate UAE will feature a 1-gigawatt AI computing cluster, with an initial 200 megawatts expected to be operational by 2026.

JPMorgan to lend more than $7 billion for OpenAI data center. JPMorgan has agreed to provide over $7 billion in financing to the companies building OpenAI’s massive AI data center campus in Abilene, Tex., according to two people with direct knowledge of the arrangement, The Information reported today. The previously undisclosed deal underscores the growing appetite among lenders and investors to fund the physical infrastructure powering next-generation AI development. 

Meta introduces program to support early-stage U.S. startups. This week Meta introduced the Llama Startup Program, a new initiative designed to support early-stage U.S. startups building generative AI applications with Llama. The goal is to help startups accelerate development, reduce costs, and leverage Llama’s capabilities to deliver innovative solutions across industries. The program offers selected startups up to $6,000 monthly in credits to access cloud servers for up to six months, along with direct technical support from Llama experts. Eligible startups must be incorporated in the U.S. with less than $10 million in funding and have at least one developer in order to apply. 

FORTUNE ON AI

OpenAI’s hiring of legendary former Apple design boss Jony Ive is a $6.5 billion move to dominate the AI age by creating the next iPhone —by Verne Kopytoff, Jeremy Kahn and Alexei Oreskovic

Gemini Diffusion was the sleeper hit of Google I/O and some say its blazing speed could reshape the AI model wars —by Sharon Goldman

Andy Jassy makes the case for Amazon’s extraordinary AI spending, promising shareholders they will end up ‘very happy’ —by Jason Del Rey

Microsoft’s and Google’s dueling developer conferences reveal opposite AI strategies—and a big weakness for one company —by Jeremy Kahn

AI CALENDAR

June 9-13: WWDC, Cupertino, Calif.

July 13-19: International Conference on Machine Learning (ICML), Vancouver

July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Dec. 2-7: NeurIPS, San Diego

EYE ON AI NUMBERS

39%

That’s how many U.S. workers believe businesses could benefit from using AI avatars to attend meetings on employees’ behalf, according to Colossyan’s new State of AI Avatars Report. If you’re surprised by how many people seem to be open to having an AI avatar replace them at a meeting, here are some other tidbits from the study: 

—47% would choose an AI tutor over a human one.
—61% prefer real-time help from an AI avatar over a human agent.
—16% are open to dating an AI avatar in a virtual setting, rising to 23% among Gen Z.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.