BabyAGI is taking Silicon Valley by storm. Should we be scared?

April 15, 2023, 12:15 PM UTC
Photo of an infant dressed in a devil costume with a pitchfork.
Silicon Valley is abuzz about a new A.I. trend involving something called 'babyAGI.' Should we be worried? Experts say there are risks.
Photo illustration by Getty Images

Suddenly, Silicon Valley’s technorati are buzzing about babyAGI. It’s a moniker that manages to sound both cute and scary at the same time, a bit like mogwais in the cult classic comedy-horror film Gremlins. But what exactly is babyAGI?

First of all, it’s not quite as scary as it sounds. Despite the name, babyAGI is definitely not AGI—an acronym that stands for artificial general intelligence and refers to the sort of all-powerful A.I. that is a staple of science fiction.

AGI is the stated goal of some A.I. companies, including OpenAI and Alphabet’s DeepMind. It is the thing that OpenAI co-founder and CEO Sam Altman says people are justified in being afraid of, that Elon Musk has said keeps him up at night, and that led some people to call for a six-month pause on development of more powerful A.I. software. But AGI doesn’t exist yet, and there are plenty of people in computer science who think AGI is impossible.

So OK, babyAGI is not Skynet in diapers. But it is still an impressive and important new addition to the A.I. world. BabyAGI is basically software that turns GPT-4 (OpenAI’s latest large language model, which normally just outputs words) into a useful digital assistant that can complete tasks and take actions across the internet. Instead of just receiving text answers to prompts from GPT-4, with BabyAGI you can do things like plan and automatically execute a campaign to grow your Twitter following or create and run a content marketing business.

BabyAGI is actually just one popular version of “AutoGPT,” a category of open-source software that can do these kinds of things. Both AutoGPT and BabyAGI are just a few weeks old, highlighting how incredibly fast innovation—and not insignificant new risks—are being born in the era of LLMs.

“We’re still in the early days of Autonomous Agents, but there’s definitely an exciting opportunity here,” Nathan Benaich, the founder of London-based venture capital firm Air Street Capital and a prominent early-stage investor in A.I. companies, said.

Where did it come from?

The first AutoGPT, called simply “Auto-GPT,” was created by Toran Bruce Richards. Richards is the Edinburgh, Scotland-based founder and lead developer of Significant Gravitas, a company that seeks to take software techniques from the video game industry and apply them to non-gaming use cases, according to Richards’s LinkedIn page.

Richards created Auto-GPT and uploaded it to his GitHub page on March 30. Since then, many other developers have created their own versions. AutoGPTs use several application programming interfaces (APIs) to tie together GPT-4 with LangChain, an open-source software tool that makes it easy to link a series of prompts (the inputs from which an LLM bases its responses) together, and Pinecone, a vector database that can be used as a kind of memory for GPT-4, allowing it to refer back to external documents or to its own previous responses to prompts.
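The core loop these tools implement is surprisingly small. The sketch below is a simplified illustration in Python, not the actual Auto-GPT or BabyAGI code: `call_llm` is a stand-in for a real GPT-4 API call, the `memory` list stands in for a vector database like Pinecone, and the task-parsing step is deliberately naive.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    # Stand-in for a real GPT-4 API call; a real agent would send
    # `prompt` to OpenAI and parse the model's text response.
    return f"Result for: {prompt}"

def run_agent(objective: str, first_task: str, max_steps: int = 3) -> list:
    """Minimal BabyAGI-style loop: execute a task, store the result,
    then derive follow-up tasks from that result."""
    tasks = deque([first_task])
    memory = []  # stands in for a vector store such as Pinecone
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # 1. Execution step: complete the current task in light of the objective.
        result = call_llm(f"Objective: {objective}. Task: {task}")
        memory.append((task, result))
        # 2. Task-creation step: ask the model what to do next.
        follow_up = call_llm(f"Given result '{result}', list follow-up tasks")
        tasks.append(follow_up)  # a real agent would parse this into several tasks
        # 3. A prioritization step would reorder `tasks` here.
    return memory

memory = run_agent("grow a Twitter following", "draft a content plan")
```

Each pass through the loop is one or more billable API calls, which is why, as discussed below, these agents can quietly run up real costs.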

Thousands of miles away in Seattle, Yohei Nakajima, a partner at early-stage venture capital firm Untapped Capital, was playing around with the latest generative A.I. tools and created what would soon be named BabyAGI.

Nakajima had noticed people trying to use OpenAI’s ChatGPT as a startup “co-founder”—generating business ideas, writing a business plan, drafting marketing materials—a phenomenon that has been dubbed “HustleGPT.” He thought it might be possible to automate the entire process and create a fully-autonomous company run by GPT-4, he told Fortune in an email. Nakajima created a prototype and tweeted about it. A friend who saw his post dubbed the idea “babyAGI”—and the name stuck. Nakajima said it was only then that he realized that the system he had created would work better as a task-oriented autonomous agent rather than an autonomous startup founder.

Nakajima said he is primarily an investor, not a software developer, and is new to GitHub. So he was surprised that other developers started pulling and running his code.

BabyAGI proved especially popular in part because its code is simpler than Richards’ Auto-GPT—and because, well, the name was kind of whimsical. Nakajima’s original version didn’t actually execute its responses—but several developers, including a team from LangChain itself, have now created their own versions that actually act on the internet.

Since he posted it, people have tweeted out videos of themselves using babyAGI to run an autonomous sales prospecting operation for a business. Others have used Auto-GPT to research new products and prepare for podcasts. And several people have used versions to autonomously develop, test and debug software.

So far, all these AutoGPTs, including babyAGI, are freely available to use and run (although each time the software makes an OpenAI API call, the user is charged). Their open-source nature could pose a threat to a number of high-profile, well-funded startups that are trying to create commercial A.I. assistants. These include Adept AI, which counted a number of OpenAI and Google alumni among its founders and has raised $415 million in venture capital funding to date, and Inflection AI, which was co-founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman. It has received $225 million in venture capital funding and is reportedly in the process of trying to raise as much as $675 million more.

Even a baby can be dangerous

While AutoGPTs are not AGI, they do carry some risks. For one thing, because they run in continuous loops, running multiple chains of prompts to GPT-4, they can quickly run up substantial bills with OpenAI.

“As with any product or service, it’s important to understand the costs of services you are using. We communicate this risk clearly, and urge others to do the same,” Nakajima said.

There are other dangers too. AutoGPTs can both write and execute computer code, so they could be used to run cyberattacks or fraud schemes. They could also be used to power misinformation mills, by generating false and misleading content, and automatically directing its dissemination across social media.

There are more mundane dangers too. If a user isn’t careful about what they ask the autonomous bots to do, the bots might end up taking actions on the user’s behalf—like buying items or making appointments—that the user never intended.

“LLMs are quite limited, but this new class of systems—let’s call them GPT-based agents—are potentially much more powerful,” Oren Etzioni, an emeritus computer scientist at the University of Washington who was also the founding CEO of the Allen Institute for Artificial Intelligence in Seattle, said. “Moreover, it’s easy to imagine scenarios where they would be difficult to control.”

For now, he said the potential for AutoGPTs to accidentally run up large charges from OpenAI is the most immediate risk. But he said that because AutoGPTs were a step towards systems that could act autonomously across the internet, “their development merits careful assessment.”

Benaich said that most of the AutoGPTs currently available rely on costly API calls to OpenAI but that in the future it might be possible to base these kinds of agents on free, open-source LLMs that are as capable as GPT-4 is today. But he said that not all LLMs may be created equal. “Two things are going to matter hugely from here: the first is knowing what LLM agents are actually best suited for, and the second is seeing how robust they actually are, especially when they’ll meet the long tail of tasks from being made widely available,” he said.

