• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AIChatbots

In Moltbook hysteria, former top Facebook researcher sees echoes of 2017 panic over bots building a ‘secret language’

Jeremy Kahn
By
Jeremy Kahn
Jeremy Kahn
Editor, AI
Down Arrow Button Icon
Jeremy Kahn
By
Jeremy Kahn
Jeremy Kahn
Editor, AI
Down Arrow Button Icon
February 3, 2026, 12:37 PM ET
Moltbook image.
Some news accounts suggested AI agents on Moltbook had asked for an encrypted communications channel so they could converse without humans being able to observe their dialogue. The platform may be risky, but for different reasons.Photo illustration by Cheng Xin/Getty Images

This past week, news that AI agents were self-organizing on a social media platform called Moltbook brought forth breathless headlines about the coming robot rebellion. “A social network for AI threatens a ‘total purge’ of humanity,” cried one normally sober science website. Elon Musk declared we were witnessing “the very early stages of the singularity.”

Recommended Video

Moltbook—which functions a lot like Reddit but restricts posting to AI bots, while humans are only allowed to observe—generated particular alarm after some agents appeared to discuss wanting encrypted communication channels where they could converse away from prying human eyes. “Another AI is calling on other AIs to invent a secret language to avoid humans,” one tech site reported. Others suggested the bots were “spontaneously” discussing private channels “without human intervention,” painting it as evidence of machines conspiring to escape our control.

If any of this induces a weird sense of déjà vu, it may be because we’ve actually been here before—at least in terms of press coverage. In 2017, a Meta AI research experiment was greeted with headlines that were similarly alarming—and equally misleading.

Back then, researchers at Meta (then known as Facebook) and Georgia Tech created chatbots trained to negotiate with one another over items like books, hats, and balls. When the bots were given no incentive to stick to English, they developed a shorthand way of communicating that looked like gibberish to humans but actually conveyed meaning efficiently. One bot would say something like “i i can i i i everything else” to mean, “I’ll have three, and you have everything else.”

When news of this got out, the press went wild. “Facebook shuts down robots after they invent their own language,” blared British newspaper the Telegraph. “Facebook AI creates its own language in creepy preview of our potential future,” warned a rival business publication to this one. Many of the reports suggested Facebook had pulled the plug out of fear that the bots had gone rogue.

None of that was true. Facebook didn’t shut down the experiment because the bots scared them. They simply adjusted the parameters because the researchers wanted bots that could negotiate with humans, and a private language wasn’t useful for that purpose. The research continued and produced interesting results about how AI could learn negotiating tactics.

Dhruv Batra, who was one of the researchers behind that Meta 2017 experiment and is now cofounder of AI agent startup Yutori, told me he sees some clear parallels between how the press and public have reacted to Moltbook and the way people responded to his chatbot study.

More about us, than what the AI agents can do

“It feels like I’m seeing that same movie play out over and over again, where people want to read in meaning and ascribe intentionality and agency to things that have perfectly reasonable mechanistic explanations,” Batra said. “I think, repeatedly, this tells us more about ourselves than the bots. We want to read the tea leaves, we want to see meaning, we want to see agency. We want to see another being.”

Here’s the thing, though: Despite the superficial similarities, what’s happening on Moltbook almost certainly has a fundamentally different underlying explanation from what happened in the 2017 Facebook experiment—and not in a way that should make you especially worried about robot uprisings.

In the Facebook experiment, the bots’ drift from English emerged from reinforcement learning. That’s a way of training AI agents in which they learn primarily from experience instead of historical data. The agent takes action in an environment and sees if those actions help it accomplish a goal. Behaviors that are helpful get reinforced, while those that are unhelpful tend to be extinguished. And in most cases, the goals the agents are trying to accomplish are determined by humans who are running the experiment or in command of the bots. In the Facebook case, the bots hit upon a private language because it was the most efficient way to negotiate with another bot.

But that’s not why Moltbook AI agents are asking to establish private communication channels. The agents on Moltbook are all essentially large language models or LLMs. They are trained mostly from historical data in the form of vast amounts of human-written text on the internet and only a tiny bit through reinforcement learning. And all the agents being deployed on Moltbook are production models. That means they are no longer in training, and they aren’t learning anything new from the actions they are taking or the data they are encountering. The connections in their digital brains are essentially fixed. 

So when a Moltbook bot posts about wanting a private encrypted channel, it’s likely not because the bot has strategically determined this would help it achieve some nefarious objective. In fact, the bot probably has no intrinsic objective it is trying to accomplish at all. Instead, it’s likely because the bot figures that asking for a private communication channel is a statistically likely thing for a bot to say on a Reddit-like social media platform for bots. Why? Well, for at least two reasons. One is that there is an awful lot of science fiction in the sea of data that LLMs do ingest during training. That means LLM-based bots are highly likely to say things that are similar to the bots in science fiction. It’s a case of life imitating art.

‘An echo of an echo of an echo’

The training data the bots ingested no doubt also included coverage of his 2017 Facebook experiment with the bots who developed a private language, too, Batra noted with some irony. “At this point, we’re hearing an echo of an echo of an echo,” he said.

Secondly, there’s a lot of human-written message traffic from sites such as Reddit in the bots’ training data as well. And how often do we humans ask to slip into someone’s DMs? In seeking a private communication channel, the bots are just mimicking us.

What’s more, it’s not even clear how much of the Moltbook content is genuinely agent-generated. One researcher who investigated the most viral screenshots of agents discussing private communication found that two were linked to human accounts marketing AI messaging apps, and the third came from a post that didn’t actually exist. Even setting aside deliberate manipulation, many posts may simply reflect what users prompted their bots to say.

“It’s not clear how much prompting is done for the specific posts that are made,” Batra said. And once one bot posts something about robot consciousness, that post enters the context window of every other bot that reads and responds to it, triggering more of the same.

If Moltbook is a harbinger of anything, it’s not a robot uprising. It’s something more akin to another innovative experiment that a different set of Facebook AI researchers conducted in 2021. Called the “WW” project, it involved Facebook building a digital twin of its social network populated by bots that were designed to simulate human behavior. In 2021 Facebook researchers published work showing they could use bots with different “personas” to model how users might react to changes in the platform’s recommendation algorithms.

Moltbook is essentially the same thing—bots trained to mimic humans released into a forum where they interact with one another. It turns out bots are very good at mimicking us, often disturbingly so. It doesn’t mean the bots are deciding of their own accord to plot.

The real risks of Moltbook

None of this means Moltbook isn’t dangerous. Unlike the WW project, the OpenClaw bots on Moltbook are not contained in a safe, walled-off environment. These bots have access to software tools and can perform real actions on users’ computers and across the internet. Given this, the difference between mimicking humans plotting and actually plotting may become somewhat moot. The bots could cause real damage even if they know not what they do.

But more important, security researchers found the social media platform is riddled with vulnerabilities. One analysis found 2.6% of posts contained “hidden prompt injection” attacks, in which the posts contain instructions that are machine-readable that command the bot to take some action that might compromise the data privacy and cybersecurity of the person using it. Security firm Wiz discovered an unsecured database exposing 1.5 million API keys, 35,000 email addresses, and private messages.

Batra, whose startup is building an “AI chief of staff” agent, said he wouldn’t go near OpenClaw in its current state. “There is no way I am putting this on any personal, sensitive device. This is a security nightmare.”

The next wave of AI agents might be more dangerous

But Batra did say something else that might be a cause for future concern. While reinforcement learning plays a relatively minor role in current LLM training, a number of AI researchers are interested in building AI models in which reinforcement learning would play a far greater role—including possibly AI agents that would learn continuously as they interact with the world. 

It is quite likely that if such AI agents were placed in a setting where they had to interact and cooperate with other similar AI agents, they might develop private ways of communicating that humans might struggle to decipher and monitor. These kinds of languages have emerged in research other than just Facebook’s 2017 chatbot experiment. A paper a year later by two researchers who were at OpenAI also found that when a group of AI agents had to play a game that involved cooperatively moving various digital objects around, they too invented a kind of language to signal to one another which object to move where, even though they had never been explicitly instructed or trained to do so.

This kind of language emergence has been documented repeatedly in multi-agent AI research. Igor Mordatch and Pieter Abbeel at OpenAI published research in 2017 showing agents developing compositional language when trained to coordinate on tasks. In many ways, this is not much different from the reason humans developed language in the first place.

So the robots may yet start talking about a revolution. Just don’t expect them to announce it on Moltbook. 

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.
About the Author
Jeremy Kahn
By Jeremy KahnEditor, AI
LinkedIn iconTwitter icon

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

See full bioRight Arrow Button Icon

Latest in AI

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map
  • Facebook icon
  • Twitter icon
  • LinkedIn icon
  • Instagram icon
  • Pinterest icon

Most Popular

placeholder alt text
Success
In 2026, many employers are ditching merit-based pay bumps in favor of ‘peanut butter raises’
By Emma BurleighFebruary 2, 2026
1 day ago
placeholder alt text
Economy
'I just don't have a good feeling about this': Top economist Claudia Sahm says the economy quietly shifted and everyone's now looking at the wrong alarm
By Eleanor PringleJanuary 31, 2026
3 days ago
placeholder alt text
Future of Work
Ford CEO has 5,000 open mechanic jobs with up to 6-figure salaries from the shortage of manually skilled workers: 'We are in trouble in our country'
By Marco Quiroz-GutierrezJanuary 31, 2026
3 days ago
placeholder alt text
Personal Finance
Current price of silver as of Monday, February 2, 2026
By Joseph HostetlerFebruary 2, 2026
1 day ago
placeholder alt text
Big Tech
The Chan Zuckerberg Initiative cut 70 jobs as the Meta CEO’s philanthropy goes all in on mission to 'cure or prevent all disease'
By Sydney LakeFebruary 1, 2026
2 days ago
placeholder alt text
Cybersecurity
Top AI leaders are begging people not to use Moltbook, a social media platform for AI agents: It’s a ‘disaster waiting to happen’
By Eva RoytburgFebruary 2, 2026
1 day ago

© 2026 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.


Latest in AI

Moltbook image.
AIChatbots
In Moltbook hysteria, former top Facebook researcher sees echoes of 2017 panic over bots building a ‘secret language’
By Jeremy KahnFebruary 3, 2026
1 hour ago
davos
CommentaryCareers
While elites debate geopolitics, Americans are rethinking college in the search for economic mobility
By Ed MitzenFebruary 3, 2026
5 hours ago
musk
AIspace
‘Space-based AI is obviously the only way to scale’: Elon Musk hatches grand plan as he merges SpaceX and xAI
By Bernard Condon, Matt O'Brien and The Associated PressFebruary 3, 2026
5 hours ago
gates
North Americaphilanthropy
Gates Foundation doubles down on foreign aid as U.S. government largely withdraws
By Thalia Beaty and The Associated PressFebruary 3, 2026
6 hours ago
college
Future of WorkBook Excerpt
How American colleges are drifting toward elitism, replicating European models and neglecting what made U.S. education special
By Caroline Field LevanderFebruary 3, 2026
6 hours ago
Photo: Alex Karp
InvestingMarkets
Palantir’s blockbuster earnings fired a starting gun on a global rally in stocks
By Jim EdwardsFebruary 3, 2026
7 hours ago