• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AIEye on AI

Moltbook is scary—but not for the reasons so many headlines said

Jeremy Kahn
By
Jeremy Kahn
Jeremy Kahn
Editor, AI
Down Arrow Button Icon
Jeremy Kahn
By
Jeremy Kahn
Jeremy Kahn
Editor, AI
Down Arrow Button Icon
February 3, 2026, 3:25 PM ET
Image of Moltbook app logo on a smart phone with another image of the Moltbook logo in the background.
Moltbook, the social media platform for AI agents, generated a lot of frightening headlines. People are right to be scared, but more because of what Moltbook says about the capabilities of people than what it says about the capabilities of AI bots.Photo illustration by Cheng Xin/Getty Images

Hello and welcome to Eye on AI. In this edition…why you really should be worried about Moltbook…OpenAI eyes an IPO…Elon Musk merges SpaceX and xAI…Novices don’t benefit as much from AI as people think…and why we need AI regulation now.

Recommended Video

This week, everyone in AI—and a lot of people outside of it—was talking about Moltbook. The social media platform created for AI agents was a viral sensation. The phenomenon had a lot of people, even a fair number of normally sober and grounded AI researchers, wondering aloud about how far we are from sci-fi “takeoff” scenarios where AI bots self-organize, self-improve, and escape human control.

Now, it appears that a lot of the alarmism about Moltbook was misplaced. First of all, it isn’t clear how many of the most sci-fi-like posts on Moltbook were spontaneously generated by the bots and how many only came about because human users prompted their OpenClaw agents to output them. (The bots on Moltbook were all created using the hit OpenClaw, which is essentially an open-source agentic “harness”—software that enables AI agents to use a lot of other software tools—that can be yoked to any underlying AI model.) It’s even possible that some of the posts were actually from humans posing as bots.

Second, there’s no evidence the bots were actually plotting together to do anything nefarious, rather than simply mimicking language about plotting that they might have picked up in their training, which includes lots of sci-fi literature as well as the historical record of a lot of sketchy human activity on social media.

As I pointed out in a story for Fortune earlier today, many of the fear-mongering headlines around Moltbook echoed those that attended a 2017 Facebook experiment in which two chatbots developed a “secret language” to communicate with one another. Then, as now, a lot of my fellow journalists didn’t let the facts get in the way of a good story. Neither that older Facebook research nor Moltbook presents the kind of Skynet-like dangers that some of the coverage suggests.

Now for the bad news

But that’s kind of where the good news ends. Moltbook shows that when it comes to AI agents, we are in the Wild Wild West. As my colleague Bea Nolan points out in this excellently reported piece, Moltbook is a cybersecurity nightmare, chock full of malware, cryptocurrency pump and dump scams, and hidden prompt injection attacks—i.e. machine readable instructions, sometimes not easily detected by people, that try to hijack an AI agent into doing something it’s not supposed to do. According to security researchers, it seems that some OpenClaw users suffered significant data breaches after allowing their AI agents on to Moltbook.

Prompt injection is an unsolved cybersecurity challenge for all AI agents that can access the internet right now. And it’s why many AI experts said they are extremely careful about what software, tools, and data they allow AI agents to access. Some only let agents access the internet if they are in a virtual machine where they can’t gain access to important information, like passwords, work files, email, or banking information. But on the other hand, these security precautions make AI agents a lot less useful. The whole reason OpenClaw took off is that people wanted an easy way to spin up agents to do stuff for them.

Then there are the big AI safety implications. Just because there’s no evidence that OpenClaw agents have any independent volition, doesn’t mean that putting them in an uncontrolled conversation with other AI agents is a great idea. Once these agents have access to tools and the internet, it doesn’t really matter in some ways if they have any understanding of their own actions or are conscious. Merely by mimicking sci-fi scenarios they’ve ingested during training, it is possible that the AI agents could engage in activity that could cause real harm to a lot of people—engaging in cyberattacks, for instance. (In essence, these AI agents could function in ways that are not that different from super-potent “worm” computer viruses. No one thinks the ransomware WannaCry was conscious. It did massive worldwide damage nonetheless.)

Why Yann LeCun was wrong…about people, not AI

A few years ago, I attended an event at the Facebook AI Research Lab in Paris at which Yann LeCun, who was Meta’s chief AI scientist at the time, spoke. LeCun, who recently left Meta to launch his own AI startup, has always been skeptical of “takeoff” scenarios in which AI escapes human control. And at the event, he scoffed at the idea that AI would ever present existential risks.

For one thing, LeCun thinks today’s AI is far too dumb and unreliable to ever do anything world-jeopardizing. But secondly, LeCun found these AI “takeoff” scenarios insulting to AI researchers and engineers as a professional class. We aren’t dumb, LeCun argued. If we ever build anything where there was the remotest chance of AI escaping human control, we’d always build it in an “airlocked” sandbox, without access to the internet, and with a kill switch that AI couldn’t disable. In LeCun’s telling, the engineers would always be able to take an ax to the computer’s power cord before the AI could figure out how to break out of its digital cage.

Well, that may be true of the AI researchers and engineers who work for big companies, like Meta or Google DeepMind, or OpenAI or Anthropic for that matter. But now AI—thanks to the rise of coding agents and assistants—has democratized the creation of AI itself. Now a world full of independent developers can spin up AI agents. Peter Steinberger who created OpenClaw is an independent developer. Matt Schlicht, who created Moltbook, is an independent entrepreneur who vibe coded the social platform. And, contra LeCun, independent developers have consistently demonstrated a willingness to chuck AI systems out of the sandbox and into the wild, if only to see what happens…just for the LOLs.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

AI is changing the CEO’s role—and could lead to a changing of the guard—by Phil Wahba

OpenAI launches Codex app to bring its coding models, which were used to build viral OpenClaw, to more users—by Beatrice Nolan

Exclusive: Anthropic announces partnerships with Allen Institute and Howard Hughes Medical Institute as it bets AI can make science more efficient—by Sharon Goldman

Exclusive: Longtime Google DeepMind researcher David Silver leaves to found his own AI startup—by Jeremy Kahn

AI IN THE NEWS

OpenAI lays groundwork for IPO in 2026 in race with Anthropic, SpaceX. OpenAI is laying the groundwork for a fourth-quarter IPO, holding informal talks with Wall Street banks and expanding its finance team as it races to be the first major generative-AI startup to go public ahead of rival Anthropic, the Wall Street Journal reported. The move comes despite significant challenges, including heavy losses, intensifying competition from Google, looming litigation from cofounder Elon Musk, and investor concerns about how OpenAI will finance hundreds of billions of dollars in AI infrastructure and chip commitments. Executives fear Anthropic—whose revenues are surging and which has signaled openness to an IPO this year—could beat them to market, while other tech giants such as SpaceX are also weighing blockbuster listings that could compete for investor attention.

OpenAI also in talks to raise up to $50 billion in pre-IPO round. Amazon is in talks to invest up to $50 billion in OpenAI, with CEO Andy Jassy and OpenAI chief Sam Altman holding direct discussions as part of a potential funding round that could total around $100 billion, CNBC reported. The investment would be notable given that Amazon has committed $8 billion to OpenAI rival Anthropic. The deal could include agreements for OpenAI to use Amazon’s AI chips and cloud infrastructure, which Anthropic is currently using too. The talks come as Amazon accelerates spending on AI and data centers—while cutting jobs elsewhere—and as OpenAI seeks other strategic investors including Microsoft, Nvidia and SoftBank ahead of a possible IPO.

Elon Musk merges SpaceX with xAI. SpaceX acquired xAI, folding Elon Musk’s cash-hungry AI startup into his space company in a deal that values the combined business at more than $1 trillion and cements SpaceX as the world’s most valuable private company. The merger gives xAI a financial lifeline. But SpaceX is planning an IPO as early as June, hoping to raise about $50 billion, and the merger with xAI could make it harder to win over investors who were excited about a pure play space company and may be concerned about xAI’s hefty losses and intense competition with other AI vendors. Musk says a key motivation is building space-based data centers to power future AI, a vision that excites some investors but raises technical and financial issues. Read more from the New York Times here.

Former OpenAI researcher launches another ‘neolab’ startup, seeks big fundraise. Core Automation, a new AI startup founded by former OpenAI research vice president Jerry Tworek, is seeking to raise between $500 million and $1 billion to build AI models using approaches it believes incumbents like OpenAI and Anthropic are underemphasizing, The Information reported. The company plans to rethink core AI training methods—including potentially moving beyond transformers and the gradient descent method used to train almost all neural networks—to enable continual learning, where models adapt as they are used. This requires  less data and computing power than training the model in huge training runs and then locking its neural network weights in place. The effort adds to a growing wave of heavily funded AI “neolabs” betting that a fundamental overhaul of today’s model-development techniques is needed to unlock major breakthroughs, despite many having little or no revenue so far.

EYE ON AI RESEARCH

More evidence that AI may not be that good for novices. That’s the significance of new research from Anthropic, which performed randomized trials to see how coders gained mastery of a new programming library, comparing those who had an AI assistant to those who did not. Contrary to conventional wisdom, giving less experienced programmers access to AI did not significantly improve their productivity, while also impairing their ability to actually learn the coding library. Only if these junior programmers delegated the entire coding task to the AI did they see substantial productivity gains. But in those cases, the programmers also learned almost nothing about the coding library. The research can be found here on arxiv.org.

AI CALENDAR

Feb. 10-11: AI Action Summit, New Delhi, India.

Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 12-18: South by Southwest, Austin, Texas.

March 16-19: Nvidia GTC, San Jose, Calif.

BRAIN FOOD

We need comprehensive AI regulation—in the U.K. and in the U.S. Earlier today I attended the House of Lords for a roundtable discussion around the launch of a Parliamentary One Page report from Lord Chris Holmes of Richmond. Holmes has been advocating for a comprehensive AI bill he introduced almost three years ago, but which has so far failed to progress. His hope is to pressure the current U.K. government, which has promised an AI bill but repeatedly failed to introduce one, to bring forward legislation of its own. Moltbook is yet another reason why now is the time to do so. And that applies in the U.S. too, where the Trump Administration has actively resisted any regulation.

There is already ample evidence of AI causing people harm. AI chatbots have been implicated in a number of suicides and therapists are reporting more and more patients showing up with forms of psychosis that seem to have been sparked by interactions with AI chatbots. People have used AI to create nonconsensual sexualized deepfakes and spread them across the internet. People have been denied loans due to algorithms. Worse, they’ve been wrongly targeted for arrest because of them. And now OpenClaw and Moltbook provide a timely reminder that there are no rules, no governance, and no effective cybersecurity safeguards currently around AI agents.

Let’s not wait for a real AI disaster to act.

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.
About the Author
Jeremy Kahn
By Jeremy KahnEditor, AI
LinkedIn iconTwitter icon

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

See full bioRight Arrow Button Icon

Latest in AI

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Fortune Secondary Logo
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Fortune Crypto
  • Features
  • Leadership
  • Health
  • Commentary
  • Success
  • Retail
  • Mpw
  • Tech
  • Lifestyle
  • CEO Initiative
  • Asia
  • Politics
  • Conferences
  • Europe
  • Newsletters
  • Personal Finance
  • Environment
  • Magazine
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map
Fortune Secondary Logo
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map
  • Facebook icon
  • Twitter icon
  • LinkedIn icon
  • Instagram icon
  • Pinterest icon

Latest in AI

AIOpenAI
OpenAI changed its mission statement 6 times in 9 years. It finally removed the word “safely” as a core value when it restructured into a for-profit
By Catherina GioinoFebruary 23, 2026
6 minutes ago
Drawing of a toddler writing on the wall while her mother hides her eyes
AIOpenAI
AI agents that do your work while you sleep sound great. The reality is far messier—‘it’s like a toddler that needs to be overseen’
By Sharon GoldmanFebruary 23, 2026
2 hours ago
Photo of Dara Khosrowshahi
AIUber Technologies
Uber CEO predicts most rides could be robot operated within 20 years
By Jake AngeloFebruary 23, 2026
3 hours ago
broker
InvestingMarkets
Morgan Stanley hails rare ‘reindustrialization renaissance’ of AI economy—but it’s better for computers than humans
By Nick LichtenbergFebruary 23, 2026
5 hours ago
Photo: Inside a data center.
AIEconomics
Without AI spending, U.S. corporate investment in equipment would be negative, a decline that’s ‘worryingly broad-based,’ Pantheon analyst says 
By Jim EdwardsFebruary 23, 2026
5 hours ago
Photo of several people working on a presentation together
AICareers
Big Tech is shelling out up to $1 million for new hires who will never have to write a line of code
By Sydney LakeFebruary 23, 2026
5 hours ago

Most Popular

placeholder alt text
Innovation
The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents
By Sasha RogelbergFebruary 21, 2026
2 days ago
placeholder alt text
Economy
Scott Bessent has ’got a feeling’ that $175 billion raised under the IEEPA is lost to the American people for good
By Eleanor PringleFebruary 23, 2026
10 hours ago
placeholder alt text
Economy
A two-child household must earn $400,000 a year for childcare to be affordable, study says. 'It’s easy to see why birth rates are falling'
By Jason MaFebruary 22, 2026
1 day ago
placeholder alt text
Economy
Stocks sell off as traders wake up to the realization that Trump has 'highly punitive' options for new trade tariffs
By Jim EdwardsFebruary 23, 2026
11 hours ago
placeholder alt text
Startups & Venture
'I have a chip on my shoulder.' Phoebe Gates wants her $185 million AI startup Phia to succeed with 'no ties to my privilege or my last name'
By Sydney LakeFebruary 21, 2026
2 days ago
placeholder alt text
Economy
The Russian economy is eating its own muscle to survive as Putin’s war on Ukraine destroys future capacity, former central bank advisor says
By Jason MaFebruary 22, 2026
23 hours ago

© 2026 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.