• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
CybersecurityEye on AI

OpenClaw is the bad boy of AI agents. Here’s why security experts say you should beware

Sharon Goldman
By
Sharon Goldman
Sharon Goldman
AI Reporter
Down Arrow Button Icon
Sharon Goldman
By
Sharon Goldman
Sharon Goldman
AI Reporter
Down Arrow Button Icon
February 12, 2026, 12:31 PM ET
A laptop displaying the OpenClaw logo
OpenClaw gives AI agents real autonomy — and raises new security risks.Jakub Porzycki—NurPhoto via Getty Images)

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: The wild side of OpenClaw…Anthropic’s new $20 million super PAC counters OpenAI…OpenAI releases its first model designed for super-fast output…Anthropic will cover electricity price increases from its AI data centers…Isomorphic Labs says it has unlocked a new biological frontier beyond AlphaFold.

Recommended Video

OpenClaw has spent the past few weeks showing just how reckless AI agents can get — and attracting a devoted following in the process.

The free, open-source autonomous artificial intelligence agent, developed by Peter Steinberger and originally known as ClawdBot, takes the chatbots we know and love — like ChatGPT and Claude — and gives them the tools and autonomy to interact directly with your computer and others across the internet. Think sending emails, reading your messages, ordering tickets for a concert, making restaurant reservations, and much more — presumably while you sit back and eat bonbons.

The problem with giving OpenClaw extraordinary power to do cool things? Not surprisingly, it’s the fact that it also gives it plenty of opportunity to do things it shouldn’t, including leaking data, executing unintended commands, or being quietly hijacked by attackers, either through malware or through so-called “prompt injection” attacks. (Where someone includes malicious instructions for the AI agent in data that an AI agent might use.)

The excitement about OpenClaw, say two cybersecurity experts I spoke to this week, is that it has no restrictions, basically giving users largely unfettered power to customize it however they want.

“The only rule is that it has no rules,” said Ben Seri, cofounder and CTO at Zafran Security, which specializes in providing threat exposure management to enterprise companies. “That’s part of the game.” But that game can turn into a security nightmare, since rules and boundaries are at the heart of keeping hackers and leaks at bay.

Classic security concerns

The security concerns are pretty classic ones, said Colin Shea-Blymyer, a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Permission misconfigurations — who or what is allowed to do what — mean humans could accidentally give OpenClaw more authority than they realize, and attackers can take advantage.

For example, in OpenClaw, much of the risk comes from what developers call “skills,” which are essentially apps or plugins the AI agent can use to take actions — like accessing files, browsing the web, or running commands. The difference is that, unlike a normal app, OpenClaw decides on its own when to use these skills and how to chain them together, meaning a small permission mistake can quickly snowball into something far more serious.

“Imagine using it to access the reservation page for a restaurant and it also having access to your calendar with all sorts of personal information,” he said. “Or what if it’s malware and it finds the wrong page and installs a virus?”

OpenClaw does have security pages in its documentation and is trying to keep users alert and aware, Shea-Blymyer said. But the security issues remain complex technical problems that most average users are unlikely to fully understand. And while OpenClaw’s developers may work hard to fix vulnerabilities, they can’t easily solve the underlying issue of the agent being able to act on its own — which is what makes the system so compelling in the first place.

“That’s the fundamental tension in these kinds of systems,” he said. “The more access you give them, the more fun and interesting they’re going to be — but also the more dangerous.”

Enterprise companies will be slow to adopt

Zafran Security’s Seri admitted that there is little chance of squashing user curiosity when it comes to a system like OpenClaw, though he emphasized that enterprise companies will be much slower to adopt such an uncontrollable, insecure system. For the average user, he said, they should experiment as though they were working in a chemistry lab with a highly explosive material.

Shea-Blymyer pointed out that it’s a positive thing that OpenClaw is happening first at the hobbyist level. “We will learn a lot about the ecosystem before anybody tries it at an enterprise level,” he said. “AI systems can fail in ways we can’t even imagine,” he explained. “[OpenClaw] could give us a lot of info about why different LLMs behave the way they do and about newer security concerns.”

But while OpenClaw may be a hobbyist experiment today, security experts see it as a preview of the kinds of autonomous systems enterprises will eventually feel pressure to deploy.

For now, unless someone wants to be the subject of security research, the average user might want to stay away from OpenClaw, said Shea-Blymyer. Otherwise, don’t be surprised if your personal AI agent assistant wanders into very unfriendly territory.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Matt Shumer’s viral blog about AI’s looming impact on knowledge workers is based on flawed assumptions – by Jeremy Kahn

The CEO of Capgemini has a warning. You might be thinking about AI all wrong – by Kamal Ahmed

Google’s Nobel-winning AI leader sees a ‘renaissance’ ahead—after a 10- or 15-year shakeout – by Nick Lichtenberg

X-odus: Half of xAI’s founding team has left Elon Musk’s AI company, potentially complicating his plans for a blockbuster SpaceX IPO – by Beatrice Nolan

OpenAI disputes watchdog’s claim it violated California’s new AI safety law with latest model release – by Beatrice Nolan

AI IN THE NEWS

Anthropic's new $20 million super PAC counters OpenAI. According to the New York Times, Anthropic has pledged $20 million to a super PAC operation designed to back candidates who favor stronger AI safety and regulation, setting up a direct clash ahead of the midterm elections. The funding will flow through the dark-money nonprofit Public First Action and allied PACs, in opposition to Leading the Future, a super PAC backed by primarily by OpenAI president and cofounder Greg Brockman and venture firm  Andreessen Horowitz. While Anthropic avoided naming OpenAI directly, it warned that “vast resources” are being deployed to oppose AI safety efforts, highlighting a deepening divide within the AI industry over how tightly powerful models should be regulated — and signaling that the battle over AI governance is now playing out not just in labs and boardrooms, but at the ballot box.

Mustafa Suleyman plots AI ‘self-sufficiency’ as Microsoft loosens OpenAI ties. The Financial Times reported that Microsoft is pushing toward what its AI chief Mustafa Suleyman calls “true self-sufficiency” in artificial intelligence, accelerating efforts to build its own frontier foundation models and reduce long-term reliance on OpenAI, even as it remains one of the startup’s largest backers. In an interview, Suleyman said the shift follows a restructuring of Microsoft’s relationship with OpenAI last October, which preserved access to OpenAI’s most advanced models through 2032 but also gave the ChatGPT maker more freedom to seek new investors and partners — potentially turning it into a competitor. Microsoft is now investing heavily in gigawatt-scale compute, data pipelines, and elite AI research teams, with plans to launch its own in-house models later this year, aimed squarely at automating white-collar work and capturing more of the enterprise market with what Suleyman calls “professional-grade AGI.” 

OpenAI releases its first model designed for super-fast output. OpenAI has released a research preview of GPT-5.3-Codex-Spark, the first tangible product of its partnership with Cerebras, using the chipmaker’s wafer-scale AI hardware to deliver ultra-low-latency, real-time coding in Codex. The smaller model, a streamlined version of GPT-5.3-Codex, is optimized for speed rather than maximum capability, generating responses up to 15× faster so developers can make targeted edits, reshape logic, and iterate interactively without waiting for long runs to complete. Available initially as a research preview to ChatGPT Pro users and a small set of API partners, the release signals OpenAI’s growing focus on interaction speed as AI agents take on more autonomous, long-running tasks — with real-time coding emerging as an early test case for what faster inference can unlock.

Anthropic will cover electricity price increases from its AI data centers. Following a similar announcement by OpenAI last month, Anthropic announced yesterday that as it expands AI data centers in the U.S., it will take responsibility for any increases in electricity costs that might otherwise be passed on to consumers, pledging to pay for all grid connection and upgrade costs, bring new power generation online to match demand, and work with utilities and experts to estimate and cover any price effects; it also plans to invest in power-usage reduction and grid optimization technologies, support local communities around its facilities, and advocate for broader policy reforms to speed up and lower the cost of energy infrastructure development, arguing that building AI infrastructure shouldn’t burden everyday ratepayers.

Isomorphic Labs says it has unlocked a new biological frontier beyond AlphaFold. Isomorphic Labs, the Alphabet- and DeepMind-affiliated AI drug discovery company, says its new Isomorphic Labs Drug Design Engine represents a significant leap forward in computational medicine by combining multiple AI models into a unified engine that can predict how biological molecules interact with unprecedented accuracy. A blog post said that it more than doubled previous performance on key benchmarks and outpaced traditional physics-based methods for tasks like protein–ligand structure prediction and binding affinity estimation — capabilities the company argues could dramatically accelerate how new drug candidates are designed and optimized. The system builds on the success of AlphaFold 3, an advanced AI model released in 2024 that predicts the 3D structures and interactions of all life's molecules, including proteins, DNA and RNA. But the company says it goes further by identifying novel binding pockets, generalizing to structures outside its training data, and integrating these predictions into a scalable platform that aims to bridge the gap between structural biology and real-world drug discovery, potentially reshaping how pharmaceutical research tackles hard targets and expands into complex biologics.

EYE ON AI NUMBERS

77%

That's how many security professionals report at least some comfort with allowing autonomous AI systems to act without human oversight, though they are still cautious, according to a new survey of 1,200 security professionals by Ivanti, a global enterprise IT and security software company. In addition, the report found that adopting agentic AI is a priority for 87% of security teams. 

However, Ivanti's chief security officer, Daniel Spicer, says security teams should not be so comfortable with the idea of deploying autonomous AI.  Although defenders are optimistic about the promise of AI in cybersecurity,  the findings also show companies are falling further behind in terms of how well-prepared they are to defend against a variety of threats. 

"This is what I call the 'Cybersecurity Readiness Deficit,'" he wrote in a blog post, "a persistent, year-over-year widening imbalance in an organization's ability to defend their data, people and networks against the evolving tech landscape." 

AI CALENDAR

Feb. 10-11: AI Action Summit, New Delhi, India.

Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX, San Francisco. 

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.
About the Author
Sharon Goldman
By Sharon GoldmanAI Reporter
LinkedIn icon

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

See full bioRight Arrow Button Icon

Latest in Cybersecurity

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map
  • Facebook icon
  • Twitter icon
  • LinkedIn icon
  • Instagram icon
  • Pinterest icon

Latest in Cybersecurity

A laptop displaying the OpenClaw logo
CybersecurityEye on AI
OpenClaw is the bad boy of AI agents. Here’s why security experts say you should beware
By Sharon GoldmanFebruary 12, 2026
1 hour ago
gelernter
LawYale
The same Yale professor who got wounded by a Unabomber attack wrote to Epstein about a ‘v small good-looking blonde’
By Dave Collins and The Associated PressFebruary 12, 2026
5 hours ago
nest
LawCrime
Law enforcement thought Nancy Guthrie’s smart camera was disconnected, but Google Nest still had the tape
By Safiyah Riddle, Michael Liedtke and The Associated PressFebruary 11, 2026
1 day ago
EconomyAerospace and defense
France’s Thales ‘extensively’ ramps up production to meet a global boom in defense spending, says international CEO Pascale Sourisse
By Angelica AngFebruary 10, 2026
3 days ago
CybersecurityJeffrey Epstein
FBI found little evidence Epstein ran a sex trafficking ring for powerful men and concluded a ‘client list’ doesn’t exist
By Michael R. Sisak, David B. Caruso, Larry Neumeister and The Associated PressFebruary 8, 2026
4 days ago
RetailEurope
Trump’s Greenland crisis triggered a surge in apps designed to help shoppers boycott U.S. goods, though few American imports are on store shelves
By James Brooks and The Associated PressFebruary 8, 2026
4 days ago

Most Popular

placeholder alt text
Crypto
Bitcoin reportedly sent to wallet associated with Nancy Guthrie’s ransom letter providing potential clue in investigation
By Carlos GarciaFebruary 11, 2026
1 day ago
placeholder alt text
Economy
America borrowed $43.5 billion a week in the first four months of the fiscal year, with debt interest on track to be over $1 trillion for 2026
By Eleanor PringleFebruary 10, 2026
2 days ago
placeholder alt text
Economy
America’s national debt borrowing binge means interest payments will rocket to $2 trillion a year by 2036, CBO says
By Eleanor PringleFebruary 11, 2026
1 day ago
placeholder alt text
Commentary
Something big is happening in AI — and most people will be blindsided
By Matt ShumerFebruary 11, 2026
1 day ago
placeholder alt text
Economy
‘Nothing short of self-sabotage’: Watchdog warns about national debt setting new record in just 4 years
By Tristan BoveFebruary 11, 2026
1 day ago
placeholder alt text
Law
Law enforcement thought Nancy Guthrie's smart camera was disconnected, but Google Nest still had the tape
By Safiyah Riddle, Michael Liedtke and The Associated PressFebruary 11, 2026
1 day ago

© 2026 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.