• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AISecurity

AI coding tools exploded in 2025. The first security exploits show what could go wrong

Sage Lazzaro
By
Sage Lazzaro
Sage Lazzaro
Contributing writer
Down Arrow Button Icon
December 15, 2025, 10:00 AM ET
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.Illustration by Simon Landrein

AI coding tools proliferated widely across technical teams in 2025, shifting how developers work and how companies across industries develop and launch products and services. According to Stack Overflow’s 2025 survey of 49,000 developers, 84% said they’re using the tools, with 51% doing so daily. 

AI coding tools have also caught the interest of another group: malicious actors. While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses, and cyberthreat researchers have discovered critical vulnerabilities in several popular tools that make clear what could go horribly wrong. 

Any emerging technology creates a new opening for cyberattacks, and in a way, AI coding tools are just another door. At the same time, the agentic nature of many AI-assisted coding capabilities makes it crucial for developers to check every aspect of the AI’s work, making it easy for small oversights to warp into critical security issues. Security experts also say the nature of how AI coding tools function makes them susceptible to prompt injection and supply-chain attacks, the latter of which are especially damaging as they affect companies downstream that use the tool.

“Supply chain has always been a weak point in security for software developers in particular,” said Randall Degges, head of developer and security relations at cybersecurity firm Snyk. “It‘s always been a problem, but it’s even more prevalent now with AI tools.” 

The first wave of AI coding tool vulnerabilities and exploits

Perhaps the most eye-opening security incident involving AI coding tools that unfolded this year was the breach of Amazon’s popular Q coding assistant. A hacker compromised the official extension for using the tool inside the ubiquitous VS Code development environment, planting a prompt to direct Q to wipe users’ local files and disrupt their AWS cloud infrastructure, potentially even disabling it. This compromised version of the tool passed Amazon’s verification and was publicly available to users for two days. The malicious actor behind the breach reportedly did it to expose Amazon’s “security theater” rather than actually execute an attack, and in that way, they were successful—the demonstration of how a prompt injection attack on an AI coding tool could unfold sent a shock wave of concern throughout the security and developer worlds. 

“Security is our top priority. We mitigated an attempt to exploit a known issue in two open-source repositories to alter code in the Amazon Q Developer extension for VS Code. No customer resources were impacted,” an Amazon spokesperson told Fortune, pointing to the company’s July security bulletin on the incident.   

In the case of AI coding tools, a prompt injection attack refers to a threat actor slipping instructions to an AI coding tool to direct it to behave in an unintended way, such as leaking data or executing malicious code. Aside from Q, critical vulnerabilities leaving the door open to this style of attack were also discovered throughout 2025 in AI coding tools offered by Cursor, GitHub, and Google’s Gemini. Cybersecurity firm CrowdStrike also reported that it observed multiple threat actors exploiting an unauthenticated code injection vulnerability in Langflow AI, a widely used tool for building AI agents and workflows, to gain credentials and deploy malware.

The issue was not so much a security flaw within any of the tools in particular, but rather a vulnerability at the system level of how these agents function—connecting to an essentially unlimited number of data sources through MCP, an open standard for connecting AI models to external tools and data sources. 


“Agentic coding tools work within the privilege level of the developer executing them,” said John Cranney, VP of engineering at Secure Code Warrior, a coding platform designed to help developers work more securely. “The ecosystem around these tools is rapidly evolving. Agentic tool providers are adding features at a rapid pace, while at the same time, there is an explosion of MCP servers designed to add functionality to these tools. However, no model provider has yet solved the problem of prompt injection, which means that every new input that is provided to an agentic coding tool adds a new potential injection vector.” 

In a statement, a Google spokesperson echoed that the state of guardrails in today’s AI landscape depends heavily on the model’s hosting environment. 

“Gemini is designed and tested for safety, and is trained to avoid certain outputs that would create risks of harm. Google continuously improves our AI models to make them less susceptible to misuse. We employ a hybrid agent security approach using adversarial training to resist prompt injection attacks and policy enforcement to review, allow, block, or prompt for clarification on the agent’s planned actions,” the company said.

The prevalence of AI coding tools is also giving a boost to another attack route, often referred to as “typosquatting.” This refers to malicious actors impersonating a legitimate software package or extension to trick an unwitting coder—or now, an AI—into downloading a malicious one instead, usually by slightly tweaking the name and legitimizing it with fake reviews. In one case this year, Zak Cole, a core developer for the cryptocurrency Ethereum, said his crypto wallet was drained after he mistakenly downloaded a malicious extension for the popular AI coding tool Cursor. This could have happened with any malicious software and isn’t necessarily specific to the coding assistant, but AI coding tools can amplify the risk because, increasingly, they’re doing this work on their own and possibly unsupervised. Cursor and DataStax, the owner of Langflow AI, did not respond to a request for comment.

“If you’re using a tool like Cursor to help you write code, it’s also doing a lot of other things like installing third party dependencies, packages, and tools,” said Degges of Snyk. “We’ve noticed that because it’s going to go ahead and do a lot of these things in an agentic fashion, you as the user are typically much more at risk of malicious packages that AI installs.”

The AI coding guardrails every organization needs

As AI coding tools simultaneously introduce new risks and make it possible for developers to create more code faster than ever before, CrowdStrike field CTO Cristian Rodriguez believes the challenge for organizations is if they can secure applications at the same velocity that they’re building them.

He said having the right guardrails in place can help, and he advises companies to mature their SecOps programs and bolster governance around AI coding tools. This includes cracking down on “shadow AI,” making sure no tools are being used internally without being approved and managed as part of the company’s overall security infrastructure. For whatever AI coding tools are approved, the company also needs to continuously manage everything it touches.

“Understand what the services are that are being referenced from the application, the libraries that are being used, the services that surround the application, and to make sure they are configured properly,” he said. “Also, ensure the services have the right identity and access management components to ensure that not anyone can simply access the service that surrounds the app.”

In a statement, a GitHub spokesperson said the company designed its Copilot coding agent to proactively and automatically perform security and quality analysis of the code it creates to ensure vulnerabilities in code and dependencies are detected and remediated.

“We believe that building secure and scalable MCP servers requires attention to authentication, authorization, and deployment architecture, and we follow a strict threat model when developing agentic features, including MCP,” the spokesperson said. “To prevent risks like data exfiltration, impersonation, and prompt injection, we’ve created a set of rules that includes ensuring all context is visible, scanning responses for secrets, preventing irreversible state changes, and only gathering context from authorized users.”

Rodriguez’s colleague at CrowdStrike, Adam Meyers, the firm’s head of intelligence, noted that AI coding tools often run in an unmanaged or “headless” capacity, doing a bunch of things in the background. This makes developers the last line of defense.

“It spits out hundreds of lines of code in minutes,” he said. “And then it comes down to, do they do a security assessment of that code? Do they look at all the libraries the code can pull down, or do they just say, YOLO, and deploy it? And I think that that’s the true risk here.”

Read more about The Year in AI—and What's Ahead in the latest Fortune AIQ special report, reflecting on the AI trends that took over the business world and captivated consumers in 2025. Plus, tips on preparing for new developments in 2026.

About the Author
Sage Lazzaro
By Sage LazzaroContributing writer

Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

See full bioRight Arrow Button Icon

Latest in AI

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map

Latest in AI

AIchief executive officer (CEO)
Google cofounder Sergey Brin said he was ‘spiraling’ before returning to work on Gemini—and staying retired ‘would’ve been a big mistake’
By Marco Quiroz-GutierrezDecember 15, 2025
19 minutes ago
CryptoCryptocurrency
Bittensor, the AI-linked cryptocurrency founded by a former Google engineer, just halved its supply. Here’s what that means
By Ben WeissDecember 15, 2025
2 hours ago
AIregulation
Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?
By John KellDecember 15, 2025
2 hours ago
AIInvestment
The big AI New Year’s resolution for businesses in 2026: ROI
By Sage LazzaroDecember 15, 2025
2 hours ago
AIAutomation
2025 was the year of agentic AI. How did we do?
By John KellDecember 15, 2025
2 hours ago
AISecurity
AI coding tools exploded in 2025. The first security exploits show what could go wrong
By Sage LazzaroDecember 15, 2025
2 hours ago

Most Popular

placeholder alt text
Uncategorized
Transforming customer support through intelligent AI operations
By Lauren ChomiukNovember 26, 2025
19 days ago
placeholder alt text
Success
40% of Stanford undergrads receive disability accommodations—but it’s become a college-wide phenomenon as Gen Z try to succeed in the current climate
By Preston ForeDecember 12, 2025
3 days ago
placeholder alt text
Success
Apple cofounder Ronald Wayne sold his 10% stake for $800 in 1976—today it’d be worth up to $400 billion
By Preston ForeDecember 12, 2025
3 days ago
placeholder alt text
Economy
The Fed just ‘Trump-proofed’ itself with a unanimous move to preempt a potential leadership shake-up
By Jason MaDecember 12, 2025
3 days ago
placeholder alt text
Politics
Trump admits he can't tell if the GOP will control the House after next year's elections. 'I don't know when all of this money is going to kick in'
By Jason MaDecember 14, 2025
20 hours ago
placeholder alt text
Economy
Kevin Hassett says he'd be happy to talk to Trump every day as Fed chair, but the president's opinion would have 'no weight' on the FOMC
By Jason MaDecember 14, 2025
23 hours ago

© 2025 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.