• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AISecurity

AI coding tools exploded in 2025. The first security exploits show what could go wrong

Sage Lazzaro
By
Sage Lazzaro
Sage Lazzaro
Contributing writer
Down Arrow Button Icon
December 15, 2025, 10:00 AM ET
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.Illustration by Simon Landrein

AI coding tools proliferated widely across technical teams in 2025, shifting how developers work and how companies across industries develop and launch products and services. According to Stack Overflow’s 2025 survey of 49,000 developers, 84% said they’re using the tools, with 51% doing so daily. 

AI coding tools have also caught the interest of another group: malicious actors. While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses, and cyberthreat researchers have discovered critical vulnerabilities in several popular tools that make clear what could go horribly wrong. 

Any emerging technology creates a new opening for cyberattacks, and in a way, AI coding tools are just another door. At the same time, the agentic nature of many AI-assisted coding capabilities makes it crucial for developers to check every aspect of the AI’s work, making it easy for small oversights to warp into critical security issues. Security experts also say the nature of how AI coding tools function makes them susceptible to prompt injection and supply-chain attacks, the latter of which are especially damaging as they affect companies downstream that use the tool.

“Supply chain has always been a weak point in security for software developers in particular,” said Randall Degges, head of developer and security relations at cybersecurity firm Snyk. “It‘s always been a problem, but it’s even more prevalent now with AI tools.” 

The first wave of AI coding tool vulnerabilities and exploits

Perhaps the most eye-opening security incident involving AI coding tools that unfolded this year was the breach of Amazon’s popular Q coding assistant. A hacker compromised the official extension for using the tool inside the ubiquitous VS Code development environment, planting a prompt to direct Q to wipe users’ local files and disrupt their AWS cloud infrastructure, potentially even disabling it. This compromised version of the tool passed Amazon’s verification and was publicly available to users for two days. The malicious actor behind the breach reportedly did it to expose Amazon’s “security theater” rather than actually execute an attack, and in that way, they were successful—the demonstration of how a prompt injection attack on an AI coding tool could unfold sent a shock wave of concern throughout the security and developer worlds. 

“Security is our top priority. We mitigated an attempt to exploit a known issue in two open-source repositories to alter code in the Amazon Q Developer extension for VS Code. No customer resources were impacted,” an Amazon spokesperson told Fortune, pointing to the company’s July security bulletin on the incident.   

In the case of AI coding tools, a prompt injection attack refers to a threat actor slipping instructions to an AI coding tool to direct it to behave in an unintended way, such as leaking data or executing malicious code. Aside from Q, critical vulnerabilities leaving the door open to this style of attack were also discovered throughout 2025 in AI coding tools offered by Cursor, GitHub, and Google’s Gemini. Cybersecurity firm CrowdStrike also reported that it observed multiple threat actors exploiting an unauthenticated code injection vulnerability in Langflow AI, a widely used tool for building AI agents and workflows, to gain credentials and deploy malware.

The issue was not so much a security flaw within any of the tools in particular, but rather a vulnerability at the system level of how these agents function—connecting to an essentially unlimited number of data sources through MCP, an open standard for connecting AI models to external tools and data sources. 


“Agentic coding tools work within the privilege level of the developer executing them,” said John Cranney, VP of engineering at Secure Code Warrior, a coding platform designed to help developers work more securely. “The ecosystem around these tools is rapidly evolving. Agentic tool providers are adding features at a rapid pace, while at the same time, there is an explosion of MCP servers designed to add functionality to these tools. However, no model provider has yet solved the problem of prompt injection, which means that every new input that is provided to an agentic coding tool adds a new potential injection vector.” 

In a statement, a Google spokesperson echoed that the state of guardrails in today’s AI landscape depends heavily on the model’s hosting environment. 

“Gemini is designed and tested for safety, and is trained to avoid certain outputs that would create risks of harm. Google continuously improves our AI models to make them less susceptible to misuse. We employ a hybrid agent security approach using adversarial training to resist prompt injection attacks and policy enforcement to review, allow, block, or prompt for clarification on the agent’s planned actions,” the company said.

The prevalence of AI coding tools is also giving a boost to another attack route, often referred to as “typosquatting.” This refers to malicious actors impersonating a legitimate software package or extension to trick an unwitting coder—or now, an AI—into downloading a malicious one instead, usually by slightly tweaking the name and legitimizing it with fake reviews. In one case this year, Zak Cole, a core developer for the cryptocurrency Ethereum, said his crypto wallet was drained after he mistakenly downloaded a malicious extension for the popular AI coding tool Cursor. This could have happened with any malicious software and isn’t necessarily specific to the coding assistant, but AI coding tools can amplify the risk because, increasingly, they’re doing this work on their own and possibly unsupervised. Cursor and DataStax, the owner of Langflow AI, did not respond to a request for comment.

“If you’re using a tool like Cursor to help you write code, it’s also doing a lot of other things like installing third party dependencies, packages, and tools,” said Degges of Snyk. “We’ve noticed that because it’s going to go ahead and do a lot of these things in an agentic fashion, you as the user are typically much more at risk of malicious packages that AI installs.”

The AI coding guardrails every organization needs

As AI coding tools simultaneously introduce new risks and make it possible for developers to create more code faster than ever before, CrowdStrike field CTO Cristian Rodriguez believes the challenge for organizations is if they can secure applications at the same velocity that they’re building them.

He said having the right guardrails in place can help, and he advises companies to mature their SecOps programs and bolster governance around AI coding tools. This includes cracking down on “shadow AI,” making sure no tools are being used internally without being approved and managed as part of the company’s overall security infrastructure. For whatever AI coding tools are approved, the company also needs to continuously manage everything it touches.

“Understand what the services are that are being referenced from the application, the libraries that are being used, the services that surround the application, and to make sure they are configured properly,” he said. “Also, ensure the services have the right identity and access management components to ensure that not anyone can simply access the service that surrounds the app.”

In a statement, a GitHub spokesperson said the company designed its Copilot coding agent to proactively and automatically perform security and quality analysis of the code it creates to ensure vulnerabilities in code and dependencies are detected and remediated.

“We believe that building secure and scalable MCP servers requires attention to authentication, authorization, and deployment architecture, and we follow a strict threat model when developing agentic features, including MCP,” the spokesperson said. “To prevent risks like data exfiltration, impersonation, and prompt injection, we’ve created a set of rules that includes ensuring all context is visible, scanning responses for secrets, preventing irreversible state changes, and only gathering context from authorized users.”

Rodriguez’s colleague at CrowdStrike, Adam Meyers, the firm’s head of intelligence, noted that AI coding tools often run in an unmanaged or “headless” capacity, doing a bunch of things in the background. This makes developers the last line of defense.

“It spits out hundreds of lines of code in minutes,” he said. “And then it comes down to, do they do a security assessment of that code? Do they look at all the libraries the code can pull down, or do they just say, YOLO, and deploy it? And I think that that’s the true risk here.”

Read more about The Year in AI—and What's Ahead in the latest Fortune AIQ special report, reflecting on the AI trends that took over the business world and captivated consumers in 2025. Plus, tips on preparing for new developments in 2026.

About the Author
Sage Lazzaro
By Sage LazzaroContributing writer

Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

See full bioRight Arrow Button Icon

Latest in AI

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map

Latest in AI

AItech stocks
Is the AI boom a bubble waiting to pop? Here’s what history says
By Henry Ren, Carmen Reinicke and BloombergJanuary 4, 2026
10 hours ago
AsiaTariffs and trade
Countries must move beyond seeing AI as a race, where one side must beat the other
By Boris Babic and Brian WongJanuary 3, 2026
1 day ago
data center
AIData centers
Angry town halls nationwide find a new villain: the data center driving up your electricity bill while fueling job-killing AI
By Marc Levy and The Associated PressJanuary 3, 2026
2 days ago
Sweden
CommentarySweden
Meet Sweden, the unicorn factory chasing America in the AI race
By Oscar TäckströmJanuary 3, 2026
2 days ago
Eric Schmidt sat in a white chair, speaking on a stage.
AIGoogle
How former Google CEO Eric Schmidt is motivated by Henry Kissinger to keep working past 70
By Jordan BlumJanuary 2, 2026
3 days ago
Eric Schmidt, former Google CEO, speaks during the Collision 2022 conference at Enercare Centre in Toronto, Canada.
AIElectricity
Google ex-CEO Eric Schmidt jumps into the AI data center business with a failed, 150-year-old Texas railroad turned oil giant
By Jordan BlumJanuary 2, 2026
3 days ago

Most Popular

placeholder alt text
C-Suite
CEO of $90 billion Waste Management hauled trash and went to 1 a.m. safety briefings—‘It’s not always just dollars and cents’
By Amanda GerutJanuary 3, 2026
2 days ago
placeholder alt text
Economy
Mitt Romney says the U.S. is on a cliff—and taxing the rich is now necessary 'given the magnitude of our national debt'
By Dave SmithDecember 22, 2025
14 days ago
placeholder alt text
Future of Work
Bosses are fighting a new battle in the RTO wars: It's not about where you work, but when you work
By Nick LichtenbergJanuary 4, 2026
20 hours ago
placeholder alt text
Future of Work
Bank of America CEO says he hired 2,000 recent Gen Z grads from 200,000 applications, and many are scared about the future
By Ashley LutzJanuary 3, 2026
2 days ago
placeholder alt text
Future of Work
Meet the 'empowered non-complier': A certain kind of valuable worker who flouts return to office whenever they feel like it
By Nick LichtenbergJanuary 3, 2026
2 days ago
placeholder alt text
Politics
People in Venezuela didn't celebrate Maduro's capture out of fear of government repression, construction worker says
By Regina Garcia Cano, Megan Janetsky, Juan Arraez and The Associated PressJanuary 4, 2026
10 hours ago

© 2025 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.