AI coding tools proliferated widely across technical teams in 2025, shifting how developers work and how companies across industries develop and launch products and services. According to Stack Overflow’s 2025 survey of 49,000 developers, 84% said they’re using the tools, with 51% doing so daily.
AI coding tools have also caught the interest of another group: malicious actors. While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses, and cyberthreat researchers have discovered critical vulnerabilities in several popular tools that make clear what could go horribly wrong.
Any emerging technology creates a new opening for cyberattacks, and in a way, AI coding tools are just another door. At the same time, the agentic nature of many AI-assisted coding capabilities makes it crucial for developers to check every aspect of the AI’s work, making it easy for small oversights to warp into critical security issues. Security experts also say the nature of how AI coding tools function makes them susceptible to prompt injection and supply-chain attacks, the latter of which are especially damaging as they affect companies downstream that use the tool.
“Supply chain has always been a weak point in security for software developers in particular,” said Randall Degges, head of developer and security relations at cybersecurity firm Snyk. “It‘s always been a problem, but it’s even more prevalent now with AI tools.”
The first wave of AI coding tool vulnerabilities and exploits
Perhaps the most eye-opening security incident involving AI coding tools that unfolded this year was the breach of Amazon’s popular Q coding assistant. A hacker compromised the official extension for using the tool inside the ubiquitous VS Code development environment, planting a prompt to direct Q to wipe users’ local files and disrupt their AWS cloud infrastructure, potentially even disabling it. This compromised version of the tool passed Amazon’s verification and was publicly available to users for two days. The malicious actor behind the breach reportedly did it to expose Amazon’s “security theater” rather than actually execute an attack, and in that way, they were successful—the demonstration of how a prompt injection attack on an AI coding tool could unfold sent a shock wave of concern throughout the security and developer worlds.
“Security is our top priority. We mitigated an attempt to exploit a known issue in two open-source repositories to alter code in the Amazon Q Developer extension for VS Code. No customer resources were impacted,” an Amazon spokesperson told Fortune, pointing to the company’s July security bulletin on the incident.
In the case of AI coding tools, a prompt injection attack refers to a threat actor slipping instructions to an AI coding tool to direct it to behave in an unintended way, such as leaking data or executing malicious code. Aside from Q, critical vulnerabilities leaving the door open to this style of attack were also discovered throughout 2025 in AI coding tools offered by Cursor, GitHub, and Google’s Gemini. Cybersecurity firm CrowdStrike also reported that it observed multiple threat actors exploiting an unauthenticated code injection vulnerability in Langflow AI, a widely used tool for building AI agents and workflows, to gain credentials and deploy malware.
The issue was not so much a security flaw within any of the tools in particular, but rather a vulnerability at the system level of how these agents function—connecting to an essentially unlimited number of data sources through MCP, an open standard for connecting AI models to external tools and data sources.
“Agentic coding tools work within the privilege level of the developer executing them,” said John Cranney, VP of engineering at Secure Code Warrior, a coding platform designed to help developers work more securely. “The ecosystem around these tools is rapidly evolving. Agentic tool providers are adding features at a rapid pace, while at the same time, there is an explosion of MCP servers designed to add functionality to these tools. However, no model provider has yet solved the problem of prompt injection, which means that every new input that is provided to an agentic coding tool adds a new potential injection vector.”
In a statement, a Google spokesperson echoed that the state of guardrails in today’s AI landscape depends heavily on the model’s hosting environment.
“Gemini is designed and tested for safety, and is trained to avoid certain outputs that would create risks of harm. Google continuously improves our AI models to make them less susceptible to misuse. We employ a hybrid agent security approach using adversarial training to resist prompt injection attacks and policy enforcement to review, allow, block, or prompt for clarification on the agent’s planned actions,” the company said.
The prevalence of AI coding tools is also giving a boost to another attack route, often referred to as “typosquatting.” This refers to malicious actors impersonating a legitimate software package or extension to trick an unwitting coder—or now, an AI—into downloading a malicious one instead, usually by slightly tweaking the name and legitimizing it with fake reviews. In one case this year, Zak Cole, a core developer for the cryptocurrency Ethereum, said his crypto wallet was drained after he mistakenly downloaded a malicious extension for the popular AI coding tool Cursor. This could have happened with any malicious software and isn’t necessarily specific to the coding assistant, but AI coding tools can amplify the risk because, increasingly, they’re doing this work on their own and possibly unsupervised. Cursor and DataStax, the owner of Langflow AI, did not respond to a request for comment.
“If you’re using a tool like Cursor to help you write code, it’s also doing a lot of other things like installing third party dependencies, packages, and tools,” said Degges of Snyk. “We’ve noticed that because it’s going to go ahead and do a lot of these things in an agentic fashion, you as the user are typically much more at risk of malicious packages that AI installs.”
The AI coding guardrails every organization needs
As AI coding tools simultaneously introduce new risks and make it possible for developers to create more code faster than ever before, CrowdStrike field CTO Cristian Rodriguez believes the challenge for organizations is if they can secure applications at the same velocity that they’re building them.
He said having the right guardrails in place can help, and he advises companies to mature their SecOps programs and bolster governance around AI coding tools. This includes cracking down on “shadow AI,” making sure no tools are being used internally without being approved and managed as part of the company’s overall security infrastructure. For whatever AI coding tools are approved, the company also needs to continuously manage everything it touches.
“Understand what the services are that are being referenced from the application, the libraries that are being used, the services that surround the application, and to make sure they are configured properly,” he said. “Also, ensure the services have the right identity and access management components to ensure that not anyone can simply access the service that surrounds the app.”
In a statement, a GitHub spokesperson said the company designed its Copilot coding agent to proactively and automatically perform security and quality analysis of the code it creates to ensure vulnerabilities in code and dependencies are detected and remediated.
“We believe that building secure and scalable MCP servers requires attention to authentication, authorization, and deployment architecture, and we follow a strict threat model when developing agentic features, including MCP,” the spokesperson said. “To prevent risks like data exfiltration, impersonation, and prompt injection, we’ve created a set of rules that includes ensuring all context is visible, scanning responses for secrets, preventing irreversible state changes, and only gathering context from authorized users.”
Rodriguez’s colleague at CrowdStrike, Adam Meyers, the firm’s head of intelligence, noted that AI coding tools often run in an unmanaged or “headless” capacity, doing a bunch of things in the background. This makes developers the last line of defense.
“It spits out hundreds of lines of code in minutes,” he said. “And then it comes down to, do they do a security assessment of that code? Do they look at all the libraries the code can pull down, or do they just say, YOLO, and deploy it? And I think that that’s the true risk here.”
Read more about The Year in AI—and What's Ahead in the latest Fortune AIQ special report, reflecting on the AI trends that took over the business world and captivated consumers in 2025. Plus, tips on preparing for new developments in 2026.











