Google’s Gemini is helping hackers work faster but hasn’t unlocked new attacks—yet

By Sage Lazzaro, Contributing writer

    Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

    Google's own security researchers say hackers are using the company's Gemini AI model to help streamline aspects of cyberattacks.
    Andrey Rudakov/Bloomberg—Getty Images

    Hello and welcome to Eye on AI. In today’s edition…A Google report reveals how malicious actors are using Gemini to hack faster and more efficiently; Microsoft quickly moves to offer DeepSeek’s R1 to Azure customers; OpenAI launches ChatGPT for government; and much more. 

    If a technology can be at all useful for cyberattacks, hackers will, without a doubt, add it to their toolbox. So it’s no surprise that hacking groups have jumped on AI, just as everyone else has. But now we have some details about exactly how government-backed malicious actors are leveraging the technology, including AI tools built by U.S. companies. 

    Google’s Threat Intelligence Group yesterday published a report detailing how hacking groups associated with China, North Korea, Iran, Russia, and over a dozen other countries have been using the company’s Gemini chatbot to assist with their operations. The researchers found the AI chatbot is being used for both hacking activity (like espionage and computer network attacks) and coordinated efforts to influence online audiences.

    Overall, Gemini proved helpful in supporting several phases of an attack, including research, the creation of malicious content, and the planning of evasion strategies. But as of now, hackers have not been able to use Gemini to generate novel attack methods, according to the report. Hacking groups from Iran and China used Gemini the most, relying on the chatbot for a wide variety of tasks, from researching military targets to malicious scripting; over 20 Chinese groups and 10 Iranian groups were observed using the chatbot.

    The findings come as geopolitical concerns around AI reach new heights, sparked by the release of R1 from DeepSeek, which appears to have overcome the roadblock of U.S. chip export controls, building a model with capabilities similar to those of leading U.S. AI systems despite training without top-of-the-line AI hardware and at only a fraction of the cost. 

    Hackers tap AI for research, coding, content generation, and more 

    According to the report, the vast majority of the activity the researchers observed involved malicious actors using AI to accelerate their existing campaigns. This includes actions like using Gemini to troubleshoot malware code, generate phishing emails, and create and localize content. 

    They also used Gemini for research, including investigating potential infrastructure, vulnerabilities, target organizations, evasion techniques, and more. For example, the report describes how China-backed groups used Gemini to research U.S. military and U.S.-based IT organizations, U.S. government network ranges, and publicly available information about U.S. intelligence personnel. North Korean groups were observed researching nuclear power plants in South Korea, cyber forces of foreign militaries, historic cyber events, and malware development. 

    In fewer cases, the Google researchers observed malicious actors instructing Gemini to take malicious actions and attempting to circumvent its guardrails. In one example described in the report, a group entered a series of publicly available jailbreak prompts in an attempt to get Gemini to output Python code for a distributed denial-of-service (DDoS) tool. Others sought to use Gemini to abuse Google products, including by researching techniques for Gmail phishing and for bypassing Google’s account verification methods. Google says its safety responses restricted such content and that the attempts to use Gemini to abuse Google products were unsuccessful. 

    No new threats—for now 

    The other point made clear in the report is that while government-backed hacking groups are finding plenty of ways to hack more efficiently with AI, they were not observed using AI to discover new code vulnerabilities or develop unprecedented ways of orchestrating attacks.

    “Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report reads, noting that the technology also allows less-skilled actors to develop their tools and skills more quickly. 

    This, of course, could change as AI develops further and becomes more integrated into the world, and as hacking groups gain more experience experimenting with it. The cloud completely upended the cyberthreat landscape, greatly expanding the ways malicious actors could hack and exploit systems. AI will probably be even more transformative, changing how companies and governments operate, how data is exchanged, how information is learned, and how we interact with the internet, software, and our devices. 

    This means there is a high risk of AI eventually being used for hacking and espionage operations in new ways. Even if we’re not seeing evidence of that right now, we shouldn’t get complacent about the threat.

    And with that, here’s more AI news. 

    Sage Lazzaro
    sage.lazzaro@consultant.fortune.com
    sagelazzaro.com

    AI IN THE NEWS

    Alibaba releases Qwen 2.5-Max, suggesting Chinese AI makers are also feeling pressure from DeepSeek. The China-based company claims the AI model outperforms OpenAI’s GPT-4o, DeepSeek-V3, and Meta’s Llama-3.1-405B on several performance benchmarks. The release comes on the first day of the Chinese Lunar New Year (usually a day off from work) and just days after the highly lauded R1 launch from DeepSeek. Along with TikTok’s launch of a new algorithm, the rush of new AI releases from Chinese companies suggests U.S. incumbents aren’t the only ones feeling the heat in the race to develop more powerful AI. You can read more from Reuters.

    OpenAI says it has evidence DeepSeek used its models to train R1. The company says it has seen instances of “distillation” and suspects the Chinese firm DeepSeek was behind them. Distillation refers to a technique wherein developers use outputs from a larger, more capable model to improve the performance of smaller models on specific tasks. While it’s a common practice in the industry, using it to build rival models violates OpenAI’s terms of service. Last year, OpenAI and Microsoft investigated and blocked API access for accounts they believed belonged to DeepSeek, on suspicion of distillation. You can read more in the Financial Times.
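
    For readers curious about the mechanics, below is a minimal sketch of classic knowledge distillation in PyTorch, using toy models and random stand-in data. Everything here (the model sizes, the temperature value, the training data) is illustrative rather than anything DeepSeek or OpenAI actually uses, and distillation through a chat API would work on sampled text outputs rather than raw logits. The core idea is the same either way: a small "student" model is trained to mimic the output distribution of a larger, frozen "teacher."

        # Illustrative sketch of knowledge distillation. Toy models and
        # random data, NOT any lab's actual setup.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        torch.manual_seed(0)
        NUM_CLASSES, DIM, T = 10, 32, 2.0  # T is the softmax temperature

        # A larger, frozen "teacher" and a smaller, trainable "student."
        teacher = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES))
        student = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
        teacher.eval()  # only the teacher's outputs are used, never its gradients

        optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

        for step in range(200):
            x = torch.randn(64, DIM)  # stand-in for real training inputs
            with torch.no_grad():
                teacher_logits = teacher(x)
            student_logits = student(x)
            # KL divergence between temperature-softened distributions: the
            # student learns the teacher's relative confidences across classes,
            # which carries more signal than hard labels alone.
            loss = F.kl_div(
                F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * (T * T)  # standard rescaling so gradients don't shrink with T
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    The point of contention isn’t the technique itself, which is standard and well documented, but applying it to a rival’s API outputs in violation of that rival’s terms of service.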

    Microsoft quickly moves to offer DeepSeek’s R1 to Azure customers. Just days after the release of R1 rocked the AI world, Microsoft has already added it to its model catalog on Azure AI Foundry and GitHub. This means Azure customers can now start integrating the model into their applications. Microsoft is also set to make a smaller version of R1 available to run locally on Copilot Plus PCs soon. You can read more from The Verge.

    OpenAI launches ChatGPT Gov for U.S. government use. The company is framing the platform as even more secure than its enterprise product and says users can input “non-public, sensitive information” while operating within their own secure hosting environments. OpenAI says more than 90,000 local, state, and federal government employees have used the product since the start of 2024. You can read more from CNBC.

    An investigation into MrDeepFakes, where millions view nonconsensual explicit deepfakes created with AI. The website hosts tens of thousands of highly explicit, nonconsensual videos and photos altered to depict real people. It has close to 650,000 members, and content on the site has been viewed over two billion times. The site’s administrators have gone to great lengths to keep their identities obscured, but the website offers some clues, including the companies advertising prominently on it. One app called Deepswap, which is permanently linked at the top of the site, advertises the ability to “Deepfake anyone you want” and “Make AI porn in a sec.” You can read more from Bellingcat.

    FORTUNE ON AI

    China’s DeepSeek AI is full of misinformation and can be tricked into generating bomb instructions, researchers warn —by David Meyer

    OpenAI ex-board member Helen Toner says revoking ban on Nvidia AI chip exports would be a ‘huge victory’ for China —by Sharon Goldman

    Microsoft’s AI business has topped an annual revenue run rate of $13 billion —by Andrew Nusca

    Amid the AI arms race, these IPOs could be poised for long-term investor success —by Leo Schwartz and Greg McKenna

    Why DeepSeek is great news for Microsoft and software stocks —by Greg McKenna

    AI CALENDAR

    Feb. 10-11: AI Action Summit, Paris, France

    March 3-6: MWC, Barcelona

    March 7-15: SXSW, Austin

    March 10-13: Human [X] conference, Las Vegas

    March 17-20: Nvidia GTC, San Jose

    April 9-11: Google Cloud Next, Las Vegas

    May 6-7: Fortune Brainstorm AI, London. Apply to attend here.

    EYE ON AI NUMBERS

    3.1 million

    That’s at least how many downloads the Chinese AI app DeepSeek has garnered in the U.S. iOS and Google Play app stores, according to data from AppFigures. The actual number is likely higher; TechCrunch notes the Google Play Store displays a label indicating over 5 million downloads for the app. And, of course, the count is likely still climbing. DeepSeek is currently the most downloaded app in both stores. 

    This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.