Cybercriminals are using large language models including Google’s Gemini to generate and modify malicious code in real time, allowing attacks to continuously evolve and evade detection, according to a Google Threat Intelligence Group report published this month.
At least five malware families now query AI models during runtime to dynamically adjust their behavior, marking a significant shift in how cyberattacks are conducted.
“Large language models are being co-opted to serve malicious functions that were once too resource-intensive or complex for small-scale actors,” said Shane Huntley, director of the Google Threat Intelligence Group.
The evolution of adaptive malware
Among the five malware families identified, two, PROMPTFLUX and PROMPTSTEAL, demonstrate how AI-powered malware can evolve continuously.
PROMPTFLUX employs what Google calls a “Thinking Robot” mechanism, querying Gemini’s API hourly to rewrite its own VBScript code. Each iteration produces subtly different code and behavior, complicating traditional antivirus detection methods.
PROMPTSTEAL, linked to Russia’s state-aligned APT28 group, goes further. It uses Qwen2.5-Coder, an LLM hosted on Hugging Face, to generate Windows commands on demand. This “just-in-time” capability allows attackers to execute customized attacks without hard-coding the commands in advance.
“Unlike previous generations of malware that relied on static logic, these threats use real-time AI queries to morph their signatures,” said Maya Horowitz, VP of research at Check Point Software Technologies.
The adaptive nature of this AI-powered malware makes it especially difficult to contain, because it no longer needs to fetch updates from a central server. Instead, it evolves continuously by querying public AI APIs, exploiting the models’ open access and computational scale.
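To see why this defeats signature matching, consider a minimal illustrative sketch (not drawn from the report): two functionally identical scripts whose bytes differ only by a renamed variable and an added comment produce entirely different hashes, so a blocklist built from one version never matches the next rewrite.

```python
import hashlib

# Two functionally identical snippets: the second is the kind of trivial
# rewrite an LLM-assisted mutation step might produce (renamed variable,
# inserted comment). Both print the same string when executed.
variant_a = b"x = 'hello'\nprint(x)\n"
variant_b = b"# harmless-looking rewrite\ngreeting = 'hello'\nprint(greeting)\n"

# A signature built from the first variant's hash never matches the second,
# even though the runtime behavior is unchanged.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```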
Crypto assets in the crosshairs
Google’s report also details how AI-powered malware is being weaponized against cryptocurrency holders and platforms. North Korea’s UNC1069 group, also known as Masan, has been using AI-driven scripts to probe wallets, develop phishing pages, and tailor spear-phishing messages that mimic legitimate crypto exchange alerts.
These AI-driven tools automate previously manual tasks, enabling attackers to compromise assets faster. The report estimates that North Korean groups have already stolen tens of millions of dollars’ worth of digital currency in 2025 alone using such methods.
Google says it has already disabled several developer accounts tied to the misuse of its Gemini API and implemented stricter safeguards, including real-time prompt filtering and anomaly-based API monitoring.
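Google has not published the details of its detection logic, but anomaly-based API monitoring of this kind typically looks for clients whose requests arrive on a machine-like cadence with near-identical, suspicious prompts. The sketch below is purely illustrative, with hypothetical field names and thresholds, and is not a description of Google’s actual safeguards.

```python
from datetime import datetime, timedelta

# Hypothetical request log for a single API key: (timestamp, prompt text).
# Thresholds and keywords are illustrative assumptions, not real policy.
requests = [
    (datetime(2025, 11, 1, 0, 2), "Rewrite this VBScript so antivirus won't flag it"),
    (datetime(2025, 11, 1, 1, 1), "Rewrite this VBScript so antivirus won't flag it"),
    (datetime(2025, 11, 1, 2, 3), "Rewrite this VBScript so antivirus won't flag it"),
]

SUSPICIOUS_TERMS = ("rewrite", "obfuscate", "evade", "antivirus")

def looks_automated(log, max_jitter=timedelta(minutes=10)):
    """Flag keys that send near-identical code-mutation prompts on a steady cadence."""
    if len(log) < 3:
        return False
    gaps = [later[0] - earlier[0] for earlier, later in zip(log, log[1:])]
    steady_cadence = max(gaps) - min(gaps) <= max_jitter
    suspicious_prompts = all(
        any(term in prompt.lower() for term in SUSPICIOUS_TERMS) for _, prompt in log
    )
    return steady_cadence and suspicious_prompts

print(looks_automated(requests))  # True -> route this key for human review
```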
“AI brings tremendous innovation but also amplifies the scale and speed of cyber threats,” said Phil Venables, CISO of Google Cloud. “We’re taking decisive action to ensure our platforms are resilient against this new wave of attacks.”
Global implications and next steps
Cybersecurity analysts warn that AI-powered malware represents a fundamental shift in digital threat models. By automating adaptation and concealment, these tools could overwhelm existing defense systems unless governments, developers, and enterprises coordinate their responses.
Industry experts are urging cloud providers to establish stricter access controls for high-risk AI endpoints and to build audit trails for AI-generated code. The report emphasizes that regulation alone will not suffice; continuous technical oversight is essential.
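One way such an audit trail could work in practice (a hypothetical sketch with made-up field names, not any vendor’s implementation) is an append-only log that fingerprints every prompt and every piece of generated code, so output later found running in production can be traced back to the request that produced it.

```python
import hashlib
import json
import time

def audit_record(model, prompt, generated_code):
    """Build one append-only audit entry tying generated code back to its prompt."""
    return {
        "ts": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
    }

# Append each entry as a single JSON line; the output hash can later be matched
# against code discovered during an incident investigation.
entry = audit_record("example-llm", "write a backup script", "echo backup")
with open("ai_codegen_audit.log", "a") as log:
    log.write(json.dumps(entry) + "\n")
```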
Google has invited collaboration with cybersecurity agencies, including CISA and Europol, to monitor how generative AI technologies are being misused across jurisdictions. The company also advocates transparency in how LLMs are trained and deployed to prevent inadvertent exploitation.
As AI integration deepens in both enterprise and consumer technologies, the line between innovation and exploitation grows thinner. The rise of AI-powered malware serves as a warning that the same intelligence driving productivity gains can also be repurposed for deception and theft.
In the words of Huntley: “We’re witnessing the beginning of a new cybersecurity era, one where defenders must think as dynamically as the attackers they face.”