What is the story about?
For years, cybersecurity experts have been saying one particular thing at conferences and in reports: it's only a matter of time before hackers start using artificial intelligence to build their own weapons. Most people listened, nodded, and moved on.
That time has now arrived.
Google's Threat Intelligence Group published a report on Monday confirming what the security world had long feared. A known cybercrime group used AI to develop a zero-day exploit: a hacking tool built around a software flaw that the developer doesn't even know exists yet. It is the first time Google researchers have caught a criminal group doing this, and the people involved say it won't be the last.
The exploit targeted a popular open-source web-based system administration tool and was designed to let attackers bypass two-factor authentication, the extra security step that asks you to confirm your identity beyond just a password. The catch was that the attackers still needed valid usernames and passwords to get in. But once they had those, the tool would let them sail straight past the second layer of protection that millions of people rely on every day.
The flaw itself came from a logic error: a case where a developer had hardcoded a trust assumption that contradicted how authentication was supposed to work. It was the kind of subtle, buried mistake that could sit unnoticed for years. An AI, apparently, found it.
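To make the idea concrete, here is a minimal, entirely hypothetical sketch of that class of bug. Google did not name the affected software, so every function, name, and flag below is invented for illustration; it shows only the general shape of a hardcoded trust assumption that silently skips the second factor.

```python
# Hypothetical sketch of the logic-error class described in the report.
# All names and data are invented; the affected software was not identified.

USERS = {"alice": {"password": "s3cret", "otp": "123456"}}

def check_password(user, password):
    return USERS.get(user, {}).get("password") == password

def check_otp(user, otp):
    return USERS.get(user, {}).get("otp") == otp

def verify_login(user, password, otp=None, client_type="external"):
    if not check_password(user, password):
        return False
    # The logic error: requests labelled "internal" are trusted implicitly,
    # so the second factor is never checked for them.
    if client_type == "internal":   # hardcoded trust assumption
        return True                 # 2FA silently bypassed
    return otp is not None and check_otp(user, otp)

# With valid credentials, the flag alone skips two-factor authentication:
print(verify_login("alice", "s3cret"))                          # False: no OTP
print(verify_login("alice", "s3cret", client_type="internal"))  # True: 2FA bypassed
```

Note that this matches the report's caveat: the bug is worthless without valid credentials, but with them it neutralises the second factor entirely.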
How did Google know AI was involved?
The code gave it away. The exploit script contained educational-style documentation strings, heavily annotated code, and even a hallucinated CVSS score: a severity rating that referenced a vulnerability number that doesn't actually exist. These are the fingerprints of a large language model; no human hacker writes code quite like that.
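For readers unfamiliar with those tells, here is a harmless, made-up snippet reproducing the stylistic markers Google cites. The function does nothing malicious, and the CVE number and CVSS score in its docstring are deliberately fabricated, mirroring the "hallucinated" metadata found in the real exploit.

```python
# Illustrative only: tutorial-style docstrings, over-annotation, and
# invented vulnerability metadata, the markers attributed to LLM output.

def probe_target(host: str) -> str:
    """
    Step 1: Establish whether the target host responds.

    Educational note: in a real engagement this step would be followed
    by fingerprinting and payload delivery.
    Reference: CVE-2099-0001, CVSS 9.8 (Critical)  <- this CVE does not exist
    """
    # Each line narrates itself, textbook-style: build the status string.
    return f"probing {host}"  # placeholder: performs no network activity

print(probe_target("example.test"))  # prints "probing example.test"
```

A human writing an exploit rarely documents it like a classroom exercise, which is why this style stood out to the researchers.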
Google said it has high confidence AI was used to help discover and weaponise the exploit, though the company declined to name the cybercrime group, the affected software, or which AI model was involved. A spokesperson confirmed it wasn't Google's own Gemini or Anthropic's Claude.
Google worked with the affected software vendor quickly enough to patch the flaw before the attackers could launch their mass exploitation campaign. This time, the system worked. But John Hultquist, chief analyst at Google's Threat Intelligence Group, was careful not to let anyone feel too comfortable about that.
"It's a taste of what's to come," he said. "We believe this is the tip of the iceberg."

The report also flagged the North Korean hacking group APT45 using AI to churn through thousands of exploit checks, Chinese state-linked operators experimenting with AI for vulnerability hunting, and Russian influence operations using AI-generated audio stitched into real news footage.
The lesson is uncomfortable but clear. AI is no longer just helping hackers write better phishing emails. It is now helping them find and build weapons that didn't exist before. The defence industry has to catch up, and fast.