AI Security Tools Vulnerable to Prompt Injection Attacks, Researchers Warn
A recent study by security engineer Aonan Guan, with assistance from researchers at Johns Hopkins University, has revealed vulnerabilities in several AI code security and automation tools. The attack method, termed 'Comment and Control', exploits prompt injection vulnerabilities in AI agents such as Anthropic’s Claude Code Security Review, Google’s Gemini CLI Action, and GitHub Copilot Agent. These vulnerabilities allow attackers to use specially crafted GitHub comments to hijack AI agents, execute arbitrary commands, and extract sensitive credentials. The attack is particularly concerning as it can be triggered automatically by GitHub Actions workflows, posing a significant security threat.