What's Happening?
A new prompt injection attack method, 'Comment and Control,' has been identified, affecting several AI code security and automation tools, including Claude Code, Gemini CLI, and GitHub Copilot. Discovered by security engineer Aonan Guan and researchers
from Johns Hopkins University, the attack exploits GitHub comments to hijack AI agents, potentially leading to unauthorized command execution and credential exposure. The vulnerabilities have been reported to the affected companies, which have acknowledged the issues and implemented some mitigations.
Why It's Important?
The discovery of the 'Comment and Control' attack highlights significant security vulnerabilities in AI code security tools, which could be exploited to compromise sensitive data and systems. As AI tools become increasingly integrated into software development and security processes, ensuring their robustness against such attacks is critical. The findings underscore the need for continuous security assessments and improvements to protect against evolving threats in the AI landscape.












