What's Happening?
GreyNoise, a threat intelligence firm, has reported that threat actors are targeting misconfigured proxy servers to access large language model (LLM) APIs. Between October 2025 and January 2026, GreyNoise's honeypots recorded over 91,000 attack sessions.
The attacks fall into two campaigns: one exploiting server-side request forgery (SSRF) vulnerabilities, and another probing LLM endpoints for misconfigurations. The latter campaign accounted for over 80,000 attack sessions targeting models from OpenAI, Anthropic, Meta, and Google. The attackers used innocuous test queries to identify which models respond without triggering security alerts.
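The probing step can be sketched roughly as follows. This is a minimal illustration, not code from the GreyNoise report: the payload mirrors the common OpenAI-style chat-completions request shape, and `classify_response` is an assumed heuristic for deciding whether a proxied endpoint answered like a live model rather than an error page.

```python
import json

# Build a harmless test query of the kind described in the campaign:
# a minimal OpenAI-style chat-completions payload (hypothetical example).
def build_probe(model: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Say OK"}],
        "max_tokens": 5,
    }

# Heuristic: a misconfigured proxy forwarding to a live LLM backend
# typically returns HTTP 200 with a JSON body containing a "choices"
# list; auth failures and dead endpoints return errors instead.
def classify_response(status: int, body: str) -> str:
    if status in (401, 403):
        return "auth-required"
    if status != 200:
        return "no-model"
    try:
        data = json.loads(body)
    except ValueError:
        return "no-model"
    if isinstance(data, dict) and data.get("choices"):
        return "model-responding"
    return "no-model"
```

For example, `classify_response(200, '{"choices":[{"message":{"content":"OK"}}]}')` returns `"model-responding"`, while a 401 from a properly locked-down proxy classifies as `"auth-required"`.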
Why Is It Important?
The targeting of LLM APIs highlights threat actors' growing interest in exploiting AI technologies. A misconfigured proxy or API can grant unauthorized access to model capacity and lead to data breaches, posing significant risk to companies that rely on these models. The activity underscores the need for concrete hardening: authenticate every request before it is forwarded, restrict which upstream hosts a proxy may reach (the standard SSRF mitigation), and monitor for the low-noise test queries these campaigns use. As AI becomes more integrated across sectors, securing this infrastructure is crucial to preventing exploitation by malicious actors.
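The first of those mitigations can be sketched as below: the proxy authenticates callers before anything is forwarded to an LLM backend. This is an assumed minimal check, not a complete solution; the token name and values are hypothetical, and the constant-time comparison guards against timing side channels when validating the bearer token.

```python
import hmac

# Hypothetical shared secret; in practice, load from a secrets manager,
# never hard-code it.
API_TOKEN = "example-token-not-for-production"

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(supplied, API_TOKEN)

def forward_or_reject(headers: dict) -> int:
    """401 for unauthenticated callers; 200 stands in for 'forward upstream'."""
    return 200 if is_authorized(headers) else 401
```

A proxy applying this check returns 401 for the innocuous unauthenticated probes described above instead of relaying them to a paid model endpoint.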