What's Happening?
GreyNoise, a threat intelligence firm, has reported a sharp increase in attacks targeting large language model (LLM) APIs through misconfigured proxy servers. Between October 2025 and January 2026, the firm recorded over 91,000 attack sessions across two major campaigns. The first, starting in October, exploited server-side request forgery (SSRF) vulnerabilities using ProjectDiscovery’s out-of-band application security testing (OAST) infrastructure. The second, beginning in late December, comprised over 80,000 attack sessions probing more than 70 LLM model endpoints for misconfigurations. The attacks targeted models from major providers including OpenAI, Anthropic, Meta, and Google, and the attackers used innocuous test queries to avoid triggering security alerts, suggesting a reconnaissance effort to identify vulnerable systems for future exploitation.
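To make the misconfiguration concrete, the sketch below shows the kind of exposed proxy such probes look for: a service that forwards any caller's request to an upstream LLM API using a credential held on the server, without ever authenticating the caller. This is an illustrative assumption rather than anything published by GreyNoise; the upstream URL, route, and environment variable name are hypothetical.

```python
# Hypothetical sketch (not from the GreyNoise report): a reverse proxy that forwards
# any caller's request to an upstream LLM API using a server-side key, without
# authenticating the caller. All names and URLs below are illustrative assumptions.

import os

import requests
from flask import Flask, Response, request

app = Flask(__name__)

UPSTREAM = "https://api.example-llm-provider.com/v1/chat/completions"  # hypothetical upstream
SERVER_SIDE_KEY = os.environ.get("LLM_API_KEY", "")  # paid credential held by the proxy


@app.route("/v1/chat/completions", methods=["POST"])
def proxy_chat():
    # Misconfiguration: no check of who the caller is. Anyone who can reach this
    # endpoint gets responses billed to SERVER_SIDE_KEY -- exactly what an
    # "innocuous test query" would confirm during reconnaissance.
    upstream_resp = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {SERVER_SIDE_KEY}"},
        json=request.get_json(force=True),
        timeout=30,
    )
    return Response(
        upstream_resp.content,
        status=upstream_resp.status_code,
        content_type=upstream_resp.headers.get("Content-Type", "application/json"),
    )


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A probe in the style the report describes would simply POST a harmless prompt to such an endpoint and note whether a real model answers, confirming the proxy is open without tripping content-based alerts.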
Why Is It Important?
The rise in attacks on LLM APIs highlights threat actors' growing interest in exploiting artificial intelligence technologies. As LLMs become integral to more applications, their security becomes correspondingly important: a misconfigured API or proxy can give unauthorized parties access to sensitive data and functionality, posing significant risks to businesses and users. The activity underscores the need for robust security controls and vigilant monitoring of AI systems, and organizations running LLMs must ensure their deployments are properly configured and follow security best practices. The findings also emphasize the value of threat intelligence in identifying and mitigating emerging cyber threats.
What's Next?
Organizations running LLMs should review their security posture and confirm that their APIs and proxies are configured to block unauthenticated access. Security teams may need to audit exposed endpoints regularly and use monitoring tools to detect and respond to suspicious activity, such as the innocuous test queries described above. The cybersecurity community may also collaborate more closely on standardized guidance for securing AI systems, and further research into the methods and motivations of attackers targeting LLMs could inform future defenses. As AI technologies continue to evolve, ongoing vigilance and adaptation will remain essential against increasingly sophisticated cyber threats.
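As one starting point for the kind of monitoring mentioned above, the short sketch below scans a standard combined-format web server access log for requests to paths commonly exposed by LLM API proxies and summarizes the calling IPs. It is a minimal illustration, not a tool referenced in the GreyNoise report; the default log location and the path list are assumptions to adapt to your own deployment.

```python
# Minimal sketch of a log review for LLM-endpoint probing, not a tool from the
# GreyNoise report. It scans a combined-format access log for requests to common
# LLM API paths and summarizes source IPs so unexpected external callers stand out.

import re
import sys
from collections import Counter

# Paths commonly exposed by LLM API proxies; adjust to your own deployment.
LLM_PATHS = ("/v1/chat/completions", "/v1/completions", "/v1/models", "/v1/embeddings")

# Combined log format: client IP ... [timestamp] "METHOD /path HTTP/x.x" status ...
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "\S+ (\S+) [^"]*" (\d{3})')


def scan(log_file):
    """Count (ip, path, status) tuples for requests hitting LLM API paths."""
    hits = Counter()
    for line in log_file:
        match = LOG_LINE.match(line)
        if not match:
            continue
        ip, path, status = match.groups()
        if any(path.startswith(p) for p in LLM_PATHS):
            hits[(ip, path, status)] += 1
    return hits


if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "access.log"
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for (ip, url, status), count in scan(fh).most_common(20):
            print(f"{count:6d}  {ip:15s}  {status}  {url}")
```

Run against a proxy's access log, the output surfaces addresses repeatedly hitting model endpoints that should only be reachable from inside the network, which is the pattern of quiet reconnaissance the report describes.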