What's Happening?
Recent penetration tests show that AI-based systems carry a higher proportion of high-risk security flaws than legacy software. According to Cobalt's State of Pentesting Report, 32% of AI and large language model (LLM) vulnerabilities are rated high risk, well above the 13% found in traditional enterprise security tests. AI vulnerabilities also show the lowest resolution rate of any category in the report, with only 38% of high-risk issues ever resolved. The report highlights the difficulty of securing AI systems, which introduce new attack surfaces through their non-deterministic behavior and their integration with sensitive data.
Why Is It Important?
The findings underscore the urgent need for stronger security measures in AI systems, which are being adopted rapidly across industries. The high prevalence of severe vulnerabilities in AI applications is a significant risk because these systems often handle sensitive data and perform autonomous actions. The report suggests that organizations may be rushing to deploy AI without adequately addressing security, exposing themselves to cyber threats. Addressing this requires a reevaluation of security practices and robust frameworks tailored to the unique challenges AI systems pose.
What's Next?
In response to these findings, cybersecurity experts recommend that organizations treat AI systems as production environments rather than experimental projects. In practice, that means enforcing strict controls such as tool call schemas, output validation, and human approval gates for high-consequence operations. Companies may also need to invest in specialized security training so their IT teams can recognize and mitigate AI-specific vulnerabilities. As AI continues to evolve, ongoing research and collaboration among cybersecurity professionals will be crucial to developing effective defenses against emerging threats.
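To make those three controls concrete, here is a minimal Python sketch of a hypothetical customer-support agent. The tool names (read_ticket, refund_customer), the parameter schemas, and the injection heuristic are all invented for illustration; they are not drawn from Cobalt's report.

```python
import json
import re
from typing import Any, Callable

# Hypothetical allow-list of tools the model may invoke. Each entry pins the
# exact parameter names and types a tool accepts, so a malformed or injected
# call is rejected before it reaches real infrastructure.
TOOL_SCHEMAS: dict[str, dict[str, type]] = {
    "read_ticket": {"ticket_id": int},
    "refund_customer": {"ticket_id": int, "amount_cents": int},
}

# Tools whose side effects are serious enough to require human sign-off.
HIGH_CONSEQUENCE = {"refund_customer"}


def validate_tool_call(name: str, args: dict[str, Any]) -> None:
    """Reject any call that is off the allow-list or deviates from its schema."""
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        raise ValueError(f"unknown tool: {name!r}")
    if set(args) != set(schema):
        raise ValueError(f"{name}: expected params {sorted(schema)}, got {sorted(args)}")
    for param, expected in schema.items():
        if not isinstance(args[param], expected):
            raise ValueError(f"{name}: {param} must be {expected.__name__}")


def validate_output(text: str) -> str:
    """Minimal output check; a real deployment needs context-specific filters."""
    if re.search(r"(?i)ignore (all )?previous instructions", text):
        raise ValueError("model output failed injection heuristic")
    return text


def execute_tool_call(raw_call: str, approve: Callable[[str], bool]) -> str:
    """Parse, validate, gate, and (pretend to) run one model-proposed tool call."""
    call = json.loads(raw_call)  # model output is untrusted input
    name, args = call["tool"], call["args"]
    validate_tool_call(name, args)
    if name in HIGH_CONSEQUENCE and not approve(f"{name}({args})"):
        return "rejected by human reviewer"
    # ... dispatch to the real tool implementation here ...
    return validate_output(f"executed {name} with {args}")


if __name__ == "__main__":
    proposed = '{"tool": "refund_customer", "args": {"ticket_id": 42, "amount_cents": 1999}}'
    ask = lambda summary: input(f"Approve {summary}? [y/N] ").strip().lower() == "y"
    print(execute_tool_call(proposed, approve=ask))
```

The structural point is that the model only proposes actions: deterministic code validates every call against an allow-list, outputs are checked before they propagate, and a human confirms anything irreversible before it runs.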