What's Happening?
The SANS Institute has raised concerns that the rapid integration of AI into enterprise workflows is outpacing security measures. According to the 2026 SANS State of Identity Threats & Defenses Survey, non-human identities (NHIs) such as service accounts, API keys, and automation bots are growing sharply. The survey, which polled over 500 security professionals globally, found that 76% of organizations report growth in NHIs, with 74% using AI agents that require credentials. These AI agents, or agentic AI, pose new security risks because, unlike traditional NHIs, they can take unpredictable actions. The report highlights a lack of governance, with 92% of organizations failing to rotate machine credentials on a 90-day cycle. The SANS Institute recommends adopting secrets vaults, automated rotation, and least-privilege access to mitigate these risks.
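The 90-day rotation finding above can be sketched as a minimal staleness check. This is an illustrative sketch only: the credential names, dates, and in-memory inventory are hypothetical, and a real implementation would query a secrets vault or identity-management API rather than a hard-coded list.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of machine credentials; in practice this would
# be pulled from a secrets vault or identity-management API.
CREDENTIALS = [
    {"name": "ci-deploy-key", "last_rotated": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"name": "billing-api-token", "last_rotated": datetime(2025, 11, 1, tzinfo=timezone.utc)},
]

# The 90-day cycle cited in the SANS survey.
ROTATION_WINDOW = timedelta(days=90)

def stale_credentials(creds, now=None):
    """Return names of credentials not rotated within the 90-day window."""
    now = now or datetime.now(timezone.utc)
    return [c["name"] for c in creds if now - c["last_rotated"] > ROTATION_WINDOW]
```

Run against the sample inventory with a fixed clock, the check flags only the credential whose last rotation falls outside the window; a production version would feed the flagged names into an automated rotation job rather than merely reporting them.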
Why It's Important?
The rapid deployment of AI agents without adequate security measures poses significant risks. As AI agents gain more decision-making power, they could trigger data breaches and other security incidents. The absence of governance frameworks leaves many organizations unprepared to manage the security challenges these technologies introduce. This has widespread implications for industries relying on AI, which may face increased vulnerability to cyberattacks. Robust security measures are critical to protect sensitive data and maintain trust in AI-driven systems.
What's Next?
Organizations are expected to enhance their security frameworks to address the challenges posed by agentic AI. This includes implementing automated credential rotation and adopting a minimum viable security approach. As AI technologies continue to evolve, companies will need to stay ahead of potential risks by integrating security measures into their AI deployment strategies. The SANS Institute's recommendations may guide organizations in developing more effective governance structures to manage AI-related security risks.