What's Happening?
As the adoption of AI in enterprises grows, security leaders are recognizing AI agents as insiders with access to sensitive data and decision-making authority. Speaking at the ISACA Europe 2025 conference, Meghan Maneval, director of community and education at Safe Security, argued that AI agents should undergo security awareness training much like human employees. She urged organizations to treat AI agents as high-risk identities and outlined a framework for AI auditing: create an inventory of AI tools, understand how each is used, and examine the underlying algorithms and training data for biases and weaknesses.
Why It's Important?
The integration of AI agents into enterprise systems poses significant security challenges. Without proper training and auditing, these agents could inadvertently expose sensitive information or make unauthorized decisions. By extending security protocols to AI agents, organizations can mitigate the risks of data breaches and unauthorized access. This approach not only protects company assets but also supports compliance with regulatory standards. As AI becomes more prevalent, the need for robust security measures will grow, affecting every industry that relies on AI-driven processes.
What's Next?
Organizations are expected to implement Maneval's recommendations by developing comprehensive AI auditing programs. This includes conducting background checks on AI agents and ensuring they adhere to company policies. As AI technology evolves, security experts will likely refine auditing practices to address emerging threats. Companies may also collaborate with third-party vendors to enhance AI security measures, ensuring that all stakeholders are aligned in protecting sensitive data.
Beyond the Headlines
The ethical implications of AI agents acting as insiders raise questions about accountability and transparency. As AI systems gain more autonomy, organizations must consider the moral responsibilities associated with their use. This includes addressing potential biases in AI decision-making and ensuring equitable treatment of all data subjects. Long-term, the integration of AI into security protocols could lead to shifts in how companies approach risk management and governance.