US Government and Allies Issue Guidance on Secure AI Deployment in Critical Sectors
Cybersecurity agencies from the United States, Australia, Canada, New Zealand, and the United Kingdom have jointly released guidance on the secure deployment of autonomous artificial intelligence (AI) systems. These systems, known as agentic AI, are increasingly being integrated into critical infrastructure and defense sectors. The guidance emphasizes that such AI systems, which can autonomously plan, make decisions, and take actions, should be treated as a core cybersecurity concern.

The document outlines five categories of risk associated with agentic AI: excessive privilege, design flaws, behavioral unpredictability, structural risks, and accountability issues. The agencies recommend integrating these AI systems into existing cybersecurity frameworks, applying principles such as zero trust and least-privilege access. The guidance also highlights the need for cryptographically secured identities for AI agents and for human oversight of high-impact actions.
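A minimal sketch of how the recommended controls might compose in practice, assuming a deny-by-default authorization layer in front of an agent. The identity structure, action names, and approval rule here are illustrative assumptions for exposition, not details taken from the guidance itself:

```python
from dataclasses import dataclass

# Illustrative sketch (not from the guidance): an AI agent holds a
# least-privilege set of action scopes, and high-impact actions also
# require explicit human approval before they are authorized.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # only explicitly granted actions (least privilege)

# Hypothetical set of actions deemed high-impact for this example.
HIGH_IMPACT = {"modify_config", "shutdown_system"}

def authorize(agent: AgentIdentity, action: str,
              human_approved: bool = False) -> bool:
    """Deny by default; allow only in-scope actions, and require
    human sign-off for high-impact ones."""
    if action not in agent.scopes:
        return False  # zero trust: no implicit permissions
    if action in HIGH_IMPACT and not human_approved:
        return False  # human oversight gate for high-impact actions
    return True
```

In this sketch, even an agent that holds a high-impact scope cannot exercise it autonomously; the human-approval flag models the oversight the guidance calls for.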