What's Happening?
Agentic AI, a form of artificial intelligence that combines software and language models to make decisions autonomously, is being considered for adoption across industries. Diana Kelley, Chief Information Security Officer at Noma Security, discusses its potential benefits and risks. The technology promises efficiency gains, improved accuracy, and faster problem resolution, but it also poses risks such as over-reliance on autonomous systems, data loss, and ethical violations. Kelley emphasizes the importance of building security and governance into AI systems to mitigate these risks.
Why Is It Important?
The adoption of agentic AI could significantly impact industries such as financial services and manufacturing by enhancing fraud detection and optimizing supply chains. However, the risks of autonomous decision-making necessitate careful implementation to avoid data breaches and reputational damage. Organizations must balance the benefits of increased automation against the transparency and oversight needed for ethical, safe AI deployment.
What's Next?
Organizations are advised to conduct inventories of AI usage and align governance frameworks with standards like NIST’s AI RMF and the EU AI Act. Pilot deployments with ongoing monitoring and red-team testing can help identify weaknesses early. The success of agentic AI will depend on technical safeguards, AI-aware processes, and organizational readiness for cultural change.
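The inventory-and-pilot workflow above can be sketched as a simple gating check. This is an illustrative sketch only: the field names, risk tiers, and readiness rules below are assumptions for demonstration, not terms defined by NIST's AI RMF or the EU AI Act.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Hypothetical inventory record for one AI deployment.
    name: str
    owner: str
    autonomy_level: str   # assumed tiers: "advisory", "supervised", "autonomous"
    risk_tier: str        # assumed tiers: "minimal", "limited", "high"
    monitored: bool = False
    red_team_tested: bool = False

def pilot_ready(uc: AIUseCase) -> bool:
    """Gate a use case before piloting: monitoring is always required,
    and autonomous or high-risk systems also need red-team testing."""
    if not uc.monitored:
        return False
    if uc.autonomy_level == "autonomous" or uc.risk_tier == "high":
        return uc.red_team_tested
    return True

# Example inventory (names and values are illustrative).
inventory = [
    AIUseCase("fraud-triage-agent", "risk-ops", "autonomous", "high",
              monitored=True, red_team_tested=True),
    AIUseCase("supply-chain-forecaster", "logistics", "advisory", "limited",
              monitored=False),
]

for uc in inventory:
    print(f"{uc.name}: {'ready' if pilot_ready(uc) else 'blocked'}")
# fraud-triage-agent: ready
# supply-chain-forecaster: blocked
```

In practice such a gate would sit inside a governance process mapped to an external framework; the point here is only that inventory records plus explicit readiness rules make the "identify weaknesses early" step auditable.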
Beyond the Headlines
Agentic AI offers remarkable promise but requires building trust with employees, customers, and regulators. Including diverse perspectives in design and oversight can reduce blind spots and strengthen resilience. AI should amplify human judgment rather than replace it, ensuring that systems operate in line with organizational values and safety requirements.