What's Happening?
A recent study by ISACA reveals that while AI is increasingly integrated into organizational operations, fewer than half of organizations have established comprehensive AI safety or security policies. The research, published on May 5, indicates that 90% of digital trust professionals acknowledge the use of AI tools within their organizations. However, only 38% have a formal policy governing AI tool usage, and 25% have no AI-related policies at all. This gap has fueled the rise of 'Shadow AI,' where employees use unapproved AI tools, potentially exposing sensitive company data. The study also highlights that 56% of respondents are uncertain about their ability to quickly shut down AI systems during security incidents, and only 20% have processes in place to override AI systems if necessary. The lack of governance, and of leadership understanding of AI risks, is a significant concern, as noted by Ulrika Dellrud of ISACA's Emerging Trends Working Group.
Why It's Important?
The absence of robust AI governance frameworks poses significant risks to organizations, including data breaches and privacy violations. As AI tools become more prevalent, the potential for AI-driven cyber threats, such as phishing and social engineering attacks, increases. The study found that 71% of respondents believe AI has made these attacks harder to detect, and 58% find it more challenging to authenticate digital information. This situation underscores the need for organizations to develop comprehensive AI policies and enhance their cybersecurity measures. The findings suggest that while AI can enhance cybersecurity defenses, its unchecked use can also introduce vulnerabilities, highlighting the importance of informed leadership and responsible data management.
What's Next?
Organizations are likely to face increasing pressure to establish and enforce AI governance policies to mitigate risks associated with AI usage. This may involve developing processes to quickly disable AI systems during security breaches and enhancing data privacy and security measures. As AI continues to evolve, companies will need to balance innovation with disciplined governance to ensure trust and unlock sustainable value. The study suggests that effective AI governance will require a strong foundation in data and privacy management, as well as leadership that understands AI risks.