Rapid Read    •   6 min read

AI Use Raises Concerns Over Pre-existing Risks, Says Cybersecurity Expert

What's Happening?

Kat McCrabb, managing director of Flame Tree Cyber, warns that AI use is exposing pre-existing risks within organizations, such as poor access management and low data quality. Overreliance on AI can lead to poor decision-making and behavior that contradicts organizational policies. McCrabb emphasizes user education and strong governance frameworks as the foundation for managing AI risks. Organizations are advised to ask tech vendors where and how their data is stored and processed, and to understand the third-party risks associated with AI tools. McCrabb points to ISO/IEC 42001 and the Voluntary AI Safety Standard as frameworks for guiding responsible AI use.

Why It's Important?

As AI becomes more integrated into business operations, organizations must confront the risks its use surfaces. Weak access controls and poor data quality can undermine decision-making, productivity, and compliance. By implementing strong governance frameworks and educating users, organizations can mitigate these risks and ensure responsible AI use. Understanding third-party risks and vendors' data-handling practices is essential for maintaining data security and protecting organizational interests.

What's Next?

Organizations will need to continue developing strategies to manage AI risks, focusing on user education and governance frameworks. As AI technology evolves, there may be increased regulatory scrutiny, requiring businesses to adapt their practices to comply with new standards. Companies should engage with external experts to ensure they are operating legally and effectively managing AI risks.

AI Generated Content
