What's Happening?
Recent findings indicate that AI data poisoning attacks have affected 26% of organizations in the U.S. and UK this year, a higher prevalence than previously anticipated. While deepfake-related attacks declined from 33% in 2024 to 20% in 2025, 37% of enterprises reported unauthorized generative AI use among employees. The third annual State of Information Security Report by IO identifies AI-generated misinformation, phishing, and shadow AI as the primary cybersecurity threats for the coming year. In response, 75% of surveyed organizations plan to implement AI usage policies to combat shadow AI.
Why It's Important?
The rise in AI data poisoning attacks poses significant risks to both technical systems and the integrity of services that businesses and the public rely on. As AI technologies become more deeply integrated into organizational operations, the potential for misuse and security breaches grows with them. The decline in deepfake attacks is a positive trend, but the increases in unauthorized AI use and data poisoning underscore the need for robust governance and security measures. Organizations that fail to address these threats risk operational disruption and reputational damage, making proactive cybersecurity strategies essential.
What's Next?
Organizations are expected to strengthen their cybersecurity frameworks by developing comprehensive AI usage policies and investing in technologies that can detect and mitigate AI-related threats. The cybersecurity industry may see increased collaboration between companies and regulatory bodies to establish standards and best practices for AI governance. As awareness of these threats grows, businesses will likely prioritize training and education so that employees understand the risks of AI misuse.