What's Happening?
In 2025, 38% of Americans received scam calls impersonating healthcare providers, a figure that highlights the sharp rise in AI-driven scams. These campaigns are often multi-modal: a text message is followed by a phone call or email to make the scam appear more legitimate. The American Hospital Association has warned healthcare organizations about deepfake scams targeting their staff, which use AI-generated audio, video, and text to impersonate employees and create legal and financial liabilities for organizations. The attack on Kettering Health exemplifies the disruption such scams can cause: a ransomware group triggered a systemwide IT outage, leading to chaos and fraudulent payment requests.
Why It's Important?
The rise of AI-driven scams poses a significant threat to the healthcare industry, which is already a prime target for fraudsters. These scams can lead to substantial financial losses and regulatory fines for non-compliance with data protection laws. AI-powered impersonation is also eroding consumer trust in communications from healthcare providers: 77% of Americans say they are concerned about AI impersonation. As a result, healthcare organizations must strengthen their security measures, including call authentication and spoof protection, to safeguard their brand reputation and customer trust.
What's Next?
Healthcare organizations are expected to prioritize securing their communication channels, particularly voice calls, which remain patients' preferred method of contact. Implementing comprehensive voice security strategies, such as branding outbound calls and verifying call origins, will be crucial. As AI technology continues to evolve, healthcare providers must stay vigilant and adapt their defenses to mitigate the risks of AI-powered scams.