What's Happening?
Senator Maggie Hassan has formally requested information from four AI voice-cloning companies—ElevenLabs, LOVO, Speechify, and VEED—about the measures they take to prevent misuse of their technology for scams. The request follows a report from the FBI's Internet Crime Complaint Center, which documented $893 million in losses from AI-related scams in 2025. The senator's inquiry focuses on how these companies monitor for fraudulent activity, obtain consent for voice cloning, and prevent impersonation of public figures or minors. It is part of a broader effort to address the growing threat of AI-driven scams, which have been used to impersonate loved ones and defraud individuals, particularly in cases like the New Hampshire grandparent scam.
Why It's Important?
The inquiry by Senator Hassan underscores the increasing concern over the use of AI in financial scams, which have significant implications for consumer protection and cybersecurity. The $893 million loss reported by the FBI highlights the scale of the problem, affecting thousands of individuals and businesses. The senator's actions could lead to stricter regulations and accountability measures for AI companies, potentially reducing the risk of such scams. This is particularly important as AI technology becomes more sophisticated and accessible, posing challenges for law enforcement and financial institutions in detecting and preventing fraud.
What's Next?
Senator Hassan has set a deadline for the companies to respond to her inquiries, and their responses will inform potential legislative action by helping determine whether current safeguards are sufficient or new regulations are needed. The AI Fraud Accountability Act, a bipartisan bill that would make digital impersonation a federal crime, is already under discussion. The outcomes of these inquiries and legislative efforts could shape the future of AI regulation and consumer protection in the U.S., influencing how companies develop and deploy AI technologies.
Beyond the Headlines
The ethical implications of AI voice-cloning technology are significant: it raises questions about privacy, consent, and the potential for misuse. The ability to clone voices can undermine trust and security, making it crucial for companies to implement robust safeguards. The broader cultural impact includes increased skepticism toward digital communications and a growing need for public awareness of the risks of AI-driven scams. In the long term, this could shift how the technology is perceived and integrated into daily life, underscoring the importance of ethical standards in AI development.