What's Happening?
A class action lawsuit against Otter.ai, a prominent AI transcription tool, is highlighting potential legal risks associated with AI-powered meeting notetakers in the workplace. The case, currently before
Judge Eumi K. Lee in the U.S. District Court for the Northern District of California, alleges that Otter.ai recorded private conversations without the consent of all participants and used these recordings to train its AI models without proper disclosure. This lawsuit is drawing attention to compliance gaps, particularly in states with all-party consent laws. Employment attorneys are advising HR leaders to be vigilant about these issues, as federal wiretap laws and state counterparts can impose significant statutory damages for non-compliance.
Why It's Important?
The outcome of this lawsuit could have significant implications for HR practices and the use of AI in the workplace. If the court rules against Otter.ai, the decision could set a precedent affecting how AI transcription tools are used, particularly in states with stringent consent laws. This could raise compliance costs for businesses and necessitate changes in how AI tools are deployed in employment settings. The case also underscores the broader issue of data privacy and the potential for AI tools to inadvertently create legal liabilities for employers, especially in jurisdictions with specific biometric and data protection laws.
What's Next?
HR leaders are advised to proactively address these compliance issues by vetting AI vendors for data security and consent features, and by establishing clear policies on the use of AI notetakers. As the case progresses, businesses will need to stay informed about legal developments and be prepared to adjust their practices accordingly. The EU AI Act's obligations for high-risk AI systems, which take effect in August 2026, may also influence how multinational companies approach AI compliance.