What's Happening?
A class action lawsuit has been filed against Otter.ai, a popular AI transcription tool, alleging that it recorded private conversations without participants' consent and used the recordings to train its AI models without proper disclosure. The case, In re Otter.AI Privacy Litigation, is currently before Judge Eumi K. Lee in the U.S. District Court for the Northern District of California. Although no substantive rulings have been issued yet, the lawsuit is drawing attention to potential compliance gaps for HR teams that use AI-powered meeting notetakers. Employment attorneys suggest the case could signal where liability may fall for employers deploying such technology.
Why Is It Important?
The lawsuit highlights significant legal risks associated with AI notetakers, particularly around privacy and consent. Under the federal Wiretap Act, recording a conversation generally requires the consent of only one party, but a number of states, including California, require the consent of all parties. This patchwork can create overlapping and inconsistent consent obligations for employers, especially in virtual meetings with participants in multiple jurisdictions. AI transcription tools may also raise risks related to biometrics, accuracy, discrimination, and data retention. Employers must navigate these frameworks carefully to avoid liabilities such as statutory damages under laws like the Illinois Biometric Information Privacy Act.
What's Next?
HR leaders are advised to get ahead of AI notetaker use by selecting and configuring approved tools rather than allowing employees to adopt unapproved applications. That means vetting vendors for data security, setting up consent notices, and establishing clear policies on when and how AI notetakers may be used. The Otter.ai litigation has not yet produced binding precedent, but HR teams should monitor developments closely to stay compliant with emerging legal standards.
Beyond the Headlines
The case underscores broader implications for AI technology in the workplace, including potential discrimination issues if AI transcription tools misinterpret accents or speech impediments. This could lead to disparate impacts in employment decisions, triggering additional legal requirements in jurisdictions like New York City, Illinois, and California. Multinational employers face even more complex compliance challenges under international regulations such as the GDPR and the upcoming EU AI Act, which may classify certain AI systems as high-risk.