Privacy Breach Allegations
Meta is entangled in a significant privacy controversy concerning its AI-enhanced smart glasses, developed in collaboration with Ray-Ban's parent company, EssilorLuxottica. A new class-action lawsuit has been initiated in the United States, stemming from allegations that sensitive and intimate user footage captured by these glasses is being exposed to human data annotators located in countries like Kenya. These individuals are tasked with training AI models, a process that has drawn sharp criticism. The lawsuit claims that Meta has not adequately informed users about the full extent of these data-sharing practices, leading to potential privacy violations and a breach of consumer trust. The Information Commissioner's Office in the UK is also reportedly investigating Meta's practices. The growing popularity of these smart glasses, with an estimated seven million customers reportedly purchasing the device in 2025, up from two million in 2023-2024, amplifies these concerns about everyday surveillance and the normalization of constant data collection.
Functionality and Transparency
Meta's smart glasses integrate advanced AI capabilities, allowing wearers to capture first-person video and audio recordings and analyze their surroundings. A notable feature is the privacy indicator light, designed to alert others when recording is active. However, critics and users have pointed out that this light can be easily overlooked, particularly in bright outdoor settings or crowded environments. A more significant concern highlighted in the lawsuit and investigative reports is the practice of sending recorded footage to offshore contractors for data labeling. This preprocessing step is crucial for training AI models, but the lawsuit argues that users are not fully aware of, or explicitly consenting to, this level of human review. While Meta asserts that consent is obtained before data enters the pipeline, the clarity and effectiveness of that consent process remain subjects of debate, potentially leaving users unaware that intimate moments could be viewed by third parties.
Content Reviewed by Annotators
Reports emerging from a joint investigation by Swedish newspapers have revealed the deeply personal nature of the content being reviewed by Meta's third-party contractors. These workers, based in locations like Nairobi, Kenya, have described encountering highly sensitive and intimate user data. This includes footage of individuals in private moments, such as going to the toilet or undressing, as well as instances of people changing clothes or handling personal financial documents like bank cards. The content also reportedly encompasses users watching pornography and even filming explicit encounters. Anonymous contractors have expressed discomfort and a sense of obligation to review this material, fearing job loss if they refuse. The extent of this data exposure has led to questions about whether users truly understand the privacy implications when they record with these devices, suggesting a significant disconnect between advertised privacy features and actual data handling practices.
Lawsuit's Core Claims
The class-action lawsuit, filed by Gina Bartone and Mateo Canu and represented by the Clarkson Law Firm, targets both Meta and its manufacturing partner, EssilorLuxottica. The suit centers on allegations of violating US consumer protection laws through deceptive advertising. Plaintiffs highlight Meta's marketing claims for the smart glasses, which often emphasize 'designed for privacy,' 'controlled by you,' and 'built for your privacy.' These statements, the lawsuit contends, lead consumers to believe their data is secure and that they have full control over shared content. The complaint points to advertisements that tout privacy settings and an 'added layer of security,' suggesting wearers actively choose what content is shared. Crucially, the plaintiffs argue that no disclaimers or information were provided that contradicted these privacy assurances, creating a misleading impression about how their footage would be handled, particularly the involvement of human reviewers overseas.
Meta's Defense and Policies
Meta has issued statements addressing the allegations, explaining that the review of user-shared content by contractors is intended to enhance user experience with the glasses. The company maintains that faces in images and footage are blurred before being shared with reviewers, although some sources have questioned the consistency of this blurring process. Meta's official privacy policy and terms of service, as stated by the company, indicate that content is shared with human reviewers only upon user consent. A version of their policy applicable to US users explicitly mentions that interactions with AIs, including conversations and messages, may be reviewed manually or automatically. The AI terms of use also advise users against sharing sensitive information they do not want retained. Meta spokesperson Christopher Sgro stated that unless users opt to share captured media, it remains on their device, and when content is shared with Meta AI, contractors may review it for improvement purposes, with steps taken to protect privacy and filter identifying information.
Past Privacy Incidents
This lawsuit is not the first instance of privacy concerns surrounding Meta's smart glasses, as the company has a history of facing scrutiny over data protection. In October 2024, Harvard students demonstrated how Meta's smart glasses, in combination with large language models and public databases, could be used to identify individuals and locate their residences in real-time. Furthermore, recent reports indicate Meta's ongoing development of new features for its AI glasses, including 'Name Tag,' a facial recognition capability intended to allow wearers to identify people and gather information about them via the AI assistant. The company is also reportedly working on features that would enable continuous camera and sensor operation to record and summarize a person's day, drawing parallels to AI-powered note-taking tools for meetings. These developments suggest a continuing trend of integrating advanced AI and data collection capabilities into wearable devices, further fueling ongoing privacy debates.