Mandatory ID Verification
A significant development has emerged concerning Anthropic, a prominent AI firm known for its emphasis on user privacy. The company has begun requiring government-issued identification and selfie verification for a subset of users interacting with its services, including its flagship AI model, Claude. According to a recent company blog post, affected users must submit a photograph of a valid government-issued ID, such as a passport or driver's license, along with a live selfie taken via webcam or phone. The process is designed to be relatively swift, with completion targeted within five minutes.

Anthropic says the checks are being rolled out for specific use cases as part of routine platform integrity assessments, safety protocols, and compliance measures. The move marks a departure from the company's previous practices and has raised eyebrows, particularly given Anthropic's prior stance on data privacy, which included a zero-data-retention policy under which user data and generated responses were not stored on its servers. That stance contrasted sharply with competitors that may use user data for model training.
Privacy Backlash and Competitor Impact
Anthropic's introduction of these stringent identity-verification measures has ignited a wave of criticism, with many users and observers viewing it as a blow to the company's hard-earned reputation for prioritizing user privacy. Critics argue that the policy undermines the trust that drew many users to Anthropic's platform in the first place, especially those concerned about data security and anonymity in AI interactions. Some see the shift as a strategic misstep that could hand a considerable advantage to rival AI providers; social media commentary reflects this sentiment, with some users stating outright that the company has 'handed their competitors a gift.' Comparisons are frequently drawn with other leading AI services, such as ChatGPT and Gemini, which currently do not enforce such rigorous identification protocols. The move also follows a surge in free subscriptions that Anthropic saw after it refused to strike a deal with the US government for its classified networks, suggesting that privacy and ethical considerations have been key differentiators for its user base.
Third-Party Verification and Security Risks
For its identity-verification process, Anthropic has partnered with Persona Identities, a third-party service provider. Users must provide a physical, undamaged government document such as a passport, driver's license, or state/provincial ID card. Anthropic states that verification data will remain confidential, confined to interactions between the user, Anthropic, and Persona Identities, and will not be used for marketing or advertising purposes unrelated to verification and compliance. A company spokesperson clarified that the checks will apply to a 'small number' of cases in which user activity suggests potentially fraudulent or abusive behavior.

Reliance on a third-party service nonetheless introduces inherent security risks, especially in light of recent data breaches at other service providers. The Tata Consultancy Services hack in April 2025 and a Discord data breach in October 2025, which exposed personal verification data submitted by 70,000 users, underscore the vulnerability of such systems and raise questions about how secure user verification data ultimately is.