What is the story about?
For a company that has long positioned itself as a privacy-first alternative in the AI race, Anthropic has taken a step that is likely to test that reputation.
The firm has begun rolling out identity verification requirements for its chatbot Claude, asking some users to provide a government-issued photo ID and, in certain cases, a live selfie to access parts of the platform.
The change, introduced quietly via an update to its help centre this week, applies only to select scenarios for now. Anthropic says users may encounter verification prompts when accessing “certain capabilities”, during routine platform integrity checks, or as part of broader safety and compliance measures. It has not specified which features are affected or what triggers the checks.
Limited rollout, unclear triggers
Anthropic described the move as a targeted measure rather than a universal requirement. “We are rolling out identity verification for a few use cases,” the company said, adding that the data would be used solely to confirm identity.
The verification process requires users to submit a valid, physical and undamaged passport, driving licence or national identity card. Photocopies, mobile IDs and student credentials are not accepted. In some cases, users may also be asked to complete a live selfie check.
The company has partnered with Persona to handle the process. According to Anthropic, identity data is processed on Persona’s systems rather than its own infrastructure. It says the data is encrypted in transit and at rest, will not be used for model training, and will not be shared with third parties for marketing purposes.
Privacy concerns and past precedent
The rollout has prompted criticism from some users, particularly those drawn to Anthropic for its emphasis on privacy. Critics note that the requirement appears to be a company-led decision rather than a response to regulatory mandates.
The move comes months after a surge in user growth for Claude, partly driven by concerns around competitors such as OpenAI. Earlier this year, Anthropic reported a sharp increase in sign-ups after OpenAI entered a deal involving AI deployment on Pentagon classified networks, a contract Anthropic declined.
Questions are also being raised about the risks of storing sensitive identity data with third-party providers. While Persona is widely used in financial services, past incidents have highlighted potential vulnerabilities.
A breach at Discord in October 2025 exposed tens of thousands of government IDs submitted for age verification, underscoring the risks associated with centralised data storage.
The verification push also aligns with Anthropic’s broader efforts to tighten platform controls. In recent months, the company introduced systems to detect underage users, though some adults reported being incorrectly flagged and temporarily losing access to their accounts while appealing decisions.
For now, the identity checks remain limited in scope. However, the lack of clarity around their application and expansion has left users watching closely, as Anthropic balances safety measures with the privacy expectations that helped fuel its rise.