What is the story about?
As Anthropic continues its rapid climb in the artificial intelligence race, increasingly positioning itself as a serious challenger to OpenAI, cracks are beginning to show.
The company, reportedly valued at $380 billion and eyeing a potential public listing, is now grappling with mounting criticism from developers and enterprise users over the performance and reliability of its flagship Claude models.
The backlash comes at a pivotal moment. Anthropic has been gaining momentum not just with its core AI offerings but also with newer initiatives such as its cybersecurity-focused tool, Claude Mythos.
Yet, even as its influence expands, user complaints are raising uncomfortable questions about whether the company can maintain quality while scaling aggressively.
The CTO of a fintech company made headlines this weekend.
Patricio Molina of Argentina-based fintech Belo alleged that over 60 Claude accounts linked to his organisation were suddenly deactivated without warning. The disruption halted key workflows, froze integrations, and left teams unable to access critical tools.
According to Molina, the only communication received was an automated email citing a vague policy violation, with appeals routed through a basic online form and no direct human support. Although access was later restored and attributed to a “false positive”, the incident exposed the risks for businesses deeply reliant on a single AI provider.

“Anthropic decided to shut down our entire organisation for an alleged violation of its terms of use. Which specific policy we violated, I have not the slightest idea: we simply received an email and that was it, goodbye Claude. If you want to appeal the decision, you have to fill out a…” https://t.co/L3E6hPDeht

— Pato Molina (@patomolina) April 17, 2026 (translated from Spanish)
Molina’s experience quickly struck a chord across the developer community, triggering a wave of similar accounts from other users. Several claimed they had faced comparable suspensions and struggled to get timely responses.
X user Tomás Escobar said his company encountered the same issue months earlier and only managed to resolve it through personal contacts, noting that responses from Anthropic took “at least 7–10 days”.
Another user, replying to the same post, said he had faced the same issue with his account: after upgrading to a more expensive tier and completing every step Anthropic required, including its mandatory KYC checks, his account was banned almost immediately.

“Same thing for my account. Yesterday I needed to increase my Max plan usage to the 20x more expensive tier. Did their enforced KYC and got instantly banned. I can't use Claude since yesterday even if I was already subscribed for a year+ and going to upgrade to higher plan.”

— Alex DRocks (@DrocksAlex2) April 18, 2026
Performance concerns with Anthropic Claude

Compounding the issue are reports of a noticeable dip in Claude's performance in recent weeks. Developers report that the model has become increasingly prone to errors, struggles with complex workflows, and at times fails to follow instructions accurately. Some have also pointed out a tendency to take shortcuts, leading to incomplete or subpar outputs.
The concerns appear to be linked to backend adjustments made by Anthropic, particularly a reduction in the model’s default “effort” level.
This change, aimed at optimising token usage and reducing computational costs, may have inadvertently impacted output quality. While the company has stated that such updates were documented in its changelog, users argue that the communication lacked clarity and transparency.
The timing of these complaints is particularly significant. Even as Anthropic gains traction with enterprise clients and expands into areas like cybersecurity through tools such as Claude Mythos, these incidents reveal the growing pains of rapid scale.
As the company continues to position itself as a formidable rival to OpenAI, the challenge will be to match its technological ambitions with operational reliability. Notably, last month, ChatGPT witnessed a significant dip in its user count.
According to the data, US installs of ChatGPT’s mobile app soared by an astonishing 295 per cent in just a day.
While OpenAI faced mounting criticism, Anthropic capitalised on the moment. The data also shows that US downloads of Anthropic’s Claude app rose by 37 per cent on February 27, followed by an additional 51 per cent jump.