A chief technology officer has alleged that Anthropic’s chatbot Claude abruptly shut down his firm’s operations without any prior warning or clear explanation, raising fresh concerns over the risks of relying
heavily on artificial intelligence systems.
Pato Molina, chief technology officer of Argentina-based fintech firm Belo, claimed that Anthropic shut down more than 60 of the company’s Claude accounts for “no apparent reason”, alleging that it received no prior warning or explanation, and expressed frustration over Anthropic’s handling of the matter.
In a post on X (formerly Twitter), the tech entrepreneur criticised Anthropic’s customer service and asked why filling out a Google Form was the only way to appeal the decision. “Very bad UX and customer service,” he wrote.
He also shared a screenshot of the official communication from Anthropic, which read, “Our automated systems detected a high volume of signals associated with your account which violate our Usage Policy. These signals were, in turn, reviewed by our team to validate our system’s findings. As a result, we have revoked your access to Claude.”
@claudeai you took down our entire organization with 60+ accounts belonging to a legitimate company for no apparent reason, without any explanations. The only way to appeal the decision is by filling out a Google Form? Very bad UX and customer service.
— Pato Molina (@patomolina) April 17, 2026
“To appeal our decision, please fill out this form or learn more about the appeals process here,” the message sent from Anthropic’s safeguards team added.
However, he later shared an update saying access had been restored. “Apparently, it was a false positive. And as always: Twitter is a service,” he said.
In a follow-up post, Pato Molina explained the sequence of events, saying Anthropic had shut down his entire organisation over an alleged violation of its terms of use. However, he said the firm had no clarity on which specific policy was breached and only received an email informing them of the suspension.
“If you want to appeal the decision, you have to fill out a Google Form – ridiculous as it sounds,” he wrote.
He added that the move left more than 60 employees without access to a critical work tool, including integrations, skills and conversation histories, all of which, he claimed, were either lost or placed on indefinite hold.
Issuing a warning, Molina wrote: “A huge lesson for any software company that relies on AI tools in critical processes. Never put all your eggs in one basket.”
He also weighed the pros and cons of using multiple AI platforms, noting that while diversification can ensure continuity during outages, it also increases operational complexity through higher training costs and fragmented conversation histories. At the same time, he said many firms prefer “marrying” a single reliable provider to streamline workflows, but the lack of transparency or support during unexpected disruptions remains a major concern.
The post drew widespread attention on social media, prompting varied reactions from users. “The issue is that the company, like OpenAI (ChatGPT), is a fad. They thrive on manufactured demand driven by hype, then they adopt excessive and unrealistically restrictive “policies” not supported by actual reasonable judgment. When they enforce those policies, and you complain,” one user wrote.
“Having your company rely on a single provider is bad; all my products are orchestrator/LLM idempotent to avoid that kind of dependency. As a shareholder, I’d replace this CEO asap if he didn’t plan for a contingency plan. people with no real business experience become CEOs now,” another commented.
“The same thing happened with my companies. No warning no explanation. Twice already we lost all our customers’ information. This is ridiculous,” a third user added.