Rapid Limit Exhaustion
A growing number of users of Anthropic's AI assistant, Claude, are reporting an unexpected and rapid depletion of their allocated usage limits.
The complaints have surfaced despite recent feature enhancements to Claude, including expanded capabilities in coding, dispatch, and remote work. Users report that even a simple interaction, such as a basic greeting like 'hello,' can consume a noticeable share of their available tokens, particularly on paid tiers like Claude Pro and Claude Max. The rapid consumption is making the AI markedly less usable for many, prompting widespread discussion and concern across user forums and social media. The issue affects both free and subscribed users; paying customers report that it remains problematic even with their higher allowances.
Anthropic's Response
In light of the escalating complaints, Anthropic has publicly acknowledged the problem. Lydia Hallie, representing the company, stated on X that they are "aware people are hitting usage limits in Claude Code way faster than expected." She said the issue is being treated as a top priority and is under active investigation. Anthropic has not yet pinpointed a definitive cause for the accelerated token usage; Hallie indicated that more information would be shared as soon as it becomes available. The situation affects a significant portion of the user base who rely on Claude for their daily work.
Potential Causes Explored
While Anthropic investigates, potential explanations are emerging from the user community. One user on Reddit suggested that two specific bugs might be behind the rapid token consumption. These suspected bugs could be disrupting the conversation's prompt cache, forcing context to be reprocessed and driving token demand unusually high. One bug is thought to be tied to the standalone Claude Code application, while the other might be triggered by specific commands, namely '--resume' and '--continue'. Anthropic's Thariq Shihipar has indicated that these claims are being looked into, noting that "prompt cache bugs can be quite subtle," suggesting that identifying and resolving such issues requires meticulous examination of the AI's internal processes.
Recent Limit Adjustments
It's worth noting that Anthropic had recently implemented a temporary doubling of Claude's limits for a two-week period beginning March 15th. After that period ended, the company adjusted consumption limits, particularly for users accessing the AI during 'peak hours.' Although the weekly token limit was stated to remain unchanged, some users found themselves hitting session limits more quickly. This temporary boost and subsequent adjustment may have coincided with, or exacerbated, the current complaints about faster-than-expected limit depletion, creating confusion and frustration among users even though overall weekly allowances were intended to stay the same.