AI Service Restrictions
A significant number of users, including subscribers to premium tiers such as Gemini AI Ultra and Antigravity, have reported unexpected account restrictions. These bans have not only blocked access to Google's advanced AI models, specifically Gemini 2.5 Pro, but in some cases have also cut off linked services such as Gmail and Google Workspace accounts. The action, taken without prior notice for many, has caused considerable disruption for affected subscribers. The core issue appears to be the use of third-party AI agent tools, with OpenClaw specifically cited as a catalyst for the measures. Google has partially acknowledged the reports, indicating a need to act swiftly against users who are not adhering to its intended product usage guidelines and terms of service. The episode underscores how tightly access and usage patterns are now policed within sophisticated AI ecosystems in order to preserve a stable and fair experience for all users.
Root Cause: 'Malicious Usage'
The underlying reason for the widespread account restrictions has been identified as a sharp rise in what Google describes as 'malicious usage' of its AI platforms' backend infrastructure. Varun Mohan, formerly of Google DeepMind, noted on X (formerly Twitter) that this surge has drastically degraded the quality of service for genuine users. The increase in malicious activity is directly linked to open-source AI agent tools, such as OpenClaw, that leverage Google's AI models. By using these external tools, which reportedly violate Google's terms of service, users have inadvertently triggered the company's enforcement measures. Mohan explained that Google needed a rapid mechanism to halt access for those exploiting the system, emphasizing that the intent was to protect the integrity and performance of its AI services for the broader user base. While acknowledging that some users may not have been aware they were violating policy, Google has indicated it has limited capacity to review individual cases and aims to prioritize fairness for legitimate subscribers by addressing the misuse.
Broader AI Industry Trends
Google's decisive action mirrors a broader trend in the rapidly evolving artificial intelligence landscape: a shift toward more proprietary, tightly managed ecosystems. Companies are increasingly scrutinizing how their AI models are accessed and used, especially through third-party applications. In a move similar to Google's, Anthropic recently updated its terms of service to explicitly prohibit the use of OAuth tokens obtained from its AI accounts within external tools, including agent frameworks like OpenClaw. Anthropic's stance reflects the same concerns about security risks and potential misuse of AI capabilities. The development also highlights the challenges posed by AI agent tools, which are designed to automate a wide array of tasks, from managing emails to booking flights. While these tools offer significant convenience, their potential for improper configuration and the security risks they can introduce, such as data breaches and cyberattacks, are becoming increasingly apparent, prompting greater oversight and stricter enforcement of usage policies across the industry.