AI Tool Misuse Detected
Google has recently imposed stringent access restrictions on a number of users, including Gemini AI Ultra subscribers using its Antigravity platform. The measures follow the detection of a sharp rise in what Google describes as 'malicious usage' on Antigravity, its AI coding platform. Varun Mohan, a former Google DeepMind engineer, said the surge in harmful activity had severely degraded the quality of service for legitimate users, and that the blocks were applied swiftly to individuals whose behavior deviated from the platform's intended use.
Service Disruptions Unfold
The fallout first surfaced when several Gemini AI Ultra subscribers reported being unable to access the Gemini 2.5 Pro model. In some cases the restrictions extended to linked Google services such as Gmail and Google Workspace accounts. Because the bans often arrived without prior warning or detailed explanation, affected users were left frustrated. Mohan acknowledged that some of them may not have realized they were violating the terms of service, but stressed that Google must manage its resources fairly for its genuine user base, suggesting a potential pathway for some to regain access.
Third-Party Tools Under Scrutiny
The core issue appears to be the use of third-party AI agents, with OpenClaw singled out by name. In Google's view, integrating such external tools, particularly those that reuse OAuth tokens, violates its terms of service. This mirrors recent moves by other AI providers: Anthropic, for example, has explicitly banned the use of its OAuth tokens within third-party applications. These developments signal a trend towards more tightly controlled AI environments as companies work to prevent misuse and preserve the integrity of their services.
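To make the banned pattern concrete, the integration at issue amounts to a third-party process presenting a user's provider-issued OAuth bearer token as if it were the third party's own credential. The sketch below is purely illustrative: the endpoint, token value, and function name are hypothetical, not taken from any actual product or API.

```python
# Hypothetical sketch of the pattern providers are now disallowing: a
# third-party agent attaching a user's OAuth access token (minted for a
# first-party client) to its own API requests. Endpoint and token are
# placeholders, not real values.

def build_agent_request(access_token: str, prompt: str) -> dict:
    """Assemble the request a third-party agent might send when reusing
    a user's OAuth token instead of holding its own API key."""
    return {
        "url": "https://ai.example.com/v1/generate",  # placeholder endpoint
        "headers": {
            # The user's bearer token is presented by an unrelated
            # application -- the behavior several providers' updated
            # terms of service now explicitly prohibit.
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "body": {"prompt": prompt},
    }

request = build_agent_request("ya29.EXAMPLE_TOKEN", "Summarize my inbox")
print(request["headers"]["Authorization"])  # Bearer ya29.EXAMPLE_TOKEN
```

From the provider's side, requests like this are hard to distinguish from the legitimate first-party client, which is one reason vendors are moving to ban token reuse contractually rather than relying on detection alone.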
Evolving AI Landscape
The crackdown underscores intensifying competition in the AI sector and a shift towards more exclusive, tightly regulated AI ecosystems. Tools like OpenClaw, built as comprehensive AI assistants capable of managing email, bookings, and other tasks, are facing increased scrutiny. While praised for their utility, such powerful agents pose significant security risks when improperly configured, from cyberattacks to data breaches, prompting regulatory bodies and platform providers to take notice and implement stricter controls.