Unwavering Takedown Mandate
The Indian government recently convened with leading internet corporations, including Meta and Google, to reinforce a critical regulatory requirement: the mandatory removal of AI-generated deepfakes within three hours of a complaint. The policy, a cornerstone of the nation's strategy to curb misleading information and protect citizens from deceptive content, obliges platforms to act swiftly on any complaint about fabricated visual or audio material. The high-level discussion sought to ensure robust compliance and to address the practical complexities and potential ramifications of enforcing so stringent a timeline. Authorities underscored that adherence to this rapid-response protocol is essential to preserving the integrity of public discourse and preventing the spread of damaging falsehoods, particularly during sensitive periods such as elections. Technical implementation and collaborative frameworks between the government and the technology providers were reportedly also part of the dialogue, reflecting India's proactive and decisive approach to managing the societal impact of rapidly evolving digital technologies.
Tech Giants' Concerns Addressed
During a significant meeting involving major technology players and the Internet and Mobile Association of India, governance of AI and deepfake content took center stage. Representatives of Meta, Google, and OpenAI discussed the implications of the existing regulations. A key point of contention, raised by Meta's vice president of policy, Rob Sherman, was the operational difficulty of the three-hour takedown window, which he had previously described as challenging. Sherman indicated that Meta would have communicated these operational hurdles to the government had it been consulted before the mandate was issued. While the Ministry of Electronics and Information Technology (MeitY) acknowledged that three hours is indeed a brief window, it signaled no intention of relaxing the rule. This firm position stands even though MeitY only recently amended the intermediary rules framed under Section 79 of the IT Act, sharply cutting the compliance window for intermediaries from 36 hours to three, a change driven by the rapid viral spread of deepfake and misleading content.
Non-Negotiable Compliance
The government communicated unequivocally to the assembled internet platforms that compliance with these IT amendments is a top priority and non-negotiable. According to sources privy to the discussions, the Centre made clear that firms unable to adhere to the regulations will not be shielded from consequences. It further stressed that the primary challenge for platforms lies in effectively prioritizing the identification and removal of harmful synthetic content, rather than merely labeling material as AI-generated or not. The distinction underscores the government's focus on proactively mitigating the harm caused by deepfakes, a results-oriented approach to digital content regulation. The discussions aimed to ensure that platforms are adequately equipped and committed to enforcing these rules swiftly and effectively, reinforcing the government's commitment to a secure and trustworthy online environment for its citizens.