AI Content Regulations Unveiled
India has significantly bolstered its digital content regulations with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. Effective from February 20, 2026, the amendments explicitly bring artificial-intelligence-generated, synthetic, and manipulated content under the legal framework. The scope covers deepfakes, voice cloning, and algorithmically altered text, images, audio, and video, leaving no ambiguity that AI-generated material is regulated: such content now stands on par with other forms of user-generated information governed by Indian law. Intermediaries, defined broadly to include social media platforms, messaging apps, video-sharing sites, and AI-driven content platforms, must ensure users are not misled into believing synthetic or AI-generated content is real, particularly when it involves an individual's likeness or could sway public opinion and trust. The core principle is honesty about the authenticity of digital content.
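As a purely illustrative sketch of how a platform might satisfy this disclosure obligation, the snippet below attaches a synthetic-content label to an upload's metadata so a visible notice can be shown to viewers. The SyntheticContentLabel schema, its field names, and the label_upload helper are assumptions for illustration; the rules mandate the outcome (users must not be misled about authenticity), not any particular format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SyntheticContentLabel:
    """Hypothetical metadata a platform might attach to AI-generated media.

    The schema is an illustrative assumption, not a format prescribed by the rules.
    """
    is_synthetic: bool              # content is AI-generated or manipulated
    declared_by_uploader: bool      # uploader self-declared it as synthetic
    generation_tool: Optional[str]  # tool or model reported by the uploader, if any
    labelled_at: str                # when the label was applied (ISO 8601)

def label_upload(upload: dict, declared_synthetic: bool, tool: Optional[str] = None) -> dict:
    """Attach a synthetic-content label so the front end can display it to viewers."""
    upload["synthetic_label"] = asdict(SyntheticContentLabel(
        is_synthetic=declared_synthetic,
        declared_by_uploader=declared_synthetic,
        generation_tool=tool,
        labelled_at=datetime.now(timezone.utc).isoformat(),
    ))
    return upload
```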
The Critical 3-Hour Takedown Window
A pivotal element of these new rules is the '3-hour window' for emergency compliance. This isn't for routine complaints but for exceptional, high-risk scenarios where an intermediary receives a lawful directive concerning content that poses an immediate and serious threat. This includes situations involving national security, public order, potential violence, riots, mass harm, terrorism, child sexual abuse material, or severe impersonation. The rules also cover time-sensitive misinformation likely to cause real-world harm. In such critical instances, platforms are mandated to remove, block, or disable access to the offending content within a mere 3 hours of receiving the directive. The rationale behind this stringent timeline is that waiting the usual 24 hours could be far too late for rapidly spreading digital harm. This 3-hour window is non-negotiable; failure to act promptly is considered prima facie non-compliance, even if subsequent action is taken.
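For engineering teams building compliance tooling, a minimal sketch of how that deadline might be tracked internally is shown below, assuming a simple EmergencyDirective record of our own devising; the 3-hour constant reflects the rule, but nothing else here is prescribed by the amendment.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Emergency window under the amended rules: 3 hours from receipt of the directive.
EMERGENCY_TAKEDOWN_WINDOW = timedelta(hours=3)

@dataclass
class EmergencyDirective:
    directive_id: str
    content_id: str
    received_at: datetime  # the clock starts when the lawful directive is received

    def takedown_deadline(self) -> datetime:
        return self.received_at + EMERGENCY_TAKEDOWN_WINDOW

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        """True once the 3-hour mark has passed without action (prima facie non-compliance)."""
        now = now or datetime.now(timezone.utc)
        return now > self.takedown_deadline()
```

In practice such deadlines would feed a platform's alerting and escalation queues, since action taken after the window still counts as non-compliance even if the content is eventually removed.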
Platform Responsibilities and Compliance
The onus is on online platforms to take proactive measures. They are expected to deploy automated moderation tools and AI detection systems, complemented by human oversight, to identify harmful or synthetic content. The use of these tools must be proportionate, however, and over-removal without proper review can still be challenged. Platforms must not host or circulate unlawful content and must regularly review their systems to mitigate misuse; failure to act on detected violations, even those initially missed, is treated as non-compliance. Platforms must also act on content that violates Indian law, threatens national security or public order, contravenes court orders or government directions, or harms user safety and dignity (including harassment, impersonation, and deception). Any delay in removing content or disabling access when directed by courts or government authorities acting under valid legal powers is itself a violation of the rules.
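One plausible, deliberately simplified way to combine automated detection with the human oversight the rules demand is sketched below; the route_content function and its score thresholds are assumptions for illustration, not anything the rules specify.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"                     # clear-cut unlawful material
    RESTRICT_PENDING_REVIEW = "restrict"  # interim restriction, human decides final action
    HUMAN_REVIEW = "human_review"         # uncertain signal, reviewed before any action
    ALLOW = "allow"

def route_content(detector_score: float, matched_known_unlawful: bool) -> Action:
    """Route a post using automated detection while keeping a human in the loop.

    The 0.9 and 0.5 thresholds are illustrative; the rules call for proportionate
    automated tools with human oversight, not these specific numbers.
    """
    if matched_known_unlawful:       # e.g. hash-matched CSAM or content named in a court order
        return Action.REMOVE
    if detector_score >= 0.9:        # strong signal: restrict now, confirm with a reviewer
        return Action.RESTRICT_PENDING_REVIEW
    if detector_score >= 0.5:        # uncertain signal: review before acting to avoid over-removal
        return Action.HUMAN_REVIEW
    return Action.ALLOW
```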
Timelines and Grievance Redressal
Beyond the emergency 3-hour window, several other timelines are established. A 24-hour period is designated for rapid response obligations, applicable to content affecting public order, safety, or sovereignty, as well as complaints involving illegal, deceptive, or harmful material and initial responses to serious user grievances. Platforms must acknowledge issues, take interim action such as removal or restriction, and commence formal reviews within this timeframe, recognizing they cannot wait for full internal evaluations before acting. For specific government directions, a 36-hour window is enforced, requiring content removal or access disabling and compliance reporting. A 72-hour limit applies to providing information assistance to authorities when lawfully required. On the user end, grievances must be acknowledged within 24 hours, with a final resolution communicated within 15 days, including clear reasons for any action or inaction. Unresolved complaints negatively impact a platform's compliance record.
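Collected in one place, the timelines above might be encoded in a compliance system roughly as follows; the labels and the deadline_for helper are illustrative conveniences, not statutory terms.

```python
from datetime import datetime, timedelta

# Timelines described in the amended rules, keyed by illustrative (non-statutory) labels.
COMPLIANCE_DEADLINES = {
    "emergency_takedown": timedelta(hours=3),          # imminent-harm directives
    "rapid_response": timedelta(hours=24),             # public order/safety complaints, interim action
    "grievance_acknowledgement": timedelta(hours=24),  # acknowledge a user grievance
    "government_direction": timedelta(hours=36),       # remove content and report compliance
    "information_assistance": timedelta(hours=72),     # provide information to authorities
    "grievance_resolution": timedelta(days=15),        # final, reasoned resolution to the user
}

def deadline_for(obligation: str, received_at: datetime) -> datetime:
    """Due time for a given obligation, measured from receipt of the complaint or direction."""
    return received_at + COMPLIANCE_DEADLINES[obligation]
```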
Consequences of Non-Compliance
The penalties for failing to adhere to these amended IT rules are significant. Missing the critical 3-hour window costs a platform its 'safe harbour' protection, exposing it to criminal or civil action and providing strong grounds for enforcement. Failing to label synthetic or AI-generated content as required means the content is treated as misleading or unlawful, leading to mandatory takedown or restriction, and repeated failures can escalate to findings of systemic non-compliance. Once safe harbour is lost, platforms can be directly sued or prosecuted for user-generated content. Beyond these measures, intermediaries may suspend or terminate accounts, restrict posting or sharing features, or limit content visibility, particularly for repeat offenders or in cases of serious harm or deception. Non-response to user grievances also counts against a platform's compliance record, underscoring the comprehensive nature of the new digital governance framework.




