The Centre has formally brought artificial intelligence (AI)–generated content under the legal ambit of India’s Information Technology Rules, tightening
platform obligations, mandating AI labels, and sharply reducing takedown timelines for unlawful content.
In a set of FAQs released alongside the amendments, the Ministry of Electronics and Information Technology (MeitY) said the increasing misuse of AI-generated content has emerged as a serious and evolving challenge.
The amendments, notified on Wednesday, February 11, are set to come into effect on February 20, 2026, and are aimed at countering the rapid spread of deepfakes, child sexual abuse material (CSAM), and non-consensual intimate imagery (NCII) across digital platforms. The government has defined “synthetic media” as content that appears to depict a real person or real event in a manner that may deceive viewers into believing it is genuine.
For social media companies and AI service providers, the rules mark a decisive shift—from voluntary safeguards to time-bound legal compliance.
Takedown timelines slashed to hours, not days
Under the amended rules, social media platforms must take down unlawful content within three hours of being notified. This marks a sharp reduction from the earlier 36-hour deadline and applies equally to AI-generated and non-AI content that is barred under Indian law.
Officials clarified that if a government takedown order is issued at 11 am, platforms must comply by 2 pm the same day.
The timelines for grievance redressal have also been compressed. Platforms are now required to acknowledge user complaints within two hours and resolve them within seven days, compared to the earlier 15-day window. In cases involving non-consensual intimate imagery, content must be taken down within two hours, down from 24 hours earlier. If a user flags deceptive content or impersonation, platforms must act within 36 hours, compared to 72 hours earlier.
The rules also allow police authorities to designate one or more officers, not below the rank of Deputy Inspector General (DIG), to issue takedown orders, doing away with the earlier restriction of a single authorised officer per state.
How the final rules differ from last year’s draft
The notified rules differ significantly from the draft amendments released by MeitY in October 2025. The draft had proposed rigid watermarking requirements, including a mandate that AI disclaimers cover at least 10% of a visual display or the initial 10% of an audio clip’s duration.
That proposal drew pushback from industry bodies, including IAMAI and its members, who argued that fixed-size watermarks were impractical across formats. The final rules have dropped the size-based requirement while retaining the obligation to prominently label AI-generated content.
"Earlier, the draft required all content with any AI modification or enhancement to be labelled. That was impractical," said Ashish Aggarwal, vice-president of policy at Nasscom. "The revised guidelines now clearly focus on synthetically generated content that is intended to mislead or falsify information. That intent-based focus is a very positive change."
The final framework also exempts good-faith and routine edits that do not materially alter the substance of the content.
What kind of AI content is now prohibited
The amended rules require companies to ensure that they do not permit the creation, publication or dissemination of AI-generated content that is prohibited under existing laws, including the Bharatiya Nyaya Sanhita and the POCSO Act.
Platforms are explicitly barred from allowing AI content that features child sexual abuse material, contains obscene, vulgar or sexually explicit material, generates false documents or electronic records, enables the creation or development of explosives, or invades an individual’s privacy through impersonation or manipulation.
All technological intermediaries are required to identify users posting AI-generated CSAM, NCII or other unlawful content and disable their accounts.
The government has provided illustrative examples of prohibited AI content, including sexually explicit deepfake videos depicting identifiable individuals without consent, synthetic imagery intended to violate bodily privacy, forged government identity documents, fabricated appointment letters or salary slips, and AI-generated instructional videos for preparing explosives.
In the case of political or public figures, barred content includes deepfake videos of election candidates making inflammatory remarks they never made, synthetic interviews of celebrities or sportspersons expressing controversial views, fabricated endorsements, false instructions attributed to senior government officials or CEOs, and synthetic news footage showing riots, attacks or accidents that never occurred.
AI labelling becomes mandatory and traceable
For the first time, the IT Rules define and regulate “synthetically generated information”, covering AI-generated or AI-modified images, videos and audio that appear real or are likely to be perceived as real.
The amendments mandate that such content must be prominently labelled as AI-generated. The label must be easily noticeable and adequately perceivable, and platforms are required to embed a unique identifier or metadata that allows the computer resource used to create, generate or modify the content to be traced.
Importantly, companies are not permitted to enable the removal or modification of AI labels. Users must also declare when they upload AI-generated or AI-modified content, while platforms are required to deploy technical measures to verify these declarations before publishing the content.
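Purely as an illustration of what embedding such a label and identifier could look like in practice, and not a method prescribed by the rules or attributed to any platform, the Python sketch below stamps a label and a unique identifier into a PNG image’s metadata using the Pillow library; the field names ai_generated, ai_content_id and ai_generator are hypothetical.

import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(img: Image.Image, out_path: str, generator_id: str) -> str:
    """Attach a hypothetical AI-content label and a unique identifier
    to a PNG's text metadata. All field names are illustrative only."""
    content_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # label flag
    meta.add_text("ai_content_id", content_id)   # unique identifier for traceability
    meta.add_text("ai_generator", generator_id)  # tool or model that produced the content
    img.save(out_path, pnginfo=meta)
    return content_id

# Example: label a stand-in "generated" image and read the metadata back.
synthetic = Image.new("RGB", (256, 256), color=(128, 128, 128))
label_synthetic_png(synthetic, "labelled.png", generator_id="example-model-v1")
with Image.open("labelled.png") as reloaded:
    print(reloaded.text)   # shows ai_generated, ai_content_id and ai_generator

In practice, platforms may lean on provenance standards such as C2PA content credentials rather than ad hoc fields of this kind, since plain metadata is easily stripped when a file is re-encoded or screenshotted.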
MeitY has clarified that routine or good-faith editing does not require AI labelling. This includes applying filters to photos, compressing videos for uploads, transcribing or translating videos, removing background noise, using AI in PowerPoint presentations, formatting PDFs, and generating diagrams, graphs or charts using AI tools.
Tech intermediaries are further required to inform users of AI-related regulations every three months. These disclosures must state that platforms may remove posts or disable accounts for illegal content, that users may face liability for posting unlawful content, and that such content may be reported to law enforcement authorities.
Platforms must also explicitly warn users not to create deepfakes, impersonation content or non-consensual intimate imagery, and inform them that violations may result in content removal, account action and legal consequences.




