India's New IT Rules: Tackling Deepfakes and Synthetic Content with Enhanced Regulations

SUMMARY

AI-Generated Content
  • India rules target deepfakes, AI content
  • Platforms must label synthetic content
  • Content takedown now within 3 hours

WHAT'S THE STORY?

MeitY's new IT rules take aim at deepfakes. Here's how AI-generated content will be regulated, what platforms must do, and how much faster takedowns must now happen.

Regulating Synthetic Content

The Ministry of Electronics and Information Technology (MeitY) has notified significant amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These amendments officially bring "synthetically generated information" (SGI), a category that prominently includes deepfakes, under a comprehensive regulatory framework, signalling a proactive government response to the challenges posed by manipulated digital content. At the core of the new rules is a mandate that online platforms, referred to as intermediaries, clearly label or identify any content that has been artificially created or altered. This identification can take several forms, such as visible disclosures, embedded technical metadata, or direct notifications to users consuming the content. The aim is to ensure users know when the information they are encountering has been synthetically produced or modified, thereby preventing potential deception.
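
The rules leave the labeling mechanism open, but the metadata route can be made concrete with a small sketch. The Python snippet below (using the Pillow imaging library) shows one hypothetical way a platform could embed and read back a machine-readable disclosure via PNG text chunks; the key names, file name, and disclosure string are illustrative assumptions, not anything the rules prescribe, and production systems would more likely adopt a provenance standard such as C2PA Content Credentials.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_synthetic(img: Image.Image, dst: str) -> None:
        """Embed a machine-readable synthetic-content flag in PNG text
        chunks. Key names here are illustrative, not a prescribed
        standard under the IT rules."""
        meta = PngInfo()
        meta.add_text("SyntheticMedia", "true")
        meta.add_text("Disclosure", "This content is AI-generated.")
        img.save(dst, pnginfo=meta)

    def read_disclosure(path: str) -> dict:
        """Read back embedded text chunks so a platform or client tool
        can surface the disclosure to users."""
        with Image.open(path) as img:
            return dict(img.text)

    if __name__ == "__main__":
        # Stand-in for an AI-generated image (hypothetical example).
        synthetic = Image.new("RGB", (64, 64), "gray")
        label_as_synthetic(synthetic, "labeled.png")
        print(read_disclosure("labeled.png"))  # {'SyntheticMedia': 'true', ...}

In practice, a moderation pipeline could run something like read_disclosure() on uploads and surface the flag to viewers, which is arguably the sort of "reasonable effort" the final rules contemplate.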

Narrowed Scope and Harm Focus

A notable refinement in the final notified rules, compared to the initial draft, is the deliberate narrowing of the scope of content that necessitates flagging. The draft had defined SGI in a very broad manner, encompassing any information that was "artificially or algorithmically created, generated, modified, or altered." However, the finalized rules emphasize a more targeted approach, focusing on SGI that is likely to mislead users. This shift is a direct response to industry feedback, which cautioned that an overly broad definition could inadvertently capture routine digital edits alongside malicious deepfakes. By adopting a harm-based approach, the regulations now prioritize content with the potential to deceive or misinform individuals, ensuring that regulatory efforts are directed towards the most impactful and damaging forms of synthetic media.

Accelerated Takedown Timelines

Beyond the crucial disclosure requirements, the government has drastically reduced the timelines for content moderation. The previous draft had proposed a 36-hour window for intermediaries to act upon lawful orders regarding content removal. In contrast, the final rules impose a much stricter requirement: platforms must now remove or disable such content within a mere three hours of receiving a directive from either the government or a court. This significant reduction in response time is designed to curb the rapid spread of harmful misinformation and deepfakes once they are identified. Furthermore, other content moderation deadlines have also been shortened; for instance, some timelines have been reduced from 15 days to seven days, and others from 24 hours to 12 hours, depending on the specific nature of the violation. This accelerated process ensures a more agile and effective response to digital content issues.

Flexible Compliance Measures

The newly notified rules also give intermediaries greater compliance flexibility than the initial draft, which had proposed stringent visible labeling and broader platform obligations. The final regulations instead require companies to make "reasonable efforts" to identify SGI, allowing compliance through technical means such as embedded metadata or other technological solutions rather than relying solely on explicit visible disclosures. This flexibility acknowledges the diverse technical capabilities of different platforms and aims to make compliance achievable without compromising the core objective of identifying and flagging synthetic content. The approach reflects feedback from industry bodies such as IAMAI and Nasscom, which advocated for a more practical and adaptive regulatory framework.
