AI Content Under Law
India's digital legal framework is undergoing a significant transformation, bringing artificial-intelligence-generated and synthetically produced content under formal regulation. Effective February 20, amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 formally define and govern what is termed 'synthetically generated information'. The change shifts the approach from mere guidance to firm obligations for social media platforms and other large online service providers. The rules define synthetic content as audio, video, or visual material created or manipulated by computer systems so that it appears authentic and could deceive users, while expressly excluding routine edits such as color correction, noise reduction, or translation, provided these do not alter the original meaning or context. This definition brings AI-driven impersonations and deepfakes squarely within legal scrutiny, targeting their rising misuse in political messaging, financial scams, and online harassment.
Mandatory Labels & Tracking
A pivotal change introduced by the amendments is mandatory labeling of all synthetic content. Platforms that enable the creation or sharing of such material must attach clear identifiers that let users recognize it at a glance, whether as visible on-screen labels or as embedded metadata. The rules also require persistent technical markers, such as unique identifiers, wherever technically feasible, so that content remains traceable. Crucially, platforms may not offer any tool that removes or alters these labels and metadata: once content is designated synthetic, that designation must remain intact. For major social media platforms, pre-upload user declarations of AI generation are now obligatory, and these platforms must also deploy reasonable technical measures to verify those declarations, particularly for content carrying a higher risk of harm.
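To make the labeling obligation concrete, here is a minimal sketch in Python of the kind of label record a platform might attach to a piece of synthetic media. The schema, field names, and caption text are illustrative assumptions; the rules specify the outcome (a visible label plus a persistent, non-removable identifier), not any particular data format.

```python
# Hypothetical label record for synthetically generated information.
# Field names and schema are illustrative, not taken from the Rules.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)  # frozen: the label cannot be altered once applied
class SyntheticContentLabel:
    """Persistent marker that travels with a synthetic media item."""
    content_id: str  # the platform's own ID for the media item
    marker_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    declared_by_user: bool = True  # captured via the pre-upload declaration
    labelled_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def visible_caption(self) -> str:
        # Text for the on-screen label shown alongside the content.
        return "Synthetically generated information"

    def to_metadata(self) -> str:
        # Serialized form suitable for embedding as metadata.
        return json.dumps(asdict(self))


label = SyntheticContentLabel(content_id="video-8731")
print(label.visible_caption())
print(label.to_metadata())
```

Making the record immutable (frozen=True) mirrors the rule that platforms must not provide tools for removing or editing the designation after it is applied.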
Faster Takedowns & Liability
The new regulations also compress response timelines sharply. In certain critical situations, platforms must now comply with government or court orders to remove or restrict content within three hours, down from the previous 36-hour window, and other compliance periods have been shortened as well. The amendments state unequivocally that synthetic content used for illicit ends, including impersonation, the creation of fraudulent records, and the distribution of child sexual abuse material, obscene content, or material related to weapons and explosives, will be treated with the same legal severity as any other illegal information, underscoring the government's commitment to curbing misuse of AI-generated content. The rules also clarify 'safe harbour' protections: service providers retain their immunity under Section 79 of the IT Act provided they deploy automated tools or other technical measures to identify and restrict or remove synthetic content. This addresses industry concerns about overly broad liability by focusing on actual harm rather than on every instance of AI-assisted editing.
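Operationally, the compressed timelines mean a platform's compliance tooling must track when each order arrived and escalate before the window closes. A minimal sketch of such a deadline helper, assuming a three-hour window for the critical category and the 36-hour window otherwise (the category names here are hypothetical, not terms defined in the amended rules):

```python
# Hypothetical compliance-deadline helper; category names are illustrative.
from datetime import datetime, timedelta, timezone

RESPONSE_WINDOWS = {
    "critical": timedelta(hours=3),   # e.g. certain government or court orders
    "standard": timedelta(hours=36),  # the earlier general window
}


def takedown_deadline(order_received: datetime, category: str) -> datetime:
    """Return the latest time by which the platform must act on an order."""
    return order_received + RESPONSE_WINDOWS[category]


received = datetime(2025, 1, 1, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(received, "critical"))  # 2025-01-01 12:00:00+00:00
```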



