Accelerated Content Removal
The Indian government has implemented a significant update to the IT Rules 2021, sharply compressing the time intermediaries have to remove harmful content.
Previously, platforms had up to 36 hours to act on court-ordered or law enforcement takedowns of deepfakes and other unlawful synthetic content; under the new amendments, that window shrinks to just three hours. Removal of non-consensual nudity, one of the most damaging forms of synthetic media, must now happen within two hours, down from the previous 24-hour period. Grievance redressal timelines have also been halved, to seven days. Meeting these deadlines will require platforms to run round-the-clock response teams and substantially strengthen their automated moderation systems. The stated goal is faster coordination with law enforcement and swifter action against malicious digital content, limiting how long harmful material remains accessible to users.
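In practice, the compressed windows amount to a deadline calculation that compliance tooling has to track continuously. The following minimal sketch, in Python, uses category names and a helper of our own invention (nothing here is prescribed by the rules) to show how a platform might compute the latest permissible action time for a flagged item:

from datetime import datetime, timedelta, timezone

# Illustrative removal windows reflecting the timelines described above.
# Category names are hypothetical, not taken from the rules.
REMOVAL_WINDOWS = {
    "court_or_agency_order": timedelta(hours=3),   # previously 36 hours
    "non_consensual_nudity": timedelta(hours=2),   # previously 24 hours
}

def removal_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which flagged content must be removed."""
    window = REMOVAL_WINDOWS.get(category)
    if window is None:
        raise ValueError(f"No takedown window configured for {category!r}")
    return received_at + window

# Example: an order received at 09:00 UTC must be actioned by 12:00 UTC.
deadline = removal_deadline("court_or_agency_order",
                            datetime(2025, 1, 10, 9, 0, tzinfo=timezone.utc))
print(deadline.isoformat())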
Defining Synthetic Content
A cornerstone of the new regulations is the introduction of a detailed definition for 'synthetically generated information' (SGI). This encompasses any audio, visual, or audio-visual content that has been artificially or algorithmically created or altered in a way that makes it appear authentic and indistinguishable from real individuals or events. The rules also provide clarity on what does not constitute SGI. Routine editing, formatting, enhancements, technical corrections, colour adjustments, noise reduction, transcription, or compression that do not materially change the substance, context, or meaning of the content are exempt. Similarly, the preparation of standard documents, presentations, educational materials, or research outputs in good faith is also excluded, provided their core meaning remains unaltered. This nuanced definition aims to capture the malicious use of AI while avoiding unnecessary burdens on legitimate content creation and modification processes.
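To make the exemption logic concrete, the rough sketch below shows how a platform's triage step might separate routine edits from material alterations. The transformation labels and the notion of a "materially altered" flag are purely illustrative; a real determination would rest on provenance signals, context, and human review rather than a simple lookup:

# Hypothetical triage of uploads against the SGI definition described above.
EXEMPT_TRANSFORMS = {
    "formatting", "colour_adjustment", "noise_reduction",
    "transcription", "compression", "technical_correction",
}

def likely_sgi(transforms: set[str], materially_altered: bool) -> bool:
    """Treat content as SGI only if it was algorithmically created or altered
    in a way that changes its substance, context, or meaning."""
    if not materially_altered:
        return False
    return bool(transforms - EXEMPT_TRANSFORMS)

print(likely_sgi({"compression", "noise_reduction"}, materially_altered=False))  # False
print(likely_sgi({"face_swap"}, materially_altered=True))                        # True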
Mandatory Labeling and Transparency
To combat the deceptive nature of deepfakes and AI-generated content, the amended rules impose stringent transparency and labeling requirements. Intermediaries are now obligated to ensure that all synthetic content, apart from the outright-prohibited categories discussed below, is prominently labeled. Labels must be clearly visible in visual displays, announced prominently at the start of audio content, and accompanied by embedded metadata or technical provenance markers. Crucially, these markers will include a unique identifier for the computer resource used to generate the content, enhancing traceability. The rules explicitly forbid suppressing, modifying, or removing these labels and metadata. For significant social media intermediaries (SSMIs), the obligations are stricter still: users must declare when content is SGI, and platforms must verify those declarations through technical measures. The aim is that users know when they are interacting with synthetic media, supporting informed digital participation.
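One way to read the labeling requirement is as a provenance record attached to each piece of SGI. The sketch below assembles such a record; the field names are illustrative only (the rules do not prescribe a schema), and the resource identifier shown is a made-up example of the unique identifier the markers are meant to carry:

import json
from datetime import datetime, timezone

def build_provenance_record(resource_id: str, declared_by_user: bool) -> dict:
    """Assemble a label/provenance record to embed or attach alongside
    synthetic media. Field names are illustrative, not prescribed by the rules."""
    return {
        "label": "synthetically generated information",
        "generator_resource_id": resource_id,   # identifies the generating computer resource
        "user_declared_sgi": declared_by_user,  # SSMI user declaration, where applicable
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: a record that downstream systems must not suppress, modify, or remove.
record = build_provenance_record("tool.example.org/image-model-v2", declared_by_user=True)
print(json.dumps(record, indent=2))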
Platform Accountability and Safeguards
The updated regulations place a greater burden of accountability on intermediaries, particularly concerning the prevention of unlawful synthetic media. Platforms are now required to implement 'reasonable and appropriate technical measures,' including automated tools, to proactively prevent the generation or sharing of unlawful SGI. Certain categories of SGI are outright prohibited, such as content depicting child sexual abuse material (CSAM), non-consensual nudity, obscene or sexually explicit material, the creation of false documents or electronic records, and content related to explosives, arms, or ammunition. The rules also mandate that intermediaries inform users about the potential consequences of unlawfully creating or sharing SGI, which can include content removal, account suspension or termination, disclosure of identity, and mandatory reporting under laws like POCSO or BNSS. For SSMIs, failing to implement these measures could jeopardize their safe harbor protections, making robust compliance architecture essential.
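As a rough illustration of what such a pre-publication gate might look like, the sketch below assumes an upstream classifier has already tagged an upload with detected categories; the category names, the review_upload helper, and the wording of the consequence notice are all hypothetical. It blocks prohibited SGI and surfaces the consequence notice to the uploader in either case:

# Hypothetical pre-publication gate for SGI, given categories from an
# upstream classifier supplied by the platform.
PROHIBITED = {
    "csam", "non_consensual_nudity", "obscene_sexual_content",
    "false_document", "explosives_arms_ammunition",
}

CONSEQUENCE_NOTICE = (
    "Unlawful synthetic content may lead to removal, account suspension or "
    "termination, disclosure of identity, and mandatory reporting to authorities."
)

def review_upload(content_id: str, detected_categories: set[str]) -> dict:
    """Block uploads matching prohibited SGI categories; otherwise publish,
    surfacing the consequence notice to the uploader either way."""
    hits = detected_categories & PROHIBITED
    if hits:
        return {"content_id": content_id, "action": "block",
                "reasons": sorted(hits), "notice": CONSEQUENCE_NOTICE}
    return {"content_id": content_id, "action": "publish", "notice": CONSEQUENCE_NOTICE}

print(review_upload("upload_1234", {"false_document"}))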
Expert Reactions and Challenges
Legal experts have largely welcomed the government's move towards regulating synthetic media, acknowledging the necessity of increased oversight and accountability. However, many have also highlighted significant challenges in the implementation of these new rules. The compressed takedown timelines, particularly the three-hour mandate, will require substantial technological upgrades and continuous operational readiness from platforms. Concerns have also been raised about the feasibility of uniformly identifying synthetic content across diverse datasets and the potential for automated systems to lead to over-censorship. Balancing compliance with user privacy and free speech remains a critical consideration. Effective enforcement will hinge on clear technical standards, proportional compliance expectations, and consistent regulatory oversight to ensure these well-intentioned rules translate into tangible improvements in digital safety.











