Feedpost Specials    •    9 min read

MeitY's New AI Rule: 8 Essential Points for Social Media Users and Platforms

WHAT'S THE STORY?

Posting online in India just got more regulated. MeitY's new AI rules are here to curb the misuse of synthetic content. Here are the 8 critical points you need to know before posting AI-generated content.

Why the New Rule?

The Indian government, through the Ministry of Electronics and Information Technology (MeitY), has introduced new directives prompted by the escalating misuse of artificially generated content. The proliferation of synthetic media poses significant challenges, ranging from misleading the public and disseminating falsehoods to causing individual harm. Consequently, this regulatory framework aims to establish a clear line of accountability and foster greater transparency in the digital sphere, ensuring that creators and platforms are responsible for the content they disseminate.

Defining Synthetic Media

Synthetic media encompasses any digital content that can convincingly mimic reality, potentially deceiving viewers into believing it is authentic. This broad category includes AI-crafted videos, images, and audio recordings that appear genuine but are fabricated. Their defining characteristic is the ability to mimic real individuals or events so closely that, to the untrained eye, they are indistinguishable from actual footage or recordings, blurring the line between digital fabrication and reality.

Routine Edits vs. AI Labels

Not every alteration to digital content necessitates an AI-generated label. The new guidelines distinguish between routine enhancements and AI-driven synthesis. Actions such as applying filters to photos, optimizing video compression for uploads, transcribing or translating audio, removing background noise, using AI for presentation slides or PDF formatting, and generating diagrams or charts are classified as standard editing practices. These adjustments do not require users to declare their content as AI-generated, focusing the labeling mandate on more significant forms of artificial manipulation.

Mandatory User Declarations

A crucial component of the new regulations is the requirement for social media companies to ensure users explicitly declare when their content has been generated using artificial intelligence. Upon confirmation of AI involvement, platforms are mandated to affix a prominent and easily discernible label to such content. This measure is designed to inform viewers directly about the nature of the content they are consuming, promoting informed engagement and critical evaluation.

Platform User Notifications

Social media intermediaries are obligated to remind their users every three months about the platform's policies regarding illegal content. These notifications must explicitly state that users can face account suspension or content removal for violating these rules. Furthermore, users will be informed about their personal liability for any illegal content they post and that such content may be reported to the appropriate authorities. This recurring communication aims to continuously reinforce user awareness and adherence to legal and ethical standards.

Deepfakes & Impersonation

Platforms must issue stern warnings to users against the creation and dissemination of deepfakes, impersonation content, and non-consensual intimate imagery. The policies clearly stipulate that engaging in such activities can lead to severe repercussions, including the immediate removal of the offending content, disciplinary actions against the user's account, and potential legal prosecution. This strict stance highlights the government's commitment to protecting individuals from malicious digital manipulation and reputational damage.

Expedited Takedown Orders

The government has significantly accelerated the timeline for complying with takedown orders. If an order is issued by 11 am, social media companies must now act by 2 pm, a drastic reduction from the previous 36-hour window. This authority can be exercised by police officers of DIG rank or above. For nude image violations, content removal must occur within two hours, down from 24 hours. Additionally, user-flagged deceptive or impersonation content must be removed within 36 hours, a notable decrease from the prior 72-hour limit.

Prohibited AI Creations

AI companies face stringent restrictions on the types of content they can generate. This includes prohibitions against creating non-consensual sexually explicit deepfakes, content that infringes on bodily privacy, forged official documents, fabricated appointment letters, fake financial instruments, and instructional videos for making explosives. Furthermore, the creation of deepfakes involving political figures or public personalities, such as fake statements by candidates, false endorsements, fabricated interviews, or synthetic news reports of non-existent events like riots or accidents, is strictly forbidden.
