The government's move to mandate labelling of AI-generated content is an absolutely necessary regulation, as morphed or manipulated images can cause serious harm to people, Zoho founder and Chief Scientist Sridhar Vembu has said, adding that he fully supports the proposed AI labelling rules.
The comment from one of India’s most prominent technology leaders comes as the government has proposed changes to IT rules, mandating the clear labelling of AI-generated content and increasing the accountability of platforms, and indeed other players involved, in verifying and flagging synthetic information.
"We absolutely need this regulation because morphed images and all of that… can cause a lot of damage to people. I fully support this," Vembu told PTI.
The Indian government's move to mandate labelling of AI-generated content seeks to empower users to scrutinise such content and ensure that synthetic output does not masquerade as truth, IT Secretary S Krishnan said recently at an event, adding that the rules are nearing finalisation.
The move, geared to curb user harm from deepfakes and misinformation, aims to impose obligations on two key sets of players in the digital ecosystem: providers of AI tools such as ChatGPT, Grok and Gemini, and social media platforms.
The draft rules would mandate that companies label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the first 10 per cent of an audio clip's duration.
The IT Ministry had earlier highlighted that deepfake audio, videos and synthetic media going viral on social platforms demonstrate the potential of generative AI to create "convincing falsehoods", content that can be "weaponised" to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The issue of deepfakes and AI-driven user harm once again came into sharp focus following the recent controversy surrounding Elon Musk-owned Grok allowing users to generate obscene content. Users flagged the AI chatbot's alleged misuse to 'digitally undress' images of women and minors, raising serious concerns over privacy violations and platform accountability.
In the days and weeks that followed, pressure mounted on Grok from governments worldwide, including India, as regulators intensified scrutiny of the generative AI engine over content moderation, data safety and non-consensual sexually explicit images.
The microblogging platform has since implemented technological measures to prevent Grok from generating images of real people in revealing clothing in jurisdictions where that is illegal.
On January 2, the IT Ministry had pulled up X and directed it to immediately remove all vulgar, obscene and unlawful content generated by Grok or face action under the law.
"Anything that violates someone's privacy, and any attack on that, has to be regulated. We will evolve but (our system is that) we respond quickly to this," Vembu said.