The Ministry of Electronics and Information Technology has proposed amending the IT Rules, 2021, to protect the public from AI-generated content including ‘deepfakes’. The Centre is looking to make it easier for users to distinguish between genuine and AI-generated content and to hold social media platforms accountable for spreading misinformation.
The government has already notified amendments to the IT Rules, 2021. But what do we know about the changes notified? Which other nations have similar rules?
What we know about the changes notified
The Ministry of Electronics and Information Technology (MeitY) has issued a notification about amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
According to MeitY, Rule 3(1)(d) has been amended to ensure that unlawful content is removed in a ‘transparent, proportionate and accountable manner’. Under the new rules, only a senior government officer not below the rank of Joint Secretary or its equivalent can order the removal of unlawful information.
When it comes to the police, only an officer not below the rank of Deputy Inspector General of Police (DIG) can do so.
Under the new rules, the notification must clearly spell out the legal basis and statutory provision, the nature of the unlawful act, and the specific URL/identifier or other electronic location of the information and the content to be removed.
This replaces the earlier, broader ‘notification’ with a ‘reasoned intimation’, aligning the rules with the ‘actual knowledge’ standard under Section 79(3)(b) of the IT Act. MeitY has said that all such intimations will undergo a monthly review by an officer not below the rank of Secretary.
It added that this will ensure that such actions remain ‘necessary, proportionate, and consistent with law’ as well as strike a balance between citizens’ constitutional rights and the powers of the State. The rules will take effect on 15 November 2025.
Which other nations have similar rules
China
The ‘Measures for Identifying Artificial Intelligence-Generated Synthetic Content’ came into force in China in September.
The law, jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, creates a standardised system for identifying synthetic content created using artificial intelligence (AI).
China has said the idea behind the law is to promote responsible AI development and safeguard public interest.

Beijing defines synthetic content as any text, images, audio, video, or virtual environments created using AI. The law mandates that service providers label such content in two ways: explicitly and implicitly. The former requires clearly visible markers such as labels, symbols, or watermarks, while the latter requires machine-readable metadata and digital watermarks embedded in the files.
These rules apply to service providers that fall under China’s existing AI, algorithm, or deep synthesis laws. Platforms that host or disseminate synthetic content must verify that it is appropriately identified, add warnings to such content, and encourage users to label their own synthetic content. China has also said artificial intelligence must align with its “core socialist values”.
United Arab Emirates (UAE)
The United Arab Emirates (UAE) has banned the use of artificial intelligence when it comes to generating content relating to national symbols or public figures. The country’s Media Council warned social media users, media entities, and content creators that doing so without prior approval violated its media laws.
It noted that spreading misinformation, defaming others, undermining reputations, or attacking societal values, particularly when AI-generated content is involved, is punishable under the Media Violations Regulation.

The UAE Media Council in May signed an agreement with global data firm Presight to analyse and validate digital media content in real time. This, it said, will ensure that the content is in compliance with legal standards and adheres to the UAE’s national values.
Italy
The Giorgia Meloni government in Italy has passed a law mandating prison terms for those who generate ‘deepfakes’ that cause harm. Those convicted of doing so could face between one and five years in prison for illegally spreading AI-generated or manipulated content.
Under the law, children under the age of 14 need parental consent to access AI.
The government has said the law will be enforced by the Agency for Digital Italy and the National Cybersecurity Agency. The idea behind the law is to increase “human-centric, transparent and safe AI use” while emphasising “innovation, cybersecurity and privacy protections”.
Denmark
The Danish government, too, is planning to introduce a copyright law to tackle the problem of ‘deepfakes’. Under the law, every person will have the right to own the digital representation of themselves, their body, and their voice.

The proposed law defines a ‘deepfake’ as a highly realistic digital representation of an individual, including their appearance and voice. It would allow people to have their likeness removed from online platforms if shared without their consent, and could also entitle them to compensation.
However, parodies and satire would not be impacted by these changes.
Spain
Spain’s government in March approved a bill mandating massive fines on companies that use content generated by artificial intelligence (AI) without properly labelling it as such. The government said it did so to counter the use of disinformation and ‘deepfakes’.
The bill classifies a platform’s failure to properly label AI-generated content as a ‘serious offence’, punishable by fines of up to $40.6 million (Rs 356.8 crore) or 7 per cent of global annual turnover.

The bill also seeks to establish the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) to enforce these rules.
It also bans other practices, such as the use of subliminal techniques, including imperceptible sounds and images, to manipulate vulnerable groups. This covers chatbots inciting people with addictions to gamble and toys encouraging children to perform dangerous challenges.
However, the bill is yet to be passed by Spain’s lower house of Parliament.
The European Union (EU)
The EU, come August 2026, will require that AI-generated content including text, images, voices, or videos be labelled as such. The bloc introduced Article 50 of Regulation (EU) 2024/1689, making labelling mandatory for content that could be mistaken for authentic or human-made material.
“Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated,” it states. “Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.”
The idea behind this legislation is to increase transparency and prevent deception, particularly in the case of deepfakes.

The EU in 2024 adopted the AI Act, a landmark piece of legislation that imposed strict transparency obligations on high-risk AI systems. It also banned AI in social scoring, predictive policing, and untargeted scraping of facial images from the internet or CCTV footage.
It mandated massive fines for violations ranging from $8.7 million (Rs 76.5 crore) or 1.5 per cent of turnover to $40.6 million (Rs 356.8 crore) or 7 per cent of global turnover, depending on the type of violation.
The United States and other nations
The United States, meanwhile, under the Donald Trump administration, continues largely to rely on voluntary compliance and a patchwork of state regulations. It remains to be seen how successful efforts to remove ‘deepfakes’ and label AI-generated content will prove.
Many other countries, including Australia, Brazil, Japan and Israel, are currently drafting laws to deal with AI-generated content.
With input from agencies