Court Mandates AI Regulation
In response to a Public Interest Litigation (PIL) highlighting the rampant misuse of artificial intelligence for generating and distributing deepfake videos and images, the Gujarat High Court has taken decisive action. The court has issued formal notices to leading technology companies, including Meta India, Google, X (formerly Twitter), Reddit, and Scribd. These notices are a direct call for a comprehensive regulatory mechanism aimed at preventing the malicious application of AI technologies. The bench, comprising Chief Justice Sunita Agarwal and Justice D.N. Ray, has mandated that these platforms actively participate in developing stricter protocols. The aim is to foster a more responsible digital environment where the creation and dissemination of fabricated content are effectively curtailed, thereby protecting individuals and public trust from the damaging effects of digital manipulation.
Platform Compliance Mandate
As part of its directive, the Gujarat High Court has also mandated that the concerned technology intermediaries integrate with the Sahyog portal. This portal is designed to facilitate improved coordination and ensure timely action concerning the removal of unlawful content. The court emphasized the critical importance of these platforms adhering strictly to the provisions outlined in the Information Technology Act, 2000. Specifically, the bench highlighted that 'effective and meaningful responses/action of the respondent intermediaries will be key to the due diligence obligations enforced upon them under the statutory framework.' This underscores the expectation that these digital giants must proactively engage in identifying and removing harmful content, rather than merely reacting to complaints. The returnable date for these notices has been set for May 8, signaling an urgent need for tangible progress in addressing the deepfake menace.
Government's Stance on Delays
The Union and Gujarat governments have also brought to the court's attention a persistent issue of significant delays, repeated procedural hurdles, and a general lack of compliance from certain tech platforms when responding to lawful notices issued to them. This statement from the governmental bodies adds further weight to the PIL's concerns and the High Court's intervention. It suggests a systemic problem where intermediaries are not adequately or promptly addressing takedown requests for illegal or harmful content, including that generated through AI. The court's action, therefore, is not just about deepfakes but also about enforcing accountability and ensuring that these powerful platforms fulfill their legal obligations in a timely and effective manner, especially when dealing with matters of national importance and public safety.