What's Happening?
India's Department for Promotion of Industry and Internal Trade (DPIIT) has proposed a framework that would require AI companies to pay royalties for copyrighted content used to train their models.
The initiative aims to ensure that content creators are compensated when companies such as OpenAI and Google use their work, and it calls for a collection agency to manage royalties and distribute them to rights-holder organizations. The move comes amid global debate over the use of copyrighted material in AI training; OpenAI's Sora 2 video generator, for example, recently drew criticism for producing content resembling Japanese intellectual property. The proposal is regarded as one of the most interventionist approaches worldwide, in contrast to the ongoing debates in the U.S. and EU over transparency and the boundaries of data usage.
Why It's Important?
The proposed system could significantly affect U.S. tech companies operating in India, a major and fast-growing market for AI technologies. By requiring royalty payments, the framework would raise operating costs for companies like OpenAI and Google, potentially reshaping their business strategies and profitability. The initiative reflects a broader trend of countries moving to regulate AI development and secure fair compensation for content creators, which could change how AI companies source and use training data worldwide. The proposal also highlights the tension between innovation and intellectual property rights, with industry groups such as NASSCOM warning that mandatory licensing could stifle innovation.
What's Next?
The Indian government is seeking public comments on the proposal, giving companies and other stakeholders 30 days to respond. A committee will review the feedback and make final recommendations before the framework is potentially adopted. The consultation period will be a critical window for AI companies and industry groups to voice concerns and shape the final policy. The outcome could set a precedent for other countries weighing similar measures, potentially pushing the global AI landscape toward tighter regulation.