What's Happening?
OpenAI has announced a collaboration with actor Bryan Cranston, SAG-AFTRA, and other performers' unions to address concerns over deepfakes on its AI video creation app, Sora. The decision follows the circulation of unauthorized AI-generated clips using Cranston's voice and likeness after the launch of Sora 2 in late September. Cranston, known for his roles in 'Breaking Bad' and 'Malcolm in the Middle,' expressed gratitude to OpenAI for improving its policies and guardrails to protect performers' personal and professional rights. OpenAI will also work with United Talent Agency, the Association of Talent Agents, and Creative Artists Agency to strengthen protections against unapproved AI content.
Why It's Important?
OpenAI's move to tighten controls on deepfake technology is significant in the ongoing debate over AI's impact on intellectual property and personal rights. Deepfakes put actors and public figures at risk by enabling the use of their likenesses without consent, which can lead to reputational damage and legal challenges. By collaborating with major talent agencies and unions, OpenAI is taking steps to address these concerns, which could set a precedent for other tech companies. This development matters for the entertainment industry as it seeks to balance technological innovation with the protection of creative and personal rights.
What's Next?
OpenAI's collaboration with industry stakeholders suggests a proactive approach to managing AI-generated content. The company may continue to refine its policies and technologies to prevent unauthorized use of likenesses. This could lead to broader industry standards and regulations governing AI content creation. Stakeholders, including actors, agencies, and tech companies, are likely to monitor the effectiveness of these measures and advocate for further protections if necessary. The entertainment industry may also push for legislative action to address the challenges posed by AI and deepfake technologies.