Identifying AI Content
The video platform is actively exploring new methods to assess the quality and origin of its vast content library. A select group of users has recently encountered in-app surveys that ask them to evaluate videos based on whether they appear to be "AI slop" or "low-quality AI." This feedback mechanism is designed to help YouTube understand how viewers perceive content that may have been mass-produced by artificial intelligence rather than made by human creators. The goal is to refine recommendation algorithms so that users see more engaging and valuable content, while identifying, and potentially removing, videos that detract from the viewing experience. The surveys appear while users are watching, asking them to rate how AI-generated the content seems on a scale from "Not at all" to "Extremely." This data is crucial to YouTube's ongoing battle against AI-generated spam, which can proliferate across feeds and drag down overall content quality.
Combating Low-Quality AI
The proliferation of AI-generated videos, often termed 'AI slop,' has become a significant concern: studies indicate that more than 20% of videos recommended to new users may fall into this category, generating substantial revenue despite minimal creative input. These videos are typically repetitive, incoherent, or generated entirely by AI systems. YouTube has been using its own AI tools to demote such low-quality content, even as it provides AI tools to creators, such as those for generating Shorts or dubbing videos. Critics argue that despite these measures, the platform still struggles to identify every instance of low-quality AI-generated video. YouTube CEO Neal Mohan has articulated a commitment to balancing AI innovation with platform quality and safety, including clear labels for deepfake content.
Viewer Feedback Loop
This new feedback system has not yet been officially detailed on YouTube's blog; it has surfaced mainly through creators sharing their experiences on platforms like X. The surveys gather insight into viewer sentiment about AI-generated content: by understanding what constitutes 'low-quality AI' from the user's perspective, YouTube can sharpen its recommendation engine. Videos that viewers consistently flag as low-quality AI may be shown less often or removed from feeds altogether. The move underscores YouTube's commitment to combating the influx of AI-generated spam while keeping original, creative content easily discoverable. The survey is currently limited to a subset of users, but this testing phase could lead to a broader rollout that further refines the platform's content curation strategy.