What's Happening?
A report by Graphika reveals that state-sponsored online propaganda campaigns, including those linked to China and Russia, increasingly use artificial intelligence to generate content. Despite the growing sophistication of generative AI, these campaigns produce low-quality 'AI slop' that fails to engage audiences. The report cites examples of unconvincing synthetic news presenters, clunky translations, and fake news websites. While AI has automated some tasks, the content remains largely ineffective, gaining little traction on Western social media platforms.
Why Is It Important?
The use of AI in propaganda campaigns underscores the challenges of leveraging technology for influence operations. AI can automate content creation, but the low quality of the output limits its power to sway public opinion. This finding contradicts earlier fears that AI would enable high-quality, deceptive content capable of influencing democratic societies. The shift toward AI-generated content marks a change in strategy, yet the lack of engagement suggests that traditional propaganda methods may still be more effective. The report raises questions about the future role of AI in political and social influence operations.
Beyond the Headlines
The proliferation of low-quality AI content in propaganda campaigns highlights the ethical and technological challenges of using AI for influence operations. As the technology evolves, the potential for more sophisticated and convincing content remains a concern, and the integration of AI into propaganda strategies may invite greater scrutiny and regulation as governments and tech companies work to curb misinformation and protect democratic processes. The report also suggests that AI-generated content could inadvertently influence AI chatbots, further complicating efforts to manage online information.