In a world driven by artificial intelligence, the line between real and virtual is disappearing faster than we realise. Scroll through Instagram for a couple of minutes and chances are you will come across a perfect face, a convincing voice, and a believable story, only to discover that none of it is real. From popular virtual influencers like Lil Miquela, who has collaborated with brands such as Dior, Prada, and Samsung, to India's own Kyra landing partnerships with boAt and Amazon Prime Video, synthetic personalities have firmly entered the mainstream. The real problem arises when AI-generated influencers and deepfakes start to shape what, and whom, we trust online. According to cybersecurity expert Dr. Anil
Rachamalla, Vice President at FourthSquare and co-founder of the Council for Digital Safety & Wellbeing, the real danger lies not in the development of AI but in how it is being weaponised.

The Problem: How Digital Influence Is Turning Into Deception

AI influencers are no longer just a novelty. They are training a wider audience to accept synthetic faces as normal, a shift Rachamalla finds deeply alarming. He stressed, "People are getting used to seeing these faces. The familiarity is increasing, but the truth behind them is not."

The growing acceptance of virtual influencers is being exploited at scale. Scammers are layering fraud on top of hyper-realistic AI-generated content: investment pitches, fake endorsements, and emotional narratives that are difficult to identify as synthetic. Deepfakes of celebrities such as Amitabh Bachchan, Shahrukh Khan, and Rashmika Mandanna have circulated online, often promoting fake products and manipulating public sentiment. Some of these cases even invoke political figures and financial authorities to lend credibility to the scams. The result is what Rachamalla describes as a 'familiar face but unfamiliar truth', where audiences trust what they see even when it is entirely misleading.

Use Cases: From Emotional Manipulation To Financial Frauds

According to Rachamalla, the misuse of AI influencers and deepfakes spans several categories:

Financial Scams
Fraudsters create believable AI videos of public figures and popular influencers promising huge returns on investments. These videos often direct users into WhatsApp or Telegram groups, where they are gradually manipulated into transferring money, with little or no chance of recovering their deposits.

Voice Clone Frauds
AI voice tools such as ElevenLabs can replicate a family member's voice within minutes.
Victims receive fake calls from what sounds like a relative, leading to panic-driven financial transfers.

Sextortion And Blackmail
Unlike earlier schemes that required genuinely compromising media, AI now lets criminals generate explicit content from a single image. Victims are then blackmailed, even though the content is entirely AI-generated.

Emotional Hooks And Engagement Farming
Dramatic videos featuring politicians and celebrities are created to capture attention and increase watch time, monetising engagement rather than directly scamming anyone.

Political Provocation
Rachamalla highlights how deepfakes are increasingly used to stoke tensions between communities and religious groups. He notes that the entertainment industry is not far behind with such experiments, citing the Spanish show Falso Amor, which uses AI-generated intimacy scenarios to provoke emotional reactions, only to reveal later that the clips were not real.










