Global cybersecurity company McAfee has released its annual ‘Most Dangerous Celebrity: Deepfake Deception List’ for 2025, highlighting the growing threat of deepfake scams. The new consumer research shows how cybercriminals use the names of well-known individuals, from Indian to international celebrities, to lure people into scams.
According to the report, Shah Rukh Khan tops the list as the most exploited celebrity, followed closely by Alia Bhatt and Elon Musk. Priyanka Chopra Jonas, Cristiano Ronaldo and MrBeast also appear on the Most Dangerous Celebrities list. These individuals' names and likenesses are being used without consent to promote fake endorsements, giveaways, and scams.
The report reveals that 90% of Indians have encountered fake or AI-generated celebrity endorsements, resulting in average losses of ₹34,500 per victim. Furthermore, 60% of Indians have seen AI-generated or deepfake content featuring influencers and online personalities, indicating a rapid spread of deceptive content.
“The celebrities on these lists are targets, not perpetrators. Scammers hijack their likenesses and voices, without consent, to exploit the trust people place in familiar faces,” the report read.
Scammers can now produce highly convincing deepfakes from just three seconds of someone's speech. These deepfakes are frequently used to fabricate celebrity endorsements for skincare products (42%), giveaways (41%), and cryptocurrency or trading schemes (40%), as well as fake "must-have" devices and supplements, it added.
Top 10 Most Dangerous Celebrities | Deepfake Deception List (2025): India
- Shah Rukh Khan
- Alia Bhatt
- Elon Musk
- Priyanka Chopra Jonas
- Cristiano Ronaldo
- MrBeast
- Lionel Messi
- Taylor Swift
- Kim Kardashian
- Members of BTS
To combat these threats, Pratim Mukherjee, Senior Director of Engineering at McAfee, emphasised the importance of awareness, caution, and reliable protection tools.
“Deepfakes have changed the game for cybercriminals; they’re no longer hacking systems — they’re hacking human trust. Technology can now effortlessly mimic the voices, faces, and mannerisms of people we admire. In a country where millions engage with celebrity and influencer content daily, such fakes can spread instantly,” Mukherjee added.
To stay safe, consumers should use tools that check texts, emails, and even videos for signs of AI manipulation, shop only at authorised stores, and trust only verified accounts.