Previously, we saw the viral deepfake videos and photos of Archita Phukan. Now, a deepfake video of the popular creator Payal Gaming is circulating on the internet. Anyone with even a little knowledge of AI and of how deepfakes work can tell that these videos are fake. The bigger concern is the speed at which such videos make it to the headlines, and, more troubling, the pace at which people fall for them and share them recklessly until the searches soar on Google Trends. Not a single thought is spared for the person targeted by the deepfake.
Misuse Of AI And Regulations Around It
AI has certainly empowered the masses, but its negative effects are too visible to ignore. As of now, there are multiple Instagram pages, Telegram groups, and even websites that openly let users create deepfake images and videos. Worse, there is zero regulation of any of this even as cases skyrocket. In a society that hardly thinks twice before objectifying the victim and chasing the ‘viral’ madness for views, shares, and sometimes ‘just for fun’, punishment is possibly the only way out. But where are the regulations, and who is watching?
Where Did It All Start?
The term deepfake came into existence in 2017 via a Reddit account of the same name that posted AI-generated porn videos of celebrities such as Gal Gadot and Taylor Swift. This instance is commonly accepted as the ground zero of deepfakes. Porn has always been an early adopter of any new technology, with few barriers to entry, which is why creating celebrity deepfakes quickly became normalised, so much so that some adult websites now run entirely on AI-generated celebrity videos. What followed was a flood of degenerates making deepfake videos of any woman whose photos are available on social media platforms.
Deepfake Statistics
- A report by Deeptrace Labs found that around 96 per cent of deepfake videos on the internet target women, depicting them in adult content.
- Sensity estimates that the number of deepfake videos online doubles roughly every six months.
- Research by Keepnet suggests that human detection rates for high-quality deepfake videos are as low as 24.5 per cent.
- According to research from Cornell University, ‘nudifier’ apps (which remove clothes from photos) have been downloaded more than a million times.
Regulations In India
India has a large population with mobile phones but little understanding of the internet, let alone deepfakes or AI, and deepfakes are a menace left, right, and centre. There are laws that punish the perpetrator after the crime is committed, but what about the source, the platform that facilitated it? The problem stays at the source, and so the crime continues. When the tragedy (here, a deepfake video) strikes us, we get vocal; when it happens right next door, we ‘post’, ‘share’ and ‘comment’. With the women in our families active on social media, none of us would want to be mocked, criticised, ridiculed, and objectified over a video that is not even remotely linked to us.
The Thinking Problem
We as a society fail every day. As soon as there is a “trending video” case involving a celebrity, most people start searching for it and commenting for links, while meme pages post blurry clips to maximise engagement. Some users who share such videos take the moral high ground and ask the government to act, but the cleaning starts at home. A question: why share content you know is objectionable and a deepfake? This is not entirely the government's problem; it is our problem as a society. We preach morals but practise ‘social fun’. It is a problem that can be solved at the individual and community level, if each of us chooses to set aside the hunger for social engagement and, for once, think like one big family, saving the honour of a woman in distress.
How To Spot Deepfake Videos?
Weird Face-Body Symmetry: This is the biggest giveaway in a deepfake video. In most of them, the face looks slightly mismatched or asymmetrical compared to the body. Other tell-tale signs include flickering ears, a shifting hairline, unnatural blinking, and more.
Motion: The motion of the video is another major hint, as faces can appear morphed or smeared between frames. A face rendered at noticeably lower resolution than the rest of the video is also a red flag (a rough automated check for this kind of frame-to-frame inconsistency is sketched after this list).
Tools: You can also use professional deepfake detection tools such as Reality Defender, Deepware Scanner, Hive AI Detection, and Microsoft Video Authenticator to verify suspicious videos.
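For readers comfortable with a bit of code, here is a minimal sketch of the frame-to-frame inconsistency idea mentioned above. It is only a crude heuristic, not a real detector and not how the tools listed here work internally: it assumes OpenCV is installed, relies on its bundled Haar face detector, and the threshold and the file name "suspect_video.mp4" are purely illustrative placeholders.

```python
# Crude heuristic sketch (not a real deepfake detector): flag abrupt
# frame-to-frame changes in the face region, one of the motion artifacts
# described above. Assumes OpenCV (cv2) is installed; the threshold of 40
# is illustrative and would need tuning on real footage.
import cv2

def face_region_jumps(video_path, threshold=40.0):
    # OpenCV ships a pre-trained Haar cascade for frontal faces
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    prev_face = None
    jumps = 0
    frames_with_face = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Normalise the face crop so consecutive frames are comparable
        face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        frames_with_face += 1
        if prev_face is not None:
            # Mean absolute pixel difference between consecutive face crops;
            # genuine footage changes gradually, crude face swaps can jump abruptly
            diff = cv2.absdiff(face, prev_face).mean()
            if diff > threshold:
                jumps += 1
        prev_face = face
    cap.release()
    return jumps, frames_with_face

if __name__ == "__main__":
    jump_count, face_frames = face_region_jumps("suspect_video.mp4")
    print(f"Abrupt face changes: {jump_count} across {face_frames} face frames")
```

A real detector looks at learned features rather than raw pixel differences, which is why the dedicated tools above remain the more reliable option; a sketch like this only helps illustrate what "morphed faces between frames" means in practice.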