
Deepfake technology can disrupt the Bharatiya society and democracy

The deepfake video of actress Rashmika Mandanna that went viral a couple of days ago brought the dangers of AI-generated deepfakes to the fore. The Delhi Police Special Cell has registered an FIR in connection with the video.

Barely a few days after this incident, deepfake photos of actress Katrina Kaif and Sachin Tendulkar’s daughter Sara Tendulkar also went viral on the internet. Yet this is just the tip of the iceberg. The dangers of AI-generated deepfakes aren’t limited to celebrities; they have much larger ethical implications for society and the country. This devious tool of artificial intelligence can be used to manipulate elections, malign the reputation of political opponents, generate fake news videos, and create all sorts of targeted propaganda. It is ironic that it takes a celebrity’s deepfake video clip for the Bharatiya media and society to become vigilant on the issue.

What is a Deepfake?

Simply put, deepfakes are AI-generated fake images and videos developed using deep learning, a form of artificial intelligence. The technology can be used to generate fictitious videos of events that never happened using known faces and identities, thus implicating people in circumstances they were never part of. Many deepfake videos of politicians are circulating on the internet; for example, a deepfake of former US President Barack Obama emerged during the last Presidential elections in the US. Celebrities and public figures like Gal Gadot, Nancy Pelosi, Mark Zuckerberg, and Donald Trump have also become victims of deepfakes.

The most rudimentary method of creating deepfakes involves superimposing one person’s face on another person’s body, thereby creating a fake video out of an incident that actually happened to someone else. The Rashmika Mandanna deepfake, for example, is a manipulated and distorted version of a short video originally created and posted by social media influencer Zara Patel, in which she enters a lift smiling and says hi. In the deepfake version, Zara Patel’s face has been replaced with Rashmika Mandanna’s.

Deepfakes are not limited to videos. Fabricated photos can also be created using artificial intelligence tools, and audio can be deepfaked too. The combination of deepfaked audio and visuals can produce convincing videos of politicians or public figures indulging in acts or making remarks they never actually did.

How does Deepfake Technology work?

The most common aspect of deepfake technology is the face swap video.

To make a face-swap video, you run thousands of face shots of the two people involved through an encoder, an AI algorithm. The encoder keeps finding similarities between the two faces, reducing them to their shared features and compressing the images.

The compressed images are then run through a decoder, another AI algorithm, which reconstructs the faces. One decoder is trained to recover the first person’s face and a second decoder to recover the other person’s. To perform the face swap, you feed the encoded images into the “wrong” decoder, which renders one person’s expressions and pose with the other person’s face.
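The shared-encoder, two-decoder idea above can be sketched in a few lines. This is a deliberately tiny illustration with untrained random linear maps standing in for real deep networks (the array sizes and weights are invented for the example); it only shows the data flow, not an actual trained face-swap model.

```python
import numpy as np

rng = np.random.default_rng(0)

D, LATENT = 64, 8  # flattened "image" size, compressed feature size

# One shared encoder learns features common to both faces...
W_enc = rng.normal(size=(LATENT, D)) * 0.1
# ...and each person gets their own decoder.
W_dec_a = rng.normal(size=(D, LATENT)) * 0.1
W_dec_b = rng.normal(size=(D, LATENT)) * 0.1

def encode(face):
    return W_enc @ face          # compress a face to its shared features

def decode(latent, W_dec):
    return W_dec @ latent        # reconstruct a face from the features

face_a = rng.normal(size=D)      # stand-in for a face shot of person A

# Normal reconstruction: A's features through A's decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The swap: A's encoded features fed into B's decoder, so A's
# expression and pose are rendered with B's identity.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # both (64,)
```

In a real system the encoder and both decoders are deep convolutional networks trained jointly on thousands of frames of each person, but the swap trick is exactly this routing of one person’s latent features into the other person’s decoder.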

Another method for making deepfakes pits two AI algorithms against each other. The first algorithm, a generator, produces random synthetic images. The second algorithm is fed a steady stream of real images of people, into which the generator’s synthetic images are mixed, and it learns to tell the two apart. The process is repeated many times, with each algorithm improving against the other, until the results are highly refined and the generator begins to produce realistic faces of nonexistent people.
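The adversarial back-and-forth described above can be caricatured numerically. In this toy sketch the “discriminator” is just a midpoint boundary between a real batch and a fake batch, and the “generator” is a single number nudged toward the real side each round; the constants are invented for the illustration and this is not a real GAN, only the shape of the loop.

```python
import numpy as np

rng = np.random.default_rng(1)

real_mean = 5.0   # "real images": samples clustered around 5.0
gen_mean = 0.0    # the generator starts far from the real data
lr = 0.1

for step in range(300):
    real = rng.normal(real_mean, 0.5, size=32)  # batch of real samples
    fake = rng.normal(gen_mean, 0.5, size=32)   # batch from the generator

    # "Discriminator": a crude dividing line between real and fake batches.
    boundary = (real.mean() + fake.mean()) / 2.0

    # "Generator" update: move toward the side the discriminator calls real.
    gen_mean += lr * (boundary - gen_mean)

print(round(gen_mean, 2))  # ends up close to 5.0
```

Real GANs replace both toy players with deep networks trained by gradient descent, but the dynamic is the same: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic ones.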

It is a complicated technology that traditionally required high-end desktops with powerful cloud computing or high-end graphics cards. Having said that, many tools are now available to create deepfakes, so the technology is becoming accessible to anyone wanting to create mischief or settle scores with someone.

How is Deepfake dangerous for the Bharatiya society?

AI-generated deepfakes are highly problematic from an ethical point of view.

According to an article published in the Guardian, the AI company Deeptrace discovered the existence of “15,000 deepfake videos online in September 2019, a near doubling over nine months. A staggering 96% were pornographic and 99% of those mapped faces from female celebrities on to porn stars”. These are alarming statistics and if this phenomenon becomes widespread, not only celebrities but normal people would be vulnerable to the menace of deepfake porn.

According to oosga.com, Bharat had an estimated 470.1 million active social media users in 2022, counting those who logged in at least once a month; the actual figure, I am sure, is much higher. Most Bharatiya social media users have their pictures uploaded on platforms like Instagram and Facebook. In that sense, we are sitting on top of a ticking AI time bomb. Imagine the havoc that could be wreaked if pornographic deepfakes of ordinary Bharatiya men and women started circulating on the internet.

Celebrities have the power and clout to draw media attention to their issues and file complaints. But how would an ordinary Bharatiya girl cope with defamation through a deepfake picture or video? Experts also warn of a possible rise in revenge porn as AI technologies and tools become more accessible.

Unregulated social media has already disrupted Bharatiya society to a great extent. Sites like Facebook and Instagram are becoming hubs of soft porn, with some of the content in one’s reel feeds so disturbing and graphic that it is hard to even talk about. Who makes and posts such videos? In the ubiquitous world of social media, anyone can create multiple IDs under fake names and start uploading random content. These companies have no mechanism to check the authenticity of accounts, the appropriateness of the content being uploaded, whether that content belongs to the uploader, or whether it has been manipulated or morphed. For all you know, the accounts showcasing soft-porn reels and photos might already be showcasing deepfake videos and images.

Social media companies are obviously not interested in regulating this. For them, this kind of unregulated growth especially in the Bharatiya market means huge revenue. The onus is on the government to regulate social media companies and put stringent rules and regulations in place that they must comply with.

Larger implications of Deepfakes for Bharatiya politics and democracy

Yet another alarming danger posed by deepfake technology vis-à-vis Bharat is that it can be used by the woke leftist lobby to create anti-Bharat propaganda.

With Bharatiya elections due sometime next year, generative AI offers immense opportunities for information warfare and propaganda. Bestselling author, speaker, and pioneering researcher on civilizations Rajiv Malhotra warned about the dangers posed by AI even before the current deepfake debate started. His 2021 book “Artificial Intelligence and the Future of Power” details the ethical concerns raised by the development of AI technologies, and how these technologies and algorithms could be systematically used to disrupt Bharat.

In a post on X from January 2021, he stated, “AI based deep fakes include voices, videos, writing compositions, ideological messages, etc. Even experts can’t distinguish fake from real. Many applications both good and dangerous. Do you know how Breaking India 2.0 plans to use this?”

What can the Bharatiya government do to check the menace of Deepfakes and other AI technologies?

As of now, Bharat doesn’t have any separate laws to regulate social media or to deal with the menace posed by AI technologies.

When the Rashmika Mandanna deepfake issue became public, the Bharatiya government cited Section 66D of the Information Technology Act 2000, which prescribes “punishment for cheating by personation by using computer resource”. However, this is too vague a clause, and we need more specific laws to directly address the issues arising from misuse of AI.

There has been a growing demand for a separate law to regulate social media companies. Objectionable content, including deepfakes, is posted by users on social media platforms, so these companies should be held legally liable for it; they should not be able to get away with saying they merely provide a platform. Investigating who created a given deepfake in the virtual world is next to impossible, so the only practical way to curb the menace is to make it compulsory for the social media giants to put mechanisms in place to detect manipulated content like deepfakes, and to hold them liable for whatever is posted on their platforms. Mere guidelines won’t help; we need enforceable laws to regulate social media.

The government has now introduced a draft Bill to regulate digital media and broadcasting. I haven’t gone through its provisions yet, as it is a fairly recent development, but its focus appears to be more on regulating OTT platforms. If the Bill contains provisions to regulate social media platforms and address the menace caused by AI technologies like deepfakes, it would be a step in the right direction.

Guardian Article Reference – https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them


Rati Agnihotri
Rati Agnihotri is an independent journalist and writer currently based in Dehradun (Uttarakhand). Rati has extensive experience in broadcast journalism, having worked as a Correspondent for Xinhua Media for 8 years at their New Delhi bureau. She has also worked across radio and digital media and was a Fellow with Radio Deutsche Welle in Bonn. Now based in Dehradun, she pursues independent work, regularly contributing news analysis videos to a nationalist news portal (India Speaks Daily) with a considerable YouTube presence. Rati regularly contributes articles and opinion pieces to various esteemed newspapers, journals, and magazines. Her articles have recently been published in “The Sunday Guardian”, “Organizer”, “Opindia”, and “Garhwal Post”. She holds an MA in International Journalism from the University of Leeds, U.K., and a BA (Hons) in English Literature from Miranda House, Delhi University.
