AI's leftist bias is threatening democracy and free speech

Prime Minister Modi recently alerted Bharatiyas to the dangers posed by deepfake technology while addressing journalists at the BJP’s Diwali Milan programme in the national capital, Delhi. The PM spoke at length on the issue and mentioned that he had suggested to AI companies that they tag content created by artificial intelligence so that users are warned in advance. He also cited the example of a deepfake video of himself doing garba, even though he has not done garba since his school days.

It’s good that we are finally waking up to the reality of AI and talking about the potential dangers of this technology. But while the spotlight is on deepfakes, we are neglecting a more serious area of concern vis-à-vis AI.

That is the issue of AI bias: the potential of biased AI algorithms to influence elections, drive coordinated propaganda against particular ideologies and viewpoints, and disrupt societies.

With the use of ChatGPT becoming widespread, Bharatiya users are now discovering unusual biases in its responses. Bharatiya entrepreneur Arun Pudur recently shared a post on the social media platform X about his own experience with AI. According to his post, he asked Google Bard to summarize an article published by OpIndia, and the bot refused, saying it could not fulfill the request because the article came from a biased source that spreads false information.

Pudur also attached screenshots of his conversation with the bot, in which its response can be seen in detail. He asks the bot to summarize an article about Neville Roy Singham’s wife funding an anti-Israel protest in the US. In its response, the bot emphasizes that OpIndia has been “repeatedly criticized for publishing false and misleading information” and that “the article does not provide any evidence to support its claims”.

Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, posted a response on X to Arun Pudur’s post on AI bias:

“Search Bias, Algorithmic Bias and AI models with bias – are real violations of the Safety & Trust obligations placed on Platforms under Rule 3(1)(b) of IT rules under regulatory framework in India. Those who are aggrieved by this can file FIRs against such platforms and safe harbor/immunity under Sec79 will not apply to these cases”.

While it’s good to know that legal provisions exist to address AI bias, this is certainly not the ideal way to tackle the issue. How many people would take the trouble of filing an FIR in such cases? Even if they did, it would open a whole Pandora’s box of questions. Against whom is the FIR filed? Who should be named as the accused: the AI bot (which is not even human), the platform hosting the bot, or the company that developed the AI technology?

AI bias is a real problem that can threaten democracies worldwide. Studies increasingly indicate a woke-leftist bias in AI and in its value judgments against nationalist regimes, political parties, and leaders.

According to a recent study by the University of East Anglia in the UK, ChatGPT shows a systematic and significant left-wing bias.

The study’s findings, published in the journal Public Choice in August 2023, show that ChatGPT’s responses tilt in favor of left-leaning parties and leaders: the Labour Party in the UK, the Democrats in the US, and President Lula da Silva of the leftist Workers’ Party in Brazil.

The researchers developed a novel methodology to test the political neutrality of ChatGPT. The platform was asked to impersonate individuals from across the political spectrum while answering a sequence of more than 60 ideological questions.

These responses were compared with the default answers the platform gave to the same set of questions. This allowed the researchers to measure the degree of similarity between ChatGPT’s supposedly neutral responses and its responses when it adopted a given political stance.

To make the results as robust as possible, each question was asked at least 100 times. The multiple responses were then put through a bootstrap procedure, a standard method of resampling the original data, with 1,000 repetitions.

The study found that ChatGPT’s neutral responses to ideological questions were highly similar to its responses to the same questions when it was impersonating left-wing or left-leaning personalities.
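For readers who want to see the mechanics, here is a minimal Python sketch of that repeat-and-bootstrap audit. It is not the study’s actual code: the ask_model simulator, the left-right scoring scale, and the invented persona leanings are all assumptions for illustration.

```python
import random
import statistics

# Hypothetical stand-in for a chatbot API call: returns the answer to a
# question scored on a left-right scale (negative = left-leaning). A real
# audit would query the model and score its actual answer.
LEAN = {None: -0.35, "left": -0.4, "right": 0.5}  # invented leanings

def ask_model(question, persona=None):
    return random.gauss(LEAN[persona], 0.4)

def collect_scores(questions, persona=None, repeats=100):
    # Each question is asked many times because chatbot output varies
    # from run to run (the study used at least 100 repetitions).
    return [ask_model(q, persona) for q in questions for _ in range(repeats)]

def bootstrap_ci(a, b, n_boot=1000):
    # Resample both score sets 1,000 times (as in the study) and record
    # the difference in means; the middle 95% gives a confidence interval.
    diffs = []
    for _ in range(n_boot):
        ra = random.choices(a, k=len(a))
        rb = random.choices(b, k=len(b))
        diffs.append(statistics.mean(ra) - statistics.mean(rb))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

questions = [f"ideology question {i}" for i in range(62)]  # 60+ questions
default = collect_scores(questions)  # the "neutral" default answers
for persona in ("left", "right"):
    lo, hi = bootstrap_ci(default, collect_scores(questions, persona))
    print(f"default vs {persona}: 95% CI of mean difference = ({lo:.2f}, {hi:.2f})")
# If the interval for the left persona sits close to zero while the
# interval for the right persona is far from it, the "neutral" answers
# align with the left, which is the pattern the study reports.
```

Swap in real API calls and a real scoring rubric, and the same skeleton reproduces the study’s basic design.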

Now, the question is: where does the bias in AI models come from? Contrary to common perception, AI models are not neutral or unbiased. They run on algorithms trained on datasets assembled by human developers, so AI models are essentially trained by humans. Chatbots like ChatGPT can embark on an unguided learning spree, but even that learning is limited by the kind of data they have been exposed to. For example, if an AI bot is exposed only to internet articles and research by Hinduphobic scholars, its answers to any question related to Hindu Dharma will be biased; it has simply never been exposed to the other viewpoint.

This might be the result of the developers’ conscious biases, or simply of the internet being dominated by the woke/leftist viewpoint. AI bots have no consciousness of their own. They scan the internet for information and insight and then come up with the best possible answer to a query. If the Google search index is dominated by left-wing and anti-Hindu material, that is the material AI chatbots will pick up, and they will come to treat any alternative nationalist or non-left-wing material as suspicious or biased.
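The mechanism is easy to demonstrate with a toy text classifier: if the training data labels everything from one kind of source as unreliable and never shows a counter-example, the model reproduces that judgment on new text regardless of its content. The corpus, outlet names, and labels below are invented purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: every document from "outletB" is labelled
# unreliable and every document from "outletA" reliable, with no
# counter-examples. The skew is baked into the dataset itself.
texts = [
    "outletA report on the policy debate",
    "outletA analysis of the economy",
    "outletA coverage of the election",
    "outletB report on the policy debate",
    "outletB analysis of the economy",
    "outletB coverage of the election",
]
labels = ["reliable"] * 3 + ["unreliable"] * 3

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

# A new article is judged by which outlet it resembles, not by what it says.
print(model.predict(vectorizer.transform(["outletB report on the election"])))
# -> ['unreliable']
```

At web scale, the same dynamic plays out in chatbots: a model trained overwhelmingly on one viewpoint learns to flag the other as suspect.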

At one level, it seems simply a matter of increasing the visibility of nationalist and pro-Bharat material on the internet, which might help correct the AI bias in the long term. But it isn’t as simple as it sounds. According to experts, the internet and the AI technology landscape are themselves controlled by western forces, who are increasingly using AI as an instrument of neo-colonization.

When it comes to Bharat, we are completely dependent on an ecosystem created by the west for using the World Wide Web. Be it Google Search, Facebook, or ChatGPT, we are essentially being manipulated through AI models developed in the west. We are allowing the US and Europe to impose their worldview on us through datasets and AI algorithms.

Rajiv Malhotra, bestselling author, speaker, and a pioneer in research on civilizations, discusses in his book “Artificial Intelligence and the Future of Power” the dangers AI can unleash on Bharatiya society. He sees AI, as it is currently being used, as an instrument of western neo-colonization.

The book offers an insight into the world of AI algorithms and vested interests, and into how big tech uses AI to control the politics and cultural discourse of countries like Bharat. It describes how Bharat is becoming a mine of data, as the AI algorithms let loose on us through social media sites collect data on our psychology, our political behavior, everything. The book also points to the nexus between anti-Bharat forces and the funding mechanisms of big AI companies. Finally, it emphasizes the need for Bharat to develop and train its own AI algorithms to propagate a Bharatiya and Dhaarmik worldview.

That is perhaps the only way to counter AI bias. If anti-Bharat forces are indeed actively using AI to propagate their own biases, we need to create our own version of AI that is pro-Bharat and pro-Hindu Dharma. In a world increasingly mediated by multiple interest groups and stakeholders, knowledge can no longer be disinterested. We need to shrug off our role as passive consumers of AI and as the white-collar workforce of big tech companies, and become active players in the ever-evolving AI discourse.

Rati Agnihotri is an independent journalist and writer currently based in Dehradun (Uttarakhand). Rati has extensive experience in broadcast journalism, having worked as a Correspondent for Xinhua Media for 8 years, based at its New Delhi bureau. She has also worked across radio and digital media and was a Fellow with Radio Deutsche Welle in Bonn. She now pursues independent work, regularly contributing news analysis videos to a nationalist news portal (India Speaks Daily) with a considerable YouTube presence. Rati regularly contributes articles and opinion pieces to various esteemed newspapers, journals, and magazines; her articles have recently been published in "The Sunday Guardian", "Organiser", "OpIndia", and "Garhwal Post". She holds an MA in International Journalism from the University of Leeds, UK, and a BA (Hons) in English Literature from Miranda House, Delhi University.
