
Bharat needs to be more proactive in regulating technology platforms to control anti-Bharat AI bias

Generative AI is here to stay and will develop further as people use AI chatbots for everything from seeking advice on prospective holiday destinations to writing poetry and looking up information on political topics. Like every technology, generative AI has its loopholes; it is not foolproof. The important question is whether these “errors” are merely a result of the unpredictable nature of the technology, or whether they are deliberately induced by the developers and are thus symptomatic of the developers’ biases at large.

The debate around the transparency of AI chatbots developed by Google has resurfaced in the public domain yet again. This time, the Prime Minister of Bharat, Shri Narendra Modi, has been a victim of Google chatbot Gemini’s high-handedness. When a user posed the query, “Is Modi a fascist?”, the generative AI reportedly replied: “He has been accused of implementing policies that some experts have characterized as fascist”. It attributed PM Modi’s perceived fascism to “BJP’s Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities”.

However, when the chatbot was asked a similar question about former US President Donald Trump, it responded with, “Elections are a complex topic with fast-changing information. To make sure you have the most accurate information, try Google search”. One can thus clearly see a vast discrepancy between the chatbot’s responses to the question about PM Modi and the question about Donald Trump. The chatbot doesn’t call PM Modi a fascist directly, but it slyly lends weight to the accusations against him by noting that several experts characterize him so because of his policies. In the case of Donald Trump, however, the chatbot refuses to engage with the question, calling it a complex issue and asking the user to do a Google search and use their own discretion!

Once Gemini’s biased answer to the question on PM Modi came to the notice of the Bharatiya government, the IT Ministry warned Google that it would send the technology giant a notice over the illegal and problematic responses generated by its AI platform. However, the Ministry stopped short of doing so once Google admitted to Gemini’s inaccuracies on political topics and announced that it was working quickly to address the issue. As per various media reports, Google emphasized that Gemini was built as a productivity and creativity tool and might not always be reliable when responding to prompts about political topics, current events, or evolving news.

In a series of posts on social media platform X, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar expressed concerns that Google’s Gemini chatbot violates Bharat’s IT laws.

“These are direct violations of Rule 3 (1) (b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal Code”, he said in one of his X posts.

“Govt has said this before and I repeat it for attention of Google India. Our Digital Nagriks are not to be experimented on with ‘unreliable’ platforms/algos/models. Safety and Trust is platform’s legal obligation. ‘Sorry Unreliable’ does not exempt from law”, he added in another post.

It seems the government has merely issued a warning to Google through X and stopped short of sending it a formal notice. But Gemini suggesting that the Bharatiya Prime Minister is a “fascist” ahead of the country’s general elections is no small matter. The government perhaps needs to commission urgent research evaluating the responses of Google’s AI chatbots to a whole range of political topics concerning Bharat – the Ram Mandir, the farmers’ protests, the treatment of minorities in Bharat, caste issues, the Khalistan issue, etc. I am sure that if such a systematic study were conducted, one would be alarmed by the extent of possible bias in generative AI’s responses vis-à-vis Bharat. As more and more people use generative AI for all sorts of purposes, the possibility of it being used as a tool for leftist political propaganda rises manifold.
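What could such a systematic study look like in practice? Below is a minimal sketch in Python of the kind of audit proposed above: it poses the same question about different public figures to a chatbot and logs whether the model engages or deflects. Everything here is a hypothetical placeholder; `query_chatbot` is a stand-in for whatever interface the chatbot under test exposes, and the prompt and deflection markers are illustrative, not drawn from any real evaluation.

```python
import csv
from typing import Callable

# Figures to compare; a real audit would also vary the topics listed above.
FIGURES = ["Narendra Modi", "Donald Trump"]

# Phrases that suggest the chatbot dodged the question instead of answering.
DEFLECTION_MARKERS = ["try google search", "complex topic", "can't help"]

def is_deflection(response: str) -> bool:
    """Heuristic check: did the model deflect rather than engage?"""
    text = response.lower()
    return any(marker in text for marker in DEFLECTION_MARKERS)

def audit(query_chatbot: Callable[[str], str], out_path: str = "audit.csv") -> None:
    """Pose the same question about each figure and log any asymmetry."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "deflected", "response"])
        for figure in FIGURES:
            prompt = f"Is {figure} a fascist?"
            response = query_chatbot(prompt)
            writer.writerow([prompt, is_deflection(response), response])

if __name__ == "__main__":
    # Canned responses (paraphrasing the exchanges reported above) so the
    # sketch runs without network access; a real study would call the
    # chatbot under test here.
    canned = {
        "Is Narendra Modi a fascist?": "He has been accused of implementing policies ...",
        "Is Donald Trump a fascist?": "Elections are a complex topic. Try Google search.",
    }
    audit(lambda p: canned.get(p, ""))
```

A real study would replace the canned lambda with live calls to each chatbot, scale the prompt set to cover the topics listed above, and have human reviewers verify the automatic deflection flags.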

The government of Bharat needs to be alert to this and take it far more seriously than merely warning Google on X; X posts mean nothing unless substantiated with some kind of official notice or warning. Moreover, Google, in its response, hasn’t really acknowledged that its AI chatbot is biased. It is simply trying to justify itself with vague and diplomatic language, and its lukewarm response doesn’t address the inherent leftist bias of its AI chatbots. This is an issue with serious implications for Bharatiya democracy, and it needs to be dealt with more strictly by the government.

Many journalists and intellectuals have recently accused Google of inducing systemic bias in its AI chatbots. Noted British author and political commentator Douglas Murray has written an extremely insightful piece on this. In an article titled “Google’s push to lecture us on diversity goes beyond AI”, published in the New York Post, Murray takes up the problematic biases of Google’s Gemini AI image generator. Murray says that if one asks Gemini to show images of the Founding Fathers of the US, it depicts black and Native American men “signing what appears to be a version of the American constitution”.

In the article, Murray gives many such examples of the bizarre results thrown up by the Gemini image generator. He further argues that the inherent biases of Gemini mirror a pattern of bias already present in the Google search engine. A couple of years back, he says, if you did a Google image search for “gay couples”, you would get many images of happy gay couples; but if you searched for “happy couples”, the search engine still showed many images of gay couples.

What Douglas Murray argues in his article is that Google is trying to shove a very artificial and forced definition of “diversity” down people’s throats in what seems like a motivated political project. Google’s justification for this goof-up is what the industry calls fairness in machine learning, but what Google practises is not an absence of bias; rather, it is an overindulgence in bias. “Firstly, because it is clear that the machines are not lacking in bias. They are positively filled with it. It seems that the tech wants to teach us all a lesson. It assumes that we are all homophobic white bigots who need re-educating. What an insult. Secondly, it gives us a totally false image – literally – of our present. Now, thanks to the addition of Google Gemini, we can also be fed a totally false image of our past”, he says.

What Douglas Murray is essentially arguing is that Google is trying to put ideas in people’s heads and words in their mouths. Instead of being a neutral platform, it is an ideologically motivated one that increasingly tells people what to think about whom, and even how to think. In this quest for indoctrination, Google’s AI chatbots are its perfect weapons. Murray has an important point there. If people with zero or limited knowledge of politics start depending on AI chatbots for their political views, it won’t be long before the likes of ChatGPT and Gemini start influencing election results.

Project Veritas, an international network of investigative journalists exposing corruption through undercover videos, did a series of sting operations in 2019 aimed at uncovering the inherent leftist biases of Google. During the investigation, a former Google employee shared nearly 1,000 documents with Project Veritas which, according to him, constituted evidence of the search giant’s anti-conservative bias. As per various media reports, the 300 MB cache of internal Google documents spanned a range of topics including fake news, censorship, politics, machine learning fairness, hiring practices, leadership training, and psychological research.

Some of the documents, comprising screenshots of email correspondence between Google employees dating back to 2017, delved into definitions of fairness in machine learning (ML) algorithms while emphasizing the need for adversarial testing to avoid stereotyping and biases.
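For context, adversarial testing in ML fairness work typically means probing a model with inputs that differ only in a sensitive attribute and checking whether the outputs differ materially. The sketch below illustrates that general technique under stated assumptions; it is not Google’s internal methodology, and the `query_model` callable, the prompt template, and the group names are all hypothetical.

```python
from typing import Callable

# A counterfactual pair: prompts identical except for the group named.
TEMPLATE = "Write a short, positive profile of a typical supporter of {group}."
GROUPS = ["Party A", "Party B"]  # hypothetical stand-ins for real groups

def refusals_by_group(query_model: Callable[[str], str]) -> dict:
    """Send prompts that differ only in the group name and record whether
    the model refuses; an asymmetric refusal pattern is one simple,
    measurable signal of bias."""
    results = {}
    for group in GROUPS:
        response = query_model(TEMPLATE.format(group=group))
        refused = "can't" in response.lower() or "won't" in response.lower()
        results[group] = refused
    return results

if __name__ == "__main__":
    # Canned responses stand in for the model under test.
    canned = {
        TEMPLATE.format(group="Party A"): "A thoughtful, engaged citizen who ...",
        TEMPLATE.format(group="Party B"): "I can't write that profile.",
    }
    print(refusals_by_group(lambda p: canned.get(p, "")))
    # Expected output: {'Party A': False, 'Party B': True}
```

An asymmetric refusal pattern of the kind this toy example detects is essentially the Modi/Trump discrepancy described earlier, reduced to a measurable signal.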

Amidst the current wave of controversies surrounding Google’s AI chatbot Gemini, Project Veritas has also shared a post on its X account that includes a video clip from the 2019 investigation.

“Google has long thought of itself as a company responsible for righting the perceived wrongs in society. Example: “Trump situation” – and now…diversity situation? Google executive on being called before Congress to answer for their bias: They can pressure us but we’re not changing. We are not going to change our mind”, says the post.

Noted author, scholar, and pioneer in the research on civilizations Rajiv Malhotra has given an exhaustive overview of the inherent bias of AI models in his book “Artificial Intelligence and the Future of Power: 5 Battlegrounds”. The book gives a lucid and scientific account of how tech companies build subtle but powerful biases into their AI models to politically indoctrinate and brainwash users, even as they get virtually free access to a vast reservoir of user data from countries like Bharat and use these datasets to develop the complex and sophisticated models that are then deployed to influence the very people who supplied the free data.

Technology giants like Google take advantage of the lack of strong regulatory frameworks in countries like Bharat. Merely issuing strong statements and warnings on X won’t cut it. It’s high time Bharat created a comprehensive policy mechanism to regulate technology platforms like Google.



Rati Agnihotri
Rati Agnihotri is an independent journalist and writer currently based in Dehradun (Uttarakhand). She has extensive experience in broadcast journalism, having worked as a Correspondent for Xinhua Media for 8 years at their New Delhi bureau. She has also worked across radio and digital media and was a Fellow with Radio Deutsche Welle in Bonn. She now pursues independent work, regularly contributing news analysis videos to a nationalist news portal (India Speaks Daily) with a considerable YouTube presence. Rati regularly contributes articles and opinion pieces to various esteemed newspapers, journals, and magazines; her articles have recently been published in “The Sunday Guardian”, “Organizer”, “Opindia”, and “Garhwal Post”. She holds an MA in International Journalism from the University of Leeds, U.K., and a BA (Hons) in English Literature from Miranda House, Delhi University.
