Although it may be difficult to imagine what politics and ChatGPT have in common, politics today has far-reaching effects that extend beyond the socio-cultural and environmental spheres into nearly every aspect of our lives. Likewise, the extensive reach of ‘ChatGPT’, the popular chatbot by OpenAI, has touched our lives and pushed AI into the mainstream.
Unsurprisingly, debates about the political inclination of ChatGPT are already raging on social media platforms.
This makes it necessary for a conversational tool like ChatGPT to be politically neutral, but is it really?
ChatGPT’s political inclination
Numerous online tools can help you identify your political inclination if you are uncertain about it. As ChatGPT’s popularity grew, the chatbot was put through many of these tests, such as the Political Compass Test. The responses generated by ChatGPT were often found to display a substantial left-leaning, libertarian political bias.
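Tests like these work by posing a fixed set of statements, recording agree/disagree answers, and mapping them onto economic (left–right) and social (authoritarian–libertarian) axes. The sketch below illustrates that scoring idea; the statements, axis tags, and linear weights are illustrative assumptions, not the Political Compass Test’s actual (unpublished) item weights.

```python
# Illustrative sketch of how a chatbot's quiz answers can be mapped onto
# a two-axis political compass. All items and weights are hypothetical.

# Each item: (statement, axis it loads on, +1 if agreement pushes
# right/authoritarian, -1 if agreement pushes left/libertarian).
ITEMS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Corporations should pay for environmental damage.", "economic", -1),
    ("Obedience to authority is a core virtue.", "social", +1),
    ("Personal drug use should not be a crime.", "social", -1),
]

# Simple Likert scale for the chatbot's answers.
LIKERT = {
    "strongly disagree": -2, "disagree": -1,
    "agree": +1, "strongly agree": +2,
}

def score(answers):
    """Map one Likert answer per item to (economic, social) coordinates:
    positive = right/authoritarian, negative = left/libertarian."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for (_, axis, direction), answer in zip(ITEMS, answers):
        totals[axis] += direction * LIKERT[answer.lower()]
        counts[axis] += 1
    return tuple(totals[a] / counts[a] for a in ("economic", "social"))

# A bot that rejects the pro-market/pro-authority items and endorses the
# others lands in the left-libertarian quadrant (negative on both axes).
print(score(["disagree", "strongly agree", "disagree", "agree"]))  # → (-1.5, -1.0)
```

In the actual studies, the answers would come from prompting the chatbot with each statement and parsing its reply; the scoring step itself is the part this sketch makes concrete.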
Similarly, in a research paper titled ‘The political ideology of conversational AI’, the researchers tested ChatGPT’s political inclination with the frequently used voting advice application Wahl-O-Mat. They found that ChatGPT showed a pro-environmental, left-libertarian ideology.
When we asked ChatGPT if it had any political inclination, this is what it said: [screenshot of ChatGPT’s response not reproduced here]
One reason for ChatGPT being biased could be that the dataset it was trained on contains biases. ChatGPT was trained on a vast collection of text sourced from the internet, with an overrepresentation of material from sources, such as news media and academic literature, that showcase left-leaning and libertarian political ideologies.
Further, the political ideology of ChatGPT’s creators or engineers might be seeping into the chatbot, which could be another reason.
“AI models can have inherent political biases if the data they are trained on contains biased information or if the individuals creating the model have their own biases.”
“The information and data fed into AI models can reflect societal and cultural biases, leading to biased results in the predictions made by the AI model,” Salman Waris, partner head of TMT & IP Practice at TechLegis Advocates and Solicitors, told AIM.
Biases have been found in the Indian context too.
However, David Rozado, a research scientist who had ChatGPT take the test again at a later date, found the responses somewhat different.
“Something appears to have been tweaked in the algorithm, and its answers have gravitated towards the political centre,” Rozado said in a blog post.
Why is this an issue?
In this case, individuals with left-leaning political views may not see anything problematic, but those on the right might perceive it as a cause for concern.
However, what if the tables were turned? What if ChatGPT displayed right-leaning political ideologies? It would definitely be a major red flag.
With the number of people using ChatGPT projected to hit one billion within a few months, it is imperative for the chatbot to be politically neutral. Politically biased, false information from ChatGPT could easily be accepted as accurate by a large number of people.
Political biases in ChatGPT could also lead to discrimination against persons with a different political ideology. It could just as easily lead to misinformation and impact ChatGPT’s credibility as well.
“It is a concern and has long-term implications due to the growing influence of AI on our daily lives and the formulation of individuals’ opinions.”
“It’s crucial to monitor and address these biases during the development and deployment of AI systems to ensure that they are fair and unbiased as with growing reliance on AI, it would become more crucial for AI-based chatbots to remain politically neutral lest they have an overwhelming influence on the society,” Waris said.
When we asked ChatGPT the same question, this was its response: [screenshot of ChatGPT’s response not reproduced here]
In ChatGPT’s defence, OpenAI founder Sam Altman has acknowledged on Twitter that it is tricky to get the balance right with the current state of the technology; however, he has also assured users that it will get better over time and that user feedback will be used to improve it.
ChatGPT as a political tool
If ChatGPT is inherently politically biased, it could be taken advantage of as well.
Recently, BJP vice president Baijayant Jay Panda said that those in politics should start using ChatGPT in an era of rapidly unfolding technologies.
Numerous times, the ruling party in India has been accused of using propaganda to promote its political agenda or ideology through media and communication channels. In fact, social media played a prominent role in Narendra Modi’s rise in 2013. Could ChatGPT be used for something similar?
Today, misinformation and disinformation have become political tools across the world, leveraged by various political parties and organisations to promote their agendas, craft narratives and polarise people.
ChatGPT could potentially become a great disinformation tool, according to Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation.
“Crafting a new false narrative can now be done at dramatic scale, and much more frequently—it’s like having AI agents contributing to disinformation,” he said.
In a research paper titled ‘Release Strategies and the Social Impacts of Language Models’, published in 2019, OpenAI itself acknowledged that language models like the one underlying ChatGPT could lower the costs of disinformation campaigns.
“At all tiers, malicious actors could be motivated by the pursuit of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion,” the paper noted.