The 2016 US elections were significant in more ways than one. It was found that the Russian government had run a cyberattack campaign to tarnish the image of Democratic candidate Hillary Clinton and tilt the election in favour of Trump. The saga made it clear that technology would play a huge part in future election outcomes. Today, with the rise of chatbots and their pervasiveness across domains, politics is potentially facing one of its biggest threats yet.
In a field where false information and biases can easily meddle with the outcomes of political campaigns, will a hallucinating chatbot like ChatGPT add fuel to the fire?
Noted AI researcher and author Pedro Domingos tweeted, “The killer occupation for GPT-4 is politician, because the key qualification is to have no fear of lying (sic).” While it is easy to say that GPT-4 can replace politicians, let’s look at how Indian politicians and political parties are exploring ChatGPT’s use cases.
Speaking to AIM, Sagar Vishnoi, political analyst at The Ideaz Factory, said that his team uses ChatGPT for planning schedules, drafting marketing plans and writing political speeches. “It helps us save time and improve productivity,” he said. The tool is already being used in the offices of MPs, CMs and think-tanks. As confirmed by Sagar, Netri Foundation, a political incubator, is already using ChatGPT for social media content creation and research work.
However, with the chatbot known for hallucinating and churning out misinformation, how reliable is it?
Political Bias
There have been long discussions about ChatGPT’s political inclination and how its biased views clearly favour a particular side. When a user asked ChatGPT to write a poem about two Presidents, the response was not only biased but also reflected how mainstream media has painted the two personalities.
When research scientist David Rozado ran ChatGPT through tests such as the Political Compass Test to judge which side it leaned towards, he observed that ChatGPT is essentially left-leaning with a strong libertarian political bias.
Knowing what we now know about the chatbot’s erroneous responses, it isn’t a stretch to imagine that the tool could be used to taint or glorify a particular leader. It could potentially be used to smear political campaigns according to the whims and fancies of the companies building these models.
Recently, ChatGPT falsely accused Brian Hood, Mayor of Hepburn Shire, of having served a prison term for involvement in a foreign bribery scandal linked to the Reserve Bank of Australia. In reality, Hood was the whistleblower who alerted the authorities to the scandal while working there. The mayor has now threatened to file a defamation suit against OpenAI if the statement is not rectified.
There are many subjective variables in training an AI model. ChatGPT and other chat-based platforms are trained on large datasets sourced from across the internet. Essentially, ChatGPT inherits the same political bias that most news organisations carry in their stories.
ChatGPT: ‘Cambridge Analytica’ in the Making
In an interview in March, OpenAI chief Sam Altman said, “Elections are in our mind.” How, or to what extent, his company plans to contribute to them is still unknown. Notably, Altman has always openly acknowledged the bias that exists in the system and foresees more challenges ahead. Even with the recent announcements on security and privacy, OpenAI has not figured out a way to curb the bias.
there will be more challenges like bias (we don’t want ChatGPT to be pro or against any politics by default, but if you want either then it should be for you; working on this now) and people coming away unsettled from talking to a chatbot, even if they know what’s really going on
— Sam Altman (@sama) February 19, 2023
During the 2020 election cycle, social media companies like Twitter, Facebook, and YouTube unanimously banned President Donald Trump on allegations of incitement that were never legally proven. It was a clear indicator of how the media had set itself against one party. With bias already entrenched in the media, a biased chatbot could prove even more detrimental in the 2024 elections.
In a podcast interview with Meghan McCarty Carino, AI author and expert Gary Marcus said, “[The] 2024 election is going to be a train wreck.” He expressed concerns about how easy it will be to produce fake news stories that resemble “authentic publications.” Churning out multiple versions of misinformation will be so easy that it will “flood the zone with complete nonsense,” and people will not be able to tell the difference. He also believes counter-campaigns will follow, eventually leaving us in a “place where people really don’t believe anything.”
What about the Regulations?
With rising concerns around AI safety, countries are contemplating stringent policies. In the US, the Biden administration is looking to introduce checks on ChatGPT-like tools owing to growing concerns about the technology being used to “discriminate and spread harmful information.” Amidst all this, OpenAI is sticking to its stance of prioritising AI safety while not addressing the chatbot’s biases and hallucinations.
If political parties wish to use chatbot-generated information in their manifestos or campaigns, an extensive vetting process could curb the spread of misinformation. However, chatbot-generated content originating outside a party’s purview cannot be regulated, and its implications are still unknown.