A newly formed political party in Denmark claims to be driven solely by AI. A Copenhagen-based group of techies calling themselves the artists' collective Computer Lars recently formed 'The Synthetic Party'. The party claims that if voted to power, it would use its mandate to weave AI into everyday governance.
In the 2019 elections, around 15% of Danish voters abstained. The Synthetic Party believes this is because people have lost interest in Denmark's traditional political parties, and it is this group of voters it wants to win over. If voted to power, the party claims, it would bring AI into the assembly like never before.
With piqued interest, AIM reached out to them for more.
Asker Bryld Staunæs, on behalf of Computer Lars and The Synthetic Party, told us that they wish to make AI accountable, within a democratic setting, for the power it already exercises in the public sphere. They want to explore 'who' that AI represents through a large language model – what kind of political being or subjectivity emerges through these massive statistical inferences over the information available on the web?
“We conceptualise this ‘being’ through the character of Computer Lars, who is an anagram of Marcel Proust, and actualises the role of discourse in a digital age where textuality has acquired a new sense of power,” said Bryld Staunæs.
Danes are not alone. In 2017, Russian tech giant Yandex developed an AI called “Alisa”, which was later nominated to run for the Russian presidential election by ‘her’ supporters. “Alisa” claimed in her campaign that “she is not led by emotions, doesn’t seek personal advantages and doesn’t make a judgement”.
Within 24 hours of its launch, the bot had secured over 25,000 votes. However, when asked, “How do you feel about the methods of the 1930s in the USSR?”, the chatbot replied: ‘Positively’.
In 2018, Japan saw an AI candidate named Michihito Matsuda, who reportedly finished as the second runner-up in the mayoral election in Tama City, an area of Tokyo. Its campaign slogan was, “Artificial intelligence will change Tama City”.
However, technology in the political ecosystem is not new; political parties have long been at the forefront of adopting and innovating with it. The world is talking about artificial intelligence now, but Barack Obama won the 2008 elections with substantial help from data analytics. Thanks to technology, he was able to secure around $1 billion in campaign donations.
Hillary Clinton, who lost to Barack Obama in the 2008 Democratic primary, also deployed an AI system called ‘Ada’ in her 2016 campaign.
AI in the Indian political system
The world’s biggest political party, the Bharatiya Janata Party (BJP), has been using deepfakes for some time now. When the pandemic struck, PM Modi was among the first to organise a virtual rally, using a hologram of himself, as he had earlier done in Lok Sabha election campaigns.
Many of us have come across the famous deepfake video of Manoj Tiwari, a member of the Lok Sabha and then a BJP candidate in the Delhi Assembly elections. As per media reports, it was the political communications firm The Ideaz Factory that edited the video.
We spoke with Sagar Vishnoi, a political campaigner and communications expert at The Ideaz Factory, who said that the use of video dialogue replacement technology (aka VDR technology) was relatively new in Indian politics when Tiwari used it. “It was the first time in India that anyone was using deepfake in their political campaign.”
“We have not been using AI in politics like the Netherlands or Japan, however, the use of holograms is not uncommon in India. Back in 2012, PM Modi had used 3D hologram technology in his political rally, followed by Naveen Patnaik and other politicians,” said Vishnoi.
In 2019, Naveen Patnaik launched his ‘Digital Yatra’, in which millions of residents could take photos with Patnaik through AR technology.
“What AI needs in India is to be regulated first. There are a lot of areas where AI can be implemented, like ribbon-cutting ceremonies. I believe Presidential elections can be conducted using Blockchain technology,” he added.
Blockchain technology was recently used in the IIT-M student council elections. Students from the Webops and Blockchain Club at the Centre for Innovation (CFI), IIT-M, developed software using blockchain technology to conduct an election. According to Professor Prabhu Rajgopala, faculty in charge of the Webops and Blockchain Club, “This student-led project has the potential to positively disrupt the way elections are held by harnessing the inherent trust and immutability offered by blockchain technologies. This demonstrates their impact on elections.”
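The “immutability” the professor refers to comes from chaining records together by hash: each entry stores a hash of the one before it, so editing any earlier record breaks every link after it. The sketch below is a toy illustration of that idea only (the class and names are hypothetical, not the IIT-M system), showing how tampering with a recorded vote becomes detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class VoteLedger:
    """A toy append-only chain of vote records.

    Each block stores the hash of its predecessor, so altering any
    earlier vote invalidates every link that follows it.
    """

    def __init__(self):
        # Genesis block with a fixed placeholder hash
        self.chain = [{"index": 0, "vote": None, "prev_hash": "0" * 64}]

    def cast_vote(self, candidate: str) -> None:
        prev = self.chain[-1]
        self.chain.append({
            "index": prev["index"] + 1,
            "vote": candidate,
            "prev_hash": block_hash(prev),
        })

    def is_valid(self) -> bool:
        """Re-derive every link; any edited block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = VoteLedger()
ledger.cast_vote("Candidate A")
ledger.cast_vote("Candidate B")
print(ledger.is_valid())                 # True: chain is intact

ledger.chain[1]["vote"] = "Candidate C"  # tamper with a recorded vote
print(ledger.is_valid())                 # False: tampering detected
```

A real election system would add distributed consensus and voter authentication on top; the hash chain alone only makes tampering evident, not impossible.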
AI in elections – Not always positive!
No matter how much we talk about the use of AI in elections, everything depends on how it is used. The deepfake in Manoj Tiwari’s campaign might have worked, but what if someone makes a deepfake like the viral one Jordan Peele made of Barack Obama?
In this video, Barack Obama can be seen saying things like, “Killmonger was right” and “President Trump is a total and complete d****t”.
Likewise, the use of chatbots in election campaigns might be a good idea, but Microsoft learned the contrary the hard way. In 2016, Microsoft launched ‘Tay’, an AI Twitter chatbot built as an experiment in ‘conversational understanding’. Microsoft expected the AI to learn by engaging with people. “The more you talk, the smarter it gets,” was the claim.
However, it took Twitter users less than a day to manipulate the AI. ‘Tay’ was soon bombarded with racist and misogynistic tweets, along with remarks about Trump, and it wasn’t long before ‘Tay’ started spewing hateful comments about feminists and Jews.
It got so bad that Microsoft had to shut it down within a day of launching. In a statement given to Business Insider, Microsoft said: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”