Many of us have enjoyed political leaders’ entertaining reels and memes, set to wildly popular songs and created with deepfake technology that is now cheap and easy to access. But harmless as they seem, a dangerous potential looms.
Last year, fake images depicting former US President Donald Trump’s arrest went viral on social media, causing a huge uproar among supporters who believed they were real.
Now, picture a hyper-realistic deepfake of a major political leader circulating on social media, declaring withdrawal from the elections just a day before polls open. Imagine it playing on giant screens from Wall Street to Indian cities, and reaching the remotest corners via smartphones. The suddenness of such an event could drastically sway public opinion and election outcomes in major democracies worldwide.
Safeguarding the democratic process against these threats has therefore become crucial in 2024, as over 60 countries representing half the world’s population, an estimated 4 billion voters, go to the polls this year, including India, the US, the UK, Russia, Ukraine, Indonesia, Pakistan, Bangladesh, the Maldives, and Sri Lanka.
It’s Already Here
Swift action and increased vigilance are needed: services offering “deepfakes for $24 a month” were already in use in the lead-up to Bangladesh’s recent elections. And the problem is not confined to Bangladesh.
AI-generated content is also being used to discredit opponents in India, as the country’s recent leaked-audio incident showed. K. Annamalai, the BJP chief in Tamil Nadu, released audio clips allegedly featuring the DMK’s Palanivel Thiagarajan accusing his own party of corruption and praising the BJP. Thiagarajan vehemently denied the clips’ authenticity, attributing them to artificial intelligence.
Moreover, India’s largely rural population, only recently introduced to such sophisticated technology, is still ill-equipped to guard against its perils, although many users have built enough resilience to at least recognise fake messages forwarded over WhatsApp.
The threat becomes even more real as tools like HeyGen and D-ID can generate convincing deepfakes within seconds and are available at low cost.
Indian companies that produce AI-generated deepfake video and audio have also started receiving calls from regional and international politicians asking for AI videos for election campaigns.
The CEO of RephraseAI voiced his concern about the rising use of deepfakes and their possible deployment against political opponents. He recounted one instance in which the company received a request from Kenya, offering a substantial sum, to create personalised deepfake videos of opposition leaders in the style of its Cadbury ad starring Shah Rukh Khan.
Meanwhile, Divyendra Singh Jadoun, the founder of The Indian Deepfaker, previously hesitant to create deep fake campaign videos for state elections, is now preparing to produce them for the upcoming general election. However, these will be personalised video messages from politicians for party workers, not voters, that can be sent on WhatsApp.
“They can have an impact, because there are hundreds of thousands of party workers and they will, in turn, forward them to their friends and family,” he said, adding that they will add watermarks.
Tools for Deep Fake Detection
Given the situation, there’s an increasing need for advanced tools for detection. Many social media aggregators and tech giants are aiding this fight.
Intel’s Real-Time Deepfake Detector (FakeCatcher) stands out with a staggering 96% accuracy rate. It harnesses photoplethysmography (PPG) to scrutinise videos for subtle “blood flow” indicators, a unique approach that differentiates genuine footage from AI-generated fabrications.
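Intel has not published FakeCatcher’s full pipeline, but the core PPG idea can be illustrated in a few lines: in genuine footage, the average green-channel brightness of facial skin pixels oscillates faintly at the heart rate, while purely synthetic faces carry no such physiological rhythm. The sketch below is a hypothetical illustration, with synthetic signals standing in for real face tracks; it simply scores how much spectral energy falls in the plausible heart-rate band (0.7–3 Hz):

```python
import numpy as np

def pulse_strength(green_means, fps):
    """Score how strongly a heartbeat-like rhythm (0.7-3 Hz, i.e. 42-180 bpm)
    is present in the per-frame mean green-channel intensity of a face
    region, a crude photoplethysmography (PPG) signal."""
    signal = green_means - np.mean(green_means)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)               # plausible heart rates
    return spectrum[band].max() / spectrum[1:].sum()     # peak share, DC excluded

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)

# Genuine footage: a faint 1.2 Hz (72 bpm) pulse riding on sensor noise.
real = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.2, t.size)
# Synthetic face: noise only, no physiological rhythm.
fake = 120 + rng.normal(0, 0.2, t.size)

print(pulse_strength(real, fps) > pulse_strength(fake, fps))  # True: real clip shows a pulse
```

A real detector would first need face detection, skin-region segmentation, and motion compensation before such a per-frame signal is meaningful; this sketch only shows the final spectral test on the extracted signal.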
WeVerify, a project dedicated to debunking falsified content, relies on a multifaceted strategy encompassing content verification, social network analysis, and a blockchain-based database to expose and contextualise fabricated media.
Microsoft’s Video Authenticator Tool meticulously examines subtle grayscale variations and provides an instant confidence score, allowing deepfakes to be identified immediately in both images and videos.
Moreover, the Phoneme-Viseme Mismatch Detection technique, conceived by researchers from Stanford and UC, capitalises on inconsistencies between spoken sounds (phonemes) and the corresponding mouth shapes (visemes), a telltale sign of deepfake manipulation.
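The published technique profiles specific phonemes, such as M, B and P, whose visemes require closed lips, and checks whether the mouth actually closes when they are spoken. As a loose, hypothetical illustration of the same audio-visual consistency idea, the toy sketch below correlates a per-frame mouth-openness track with audio loudness on synthetic data; real tools operate on extracted facial landmarks and aligned phoneme transcripts rather than toy arrays:

```python
import numpy as np

rng = np.random.default_rng(1)

def av_consistency(mouth_open, audio_energy):
    """Pearson correlation between per-frame mouth openness and audio
    loudness; a genuine talking-head clip should score near 1."""
    return np.corrcoef(mouth_open, audio_energy)[0, 1]

frames = 200
# Hypothetical per-frame audio loudness of a speech clip.
audio_energy = np.clip(rng.normal(0.5, 0.3, frames), 0, None)

# Genuine video: lips track the speech, plus small measurement noise.
real_mouth = 0.8 * audio_energy + rng.normal(0, 0.05, frames)
# Crude lip-sync fake: mouth motion unrelated to the audio.
fake_mouth = np.clip(rng.normal(0.5, 0.3, frames), 0, None)

print(av_consistency(real_mouth, audio_energy))  # close to 1
print(av_consistency(fake_mouth, audio_energy))  # near 0
```

The design choice here is deliberate: correlation over a whole clip is far weaker evidence than the per-phoneme test the researchers describe, but it conveys why mismatched mouth movements are detectable at all.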
Govt Takes Note
Another positive is that leaders worldwide are taking note of the growing threat. In a recent address to the media, PM Modi raised similar concerns, urging the press to play a role in educating the public about the phenomenon.
“There is a very big section of society which does not have a parallel verification system,” he said, adding that just as products like cigarettes carry health warnings, deepfakes should also carry disclosures.
Union Minister Ashwini Vaishnaw, emphasising the threat, unveiled a four-point plan to combat the challenge: detecting deepfakes, preventing their spread, strengthening the reporting mechanism, and fostering public awareness.
The minister highlighted the need for an effective regulatory mechanism, through either new laws or amendments to existing rules. The meeting also discussed technological solutions such as watermarking and labelling videos, and potentially banning apps that facilitate deepfake creation. The government’s stern advisory to social media platforms warned of consequences, including the loss of immunity, for failing to remove deepfake content swiftly.