Artificial Intelligence is one of the most impactful technologies the world has seen in recent years. No longer confined to the research and development labs of academia and large institutions, it has successfully made its way into the normal, day-to-day functioning of society.
Like any other technology, AI comes with its own set of challenges. However, the stakes are higher, considering the impact AI gone rogue can have. Below, we list some of the most controversial moments of the AI industry in 2020. If nothing else, consider this a cautionary alarm for the years ahead.
Clearview AI & Facial Recognition Technology
‘The Secretive Company That Might End Privacy As We Know It’: this was the title of a New York Times article published in January 2020. The article was about Clearview AI, a startup founded by Australian entrepreneur Hoan Ton-That. Facial recognition software is contentious even in its most basic form, but Clearview AI raises an added concern because of the unscrupulous way it operates. The company reportedly scraped billions of personal images of people around the world from the web and social media accounts, storing them in its database without any form of consent. These stored images were then matched against photos of unknown individuals using facial recognition technology.
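Clearview’s system is proprietary, but the general workflow of matching a probe photo against a database of scraped images can be sketched with the open-source face_recognition Python library; the file names below are hypothetical stand-ins:

```python
import face_recognition

# Hypothetical "database" entry: an encoding precomputed from a scraped image.
known_image = face_recognition.load_image_file("scraped_photo.jpg")
known_encodings = face_recognition.face_encodings(known_image)

# Encode the probe photo of the unknown individual.
probe_image = face_recognition.load_image_file("probe_photo.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

if known_encodings and probe_encodings:
    # Compare the probe against the stored encodings; True means the
    # faces fall within the library's default distance threshold (0.6).
    matches = face_recognition.compare_faces(known_encodings, probe_encodings[0])
    print("Match found" if any(matches) else "No match")
```

At Clearview’s reported scale, the same comparison would run against billions of stored encodings rather than a single file, which is precisely what made the lack of consent so alarming.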
Clearview’s data and technology were reportedly accessible to law enforcement agencies in at least 26 countries, even though the company maintained that it had granted access only to the concerned authorities in the USA and Canada. In India, the Gujarat police were reported to have used Clearview’s facial recognition technology. Further, despite official denials from the company, a leaked report revealed that, apart from 2,200 law enforcement organisations around the world, many private organisations had also adopted the technology.

As the controversy gained ground, Facebook, Twitter, Google, and YouTube sent cease-and-desist letters to the company after realising it had been scraping images from their platforms. Several countries, including Canada, have since either banned the technology outright or are actively probing the matter.
GPT-3
In June this year, researchers from OpenAI released GPT-3, a state-of-the-art language model. Its predecessor, GPT-2, had 1.5 billion parameters and was considered the largest model at the time of its release; GPT-3 surpassed it by miles with 175 billion parameters. For perspective, that makes it ten times larger than Microsoft’s Turing-NLG, the next-largest language model.
Upon its introduction, several commentators hailed it as revolutionary: a machine that could write code like humans, and go a step further by writing blogs, stories, websites, and apps. One of the best-known examples is the college student who ran an entire blog using GPT-3. In another case, a GPT-3-powered bot was caught interacting with people in the comment sections of Reddit threads. The Guardian also published an entire article written by the model.
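GPT-3 itself was accessible only through OpenAI’s private API, but the prompt-in, completion-out pattern behind these demos can be illustrated with its open predecessor GPT-2 via the Hugging Face transformers library; a minimal sketch:

```python
from transformers import pipeline

# Load GPT-2, the open-source predecessor of GPT-3, as a stand-in.
generator = pipeline("text-generation", model="gpt2")

# Give the model an opening line and let it continue the text.
prompt = "The most controversial AI stories of 2020 were"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

The blog posts, Reddit comments, and the Guardian article above all amount to this same loop, just with a far larger model and carefully curated prompts.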
GPT-3, for all its capabilities, was not immune to criticism. Given its ability to produce very human-like text, obvious concerns were raised about its potential misuse for spam and phishing, fraudulent academic essay writing, social engineering pretexting, and abuse of legal processes, among others.
The criticism gained further ground when OpenAI decided to grant Microsoft exclusive access to the model. The move was severely criticised, even by bigwigs such as Elon Musk, himself a co-founder of OpenAI.
GPT-3 also has other shortcomings as software, including bias in generated text, a poor grasp of meaning that undermines longer, coherent pieces, and poor generalisation. Recently, Yann LeCun, VP & Chief AI Scientist at Facebook, trashed OpenAI’s massive language model, calling out people’s unrealistic expectations of a system that, in his words, ‘is entertaining, and perhaps mildly helpful as a creative tool.’
Deepfake Videos
Deepfakes have been controversial from the start, and their potential misuse has worried authorities worldwide. While earlier deepfakes were mostly restricted to funny and entertaining videos, the growing accessibility of easy-to-use tools and advances in generative adversarial networks (GANs) have made it easier for bad actors to produce almost genuine-looking AI-generated videos and images.
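For context, a GAN pits a generator network against a discriminator: the generator learns to produce samples the discriminator cannot tell apart from real ones, which is what makes the fakes so convincing. A minimal PyTorch sketch of that adversarial loop on toy data (all dimensions and hyperparameters here are illustrative, not those of any deepfake system):

```python
import torch
import torch.nn as nn

# Toy dimensions: illustrative only, far smaller than any real deepfake model.
NOISE_DIM, DATA_DIM, BATCH = 16, 64, 32

# Generator: maps random noise to a fake sample.
gen = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
# Discriminator: scores how "real" a sample looks.
disc = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)  # stand-in for real images
    fake = gen(torch.randn(BATCH, NOISE_DIM))

    # Train the discriminator to label real samples 1 and fakes 0.
    d_loss = (loss_fn(disc(real), torch.ones(BATCH, 1)) +
              loss_fn(disc(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to make the discriminator label fakes as real.
    g_loss = loss_fn(disc(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As the two networks improve in tandem, the generator’s output becomes progressively harder to distinguish from real data, which is exactly the property that deepfake tools exploit at scale.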
There were several instances of celebrities’ faces, especially women’s, being morphed onto highly objectionable videos. Deepfakes were also feared to be a tool for mischief in elections, in a year when major countries and states went to the polls. India, for example, saw its first use of deepfakes in an election campaign when a BJP leader released a campaign video in which he originally spoke in Hindi; using deepfake techniques, the same video was then released in several other languages. While this particular use seemed rather harmless, critics were quick to warn that it could set a hazardous precedent.
AI-Based Grading System In The UK
Lockdowns across multiple countries meant that the normal functioning of life and society took a major hit. Every sphere, from healthcare and governance to education and social interaction, faced a fresh set of challenges. In education, many schools and colleges were closed indefinitely for fear of the virus, and exams were postponed or cancelled altogether in view of the situation.
In August, however, the UK’s exam regulator, the Office of Qualifications and Examinations Regulation (Ofqual), adopted an algorithm-based grading system to gauge student performance after the A-level examinations, which determine which university a student can attend, had to be cancelled. Parents and students protested strongly, claiming the algorithm was heavily biased against poorer students. Experts also called the system ‘unethical and harmful to education’. In view of the harsh and relentless criticism and protests, the UK government finally dropped the algorithmic grades and decided to rely instead on teachers’ assessments and other parameters to allot grades.
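Ofqual’s actual standardisation model was considerably more involved, but its widely reported core idea was to map each school’s teacher-supplied rank order of students onto that school’s historical grade distribution. A simplified, hypothetical sketch of that idea (the function, names, and distribution below are invented for illustration):

```python
def allocate_grades(ranked_students, historical_distribution):
    """Map a school's rank-ordered students onto its past grade distribution.

    ranked_students: list of names, best first (teacher-supplied ranking).
    historical_distribution: {grade: fraction of past students at this school}.
    """
    n = len(ranked_students)
    grades, cutoff, index = {}, 0.0, 0
    for grade, fraction in historical_distribution.items():
        cutoff += fraction * n
        # Hand out this grade to the next block of ranked students.
        while index < round(cutoff) and index < n:
            grades[ranked_students[index]] = grade
            index += 1
    return grades

# A strong student at a historically weak school is capped by the
# school's past results: the source of the bias complaints.
print(allocate_grades(["Asha", "Ben", "Cara", "Dev"],
                      {"A": 0.25, "B": 0.25, "C": 0.5}))
```

Because the grade distribution is fixed by the school’s history regardless of the current cohort’s ability, a top student at a school with poor past results could not receive a top grade, which is why critics saw the system as structurally biased against students from disadvantaged schools.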
‘Terrible’ Quality Of Reviews At NeurIPS
This last one isn’t an AI controversy per se, but it casts a poor light on research quality and practices. It concerns the annual Conference on Neural Information Processing Systems, NeurIPS 2020, held virtually this year. The popular machine learning event saw 38% more submissions than last year. The review period for paper submissions began in July, and by August the conference had sent out the reviews.
These reviews came under the scanner for their ‘terrible’ quality: many were unclear or incomplete, and in a few cases both. Critics demanded greater accountability for poor reviewers, and some even went so far as to call for barring reviewers with a proven track record of bad reviews.