
Biggest AI Goof-Ups That Made Headlines In 2020


The innovation of GPT-3 and advances in facial recognition technology, brain chips, chatbots, self-driving cars, drones and robotics marked 2020 as the year of artificial intelligence. However, like any other technology, AI came with its challenges. From bias to inaccuracy, AI proved its immaturity in many cases.

In fact, prominent tech leaders, researchers and scientists such as Elon Musk, Yann LeCun and Bill Gates have repeatedly warned the industry about the hype around AI and the consequences it can bring if not handled appropriately by the tech giants. Such critical judgements stem from the many instances where AI failed to demonstrate its value to the industry.

From AI’s failed COVID predictions to an AI camera mistaking a referee’s bald head for the ball in a football match, here are the eight biggest AI goof-ups and blunders of 2020 that made the front page, in no particular order.

Also Read: 10 Biggest Data Breaches That Made Headlines In 2020


The inaccuracy of YouTube’s AI moderators

To cope with the COVID pandemic, YouTube, like many other companies, turned to AI and machine learning in March 2020 to moderate videos and content on its platform. With fewer human employees, the heavy dependence on AI led to the removal of 11 million videos between April and June alone, a far higher number than usual. The takedowns drew the attention of many content creators, and YouTube faced heavy backlash for removing videos that did not violate any rules. Of these takedowns, 320,000 were appealed, and about half were reinstated. While the company blamed the machines entirely, this AI blunder is notable, and in September, YouTube announced it would bring back the human moderators it had sidelined during the pandemic.

Read more about it here.

AI hits the wall & mistakes referee for a ball

In one week of AI goof-ups in November, two instances made headlines: first, a self-driving car in a motor race drove straight into a wall, and second, an AI camera at a live football match tracked the bald head of a linesman instead of the ball. In the self-driving car incident, an engineer explained to the media that the failure happened during the initialisation lap, when the software set the steering command to NaN (not a number), which sent the car into the wall. Similarly, in the AI camera case, the algorithm confused the bald head with the ball. Such scenarios, where AI is easily confused between different objects, made researchers and scientists ask how AI would react in cases where human lives are at stake.
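The “steering to NaN” failure illustrates a well-known floating-point pitfall: once a NaN enters a computation, it propagates through every subsequent arithmetic operation. The racing team’s code is not public, so the sketch below is a purely hypothetical Python illustration of how one uninitialised state variable can poison a control command, and how a finiteness check can catch it before it reaches the actuators:

```python
import math

def compute_steering(target_heading, current_heading, gain=0.5):
    # Hypothetical proportional steering controller.
    error = target_heading - current_heading
    return gain * error

# If initialisation leaves the heading estimate as NaN, every
# arithmetic operation that touches it also yields NaN:
uninitialised_heading = float("nan")
command = compute_steering(90.0, uninitialised_heading)
print(math.isnan(command))  # True -- NaN propagates into the command

# A defensive controller rejects non-finite commands instead of
# passing them straight to the steering actuator:
def safe_steering(command, fallback=0.0):
    return command if math.isfinite(command) else fallback

print(safe_steering(command))  # 0.0 -- neutral steering fallback
```

The function names and the proportional controller here are invented for illustration; the real lesson is simply that control outputs should be validated as finite before being acted on.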

Read more about it here.

Fumbling of facial recognition technology

The COVID pandemic made face masks the new norm; however, this change completely perplexed facial recognition algorithms built before the pandemic. Notably, the Face ID on Apple’s recently released iPhone 12 failed to recognise people wearing face masks, raising concern about all facial recognition models trained on pre-pandemic, unmasked data. Testing this theory, a recent study evaluated prominent commercial facial recognition applications and found that 89 of them produced errors. Such a drop in accuracy highlights how these facial recognition systems can be deceived and can produce misinformation about people.

Read more about it here.

The wrongful arrest of Robert Williams

In a similar facial recognition failure, an African American man was wrongfully arrested after a facial recognition system matched his photo to a shoplifting suspect. Facial recognition has been a controversial technology since its inception, and this faulty-algorithm news caused massive havoc in the industry. It not only raised concerns but also pushed tech giants like Amazon, Microsoft, IBM and Google to stop offering their facial recognition systems to police authorities. Clearview AI, another facial recognition company, was also criticised this year for collecting people’s personal information and images from their social media accounts without any proper permission. Such instances have also pushed many researchers and scientists to work towards making this technology less biased and more useful for humanity.

Read more about it here.

Also Read: Five Most Controversial Moments Of AI In 2020

The A-Level fiasco

In another attempt to cope with the COVID pandemic, the UK government, in August, substituted teachers with an algorithm to score students for the cancelled A-level (advanced-level) qualification exams. However, the algorithm scored students partly on the historical performance of individual secondary schools, which left many students with grades far lower than they had expected. For some students, the algorithm’s results would have made them ineligible for the university programmes they had hoped to attend. The scores of bright students from less-advantaged schools were lowered, while marks for more affluent schools were raised, showing that the algorithm encoded a societal bias that led it to grade students unfairly.

Read more about it here.

The most devastating software mistake

In another AI goof, a prominent epidemiologist from Imperial College London, Neil Ferguson, lost his job. In this case, the model inaccurately forecast the possible deaths due to COVID-19 with and without a lockdown. While there have been many such instances, this one made the headlines because of the model’s importance in framing policy in the UK and US. The inaccurate death predictions led to structural mistakes in analysing the outbreak response: according to experts, the countries not only ignored standard contact tracing but also missed continuous monitoring to identify patients with symptoms, causing massive havoc and an increase in the number of COVID patients. Such scenarios highlight how complete dependence on AI can lead to irreparable outcomes.

Read more about it here.

Twitter & Zoom’s Racism

In both of these cases, the AI was biased against Black people, either cropping out or omitting their heads entirely. In Twitter’s case, US programmer Tony Arcieri ran an experiment on Twitter to check the algorithm’s bias. He noted that when he uploaded large photo collages of former US President Barack Obama and Republican Senate leader Mitch McConnell, Twitter’s image preview automatically cropped out Obama’s face instead of McConnell’s. After several similar experiments, the programmer stated that Twitter’s cropping is yet another example of bias in ML algorithms. In a similar case with Zoom, the video conferencing platform cropped out the entire head of a Black faculty member when he used a virtual background. Such instances will keep reminding us of the issues of working with biased algorithms.

Read more about it here.

Google’s inaccurate medical AI

No such list can be complete without a mention of Google. While AI has been making breakthroughs in the medical industry, in one instance, Google’s medical AI failed to prove its accuracy in real-world testing. The tool, which claimed to identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy, was first tested in Thailand. The AI tool did speed things up, but its real-world performance was not up to the mark. Because the deep learning models were trained on high-resolution images to ensure accuracy, images below that quality level were rejected in the field, causing a lot of inconvenience for the healthcare staff. The tool, in turn, received heavy backlash from the industry as well as the public, calling the usefulness of the technology into question.

Read more about it here.


Sejuti Das

Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
