Major AI Controversies Of 2021



While 2021 was an exciting year for AI in terms of innovations and inventions, it was not immune to controversies and scandals. In this article, we take a look at some of the most prominent ones that grabbed headlines.

Tesla Humanoid

Of the range of announcements made at Tesla AI Day 2021, the one that caught the fancy of many people was the humanoid robot, revealed in a unique manner: a human dressed in a white bodysuit and a shiny mask walked on stage during the event. Called Optimus, this humanoid robot, standing five feet eight inches tall and weighing 125 pounds, would be capable of performing repetitive tasks; the first prototype is likely to be released next year. The robot will leverage Tesla's existing tech for automated machines and its FSD (Full Self-Driving) software.


The industry was divided in its reaction to Optimus. While many lauded Musk's vision, others were sceptical about whether the company could deliver on its promise within the announced time frame. Beyond the delivery timeline, the main point of contention is whether the tech capabilities required to build such a humanoid robot exist at all; so far, they have remained, at best, at the conceptual stage. The general trend in the robotics industry has been worrying over the past few years, with many innovative companies having to shut shop. Boston Dynamics was saved at the last minute when Hyundai bought it for $1.1 billion, an amount considered low for a company as innovative as Boston Dynamics.

In the past, too, Musk has failed to deliver on many of his promises, inviting doubts about Optimus as well. That said, if the grand plan does take off, Tesla will have achieved a major breakthrough.

Patent for Invention by AI

In a first, the Companies and Intellectual Property Commission of South Africa granted a patent for an invention by an AI system. The system, called Device for the Autonomous Bootstrapping of Unified Sentience (DABUS), invented a fractal geometry-based food container. DABUS itself was created by Missouri physicist Stephen Thaler. The patent application stated that 'the invention was autonomously generated by an artificial intelligence.'

In July, the Federal Court of Australia ruled that the AI machine – that is, a mathematical equation that analysed and processed data – can be an inventor under Australian patent laws.

That said, the patent application for the DABUS invention was rejected by the US and the UK. Awarding a patent to an invention by an AI system has been at the centre of major controversy. While supporters call it a progression, a section of stakeholders seemed apprehensive. The European Patent Office stated that the inventor on a patent application must have legal capacity. Similarly, the United States Patent and Trademark Office (USPTO) said that the ‘Application Data Sheet’ with the patent application did not identify the inventor by its legal name.

GitHub Copilot

In July, Microsoft and OpenAI released the technical preview of the AI-based coding assistant GitHub Copilot. Built on Codex, the assistant reads the context of the code a human programmer is writing and suggests the next lines, functions, and even whole programs. People have been calling it a game-changer, even as there are murmurs about such software taking away human programmers' jobs (Copilot's creators have addressed these concerns).
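To illustrate the idea, here is a hypothetical sketch of the kind of completion such an assistant produces: the programmer types a signature and docstring, and the tool fills in the body from that context. The function and its suggested body below are invented for illustration, not actual Copilot output.

```python
# A programmer writes the signature and docstring; a Copilot-style
# assistant suggests the implementation from that context.
# (Hypothetical illustration, not actual Copilot output.)

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # --- suggested completion begins here ---
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("Hello"))                           # False
```

The point is that the suggestion is synthesised from patterns in the training code, which is what makes the copyright questions below so contentious.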

However, a few programmers have raised copyright concerns. Since the tool is trained on publicly available code repositories, many of them licensed and under copyright protection, what happens when such snippets are reproduced? And since the tool is trained on free and open-source code, can the parent organisations (Microsoft, OpenAI, and GitHub) monetise it?

Another controversy hit GitHub Copilot when a study revealed that the code generated by the tool might include bugs or design flaws that an attacker can exploit.

Uber Facial Recognition

A few months back, the UK-based App Drivers and Couriers Union (ADCU) and Workers Info Exchange (WIE) appealed to Microsoft to suspend the sale of its Face API to Uber. The unions said that in at least seven cases, failed facial recognition had led to the suspension of drivers and, in a few cases, the revocation of their licences by Transport for London.

However, an Uber spokesperson maintained that such verification is necessary to prevent potential fraud, and claimed that decisions to remove drivers involved human review.

Google AI Ethics Team

Timnit Gebru was controversially fired from Google over an unpublished research paper on the ethical issues of recent advances in AI and large language models. Google reportedly found the work objectionable and asked Gebru to retract the paper or remove her name from the list of coauthors. Early in 2021, Gebru's former colleague at Google, Margaret Mitchell, who was reportedly collecting evidence of Gebru's wrongful firing, was also fired; per Google, for violating its security policies.

Later, Samy Bengio, a well-known researcher with Google Brain, announced that he was quitting the company. While Bengio said he was leaving to pursue 'exciting opportunities', some linked his resignation to the firings of Gebru and Mitchell. Soon after Gebru was fired, Bengio had written in a Facebook post, “I stand by you, Timnit.”

Facebook AI Mislabels

Facebook’s AI kicked up a storm when it mislabelled a video featuring Black men as a video about primates. The video, uploaded by the Daily Mail, documented an encounter between a White man and a group of Black men who were celebrating a birthday. After users finished watching it, the AI showed a prompt asking whether they would like to keep seeing ‘videos about primates’. Soon after the incident was reported, Facebook disabled the topic-recommendation feature and apologised for the mishap.


Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
