From facial recognition systems fooling hundreds of people into losing their money, to NLP models spouting racist and sexist sentences, AI is becoming increasingly unethical, biased and dangerous. But, in a hopeful turn of events, 2021 also witnessed big tech companies, governments and judiciaries finally take some of the most foundational steps to curb the growth of unethical AI. We’ve curated a side-by-side of the year’s biggest ethical AI fiascos and the possible positives.
Twitter’s photo cropping algorithm
Twitter introduced a new norm for identifying and moderating AI bias by launching a bug bounty challenge, open to all, to find faults in its photo cropping algorithm, which it had disabled in March but wanted to monitor closely. The winning entry revealed that the algorithm favoured faces that are “slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits.” The other two winning entries suggested that the system was biased against people with white or grey hair and favoured English over Arabic script in images.
The Facebook whistleblower
In May, Frances Haugen left her position at Facebook, taking thousands of internal documents with her. In one of the year’s biggest reveals, the documents showed that Facebook was aware that its services were damaging teenagers’ mental health, inciting ethnic violence, and more, but chose to ignore the issues. “I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy,” she said in her testimony to Congress.
Clearview AI raised funding despite controversies
American facial recognition company Clearview AI is currently facing investigations by the British and Australian governments after scraping the web to collect billions of personal images of people worldwide without their permission. Despite being scrutinised by the governments of Britain, Australia and Canada, the US states of Vermont, New York and California, and the American Civil Liberties Union, the company raised $30 million in Series B funding this year. Additionally, in December, the company was given the go-ahead to secure a federal patent for its facial recognition software.
Copilot and copyright issues
Copilot is Microsoft and OpenAI’s invite-only tool that writes code based on human input, but the tool has faced severe copyright criticism. It is trained on publicly available code repositories, many of which are licensed and under copyright protection. Additionally, according to a study, the code GitHub Copilot generates may include bugs or design flaws that an attacker could exploit.
Facebook mislabelled Black men as primates
In September, Facebook’s AI stirred huge criticism online after it mislabelled a video featuring Black men as a video about primates. The video, uploaded by the Daily Mail, showed a White man with a group of Black men. After viewers watched it, an automated prompt asked whether they would like to keep seeing “videos about Primates”. Facebook apologised and disabled the entire topic recommendation feature.
Google’s ethical AI leads
Last year, Timnit Gebru, co-lead of the AI ethics team at Google, was fired after Google objected to her unpublished research paper on the ethical issues posed by today’s large language models. In 2021, her colleague Margaret Mitchell was fired for allegedly violating Google’s security policies while collecting evidence of Gebru’s wrongful dismissal. Soon after, Samy Bengio, a researcher on Google’s Brain team, resigned from the company, having earlier posted, “I stand by you, Timnit.”
The reach of Pegasus revealed
Developed by the Israeli cyber arms firm NSO Group, Pegasus is spyware that can be covertly installed on mobiles and other personal devices. According to an investigation by a consortium of media organisations, the software can record calls, copy messages, read emails, activate a phone’s microphone to listen in on conversations, and secretly film the user, threatening privacy and security. The tool was reportedly used to target journalists and was deployed by the governments of ten countries: Azerbaijan, Bahrain, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Hungary, India, and the United Arab Emirates (UAE). In India, the software was found on the phone of Rona Wilson, a government critic, in the months before his arrest.
Amazon drivers fired by an algorithm
At Amazon, third-party merchants and apps are used to manage warehouse workers and oversee contract and independent delivery drivers for quick turnarounds. These systems made frequent errors, which, according to a Bloomberg report, Amazon was aware of but tolerated to save on labour costs. For example, in July, the Amazon Flex delivery service, run by an algorithm, fired delivery personnel over failures in its selfie verification system. The algorithm failed to identify drivers’ photos when people lost weight, shaved their beards, got a haircut, or took a picture in low lighting.
The future hope
While algorithmic biases and challenges are growing at a tremendous rate, governments and companies worldwide are starting to take the necessary steps to prevent unethical AI. Judiciaries and governments across the globe are passing bills and making laws to monitor how big tech companies use data and algorithms. These efforts include increased scrutiny from the FTC, the European Union’s proposed AI regulatory framework, and the UK’s new plan to create gold standards for AI.
This year, the 193 member states of UNESCO adopted a historic agreement defining the common principles and values required to ensure the healthy development of artificial intelligence, creating a legal infrastructure for safe AI. Similarly, the Data & Trust Alliance, a New York-based non-profit, signed up major companies including General Motors, Nike, CVS Health, Deloitte, Humana, IBM, Mastercard, Meta and Walmart to collectively develop an evaluation and scoring system for AI software.
In fact, even big tech companies are starting to consider the side-effects of their technologies, with Twitter’s bug bounty programme being an excellent example. In a surprising turn of events, these tech giants are increasingly saying no to technologies involving facial recognition, voice mimicking software, and emotion analysis. A Reuters report revealed how Google, IBM, and Microsoft have been resisting and turning down such projects on account of ethics concerns.