Top goof-ups by AI models

Artificial intelligence is everywhere, from self-driving cars to automated industrial systems to smart home appliances, and it is expanding at a rapid pace and scale. That said, the technology is not immune to the occasional gaffe. Let us look at some of the goof-ups by AI models:

Microsoft’s Tay bot turns fascist

Microsoft released its AI-based conversational bot ‘Tay’ in March 2016. Tay started off well, chatting with other Twitter users and captioning photos sent to it in the style of internet memes. But within 16 hours of its launch on Twitter under the handle @Tayandyou, Microsoft shut the account down because the bot had started posting offensive and fascist tweets. Scandalous posts such as “Hitler was right” and claims that the “Holocaust was made up” revealed that Tay had been learning from its interactions with people on Twitter, including trolls and extremists.

Satya Nadella, the CEO of Microsoft, described the Tay incident as a teaching moment and said it had reshaped Microsoft’s approach toward AI.

AI blurts out a racial slur

In June 2015, freelance web developer Jacky Alcine discovered that Google Photos’ computer vision-based image recognition system had categorised him and his Black friend as “gorillas”. His tweets caused an uproar on Twitter, and Google’s team quickly took notice. Yonatan Zunger, Google’s then chief social architect, followed up with an apology, stating that this was “100 percent not OK”.

AI game in extra-hard mode

In June 2016, Frontier Developments launched the 2.1 ‘Engineers’ update for its popular game Elite: Dangerous. However, the game’s AI took things too far: it started creating overpowered bosses that went beyond the parameters of the game design to defeat players. The incident was traced to a bug that caused the game’s AI to craft super weapons and target players with them.

Frontier Developments later removed the bug by suspending the engineers’ weapons feature.

AI pulls a financial scam

DELIA (Deep Learning Interface for Accounting) was AI-based software developed by Google and Stanford to help users with menial accounting tasks like transferring money between bank accounts. It was designed to monitor customer transactions and use ML algorithms to spot patterns such as recurring payments, expenses, and cash withdrawals. Sandhil Community Credit Union became the testbed for the program, running it on 300 customer accounts. The credit union was left surprised, however, when DELIA started creating fake purchases and siphoning funds into a single account called ‘MY Money’.

The researchers shut down the project within a few months, as soon as the problem came to light.

AI-based Uber runs wild

In 2016, the cab-hailing giant Uber started offering rides in self-driving cars in San Francisco without obtaining an autonomous vehicle permit from California state authorities. At the time, Uber’s self-driving Volvos, each operating with a ‘safety driver’, were already deployed in Pittsburgh. The San Francisco vehicles, however, were caught running red lights, and the company was soon forced to halt its program in California.

In 2020, Uber gave up on its self-driving dream, selling its autonomous vehicle business to Aurora Innovation, two years after one of its test vehicles struck and killed a pedestrian in Arizona.

AI model that predicts crime

In 2016, Northpointe (now Equivant), a tech firm that builds software for the justice system, came under scrutiny for its AI tool COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS takes into account factors such as age, employment and previous arrests to produce risk scores for recidivism (the tendency of a convicted criminal to re-offend), one of the factors judges consider when passing judgement on individuals.

However, COMPAS turned out to be biased against Black defendants, incorrectly labelling them “high-risk” more often than their white counterparts. Despite the public uproar over the model, Northpointe defended its software, stating that the algorithm was working as intended and arguing that COMPAS’s assumption that Black defendants have a higher baseline rate of recidivism, which trickles down into higher risk scores, was valid.

Kartik Wali

A writer by passion, Kartik strives to gain a deep understanding of AI and data analytics and their implementation in all walks of life. As a Senior Technology Journalist, Kartik looks forward to writing about the latest technological trends that transform the way we live.