
8 Real-Life Examples When Algorithms Turned Rogue, Causing Disastrous Results


Artificial intelligence and machine learning have made human lives easier, but there’s no denying that when even a deftly written algorithm goes wrong, the results can be disastrous. From deaths and racism to monetary losses, algorithms have the power to create as well as destroy.

But experts believe that these biases are a product of careless algorithm design, not malice.

Researchers say that the roots of these biases can be traced to history itself: algorithms are trained on past data that was collected and labelled by humans, and so they absorb human biases automatically.
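To make that concrete, here is a toy, entirely synthetic sketch of how historical decisions can encode a bias that any model trained on them would inherit; the group labels, the “skill” variable and the penalty are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # synthetic 0/1 demographic label
skill = rng.normal(0, 1, n)          # true merit, identical across groups

# Simulated historical human decisions: merit-based, but with an
# arbitrary penalty applied to group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Any model trained on `hired` as ground truth will learn this gap:
for g in (0, 1):
    rate = hired[group == g].mean()
    print(f"group {g}: historical hire rate = {rate:.0%}")
```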

“Learned intermediaries will be technical personnel trained to evaluate the output of machine-learning algorithms and detect bias on the margins and legitimate auditors who must conduct periodic reviews of the data algorithms with the objective of making them stronger and more privacy protective. They should be capable of indicating appropriate remedial measures if they detect bias in an algorithm. For instance, a learned intermediary can introduce an appropriate amount of noise into the processing so that any bias caused over time due to a set pattern is fuzzed out,” said Rahul Matthan, a fellow with the Takshashila Institution, in his recently published paper, Beyond Consent: A New Paradigm for Data Protection.
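As a rough illustration of the noise injection Matthan describes, here is a minimal sketch that assumes the algorithm emits numeric scores; the Laplace distribution and the scale value are our assumptions, not his:

```python
import numpy as np

def fuzz_scores(scores, scale=0.05, seed=None):
    """Add zero-mean Laplace noise to algorithmic scores so that any
    systematic pattern accumulated over time is partially fuzzed out.
    `scale` trades accuracy for stronger fuzzing and would need tuning."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    return scores + rng.laplace(loc=0.0, scale=scale, size=scores.shape)

# Hypothetical usage: perturb loan-approval scores before thresholding.
print(fuzz_scores([0.71, 0.49, 0.52, 0.90], seed=42))
```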

Here is our pick of the disastrous aftereffects of algorithms gone wrong:

1. Facebook Chatbots


Facebook researchers recently noticed that two of their artificially intelligent bots were no longer using coherent English to communicate. Interestingly, when the researchers dug deeper, they found that the AI agents, Bob and Alice, were speaking a variation of English because, reportedly, they had deduced that the English language “lacks reward”.

2. Stock Market Algorithm

Knight Capital, a firm that specialised in executing trades for retail brokers, took around $440 million in cash losses in 2012, after a faulty deployment of new trading software caused its algorithm to flood the market with erroneous orders in under an hour.

3. Autopilot Gone Wrong


After West Air Sweden Flight 294 crashed in 2016, investigators were able to determine that the Air Data Inertial Reference Unit, or ADIRU (a device that tells the plane how it is moving through space), had begun to send erroneous signals. But they could not figure out why.

4. Solid Gold Bomb


The make-your-own t-shirt company Solid Gold Bomb got into trouble when its riffs on the World War II slogan “Keep Calm And Carry On” went grossly wrong. To maximise reach, the company had reportedly used a dictionary algorithm to spit out slogan variations and list them all as made-on-order designs. The results were horrendous: t-shirts carrying slogans like “Keep Calm and Rape Her” began appearing on e-retail websites.
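Below is a minimal sketch of how such a dictionary-driven generator works; the word lists are benign placeholders, and the blocklist parameter is precisely the safeguard the original pipeline apparently lacked:

```python
from itertools import product

# Benign placeholder word lists; Solid Gold Bomb's actual dictionary
# was evidently never vetted for offensive combinations.
VERBS = ["Hug", "Call", "Thank", "Forgive"]
OBJECTS = ["Her", "Him", "Them", "Your Dog"]

def generate_slogans(verbs, objects, blocklist=frozenset()):
    """Yield 'Keep Calm and <verb> <object>' variants, skipping any
    combination that contains a blocklisted word."""
    for verb, obj in product(verbs, objects):
        if verb.lower() in blocklist or obj.lower() in blocklist:
            continue
        yield f"Keep Calm and {verb} {obj}"

for slogan in generate_slogans(VERBS, OBJECTS):
    print(slogan)
```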

5. Amazon Book Costs A Bomb


In 2011, The Making of a Fly, a biology textbook about flies, was priced at over $23 million on Amazon. The cause was later traced to two sellers who had set up repricing algorithms that watched each other’s prices and then reset their own, each adjustment triggering the next.
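The runaway loop is easy to reproduce. The sketch below uses the repricing multipliers reported at the time (one seller priced at 0.9983 times its rival’s copy, the other at 1.270589 times); the starting prices are assumptions:

```python
A_FACTOR = 0.9983    # seller A undercuts seller B by 0.17%
B_FACTOR = 1.270589  # seller B prices ~27% above seller A

price_a, price_b = 35.0, 40.0  # assumed starting textbook prices
rounds = 0
while price_a < 23_000_000:
    price_a = A_FACTOR * price_b  # A re-prices against B
    price_b = B_FACTOR * price_a  # B re-prices against A
    rounds += 1

print(f"price reaches ${price_a:,.2f} after {rounds} repricing rounds")
```

Because the combined multiplier per round is about 1.27, the price grows exponentially, crossing $23 million in a few dozen automated updates.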

6. Microsoft’s AI Bot


Microsoft’s chatbot Tay was meant to be an experiment in AI and machine learning, but it took only 24 hours for the bot to turn racist. Tay was supposed to speak like a millennial, learning from the people it interacted with on Twitter and the messaging apps Kik and GroupMe. But the slide from “Humans are super cool!” to “Hitler was right” was disastrous.

7. Facial Recognition Algorithm

There have been several cases in the US where algorithm-driven systems have mistakenly identified ordinary citizens as criminals and automatically suspended their driving licences. Similar cases have occurred at airports all over the world, where innocent travellers have been “recognised” as terrorists.

8. Beauty Pageant Algorithm Turns Racist

In 2016, thousands of people from across the world submitted their photos to Beauty.AI, an international beauty contest judged by machines. The contest was supposed to run on an advanced algorithm free of human biases, to find out what “true beauty” looked like in the eyes of a computer. But things went wrong quickly: the algorithm began associating skin colour with beauty and picked its winners overwhelmingly on the basis of race.



Prajakta Hebbar

Prajakta is a Writer/Editor/Social Media diva. Lover of all that is 'quaint', her favourite things include dogs, Starbucks, butter popcorn, Jane Austen novels and neo-noir films. She has previously worked for HuffPost, CNN IBN, The Indian Express and Bose.
