
What Is the AI Incident Database?

Today, businesses and government organisations are increasingly deploying intelligent systems in safety-critical areas such as healthcare, credit scoring, law enforcement, aircraft control, and corporate recruitment. Failures of such systems pose severe risks to life and expose the limits of intelligent systems operating in real-world conditions. The wrongful arrest of Robert Williams due to a flawed facial recognition system is a case in point.

Experts believe AI practitioners should be aware of past failures of intelligent systems to avoid such fiascos. To that end, the Partnership on AI (PAI), a nonprofit organisation established to outline best practices in AI technologies, has introduced the AI Incident Database (AIID). A curated repository of AI failures, AIID helps practitioners understand what can go wrong when such systems are deployed. Simply put, the database makes it easy for AI practitioners to learn from previous mistakes.

Led by Sean McGregor, technical lead for the IBM Watson AI XPRIZE, the AI Incident Database provides an infrastructure supporting AI best practices, a dataset of more than one thousand incidents, and an architecture for building research products.

Nuts & Bolts

The AI Incident Database (AIID) catalogues more than 1,000 publicly available incident reports, including documents and reports from the academic press. According to the paper released by McGregor, these AI failure reports serve a range of purposes: they provide multiple viewpoints on an incident, and the number of publication types covering an incident doubles as a proxy for interest in it. Additionally, sampling multiple reports per incident provides more comprehensive coverage, which increases practitioners' chances of discovering relevant incidents.
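
To make the report-count-as-interest idea concrete, here is a minimal sketch that tallies how many reports cite each incident in a local snapshot of the database. The file name and the "incident_id" column are assumptions for illustration; the real export schema may differ.

```python
import csv
from collections import Counter

def reports_per_incident(path):
    """Count how many reports cite each incident (a rough proxy for interest)."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["incident_id"]] += 1
    return counts

if __name__ == "__main__":
    counts = reports_per_incident("aiid_reports.csv")  # hypothetical snapshot file
    # Incidents covered by the most publications first
    for incident_id, n in counts.most_common(10):
        print(incident_id, n)
```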

Explaining the process, McGregor told the media that since AI systems learn to operate from their training data, their behaviour can easily change as that data changes. Thus, AI-based safety-critical systems can introduce new possibilities for failure.

According to McGregor, most incidents submitted revolve around Ethical AI, especially facial recognition systems, followed by failures in autonomous cars and trading algorithms that either cause substantial damage or put lives at risk.

AI practitioners can also search the database by keyword, source, and the authors involved to get a 360-degree view. For instance, searching for 'facial recognition' brings up 98 reports of AI incidents involving failures or problems related to automatic face recognition, biometric identification, identity verification and the like. The search can be further refined as required.
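
The same kind of keyword search can be reproduced offline. Below is a minimal sketch that filters a local JSON snapshot of incident reports; the file name and the "title"/"text" fields are assumptions, and the production site performs this search server-side.

```python
import json

def search_reports(path, keyword):
    """Return reports whose title or text mentions the keyword (case-insensitive)."""
    keyword = keyword.lower()
    with open(path, encoding="utf-8") as f:
        reports = json.load(f)
    return [
        r for r in reports
        if keyword in r.get("title", "").lower()
        or keyword in r.get("text", "").lower()
    ]

if __name__ == "__main__":
    hits = search_reports("aiid_reports.json", "facial recognition")  # hypothetical snapshot
    print(f"{len(hits)} matching reports")
```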

The database’s outline has been massively inspired by the ‘Aviation Accident Reports’ — a shared database critically designed for managing flight safety by analysing the aircraft’s past incidents. McGregor said, the AI Incident Database will help users manage the safety of the AI systems deployed in the real world. The AIID is a collection of web applications that interfaces with a MongoDB document database storing incident report text and metadata.
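
Since the backing store is a MongoDB document database, a query against it might look like the sketch below. The connection string, the "aiid" database name, the "reports" collection, and the field names are assumptions for illustration, not the project's actual deployment.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical local instance
reports = client["aiid"]["reports"]                # assumed database/collection names

# A text search requires a text index on the searchable fields, e.g.:
# reports.create_index([("title", "text"), ("text", "text")])
cursor = reports.find(
    {"$text": {"$search": "facial recognition"}},
    {"title": 1, "source_domain": 1, "date_published": 1},
).limit(10)

for doc in cursor:
    print(doc.get("title"), "-", doc.get("source_domain"))
```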

Applications

Such pragmatic coverage of AI incidents can help AI practitioners discover and understand past experiences and open up more possibilities for deploying AI systems in real-world applications. McGregor highlighted some of the critical areas, including deploying AI-powered recommendation systems and integrating ML systems to reduce financial and compliance risks. Engineers can also use AIID to learn more about the environments their systems are deployed within, and researchers can study the safety and fairness of AI systems.

PAI members began identifying AI failures in 2018; however, no systematic record of them was kept until now.

McGregor has also open-sourced the project on GitHub, inviting industry users to improve its capabilities and build taxonomies and data summaries in the AIID codebase. Making the database shareable should encourage technology companies to evaluate potential bad outcomes before deployment. In due time, McGregor hopes the database will develop into community-owned infrastructure that helps create beneficial intelligent systems for the greater common good.

Read the paper here.

Sejuti Das
Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
