Today, businesses and government organisations are increasingly deploying intelligent systems in safety-critical domains such as healthcare, credit scoring, law enforcement, aircraft control, and corporate recruitment. Failures of such systems pose severe risks to life and expose the limits of intelligent systems deployed in real-world situations. The wrongful arrest of Robert Williams due to a flawed facial recognition system is a case in point.
Experts believe AI practitioners should be aware of past failures of intelligent systems to avoid such fiascos. To that end, the Partnership on AI (PAI), a nonprofit organisation established to outline best practices in AI technologies, has introduced the AI Incident Database (AIID). A curated repository of AI failures, AIID helps practitioners figure out what can go wrong when such systems are deployed. Simply put, the database makes it easy for AI practitioners to learn from previous mistakes.
Led by Sean McGregor, technical lead of the IBM Watson AI XPRIZE, the AI Incident Database provides an infrastructure supporting AI best practices, a dataset of more than one thousand incidents, and an architecture for building research products.
Nuts & Bolts
The AI Incident Database (AIID) catalogues more than 1,000 publicly available incident reports, including documents and reports from the academic press. According to the paper released by McGregor, these AI failure reports serve a range of purposes, from offering multiple viewpoints on an incident to gauging interest in it, since the number of publication types doubles as a proxy for that interest. Additionally, sampling multiple reports per incident provides more comprehensive coverage, which increases practitioners’ chances of discovering relevant incidents.
Explaining the process, McGregor told media that because AI systems learn how to operate from their training data, their conduct can easily change with that data. Thus, AI-based safety-critical systems can fail in new and unexpected ways.
According to McGregor, most incidents submitted revolve around Ethical AI, especially facial recognition systems, followed by failures in autonomous cars and trading algorithms that either cause substantial damage or put lives at risk.
AI practitioners can also search the database by keyword, source, and authors involved to get a 360-degree view. For instance, searching for ‘facial recognition’ brings up 98 reports of AI incidents involving failures or problems related to automatic face recognition, biometric identification, identity verification and the like. The search can be further refined based on the requirements.
The database’s design was heavily inspired by the ‘Aviation Accident Reports’, a shared repository built to improve flight safety by analysing past aircraft incidents. McGregor said the AI Incident Database will similarly help users manage the safety of AI systems deployed in the real world. The AIID is a collection of web applications that interface with a MongoDB document database storing incident report text and metadata.
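To make the document-store idea concrete, here is a minimal sketch of keyword search over incident reports stored as MongoDB-style documents. The field names (`incident_id`, `title`, `text`) are illustrative assumptions, not the AIID’s actual schema, and the search is done in plain Python rather than through the database itself.

```python
# Hypothetical incident-report documents, shaped loosely like records
# in a MongoDB collection. Field names are assumptions for illustration.
reports = [
    {"incident_id": 1,
     "title": "Facial recognition misidentifies suspect",
     "text": "A flawed facial recognition match led to a wrongful arrest."},
    {"incident_id": 2,
     "title": "Trading algorithm triggers flash crash",
     "text": "An automated trading system amplified a market sell-off."},
]

def search(docs, keyword):
    """Return reports whose title or text contains the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [d for d in docs
            if kw in d["title"].lower() or kw in d["text"].lower()]

matches = search(reports, "facial recognition")
print([d["incident_id"] for d in matches])  # → [1]
```

In the real system this filtering would be expressed as a database query (e.g. a text index lookup) rather than an in-memory scan; the sketch only shows the document-per-report structure that makes such keyword retrieval straightforward.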
Such pragmatic coverage of AI incidents can help AI practitioners discover and learn from past experience, opening up more possibilities for deploying AI systems in real-world applications. McGregor pointed to some of the critical areas, including deploying AI-powered recommendation systems and integrating ML systems to reduce financial and compliance risks. Engineers can also use AIID to learn more about the environments in which their systems are deployed, and researchers can study the safety and fairness of AI systems.
PAI members began identifying AI failures in 2018. Until now, however, no systematic record of them was kept.
McGregor has also open-sourced the project on GitHub, where he has invited industry users to improve its capabilities and build taxonomies and data summaries in the AIID codebase. Making the database shareable should encourage technology companies to evaluate potential bad outcomes before deployment. In due time, McGregor hopes the database will develop into community-owned infrastructure that helps create beneficial intelligent systems for the common good.
Read the paper here.