As part of a series of longitudinal studies on AI, Stanford HAI has released the new AI100 report, titled ‘Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.’ The report examines AI’s most significant concerns over the past five years.
Much has been written on the state of artificial intelligence and its effects on society since the initial AI100 report. Despite this, AI100 is unusual in that it combines two crucial features.
First, it is authored by a study panel of key multidisciplinary scholars in the field—experts who have been creating artificial intelligence algorithms or studying their impact on society as their primary professional activity for many years. The authors are experts in the field of artificial intelligence and offer an “insider’s” perspective. Second, it is a long-term study, with periodic reports from study panels anticipated every five years for at least a century.
As AI systems demonstrate greater utility in real-world applications, their reach has expanded, raising the likelihood of misuse, overuse, and outright abuse. As the capabilities of AI systems improve and they become more interwoven into societal infrastructure, the consequences of losing meaningful control over them grow more alarming.
New research efforts aim to rethink the field’s foundations to reduce the reliance of AI systems on explicit and often misspecified aims. A particularly evident concern is that AI might make it easier to develop computers capable of spying on humans and potentially killing them on a large scale.
However, there are numerous more significant and subtler concerns at the moment.
- Techno-Solutionism: As AI advances, the temptation grows to apply AI decision-making to every societal problem. But technology often creates larger problems in the process of solving smaller ones: for instance, algorithms that inherit and magnify human biases can entrench racial and ethnic discrimination.
- The Risks of Taking a Statistical Approach to Justice: Some people believe AI decision-making is objective, even when it is trained on skewed historical judgments or outright discriminatory actions. Discrimination is a serious issue in both criminal justice and healthcare settings. Algorithms can perpetuate gender, racial, class, and ableist stereotypes without their designers even realising it, and gaps in case law make applying Title VII to algorithmic discrimination challenging. Information bubbles formed by AI can further erode individual autonomy.
- The Threat of Disinformation to Democracy: AI systems are being co-opted by criminals, rogue states, and ideological extremists to manipulate people for economic or political advantage. Disinformation poses a serious threat to society because it distorts and manipulates evidence, creating social feedback loops that undermine any sense of objective truth.
- Medical Discrimination and Risk: The report claims that the failure to account for human concerns in AI integration has led to mistrust of these systems. Optum’s algorithm for determining which patients need special medical attention was found to contain racial biases: according to a recent study, about $1,800 less is spent annually on the healthcare of Black patients than of white patients. In the case of melanoma, the five-year survival rate for Black patients is 17 percentage points lower than for white patients.
One can access the entire report here.