Dangers Of Artificial Intelligence: Insights from the AI100 2021 Study

Published as part of a century-long study that reports every five years, the new AI100 report looks at the most significant concerns posed by artificial intelligence.

As part of its series of longitudinal studies on AI, Stanford HAI has released the new AI100 report, titled ‘Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.’ The report evaluates the most significant concerns about AI that have emerged over the previous five years.

Much has been written on the state of artificial intelligence and its effects on society since the initial AI100 report. Despite this, AI100 is unusual in that it combines two crucial features. 

First, it is authored by a study panel of key multidisciplinary scholars in the field—experts who have been creating artificial intelligence algorithms or studying their impact on society as their primary professional activity for many years. The authors are experts in the field of artificial intelligence and offer an “insider’s” perspective. Second, it is a long-term study, with periodic reports from study panels anticipated every five years for at least a century.

As AI systems have demonstrated greater utility in real-world applications, their reach has expanded, raising the likelihood of misuse, overuse, and explicit abuse. As their capabilities improve and they become more deeply interwoven into societal infrastructure, the consequences of losing meaningful control over them grow more alarming.

New research efforts aim to rethink the field’s foundations to reduce AI systems’ reliance on explicit, and often misspecified, objectives. A particularly visible concern is that AI could make it easier to build machines capable of spying on people, and potentially killing them, at scale.

At present, however, there are many subtler but no less significant concerns, including the following:

  • Techno-Solutionism: As AI advances accumulate, the temptation to apply AI decision-making to every societal problem grows. But technology often creates larger problems while solving smaller ones, for example by entrenching racial and ethnic discrimination, or by deploying biased algorithms that amplify the very prejudices humans already hold.
  • The Risks of Taking a Statistical Approach to Justice: AI decision-making is often assumed to be objective, even when it is built on skewed historical judgments or outright discriminatory actions. Discrimination is a serious issue in both criminal justice and healthcare settings, where algorithms can quietly reinforce gender, racial, class, and ableist stereotypes. Gaps in case law make applying Title VII to algorithmic discrimination challenging, and the information bubbles AI creates can erode individual autonomy.
  • The Threat of Disinformation to Democracy: AI systems are being co-opted by criminals, rogue states, and ideological extremists to manipulate people for economic or political advantage. Disinformation poses a serious threat to society because it manipulates evidence and creates social feedback loops that undermine any shared sense of objective truth.
  • Medical Discrimination and Risk: The report argues that failing to account for human concerns when integrating AI into healthcare has bred mistrust in these systems. Optum’s algorithm for determining which patients need special medical attention was found to encode racial bias: because it relied on healthcare spending as a proxy for need, and Black patients incur roughly $1,800 less in annual healthcare costs than comparably ill white patients, it systematically under-identified them. In the case of melanoma, the five-year survival rate for Black patients is 17 percentage points lower than for white patients. A minimal sketch of this proxy-label failure appears after this list.
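To make the proxy-label failure concrete, here is a minimal illustrative sketch in Python using synthetic data. The group labels, effect sizes, and selection threshold are assumptions for illustration only; they are not taken from the report or from the Optum system. The sketch shows how ranking patients by recorded cost, when one group incurs systematically lower costs at the same level of need, under-selects that group for extra care even though the group variable is never used directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: two groups, A and B, with identical distributions of true need.
group = rng.choice(["A", "B"], size=n)
true_need = rng.normal(loc=0.0, scale=1.0, size=n)

# Illustrative assumption: group B incurs systematically lower recorded healthcare
# costs at the same level of need (e.g. due to barriers to access), plus noise.
cost = true_need + rng.normal(scale=0.5, size=n)
cost = np.where(group == "B", cost - 0.8, cost)

# A "risk" model that uses cost as a proxy for need: select the top 20% most
# expensive patients for an extra-care programme.
threshold = np.quantile(cost, 0.80)
selected = cost >= threshold

for g in ("A", "B"):
    mask = group == g
    print(
        f"group {g}: mean true need {true_need[mask].mean():+.2f}, "
        f"selected for extra care {selected[mask].mean():.1%}"
    )
# Despite identical true need, group B is selected far less often,
# because the proxy (cost) understates its need.
```

The point of the sketch is that the disparity arises entirely from the choice of label: the model never sees the group variable, yet selecting on cost reproduces the access gap that the cost data already encodes.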

One can access the entire report here.


Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
