Did Wrong COVID-19 Predictions Undermine AI’s Credibility?

There is no value in AI without subject-matter expertise.

Inarguably, artificial intelligence has played a critical role in containing the COVID-19 pandemic. AI techniques were used to track cases, make predictions, and administer vaccines. It is also fair to say that clickbait-hungry media outlets have dramatised the role of AI in fighting the pandemic, sometimes at the expense of downplaying human contributions.

AI predictions

The US National Institutes of Health (NIH) published a paper in 2020 examining how AI fell short in pandemic predictions. The models’ poor performance can be chalked up to several factors.

Lack of reliable data was the topmost factor, especially at the onset of the disease. In the initial days, the data simply wasn’t sufficient to build AI models that could accurately track and map the spread of the virus. Experts depended heavily on a small sample of studies, mostly from China, a large part of which was not peer-reviewed.

Soon, government and private entities across the world started launching initiatives to gather and share data to train AI models. Prominent examples include the World Health Organization’s Global Research on Coronavirus Disease Database, the GISAID Initiative, and the COVID-19 Open Research Dataset (a joint initiative between Semantic Scholar, the Allen Institute for Artificial Intelligence, Facebook, Microsoft, and others).

Once the data reached critical mass, the next challenge was figuring out how to crunch it. Past efforts to use big data and AI to map infectious diseases had fallen flat, and Google Flu Trends (GFT) is a good case in point. Introduced in 2008, GFT was projected to ‘nowcast’ the flu based on Google searches: search data, properly tuned against flu-tracking information from the Centers for Disease Control and Prevention (CDC), could produce accurate estimates of flu prevalence two weeks earlier than the CDC’s own reports. Yet in 2013, GFT’s flu forecast was off by an embarrassing 140 percent.
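To make the nowcasting idea concrete, here is a minimal sketch: regress official case counts on search-query volumes for the weeks where official data exist, then predict the most recent weeks from search data alone. The data below is synthetic and the variable names are illustrative assumptions; GFT’s actual pipeline was far more elaborate and used many query terms.

```python
# Sketch of the nowcasting idea behind GFT: regress official flu
# counts on search-query volumes, then predict ahead of the official
# release. All data here is synthetic; GFT's real pipeline differed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 104
true_flu = 50 + 40 * np.sin(np.arange(weeks) * 2 * np.pi / 52)   # seasonal signal
searches = true_flu + rng.normal(0, 8, weeks)    # noisy search-volume proxy
cdc_counts = true_flu + rng.normal(0, 3, weeks)  # official counts, released late

# Fit on the period where official data has already been published...
model = LinearRegression().fit(searches[:-2, None], cdc_counts[:-2])
# ...then 'nowcast' the two most recent weeks from search data alone.
print(model.predict(searches[-2:, None]))
```

The fragility is visible even in this toy: the model is only as good as the assumption that search behaviour tracks the disease, and when that relationship drifts (media panic, changed search habits), the nowcast drifts with it.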

Models like Susceptible-Exposed-Infectious-Removed (SEIR) have been used to analyse pandemic data. However, such classic forecasting models couldn’t account for a pandemic of COVID-19’s scale and complexity: they are prone to errors, and those errors compound as the data grows.
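For reference, SEIR splits a population into susceptible, exposed, infectious, and removed compartments and evolves them with coupled differential equations. The sketch below uses scipy with illustrative, unfitted parameter values, not real COVID-19 estimates.

```python
# Minimal SEIR compartmental model. Parameter values are illustrative
# assumptions, not fitted COVID-19 estimates.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N             # susceptibles become exposed
    dE = beta * S * I / N - sigma * E  # exposed become infectious
    dI = sigma * E - gamma * I         # infectious are removed
    dR = gamma * I
    return dS, dE, dI, dR

N = 1_000_000                           # population size (assumed)
beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10  # illustrative rates
y0 = (N - 10, 0, 10, 0)                 # start with 10 infectious cases
t = np.linspace(0, 180, 181)            # simulate 180 days

S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T
print(f"Peak infectious: {I.max():,.0f} on day {t[I.argmax()]:.0f}")
```

Even this well-specified toy is highly sensitive to beta, sigma, and gamma; with the noisy early-pandemic data described above, small errors in estimating those rates translate into wildly different forecasts.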

Bias is another major challenge in using AI. According to an article published in the Journal of the American Medical Informatics Association (JAMIA), the inherent bias of AI models could disproportionately impact people from underrepresented communities, such as Black patients. The paper argued that the impact could be felt in infection rates, hospitalisations, and mortality. “The most frequent problems encountered were unrepresentative data samples, high likelihood of model overfitting, and imprecise reporting of study populations and intended model use,” the researchers said.
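The first of those problems, unrepresentative data samples, is easy to demonstrate. In the synthetic sketch below (an illustration, not the JAMIA paper’s analysis), a classifier trained on data dominated by one group performs markedly worse on the underrepresented group:

```python
# Illustrative sketch with synthetic data: a model trained on an
# unrepresentative sample degrades on the underrepresented group,
# one of the failure modes the JAMIA paper describes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Two groups share a labelling rule but differ in feature distribution.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

Xa, ya = make_group(2000, shift=0.0)  # well-represented group
Xb, yb = make_group(100, shift=1.5)   # underrepresented group

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

Xa_t, ya_t = make_group(1000, 0.0)    # held-out test sets
Xb_t, yb_t = make_group(1000, 1.5)
print("group A accuracy:", accuracy_score(ya_t, model.predict(Xa_t)))
print("group B accuracy:", accuracy_score(yb_t, model.predict(Xb_t)))
```

The model learns a decision boundary tuned to the majority group, so accuracy on group B drops close to chance, which is exactly why the researchers flagged unrepresentative samples and imprecise reporting of study populations.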

Surveillance is another indirect consequence of AI implementation. In China, for example, the government deployed an AI-based social control technology to allow or deny individuals access to public spaces. In India, critics panned apps like Aarogya Setu over privacy concerns.

A recent paper by researchers from the University of Cambridge and the University of Manchester found that many machine learning-based studies conducted between January 2020 and October 2020 suffered from methodological flaws, underlying bias, or sometimes both.

Wrapping up

In a recent paper, AI researcher Melanie Mitchell detailed four fallacies in AI research that lead to unrealistic expectations of AI. One of them, ‘wishful mnemonics’, is the habit of attributing to AI programs qualities generally associated with human intelligence.

In the case of COVID-19 predictions, several people equated AI to a human expert. But AI is only helpful when applied judiciously by a subject-matter expert. In the same context, Alex C. Engler, a Rubenstein Fellow in Governance Studies at The Brookings Institution, wrote: “journalists that breathlessly cover the ‘AI that predicted coronavirus’ and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.”

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
