Artificial intelligence has played a visible role in the response to the COVID-19 pandemic. AI techniques were used to track cases, make predictions, and administer vaccines. Yet it is fair to say that clickbait-driven media coverage dramatised the role of AI in fighting the pandemic, sometimes at the expense of downplaying human contributions.
The US National Institutes of Health (NIH) published a paper in 2020 about how AI has fallen short in pandemic predictions. The poor performance of the AI models can be chalked up to a variety of factors.
The lack of reliable data was the topmost factor, especially at the onset of the disease. In the early days, the data simply wasn't sufficient to build AI models that could accurately track and map the virus's spread. Experts depended heavily on a small sample of studies, mostly from China, a large part of which had not been peer-reviewed.
Soon, government and private entities across the world started launching initiatives to gather and share data to train AI models. Prominent examples include the World Health Organization's Global Research on Coronavirus Disease Database, the GISAID Initiative, and the COVID-19 Open Research Dataset (a joint initiative between Semantic Scholar, the Allen Institute for Artificial Intelligence, Facebook, Microsoft, and others).
Once the data reached critical mass, the next challenge was figuring out how to crunch it. Past efforts to use big data and AI to map infectious diseases had fallen flat, and Google Flu Trends (GFT) is a good case in point. Introduced in 2008, GFT was projected to 'nowcast' the flu based on Google searches. The search data, properly tuned against flu-tracking information from the Centers for Disease Control and Prevention (CDC), was supposed to produce accurate estimates of flu prevalence two weeks earlier than the CDC's own data. Yet in 2013, GFT's flu forecast was off by an embarrassing 140 percent.
Compartmental models like Susceptible-Exposed-Infectious-Removed (SEIR) have been used to analyse pandemic data. However, these classic forecasting models couldn't account for a pandemic of COVID-19's scale and complexity; they are prone to errors whose risk compounds as the data grows.
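To make the mechanics concrete, an SEIR model divides a fixed population into four compartments (Susceptible, Exposed, Infectious, Removed) and moves people between them at fixed rates. The sketch below integrates the standard SEIR equations with a simple Euler scheme; the parameter values (`beta`, `sigma`, `gamma`, population size) are illustrative assumptions, not fitted COVID-19 estimates:

```python
# Minimal SEIR compartmental model, integrated with a forward-Euler scheme.
# Parameters are illustrative placeholders, not real epidemiological fits.

def seir(beta=0.5, sigma=1 / 5.2, gamma=1 / 7, n=1_000_000, i0=10,
         days=160, dt=0.1):
    """Return one (S, E, I, R) tuple per simulated day for a closed population."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    history = []
    steps_per_day = int(round(1 / dt))
    for _day in range(days):
        history.append((s, e, i, r))
        for _ in range(steps_per_day):
            new_exposed = beta * s * i / n * dt   # S -> E (new infections)
            new_infectious = sigma * e * dt       # E -> I (end of incubation)
            new_removed = gamma * i * dt          # I -> R (recovery/removal)
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_removed
            r += new_removed
    return history

hist = seir()
peak_day = max(range(len(hist)), key=lambda d: hist[d][2])
print(f"Infectious compartment peaks on day {peak_day}")
```

The fragility the article describes is easy to reproduce with such a model: small changes to `beta` (the contact rate) shift the predicted peak by weeks, and the model assumes a closed, homogeneously mixing population, assumptions a real pandemic violates.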
Bias is another major challenge in using AI. According to an article published in the Journal of the American Medical Informatics Association, the inherent bias of AI models could disproportionately impact people from underrepresented communities, such as Black patients. The paper argued that these impacts could be felt in infection rates, hospitalisations, and mortality. "The most frequent problems encountered were unrepresentative data samples, high likelihood of model overfitting, and imprecise reporting of study populations and intended model use," the researchers said.
Surveillance is another indirect consequence of AI deployment. In China, for example, the government deployed AI-powered social control technology to grant or deny individuals access to public spaces. In India, critics panned apps like Aarogya Setu over privacy concerns.
A recent paper by a team of researchers from the University of Cambridge and the University of Manchester found that many machine learning-based studies conducted between January and October 2020 suffered from methodological flaws, underlying bias, or both.
In a recent paper, AI researcher Melanie Mitchell detailed four fallacies in AI research that lead to unrealistic expectations of AI. One is 'wishful mnemonics': attributing to AI programs qualities generally associated with human intelligence.
In the case of COVID-19 predictions, many people equated AI with a human expert. But AI is only helpful when applied judiciously by a subject matter expert. In this context, Alex C. Engler, a Rubenstein Fellow in Governance Studies at The Brookings Institution, wrote: "Journalists that breathlessly cover the 'AI that predicted coronavirus' and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise."