
Scapegoating Algorithms: Stanford’s Vaccine Distribution Misfiring Exposes Our Attitudes Towards AI

“Stanford Medicine residents were left out of the first wave of staff members for the new Pfizer vaccine.”

In what sounds like a textbook case of scapegoating, the Stanford Medical Centre was caught dodging responsibility for what appears to be a glaringly “human” error. Last week, frontline workers at the medical centre protested against the way the vaccine distribution was handled.

As first reported by ProPublica, Stanford Medicine residents who worked in close contact with COVID-19 patients were left out of the first wave of staff members to receive the new Pfizer vaccine. “…residents are hurt, disappointed, frustrated, angry, and feel a deep sense of distrust towards the hospital administration given the sacrifices we have been making and the promises that were made to us,” read the letter that was sent to Stanford by the Chief Resident Council.

What Went Wrong

The algorithm that was used to decide who gets the vaccine first (Source: MIT Tech Review)

The algorithm that Stanford used to decide who gets the first shot excluded many frontline workers from the list. What followed was a massive uproar amongst the residents at Stanford, and the authorities had to publicly apologise for the whole botch-up.

Tim Morrison, the ambulatory care team director, was caught on video admitting that their algorithm, which ethicists and infectious disease experts had worked on for weeks, clearly didn’t work right.

Those in charge of the distribution apologised and blamed it on the “very complex algorithm.” People who dug into the mechanics of the algorithm were surprised to find that this so-called complex algorithm is nowhere close to the infamous black-box machine learning algorithms.

As shown in the picture above, the algorithm counts the prevalence of COVID-19 among employees by job role and by department in two different ways, though the difference between the two counts is not entirely clear. Crucially, it failed to distinguish between staffers who contracted COVID-19 from patients and those who caught it elsewhere.
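To see how even a simple, rules-based score can skew against frontline residents, here is a minimal Python sketch of a points-style formula built from the kinds of factors mentioned above. The field names, thresholds, and weights are illustrative assumptions for this article, not Stanford’s actual formula.

```python
# Hypothetical sketch of a rules-based vaccine prioritisation score.
# The weights and thresholds below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    age: int
    job_role_prevalence: float  # share of COVID-19 cases among this job role
    dept_prevalence: float      # share of COVID-19 cases in the assigned department


def priority_score(emp: Employee) -> float:
    """Combine age points and the two prevalence counts into one score.

    Residents who rotate across units often lack a single assigned
    department, so their department-based count effectively drops to zero
    and the overall score falls, even if they see COVID-19 patients daily.
    """
    age_points = 1.0 if emp.age >= 65 else 0.5 if emp.age >= 50 else 0.0
    # Arbitrary illustrative weights; the point is that small weighting
    # choices quietly decide who lands in the first wave.
    return 0.3 * age_points + 0.35 * emp.job_role_prevalence + 0.35 * emp.dept_prevalence


if __name__ == "__main__":
    attending = Employee("Attending, fixed unit", age=55,
                         job_role_prevalence=0.10, dept_prevalence=0.20)
    resident = Employee("Resident, rotating", age=29,
                        job_role_prevalence=0.10, dept_prevalence=0.0)
    for e in (attending, resident):
        print(f"{e.name}: {priority_score(e):.3f}")
```

In this toy example, the older attending with a fixed department outranks the younger rotating resident, even though both face the same job-role exposure; that is the kind of unintended ordering transparency would have caught early.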

“The more different weights are there for different things, it then becomes harder to understand—‘Why did they do it that way?’” said Jeffrey Kahn, the director of the Johns Hopkins Berman Institute of Bioethics, in an interview with MIT Tech Review. “It’s really important [for] any approach like this to be transparent and public …and not something really hard to figure out.”

Who Gets The Gavel

Outsourcing decision making to machines is nothing new. But when an algorithm fumbles a decision as simple as “give it to the needy”, it raises serious questions about who is accountable for algorithms.

We can all agree that we humans are biased. But when a mathematical model is tasked with something as critical as vaccine distribution, it is natural to expect it to be nearly perfect. After all, what else is the point of taking humans out of the loop?

Those who deploy these models cannot simply direct the blame at a model’s lacklustre performance. It is appalling that these algorithms are not only erroneous, but that those in charge have not even taken measures to be transparent. Instead, they chose to hide behind the pretence of algorithmic complexity.

“Clear transparency regarding the algorithm used to develop the institutional vaccination order. In particular, we expect an explanation of what checks were in place to ensure that the list was indeed equitable as intended,” demanded the Chief Resident Council in their letter to Stanford.

AI indeed holds answers to many of our long-standing challenges, from medical diagnosis to safe travel with self-driving cars. But people are also aware that the same AI can articulate almost believable fake news and create faces of people who never existed. The finger-pointing charade by those in charge of algorithms sets a bad precedent in a world where extremely powerful machine learning models like GPT and GANs exist.

“… we should have acted more swiftly to address the errors that resulted in an outcome we did not anticipate. We are truly sorry,” read the email from the Stanford administration, which accepted full responsibility for the blunder. “Unanticipated outcomes” and “should have acted” are not the words we would want to hear if we were being asked to trust an algorithm with cancer diagnosis or self-driving cars.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.
