
Interview with Meredith Broussard: Coded Bias Cast & Assistant Professor at NYU



Shalini Kantayya’s Coded Bias digs into the seamy side of Artificial Intelligence. The documentary’s ensemble cast includes AI researchers such as Joy Buolamwini, Meredith Broussard, Cathy O’Neil, Zeynep Tufekci, and Timnit Gebru.

Analytics India Magazine caught up with data journalist Meredith Broussard to discuss the implications of the issues highlighted in the documentary and in her book, “Artificial Unintelligence: How Computers Misunderstand the World.”

She is an assistant professor at the Arthur L. Carter Journalism Institute of New York University and an affiliate faculty member at the Moore-Sloan Data Science Environment at the NYU Center for Data Science. The 2019 Reynolds Journalism Institute Fellow has also worked as a software developer at AT&T Bell Labs and the MIT Media Lab.

AIM: How serious is the AI bias problem? 

Broussard: I think algorithmic bias is the civil rights issue of our time. Algorithms are increasingly being used to make decisions on our behalf. You don’t always know when an algorithm is making a decision instead of a human being, which is a problem. You should know what the decision-making process is when important decisions are being made. One of the things Cathy O’Neil (Founder, O’Neil Risk Consulting & Algorithmic Auditing, and cast member of Coded Bias) talks about in her book Weapons of Math Destruction, and a theme that runs through Coded Bias, is that algorithms are often used to judge the poor, while the wealthy are judged by human beings. If you are a wealthy person, you might think that is fine since you are still being judged by humans. But it is not going to remain the case forever.

AIM: In the documentary you talk about how the decisions AI systems make are mathematical and not ethical. How can ethical values translate into mathematical models?

Broussard: There is a fundamental conflict here. Sometimes it is possible to make more unbiased technology, and sometimes it is not. It depends on the context, and we always run into problems when we start using mathematical machines and computers to make social decisions. What is mathematically fair is not always socially fair. There is some very interesting research on how we test algorithms for bias, and Joy Buolamwini’s work in Coded Bias is a good example. She does an intersectional analysis of how effective facial recognition algorithms are and finds that the algorithms are better at recognising light skin than dark skin, and better at recognising men than women. This is a really useful way of looking at any algorithm: you can look at who it works best for, who it fails for, and how. You can look at the kinds of bias that exist in the real world, and you can look for the ways that bias is expressed in the algorithm.

Source: Screenshot from Coded Bias

For a very long time, people believed that computers were more objective or more unbiased than people. I call this techno-chauvinism – the idea that computers are somehow superior. It’s not a competition. Instead, I would argue that we need to think about what the right tool for the task is. Sometimes the right tool is a computer, sometimes it is not, and one is not better than the other. Computers are not great at context, and decisions like who gets into college or who gets a mortgage are all about context. So, we can use computers to make certain kinds of decisions; for others, we shouldn’t be using computers, and that is okay.
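The intersectional audit Broussard describes above boils down to a simple engineering practice: report a model’s error rate for each demographic subgroup instead of a single overall accuracy figure. Below is a minimal sketch of that idea in Python; the data, column names, and values are invented for illustration and are not taken from the Gender Shades study or any real facial recognition system.

```python
# Hypothetical sketch: disaggregate a classifier's error rate by subgroup
# instead of reporting one aggregate accuracy number.
import pandas as pd

# Toy evaluation set: one row per example, with the model's prediction,
# the true label, and demographic annotations (all values invented).
results = pd.DataFrame({
    "true_label": [1, 1, 0, 1, 0, 1, 1, 0],
    "predicted":  [1, 0, 0, 1, 1, 0, 1, 0],
    "skin_tone":  ["light", "dark", "dark", "light", "dark", "dark", "light", "light"],
    "gender":     ["m", "f", "f", "f", "m", "f", "m", "m"],
})

results["error"] = (results["true_label"] != results["predicted"]).astype(int)

# The overall error rate hides the gap...
print("overall error rate:", results["error"].mean())

# ...while the subgroup breakdown shows who the model fails for, and how often.
print(results.groupby(["skin_tone", "gender"])["error"].mean())
```

The same breakdown works for any classifier and any set of subgroup labels; the hard part in practice is collecting honest demographic annotations for the evaluation set.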

AIM: What are the things to consider before deploying AI?

Broussard: This is a very complicated problem, and we don’t have universal guidelines yet about when we should or should not use AI. It’s all context-dependent. We are still at the very beginning of the AI era, and it is going to take many more years before we collectively decide when we should or shouldn’t use AI. The best we can do is be careful and audit our algorithms for discrimination. We can also use a framework that Ruha Benjamin offers in her book – Race After Technology. We can assume that automated systems discriminate by default. We know that sexism, racism, class and caste differences exist in the real world, and we know that automated decision-making (ADM) systems replicate all the real world’s problems. So, we can assume that an automated system will discriminate, and it is just a question of how. Once you start looking at it that way, it becomes easier to spot the problems with ADM systems. 

AIM: You said we wouldn’t make social progress if we use machine learning models to replicate the world as it is. Could you elaborate?

Broussard: When we build machine learning models, we feed in data and tell the computer that this is the thing we want to replicate. For instance, take mortgage data in the US. The history of who has gotten mortgages in the US is a history of racism and segregation. If you feed in data about the world as it has been, the computer will make predictions about who will be a mortgage customer based on who has been a mortgage customer in the past. This mostly means white people. The computer is not going to give mortgages to any black or brown people. And that’s not what we want. The computer is not being empathetic or creative; it has no vision of a better world. It is just replicating what has been in the past. And when you embed past biases in code, it makes them hard to see and almost impossible to eradicate. Because if you believe the computer is always right, then you are saying that the history of systemic racism is right.
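The mortgage example can be made concrete with a small, entirely synthetic simulation: if the historical approvals a model is trained on were skewed against one group, the trained model reproduces that skew in its predictions. The sketch below uses invented data and a plain logistic regression purely for illustration; it is not a claim about how any real lender’s model works.

```python
# Synthetic illustration: a model trained on biased historical decisions
# replicates the bias, because it is only asked to predict the world as it has been.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = historically favoured group, 1 = not
income = rng.normal(50, 10, n)           # a facially neutral feature

# Invented "historical" approvals: the same income counted for less in group 1.
approved = (income - 15 * group + rng.normal(0, 5, n)) > 40

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("predicted approval rate, group 0:", pred[group == 0].mean())
print("predicted approval rate, group 1:", pred[group == 1].mean())
# The learned model faithfully reproduces the historical gap rather than correcting it.
```

Nothing in the training objective asks the model to be fair; it is rewarded only for matching past decisions, which is exactly the problem Broussard describes.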


AIM: The documentary highlighted the issues with the deployment of technologies in poorer neighbourhoods. How do we spread awareness among such communities?

Broussard: I think we need to normalise not using technology. I also think we need policy changes, because individual effort is no longer enough. You can do individual protests – things like carrying an umbrella to shield yourself from overhead cameras all the time. But what then? Is everybody in the world going to carry an umbrella all the time? That’s ridiculous. We should just have a policy that prevents surveillance, so that it solves the problem for everybody, all at once, instead of putting all the responsibility onto one individual.

We do need education and literacy, but we also, at the same time, need policy changes. The Brooklyn tenants organising against facial recognition technology in their building were a terrific example of community organising and of building literacy around computation and justice. We should take that and make a policy that landlords can’t put facial recognition locks into apartment buildings or public housing. We don’t want every single tenant organisation in the world to have to organise and go through a big fight with their landlords. We should look at it as an example of injustice, and we should prevent other people from doing it.

AIM: Coded Bias highlighted the lack of inclusivity in teams developing AI. What do we need to do to change this?

Broussard: There is a pipeline problem and a culture problem inside tech companies and STEM fields. We need to solve all of these problems. There are a lot of possible solutions, and I encourage people to start anywhere. You could begin with gender-based pay equity. You can also look at the STEM pipeline in K-12 education. Starting in 3rd grade, girls start getting messages that boys are better at math and science. A lot of people imagine we just need more young women who are STEM majors in college. That’s already pretty late, because these messages start really early.

We need STEM opportunities early on. We need to provide more resources to schools. One of the things I write in my book is that K-12 public schools in the US are starved for resources. People imagine that if you give a kid a computer, that is going to be enough. It’s really not enough. Kids need computers, books, teachers, and school buildings that are not crumbling around them. We need better education, support, and representation so that younger people have older people to look up to. We also need pay equity, better family leave policies, protection from sexual harassment, and bias training, among other things.

AIM: Is there hope?

Broussard: Coded Bias is a very hopeful film. It is a film about something terrible that is happening, but Joy Buolamwini is an inspiring individual. Her journey is a sign that better times are ahead, that change is possible, and that algorithmic bias is not something we have to just sit back and submit to. People can push back, and we can make a better world together.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com