In Conversation With Professor Rahul Dé, Professor At Indian Institute Of Management, Bangalore

This is the fifth article in the weekly series of Expert’s Opinion, where we talk to academics who study AI or other emerging technologies and their impact on society and the world.

This week, we spoke to Rahul Dé, Professor at the Indian Institute of Management, Bangalore. His research interests are in ICT for Development, Open Source and e-Government Systems. He has published two books and 50 academic articles.

While much of the recent research in Ethical AI has focused on the outcomes of decision-making systems, Prof Dé's current research examines the ethical concerns these systems raise irrespective of the outcome. For instance, is it ethical to change the nature or purpose of human decision-making?

Prof Dé and his team's research was one of six projects from India selected by Facebook for work on governance, cultural diversity, and operationalising ethics. Analytics India Magazine caught up with Prof Dé to understand his thought process.

AIM: Why did you choose to focus on the ethical concerns of delegating decision-making to machines when most of the research is fixated on their outcomes?

Prof Dé: When we looked at AI ethics research, it seemed to be driven by outcomes. These are called trolley problems. They consider issues similar to the self-driving car dilemma, where the AI decides between saving the driver and the person on the road. But we thought the focus had slipped too far to that side, and there is a whole host of questions that actually come before it.

Over the years, scientists and philosophers have asked what it means to do joint decision-making, what it means to share a whole process you are going through with a machine, and how much of your values or thoughts get displaced in the process. The pertinent question here is not whether robots will improve efficiency in a factory. They might or might not, which is fine. The issue is also what we go through in bringing in robots and how that impacts our decision-making. Hence, there is a broader question about technology in general and how it is changing us.

AIM: What are some of the consequences of letting machines make decisions for you?

Prof Dé: More than consequences, I will explain the implications. Whether the outcomes are positive or negative is a value question.

For example, consider a doctor working in a hospital. When the doctor sees a patient, they are driven by many factors. The doctor is mainly worried about getting the job done, which is to cure the patient. However, along with this, there are other subjectivities involved as well. The doctor is also concerned about, for instance, their reputation, their integrity, or how the world sees them. They are also worried about the patient, the patient's family, whether the patient is experiencing pain, or whether the family can afford the treatment. While all of these factors play a role, parts of them get ignored as tasks are displaced to the AI system.

Another main implication is the growing dependency of doctors on the machine. Even a doctor who has been practising for a while begins to depend a lot more on the machine because it gives a pretty good answer. At the same time, they start becoming unsure of their own judgement and lose confidence in the process.

In a typical hospital situation, specialist doctors have a lot of support around them. You have to consider how the relationships among them are affected and how they change because of machines. We are observing such changes in relationships in our study. Machines can intervene in the relationship that, for instance, an intern is trying to build with their doctor.

Consider another example, in education, where professors grade students. Professors are very conscious of being fair and, while grading, provide reasoning for their decisions. Personally, I worry about my students' lives and their careers and make an effort to explain my decisions while grading. At the same time, I have tried grading using AI technologies to see how it works and observed that it lacks that consciousness or personal connect, and this bothers me.

AIM: How does your research, dealing with the ethical considerations of delegating decisions to AI, stack up against the research on AI outcomes?

Prof Dé: All tools change the way we work and the way we see the world. For instance, weapons-grade AI is an area where much robotics work is actually going on, which is of grave concern. While the output of such weapons is absolutely devastating, just the act of building them is doing a lot of things to us that we have to be concerned about. It is changing the whole meaning of what it means to be alive or to have values.

Another example would be what happened at Google when they fired Timnit Gebru. The content of what she was saying was not really the issue. What mattered was her opinion on the manner in which things were being done at Google, which is changing us. Google is supposed to be a very inclusive and open company, very sensitive to these issues, yet they fired her. Hence, these things are related, and we need to find the connection between the two.

AIM: What are some of the ethical considerations when it comes to delegating decisions to AI, and how does one ensure safe implementation in an Indian context?

Prof Dé: We want to make sure that this consciousness-raising happens and that people understand that just the act of bringing in AI is going to change everything around you. It is going to change your companies and the government. 

On a case-by-case basis, organisations will have to take that call. They will need to consider whether they are okay with the changing dynamics of workplaces, because everything, right from workplace relationships to the way you solve a problem, will change. More than the outcome, your thinking is going to change. Hence, you have to answer the question of whether you are okay with it by considering instances individually.

Whether this will be done or not is something that cannot be assured. Governments will have to consider this while setting up policies. Corporations need to recognise that this is an advanced technology and think about how it can impact factors like desires, beliefs, and goals, which play a more important role than the outcome. You will have to tailor your workflows and processes within organisations carefully around these issues.

In an Indian context, however, there are a few areas where deploying AI should be banned entirely. For instance, anywhere we can displace labour in India, we have to be careful. Some assistance may come in, but implementing AI in agriculture or self-driving cars should not be allowed. It makes sense to introduce AI in areas that are dangerous for humans to enter, or in high-speed computing. Here again, it is essential to consider how people can interface with machines in a humane manner.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com