
You Got To Have A Strong Backbone To Be In This Career: Olivia Gambelin, Founder, Ethical Intelligence


This is the 11th article in our weekly Expert's Opinion series, where we talk to academics who study AI or other emerging technologies and their impact on society and the world.

This week, we spoke to Olivia Gambelin, an AI Ethicist and Founder of Ethical Intelligence, a company that works towards making AI Ethics accessible to SMEs and entrepreneurs.

AI ethicists are all the rage now. However, there is a lot of confusion around what exactly an AI Ethicist's role entails. Analytics India Magazine caught up with Gambelin to understand what it means to be an AI Ethicist.

AIM: Why is it so hard to define an AI ethicist?

Gambelin: The role of an AI ethicist has been difficult to pin down because of the lack of education around what ethics is and is not in terms of tech. Our understanding of ethics is closely tied to understanding what it means to be human. Everyone thinks, 'if I make ethical decisions daily, then of course I can make ethical decisions in business', which is true. The problem, however, is the scale of the decisions you are making. You are no longer making ethical decisions for yourself as an individual; you are making ethical decisions that will be replicated through AI.

We see ethicists already in the medical, political, and legal fields. All of these subfields of ethics are applied to specific industries to help those who work within them meet societal expectations. However, people who work in AI often don't understand that ethics is actually very formulaic, and that there are people with expertise in managing the ethical or societal impact of their technology. It's actually a very straightforward role when you break it down: it's about figuring out where an AI Ethicist fits in the decision-making process within tech.

If you look at medical ethics, there are clearly defined roles for ethicists. In tech, that role is still developing: what is it, where does the decision-making power sit, is there leadership buy-in, and where in the team does an ethicist fit best? Computer scientists are very opposed to having ethicists work with them. But we are just trying to make sure that your intentions for creating the technology are realised as it grows and expands.

AIM: How is the role of an AI ethicist different from a traditional ethicist?

Gambelin: There are many different subsets of ethicists; the biggest one you will find is the medical ethicist. When you add in the contextual layer of AI, it becomes more difficult because you are dealing with complex technology and you need a technical understanding. I am an AI ethicist; I don't code AI, but I have a strong working theoretical knowledge of it. I know the terminology and how to ask coders questions about how they are developing their technology. When you add the context layer of AI, you are taking high-level concepts like principles or values and applying them to specific situations. Hence, the difference between an AI ethicist and a traditional ethicist lies in the context of AI.

AIM: If an AI Ethicist does not really understand how to build AI, why and how do you think they can still be good at their jobs?

Gambelin: It is actually better if an AI ethicist does not code, because an ethicist who codes gets caught up in the code and stops paying attention to the high-level concepts. You want someone who isn't caught up in the day-to-day grind of the system, someone who can step out and ask the whys of the project. We are not there to ask how; we are there to ask why. However, we need to understand on a basic level how things work so that we can ask those whys. For instance, I need to know the difference between supervised and unsupervised learning, because I have to ask different questions based on the techniques being used. I don't need to know how to code it, though. Not coding actually brings in more perspective, because your entire user base is not AI programmers.

AIM: Why should industries hire AI Ethicists, if it is not legally binding?

Gambelin: Good ethics means good business. A business can meet legal requirements, but that does not mean it is ethical. Ethics is a step above law. Take Facebook's targeted ads, for example. Legally, they are within the law. Ethically, however, we look at some of those targeted ad programs, say 'I don't like that', and ask why they are doing this, why they are collecting my data, or whether my dignity is being violated. That reaction comes because it crosses an ethical boundary.

Soft ethics sits above the law, and it is just as important as legal compliance, because that layer of ethics above the law is what matches societal expectations. It is in going above and beyond the law that you earn your consumers' trust. If you are not ethically sound, you will not have the loyalty and trust of your consumers that you need to survive. Hence, at the end of the day, good ethics means good business.

You can also use ethics as a tool for innovation. So AI ethics is definitely not a hindrance; instead, it is a risk mitigator and an innovation stimulator.

AIM: What are the qualities an AI Ethicist should possess?

Gambelin: An AI ethicist must be empathetic. I sometimes joke that I am part therapist, because I start most meetings by telling the person, 'you are not a bad person'. We are going to look at the technology and how it can end up with unintended consequences, but that does not mean you are a bad person.

As an AI ethicist, you have to go in knowing that people will automatically put up a wall if you come across as accusatory, telling them that everything they did is wrong and how it is going to harm people. Of course someone is going to put up their walls, because if you do that you are basically telling them that they are a morally bad person. No one is going to listen to that. An AI ethicist should recognise that these are going to be hard conversations from the other end as well. You have to sit with the person with the assumption that they have good intentions; as AI ethicists, we want to realise those intentions in the technology, separating personal feelings from the business and the technology.

AIM: You mention in your study that the most important quality in an industry AI Ethicist is bravery. Why do you think that is?

Gambelin: Bravery is the most important quality. The reason it was written into the piece is that I talked with a number of fellow AI ethicists and it was a common theme. You can do your best work in the world, have a beautifully designed ethical protocol with a beautiful solution and the most comprehensive ethics charter, but if you walk into that room and are not brave enough to open your mouth, then all of that work won't get you anywhere. There can be a lot of confrontation, and there are times when an ethicist has to put their foot down and say that this is a 'hard no'. You have to be able to tell them that they may profit in the short term, but in the long term this will destroy the company. It is difficult to stand in a boardroom and say that we cannot pursue this AI system because it has long-term negative implications for the company's reputation. It can be terrifying to be the person in the room who has to say that.

AI ethicists are also used as scapegoats. Being brave enough to go and do your job, even though you run the risk of being made a scapegoat and having someone attack your reputation and career, is difficult. You have to be able to detach from your feelings, or your personal ethical framework, to handle these situations. Hence, bravery is one of the most important aspects, because you have to be able to open your mouth and speak. At the end of the day, it doesn't matter how good you are at your job; if you aren't working with people and putting it into action, no matter how difficult the situation might feel, your work is not going to get anywhere. It's going to be difficult the first few times, but the more you do it, the easier it becomes. You got to have a strong backbone to be in this career.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com