You Got To Have A Strong Backbone To Be In This Career: Olivia Gambelin, Founder, Ethical Intelligence

This is the 11th article in our weekly Expert’s Opinion series, where we talk to academics and practitioners who study AI and other emerging technologies and their impact on society and the world.

This week, we spoke to Olivia Gambelin, an AI Ethicist and Founder of Ethical Intelligence, a company that works towards making AI Ethics accessible to SMEs and entrepreneurs.



AI ethicists are all the rage now. However, there is a lot of confusion around what an AI ethicist’s role actually entails. Analytics India Magazine caught up with Gambelin to understand what it means to be an AI ethicist.

AIM: Why is it so hard to define an AI ethicist?

Gambelin: The role of an AI ethicist has been difficult to pin down because of the lack of education around what ethics is and is not in the context of tech. Our understanding of ethics is closely tied to our understanding of what it means to be human. Everyone assumes that if they can make ethical decisions in daily life, then of course they can make ethical decisions in business, and that is true. The problem, however, is the scale of those decisions. You are no longer making ethical decisions for yourself as an individual; you are making ethical decisions that will be replicated through AI.

We already see ethicists in the medical, political, and legal fields. All of these subfields of ethics are applied to specific industries to help those who work within them meet societal expectations. What is happening in AI, however, is that practitioners don’t realise that ethics is actually quite formulaic, and that there are people with expertise in managing the ethical and societal impact of their technology. It’s a very straightforward role when you break it down: it’s essentially about finding where an AI ethicist fits in the decision-making process within tech.

If you look at medical ethics, there are clearly defined roles for ethicists. In tech, that is still developing: what is the role, where does the decision-making power sit, is there leadership buy-in, and where in the team does an ethicist fit best? Computer scientists are often very opposed to having ethicists work with them, but we are just trying to make sure that your intentions for creating the technology are realised as it grows and expands.

AIM: How is the role of an AI ethicist different from a traditional ethicist?

Gambelin: There are many different subsets of ethicists; the largest group you will find is actually medical ethicists. Adding the contextual layer of AI makes the role more difficult because we are dealing with complex technology, and you need a technical understanding of it. I am an AI ethicist; I don’t code AI, but I have a strong working theoretical knowledge of it. I know the terminology, and I know how to ask coders questions about how they are developing their technology. With the AI context layer, you are taking high-level concepts like principles or values and applying them to specific situations. Hence, the difference between an AI ethicist and a traditional ethicist lies in the context of AI.

AIM: If an AI Ethicist does not really understand how to build AI, why and how do you think they can still be good at their jobs?

Gambelin: It is actually better if an AI ethicist does not code, because an ethicist who codes gets distracted by the code and stops paying attention to the high-level concepts. You want someone who isn’t caught up in the day-to-day grind of the system, someone who can step out and ask the whys of the project. We are not there to ask how; we are there to ask why. However, we do need to understand on a basic level how things work so that we can ask those whys. For instance, I need to know the difference between supervised and unsupervised learning, because the questions I ask depend on the techniques being used; I don’t need to know how to code them. Not coding actually brings in more perspective, because your entire user base is not AI programmers either.

AIM: Why should industries hire AI Ethicists, if it is not legally binding?

Gambelin: Good ethics means good business. A business can meet legal requirements, but that does not mean it is ethical; ethics is a step above the law. Take Facebook’s targeted ads, for example. Legally, they are within the law. Ethically, however, we look at some of their targeted ad programs and say, ‘I don’t like that,’ and ask why they are doing this, why they are collecting my data, and whether my dignity is being violated. That is because it crosses an ethical boundary.

Soft ethics sits above the law, and it is just as important as legal compliance, because that layer of ethics above the law is what matches societal expectations. It is in going above and beyond the law that you earn your consumers’ trust. If you are not ethically sound, you will not have the loyalty and trust of your consumers that you need to survive. Hence, at the end of the day, good ethics means good business.

You can also use ethics as a tool for innovation. AI ethics is definitely not a hindrance; instead, it is a risk mitigator and an innovation stimulator.

AIM: What are the qualities an AI Ethicist should possess?

Gambelin: An AI ethicist must be empathetic. I sometimes joke that I am part therapist, because I start most meetings by telling the person, ‘You are not a bad person.’ We are going to look at the technology and how it can end up with unintended consequences, but that does not mean you are a bad person.

As an AI ethicist, you have to go in knowing that people will automatically put up a wall if you come in on the attack, telling them that everything they did is wrong and is going to harm people. Of course someone is going to put up their walls, because you are basically telling them that they are a morally bad person, and no one is going to listen to that. An AI ethicist should recognise that these are hard conversations from the other end as well. You have to sit with the person assuming that they have good intentions; as AI ethicists, we want to realise those intentions in the technology, separating personal feelings from the business and the technology.

AIM: As you mention in your study, why do you think the most important quality in an industry AI ethicist is bravery?

Gambelin: Bravery is the most important quality. The reason it was written into the piece is that I talked with a number of fellow AI ethicists, and it was a common theme. You can do your best work in the world, have a beautifully designed ethical protocol with an elegant solution, and draft the most comprehensive ethics charter, but if you walk into that room and are not brave enough to open your mouth, none of that work will get you anywhere. There can be a lot of confrontation, and there are times when an ethicist has to put their foot down and say that this is a hard no. You have to be able to tell them that they may profit in the short term, but that in the long term this will destroy the company. It is difficult to stand in a boardroom and say that we cannot pursue this AI system because it has long-term negative implications for the company’s reputation. That can be terrifying for the person who has to say it.

AI ethicists are also used as scapegoats. Being brave enough to do your job even though you run the risk of being made a scapegoat, and of having someone attack your reputation and career, is difficult. You have to be able to detach from your feelings, and from your personal ethical framework, to handle these situations. Hence, bravery is one of the most important qualities, because you have to be able to open your mouth and speak. At the end of the day, it doesn’t matter how good you are at your job; if you aren’t working with people and putting your work into action, no matter how difficult the situation feels, it’s not going to get anywhere. It’s difficult the first few times, but the more you do it, the easier it becomes. You’ve got to have a strong backbone to be in this career.

Kashyap Raibagi
Kashyap works as a Tech Journalist at Analytics India Magazine (AIM).
