Council Post: Tackling ethical challenges in AI within organisations

The components of ethical AI should be incorporated from the product development stage. Every AI/ML product developed should be looked at from an ethical perspective.
As per the State of AI in 2021 report by McKinsey, 56 per cent of all respondents reported AI adoption in at least one function in their organisations, a six-percentage-point increase from 2020. The adoption rate is highest at Indian companies, followed closely by those in the Asia-Pacific region.

Image: McKinsey

AI is changing the game for a lot of companies; however, just like any other technology, it has its downsides. In the last few years, we have seen a rise in concerns about the risks associated with AI. The major challenges include:

  • Unexplainable models
  • Racial profiling and discrimination
  • Gender bias
  • Model drift

Latest controversies

In 2021, Meta apologised after its AI asked users who had watched a video featuring black men whether they wanted to keep seeing “videos about primates.” In 2020, a group of black content creators sued YouTube, alleging it used AI to censor their videos based on race.

As per the report “Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus”, the C4 dataset has been extensively ‘filtered’ in ways that disproportionately removed text by black and Hispanic authors. The filtering also removed material related to gay and lesbian identities.

In 2019, the US Department of Housing and Urban Development (HUD) said it was considering new rules that would “effectively insulate landlords, banks, and insurance companies that use algorithmic models from lawsuits that claim their practices have an unjustified discriminatory effect.”

So, what do we do about it?

Does it mean we abandon AI? The answer is obviously no. Instead, we need to take the right measures while deploying AI in business processes. Our focus should be to ensure best practices while building applications using artificial intelligence. As data is the building block for algorithms, it is crucial to have a set of guidelines and principles for the responsible deployment of AI technology.

Explainability and transparency

As AI is extensively used for decision-making processes in businesses, it is important to understand why an algorithm made a particular decision. The black-box nature of AI models is a recipe for trouble. Transparency and explainability help stakeholders understand the AI processes and decisions and adjust the model accordingly.
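To make the idea concrete, here is a toy sketch of why transparency matters: with a simple, interpretable linear scoring model, each feature's contribution can be read off directly, so stakeholders can see exactly why a decision was made. The feature names, weights, values and threshold below are all invented for illustration and do not come from any real system.

```python
# Toy sketch of explainability: in a transparent linear model, each
# feature's contribution (weight * value) shows exactly why a decision
# was made. All names, weights and values here are invented.
def explain_decision(weights, features, threshold=0.5):
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    return decision, contributions

weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
applicant = {"income": 0.8, "debt_ratio": 0.5, "years_employed": 1.0}

decision, contributions = explain_decision(weights, applicant)
print(decision)        # reject: score = 0.32 - 0.30 + 0.20 = 0.22 < 0.5
print(contributions)   # debt_ratio's -0.30 is what pulled the score down
```

A black-box model gives only the final "reject"; a transparent one also yields the per-feature breakdown, which is what lets stakeholders question and adjust it.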

Accountability and governance

The right governance processes are critical to building robust AI frameworks. Accountability in AI entails clearly defining the roles of the people involved in building and deploying the algorithms. AI governance requires model and application auditing, including documenting data sources and lineage, model facts, and information about the application's target audience.

Privacy

Security and the protection of privacy are major concerns. AI models are built on huge volumes of data, which often contain sensitive information such as race, gender and sexual orientation. Organisations must take cognisance of this and protect user data. Ideally, companies should disclose to end-users how their sensitive data is used.

Robustness

The robustness of an AI model entails how effectively its algorithms extract insights and how well it stands up to adversarial attacks.

Fairness and de-biasing

As human beings, all of us carry some sort of bias, both intentional and unintentional. These biases can creep into the data that forms the bedrock of AI systems. Organisations should step in to minimise algorithmic bias.
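One concrete way organisations can look for such bias is to measure outcome rates across groups. The sketch below illustrates a demographic parity check, one of several common fairness metrics; the group names and outcomes are synthetic and purely illustrative.

```python
# Minimal sketch of a demographic parity check: compare the rate of
# positive outcomes (e.g. loan approvals) across groups. A large gap
# flags a potential bias worth investigating. Data here is synthetic.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    rates = {g: positive_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 25% approved
}
gap, rates = demographic_parity_gap(outcomes)
print(f"approval rates: {rates}")
print(f"parity gap: {gap:.2f}")  # 0.50 - a gap this wide warrants a closer look
```

A nonzero gap is not proof of discrimination on its own, but it is exactly the kind of signal a de-biasing review should surface and explain.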

How to tackle such challenges?

The components of ethical AI should be incorporated from the product development stage. Every AI/ML product developed should be looked at from an ethical perspective. Questions around data collection, data privacy, transparency of the models etc should be addressed.

For instance, a simple post-hoc analysis of an AI model's decision-making can reveal biases. Accountability is another factor to be considered from the start. If something goes wrong while deploying an AI system, who should be held responsible: the company, the data scientists, the engineers who built it, or another stakeholder? Such decisions need to be made from the get-go.

Technology

The right processes can only be built with the right methodologies and technologies. This starts with hiring the right people. Attracting talent who are not only highly skilled at what they do but also understand the ethical and long-term impacts of deploying AI algorithms is key.

Improving the quality of data used and data preparation methods should also be a focus point for the company. Data scientists have to evaluate if the data they are using to build a solution is actually representative of the group they are catering to. 
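One simple way to start that evaluation is to compare each group's share of the training data against its share of the population the product serves. The sketch below is illustrative only; the group names, counts and population shares are invented, and real representativeness checks would go well beyond headline proportions.

```python
# Illustrative representativeness check: compare each group's share of
# the training data against its share of the target population.
# Negative gaps mean the group is under-represented. Numbers invented.
def representation_gaps(sample_counts, population_shares):
    total = sum(sample_counts.values())
    return {group: sample_counts.get(group, 0) / total - target
            for group, target in population_shares.items()}

training_data = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

gaps = representation_gaps(training_data, population)
for group, gap in gaps.items():
    status = "under-represented" if gap < -0.05 else "ok"
    print(f"{group}: {gap:+.2f} ({status})")
```

Here group_b and group_c each fall 10 percentage points short of their population share, a signal that the dataset may not represent the users the solution is meant to cater to.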

Stakeholders

AI is a multidisciplinary field; it does not consist of data scientists and ML engineers alone. It is important to have lawyers, AI ethicists, policymakers and chief data officers (CDOs) to get a 360-degree view of the use and deployment of a particular algorithm. Companies are slowly realising the importance of having a diverse set of people with different backgrounds in the AI workflow.

According to the NewVantage Partners 2021 survey, 65% of data firms have a CDO. A CDO is essentially a leadership role that manages the governance and management of data across an organisation. CDO’s role combines “accountability and responsibility for information protection and privacy, information governance, data quality and data life cycle management, along with the exploitation of data assets to create business value”, the report added.

An AI ethicist works with the technical team as well as the legal team to understand the challenges in deploying an AI model. After taking stock of the pros and cons of building a particular model and the repercussions it can have, they can decide how and where such AI systems should be deployed, along with what precautions the tech teams should take.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.

Aishwarya Srinivasan
Aishwarya is an AI & ML Innovation Leader at IBM. She works cross-functionally with the product team, data science team and sales to research AI use-cases for clients by conducting discovery workshops and building assets to showcase the business value of the technology. Aishwarya has founded a nonprofit organisation Illuminate AI that aims to provide mentorship, career guidance, and educational support to thousands in the community. She is also a board member for nonprofit organisations like AI for Good Foundation and AI Education project.

