The Rise of Responsible AI

“Ethics is knowing the difference between what you have a right to do and what is right to do.” These words of Potter Stewart, the American lawyer and judge, ring truer than ever today, as we are surrounded by technology and connected virtually to everything and everyone on the internet, often as the product ourselves through social media services, in the age of Artificial Intelligence.

With petabytes and zettabytes of information floating around and easily accessible, it is all the more important to have proper data handling principles and policies in place to ensure that data is not driving the wrong decisions. These data sets can be corrupted easily and still retain the power to influence the decisions of algorithm-based AI applications, which can prove hazardous to a single person or to humanity as a whole, and can negatively impact the environment through bad decisions.

Below are some examples of how data-driven automation can derail processes if not handled properly, and why there is a need for ethics, transparency and traceability, driven through Responsible AI, in the field of Artificial Intelligence, so that the implementation of policies, principles and governance across different industries has a positive impact on society, businesses and the environment.

In the banking industry we have recently seen that the majority of female credit card applicants were either rejected or, for the chosen few, given lower credit limits, because the algorithms making the decisions in the back end were trained on biased data sets and were thus, unemotionally, reproducing that bias.

The insurance industry also has recent examples where customers were racially profiled based on their ethnicity and were either rejected or given poor coverage because of bias in the data set used to train the algorithms.

In the healthcare industry it is even more critical that AI-enabled decision-making systems make proper decisions, as it can easily become a matter of life and death. This is especially true during the COVID-19 pandemic, where tons of patient data needs to be processed to determine the right time and the right dosage of COVID medicines for the right candidate so that lives can be saved; if even a small amount of bias based on sex, race or age creeps in, it can bring down the entire program to save lives.

Similarly, in the public safety arena, using biased data to train AI that identifies criminals through cyber forensics can lead to the wrongful conviction of innocent people: the output of the software is influenced by racial and ethnic data points, introduced either because the code was not tested properly or because the wrong data sets were used for testing, destroying lives as a result.

Apart from bias in the data set, we have also seen that during application or transactional data processing there is often no transparency: why was this decision taken, which parameter influenced it, and what steps did the algorithm take to mitigate it? All of these questions can be answered by embedding explainability and transparency in AI design processes, providing understandability of the context and interpretability of the decision made by the AI.
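
To make this concrete, here is a minimal sketch (not from the original article) of what such interpretability can look like in practice: a simple model is trained on synthetic credit-decision data and permutation importance is used to report which inputs drove its predictions. The data, the feature names (income, age, existing_debt) and the choice of model are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: surfacing which inputs most influenced a credit-decision model.
# The synthetic data, feature names and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for application data: income, age, existing_debt.
feature_names = ["income", "age", "existing_debt"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Reporting a ranking like this alongside each decision does not make a model fully explainable on its own, but it is one simple way to answer the question “which parameter influenced it?” for both technical and business stakeholders.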

Thus we need Responsible AI: the practice of using AI with good intentions to empower employees and businesses and to impact customers and society fairly, allowing companies to engender trust and scale AI with confidence, while providing a framework to ensure the ethical, transparent and accountable use of AI technologies consistent with user expectations, organizational values and societal laws and norms.

Responsible AI is not just a technological discipline; it impacts, and requires consideration at, several levels –

  1. Operational level – It requires setting up governance and systems that will enable AI to flourish, with proper definitions of design principles and methodologies
  2. Technical level – Ensure systems and platforms are trustworthy and explainable by design, leveraging a shift-left approach so that best practices and principles are embedded from the inception of a project itself
  3. Organizational level – Democratize this new way of working, ensure human + machine collaboration, and define new roles with proper responsibilities and accountabilities for proper functioning
  4. Reputational level – Articulate the Responsible AI mission and ensure it is anchored to the company’s core values, ethical guardrails and accountability structure

And for Responsible AI to be successful across the board, the foundation must be correct, and it rests on TRUST:

T – Trustworthy, i.e. unbiased and diverse in nature

R – Reliable, i.e. thoroughly tested and proven to support the right decision making

U – Understandable, i.e. explainable and transparent in nature

S – Secure, i.e. having the right security to protect personal or critical information, along with supporting regulations

T – Teachable, i.e. human-centric in design, flexible to adapt and easy to adopt in complex environments

All these aspects can only be impactful when they become part of the day-to-day practices of Data Scientists, Data Engineers and Business Stakeholders, especially ML engineers, who need to make sure that:

  1. While developing models, they test the data set to identify bias and remove it (a simple check of this kind is sketched below)
  2. The code itself, whether it comes from online public libraries or is developed in-house, has been tested on the right data set
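
As a hedged illustration of the first point, the snippet below compares historical approval rates across a protected attribute before any model is trained. The DataFrame, the column names ("gender", "approved") and the 0.1 tolerance are hypothetical and would need to be replaced with the actual data set and whatever fairness definition the project has agreed on.

```python
# Hypothetical sketch: flagging a skewed data set before it reaches model training.
# Column names ("gender", "approved") and the 0.1 tolerance are illustrative only.
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Return the gap between the highest and lowest approval rate per group."""
    rates = df.groupby(group_col)[label_col].mean()
    print(rates)
    return float(rates.max() - rates.min())

# Synthetic stand-in for historical credit-card decisions.
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,    0,   1,   0,   1,   1,   1,   0],
})

gap = approval_rate_gap(data, "gender", "approved")
if gap > 0.1:  # tolerance agreed with stakeholders, not a universal constant
    print(f"Warning: approval-rate gap of {gap:.2f} suggests the training data is biased.")
```

Running a check like this in the data-validation stage of the pipeline is exactly the shift-left idea mentioned above: the bias is caught at inception rather than after a biased model has already made decisions.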

This has to go hand in hand with proper governance in IT and business teams, to make sure accountable and responsible stakeholders are identified to define the Responsible AI principles and are accountable for ensuring they are implemented properly, supported by the actions or to-do steps needed for recovery and damage control at both the organizational and reputational level.

Keep coding and be responsible!

Rudraksh Bhawalkar
Rudraksh (Rudy) Bhawalkar is an Analytics practitioner by core and currently works as Senior Principal within Accenture Applied Intelligence as part of the Solution Design team. He also leads the Responsible AI capability in Austria, Switzerland and Germany across all businesses. He has more than 14 years of experience in the field of Data, Analytics and Artificial Intelligence covering Delivery, Sales, Pre-Sales and Solution Architecture. He has published more than 35 articles on Artificial Intelligence, Analytics, IoT, Big Data and Digital Transformation, and is a public speaker at various CXO conferences in the Americas, Africa and India.
