“Ethics is knowing the difference between what you have a right to do and what is right to do.” These words of Potter Stewart, the American lawyer and judge, ring truer than ever today, as we are surrounded by technology and connected virtually to everything and everyone on the internet, often as the product ourselves, through social media services in the age of Artificial Intelligence.
With petabytes, even zettabytes, of information floating around and easily accessible, it is more important than ever to have proper data-handling principles and policies in place to ensure that data does not drive the wrong decisions. These data sets can be corrupted easily and still retain the power to influence the decisions of algorithm-based AI applications, which can prove hazardous to a single person or to humanity as a whole, and bad decisions can negatively impact the environment as well.
Below, I provide some examples of how data-driven automation can harm processes when it is not handled properly, and why we need ethics, transparency and traceability, driven through Responsible AI, in the field of Artificial Intelligence, so that society, businesses and the environment see a positive impact through the implementation of policies, principles and governance across different industries.
In the banking industry we have recently seen that a majority of female credit card applicants were either rejected or, for a chosen few, given lower credit limits, because the algorithms making the decisions in the backend were trained on biased data sets and were thus, unemotionally, perpetuating that bias.
The insurance industry has recent examples as well, where customers were racially profiled based on their ethnicity and either rejected or given poor coverage because of bias in the data sets used to train the algorithms.
In the healthcare industry it is even more critical that AI-enabled decision-making systems decide properly, as it can easily become a matter of life and death. During the COVID-19 pandemic, for example, tons of patient data needed to be processed to determine the right time and the right dosage of COVID medicines for the right candidate so that lives could be saved; if even a small amount of bias based on sex, race or age crept in, it could bring down the entire life-saving program.
Similarly, in the public safety arena, using biased data to train AI that identifies criminals through cyber forensics can lead to the wrongful conviction of innocent people: the software's output is influenced by racial and ethnic data points introduced because the code was not tested properly or was tested on the wrong data sets, destroying lives in the process.
Apart from bias in the data set, we have also seen that during application or transactional data processing there is often no transparency: why was this decision taken, which parameter influenced it, and what additional steps did the algorithm take to mitigate it? All of these questions can be answered by embedding explainability and transparency in AI design processes, providing understandability of the context and interpretability of the decisions AI makes.
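To make "which parameter influenced it" concrete, here is a minimal sketch of per-feature attribution for a simple linear decision model. The feature names, weights and baseline values are hypothetical, chosen purely for illustration; real systems use richer techniques (e.g. SHAP-style attributions), but the idea is the same:

```python
# Hypothetical linear credit-scoring model: weights and baseline
# (population-average) feature values are illustrative assumptions.
FEATURE_WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
BASELINE = {"income": 4.0, "debt_ratio": 2.0, "years_employed": 5.0}

def score(applicant: dict) -> float:
    """Linear score: higher means more likely to be approved."""
    return sum(FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS)

def explain(applicant: dict) -> dict:
    """Attribute the score difference from the baseline to each feature.

    For a linear model, weight * (value - baseline) is an exact
    decomposition: the contributions sum to
    score(applicant) - score(BASELINE).
    """
    return {
        f: FEATURE_WEIGHTS[f] * (applicant[f] - BASELINE[f])
        for f in FEATURE_WEIGHTS
    }

applicant = {"income": 6.0, "debt_ratio": 3.0, "years_employed": 2.0}
contributions = explain(applicant)
# The explanation names WHICH parameter drove the decision most:
top_factor = max(contributions, key=lambda f: abs(contributions[f]))
```

An auditor reading `contributions` can see exactly how much each input pushed the decision up or down, which is the transparency the paragraph above calls for.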
Thus we need Responsible AI: the practice of using AI with good intentions to empower employees and businesses and to impact customers and society fairly, allowing companies to engender trust and scale AI with confidence. Its purpose is to provide a framework that ensures the ethical, transparent and accountable use of AI technologies, consistent with user expectations, organizational values, and societal laws and norms.
Responsible AI is not just a technological discipline; it has impact and requires consideration at several levels –
- Operational level – Set up the governance and systems that will enable AI to flourish, with proper definitions of design principles and methodologies
- Technical level – Ensure systems and platforms are trustworthy and explainable by design, leveraging a shift-left approach so that best practices and principles are infused from the inception of a project
- Organizational level – Democratize this new way of working, ensure human + machine collaboration, and define new roles with proper responsibilities and accountabilities
- Reputational level – Articulate the Responsible AI mission and ensure it is anchored to the company's core values, ethical guardrails, and accountability structure.
And for Responsible AI to succeed across the board, the foundation must be correct, and that foundation is TRUST:
T – Trustworthy, i.e. unbiased and diverse in nature
R – Reliable, i.e. thoroughly tested and proven able to support the right decision making
U – Understandable, i.e. explainable and transparent in nature
S – Secure, i.e. having the right security to protect personal or critical information, along with supporting regulations
T – Teachable, i.e. human-centric in design, flexible to adapt, and easy to adopt in complex environments
All these aspects can be impactful only when they become part of the day-to-day practices of data scientists, data engineers and business stakeholders. Especially ML engineers, who need to make sure that:
- While developing models, they test the data set to identify bias and remove it
- They check the code itself, whether it comes from public online libraries or was developed in house, to see whether it has been tested on the right data sets
This has to go hand in hand with proper governance in IT and business teams: accountable and responsible stakeholders must be identified to define the Responsible AI principles and held accountable for implementing them properly, supported by actions or to-do steps that ensure recovery and damage control at both the organizational and reputational level.
Keep coding and be responsible!