While talking at Rising 2020, Phaedra Boinodiris, a member of the IBM Academy of Technology and a Fellow of the RSA, had an interesting take on the good, the bad and the ugly sides of AI and ethics, and on how biases can lead to AI acting in ways that are highly questionable.
A gamer by profession who worked in the gaming industry for a long time, the incident that shook her belief in the technology was the Cambridge Analytica case. When she learned that the analytics firm had misused data and misused AI, she took a break in her career to understand the ethical questions involved in technologies such as AI. She took up a PhD in AI and Ethics to understand the ins and outs of the technology.
Since then, she has been working to raise awareness of the importance of AI literacy for all and of the nuances that lead to bias in the technology.
While AI has made lives simpler and is helping solve many problems, including the fight against Covid-19, that does not mean its results are morally or ethically squeaky clean. She took us through the real dangers of unmitigated bias in AI and the steps that organisations can take to best leverage the technology while mitigating the risks.
Boinodiris highlighted that one of the key concerns is that people do not realise how prevalent bias in AI is. In this ignorance, the bias can further calcify, undermining almost all the systems where AI is used.
For instance, people perceive precision medicine to be one of the best ways to treat diseases, but even there, reports have shown that AI bias can significantly alter the expected results. Similar biases have been observed in studies of prisoners and their predicted likelihood of committing a crime, in surveillance, and more.
Boinodiris also brought some interesting observations from a survey by IBM on perceptions of responsible AI:
- It found that executives identify shared prosperity and impact on jobs as the least important ethical considerations related to AI.
- The second observation was that more than 60% of CHROs believe they have no minimum obligation to offer retraining or to invest in AI skills.
- The third insight was that over half the executives point to the CTO and CIO as primarily accountable for AI ethics in their organisations. These results are highly concerning and tell us that most people do not take creating responsible AI, or AI ethics, as seriously as they should.
What Can Be Done To Mitigate The Bias
Boinodiris shared many interesting steps that organisations and others, in general, can take to mitigate bias and address the concerns that prevail widely.
She pointed out that organisations can take the following steps in the fight against AI bias:
- Internalise AI ethics: diversity is important in dealing with AI bias
- Introduce forensic technology: every new set of data should undergo forensic analysis to make sure it is not biased
- Establish a diverse internal AI ethics board to provide governance, oversight and recommendations
- Ensure CEOs and the C-level team are fully aware of and engaged in AI ethics issues
- Assess AI's impact on skills and the workforce
- Embed ethical governance and training in all AI initiatives
- Ensure AI ethics is incorporated into mechanisms for institutionalising values
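The "forensic analysis" of new datasets mentioned above can be sketched as a simple pre-training check. The function, field names and threshold below are illustrative assumptions, not any specific IBM tool:

```python
def check_group_balance(records, group_key, threshold=0.4):
    """Flag groups whose positive-label rate deviates strongly from the
    overall rate -- a crude proxy for label bias in a new dataset."""
    overall = sum(r["label"] for r in records) / len(records)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r["label"])
    warnings = []
    for group, labels in by_group.items():
        rate = sum(labels) / len(labels)
        # Flag if the group's rate deviates from overall by more than 40%.
        if abs(rate - overall) > threshold * overall:
            warnings.append((group, rate))
    return overall, warnings

# Hypothetical hiring dataset: label 1 means "hired".
records = [
    {"gender": "F", "label": 0}, {"gender": "F", "label": 0},
    {"gender": "F", "label": 1}, {"gender": "F", "label": 0},
    {"gender": "M", "label": 1}, {"gender": "M", "label": 1},
    {"gender": "M", "label": 1}, {"gender": "M", "label": 0},
]
overall, flagged = check_group_balance(records, "gender")
# Overall hire rate is 0.5, but F (0.25) and M (0.75) both deviate by
# more than the threshold, so both groups are flagged for review.
```

A real forensic pipeline would examine many more properties (feature distributions, proxy variables, sampling gaps), but the idea is the same: audit the data before the model ever sees it.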
Outside of organisations, some of the steps that can be taken are:
- Educating students from the ground up about ethics in AI
- Setting guidelines and standards for ethics in AI
- Creating a unified approach and addressing the needs of affected citizens
- Forging a sustainable future to make AI more trustworthy and more trusted
Especially highlighting the importance of teaching responsible AI to kids, she said, “It is unfortunate that AI is often marketed as a tool for coders or mathematicians to know, but in reality, it should be taught to everyone. Whether in agriculture, policymaking or fashion, AI should be a must-know concept.”
She also noted that several tools are available, such as PAIR, that can help in understanding bias and fairness, mitigating hidden biases, and creating better databases to work with. These tools can also help in measuring accuracy. “No matter how we build our model, accuracy across these measures will vary when applied to a different group of people. Secondly, models trained on real-world data can encode real-world bias. These tools can help identify these hidden biases and fix problems of bias in future models,” she said.
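The per-group accuracy variation she describes is the kind of check such fairness tools automate. A minimal sketch of the idea, with hypothetical data (this is not PAIR itself):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return model accuracy broken down by group membership."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical labels and predictions for two groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = accuracy_by_group(y_true, y_pred, groups)
# Here group A scores 0.75 while group B scores 0.5: a large gap like
# this signals a potential fairness problem worth investigating.
```

Overall accuracy alone would hide this gap; slicing metrics by group is what surfaces the hidden bias she refers to.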
On a closing note, she said that there is no doubt that AI enhances and amplifies human expertise, automates decisions, improves overall efficiency, optimises employees’ time so they can focus on higher-value work, and even helps fight deadly pandemics such as Covid-19. It is therefore essential to understand and work in the field of AI in a way that benefits all.
Srishti currently works as Associate Editor at Analytics India Magazine. When not covering the analytics news, editing and writing articles, she could be found reading or capturing thoughts into pictures.