Tech giants are racing to outdo each other on the AI advancement front. Unfortunately, in doing so, they often fail to distinguish right from wrong. This is where ethical AI steps in. Artificial intelligence that adheres to guidelines grounded in fundamental values (privacy, non-discrimination, rights and morals) is termed ethical AI.
While we marvel at the advances tech giants have made in the field of AI, we often overlook the potentially terrifying long-term consequences of these technologies. According to MarketsAndMarkets, the global AI governance market is predicted to reach $1,016 million by 2026, growing at a CAGR of 65.5 per cent between 2020 and 2026. This growth can be attributed to increased government initiatives, the need to build trustworthy AI systems, and rising awareness around building transparent AI systems.
In the last couple of years, tech giants such as Google, Microsoft and IBM have started taking action against unethical AI practices and the use of technologies that can cause harm. Meanwhile, AI advocates have been vocal in their fight against unethical uses of AI. Today, we look at some of the prominent advocates of ethical AI from across the globe.
(The list is in no particular order.)
Software Developer II – Machine Learning at Microsoft, Abhishek Gupta is the Founder and Principal Researcher of the Montreal AI Ethics Institute, which carries out tangible and practical research on the ethical, safe and inclusive development of AI. Abhishek's research focuses on applied tech and policy methods to address ethical and inclusivity concerns in the use of AI across a range of domains. At Microsoft, Abhishek also serves on the Commercial Software Engineering Responsible AI Board.
Abhishek's work has been recognised by governments across North America, Europe, Asia and Oceania. Additionally, he works with an interdisciplinary group of experts to host workshops, conduct research, develop an AI ethics curriculum, and carry out AI ethics audits for different organisations.
Ansgar Koene is the Global AI Ethics and Regulatory Leader at EY, where he focuses on computational neuroscience, computational social science, AI policy development and engagement, and algorithmic accountability and transparency. His work centres on building regulatory tools that maximise the benefits of information technologies while minimising their negative impact on people and society at large.
Before EY, he was associated with the University of Nottingham for more than seven years, where he was the research co-investigator on the UnBias project. Ansgar is also a member of the AI Ethics Board at Hayden AI, which builds AI-powered platforms for smart and safe city applications, and serves on the Advisory Board of We and AI, an NGO working to increase public awareness and understanding of AI in the UK.
A PhD candidate at MIT Media Lab, Joy Buolamwini is the founder of the Algorithmic Justice League (AJL), an organisation combining art and research to highlight the social implications and potential harms of AI. An algorithmic bias researcher, Joy is best known for her research on AI inaccuracies in facial recognition technology (sold by IBM, Microsoft and Amazon) and automated assessment software.
Since 2016, AJL has been raising awareness about the impacts of AI by carrying out research, amplifying the voices of those affected, and inspiring industry practitioners to mitigate AI bias and harm.
Ethical AI advocate Saishruthi Swaminathan is a data scientist and co-creator of the R code for the AI Fairness 360 toolkit. An Electrical Engineering graduate from San Jose State University, she is currently associated with IBM as its Advisory Data Scientist – AI Strategy and Innovation.
Saishruthi was also a semi-finalist in the 2021 Silicon Valley Business Plan Competition for her work on democratising recruiting using AI.
Deborah Raji is the Founder and Executive Director of Project Include, a non-profit initiative that provides access to engineering education for underserved and immigrant communities. Presently a fellow with Mozilla, Deborah is a computer scientist and activist whose work focuses on algorithmic bias, AI accountability and algorithmic auditing.
A graduate in Engineering Science from the University of Toronto, Deborah featured in Coded Bias, a documentary on the fallout of MIT Media Lab researcher Joy Buolamwini's discovery of racial bias in facial recognition algorithms. She has also worked with Joy at the Algorithmic Justice League, and with Google's Ethical AI team and New York University on operationalising ethical considerations in ML engineering practices.
Kay Firth-Butterfield is the Head of AI and ML and a member of the Executive Committee at the World Economic Forum. Kay is also a barrister, former judge, professor and technologist. As an entrepreneur, she co-founded AI Global and, in 2014, became the world's first Chief AI Ethics Officer. Recognised as one of the foremost experts globally on the governance of AI, she is also the brain behind the #AIEthics hashtag on Twitter.
Kay is a member of the Technical Advisory Group at the Foundation for Responsible Robotics.
Raj Shekhar is the Founder of AI Policy Exchange, an international cooperative association of institutions and individuals working at the intersection of AI and public policy. At present, he is the Lead of Responsible AI at NASSCOM, where his work supports NASSCOM's efforts to define the responsible AI roadmap for India.
An alumnus of National Law School, Raj is also a member of the Founding Editorial Board of the AI and Ethics journal at Springer Nature, which promotes informed debate around the ethical, policy and regulatory implications of AI development. Additionally, he is affiliated with The Future Society at Harvard Kennedy School of Government, a non-profit think tank that examines the governance of emerging technologies.
AI ethicist Olivia Gambelin is the Founder and CEO of Ethical Intelligence (EI) Associates, Limited. EI promotes human-centric tech by making ethics accessible and affordable for everyone. Olivia is also a member of the Founding Editorial Board of the AI and Ethics journal at Springer Nature.
According to Olivia, AI is not inherently dangerous; it is how we use it that makes it dangerous. She believes ethics is the study of knowing and understanding the difference between good and evil, right and wrong, based on a system of values.