With the amount of power vested in the hands of the world’s tech giants, it has become the need of the hour to have whistleblowers who can tell the bad from the good. Ethical AI refers to artificial intelligence (AI) that adheres to principles of non-discrimination, privacy, morality and human rights.
MarketsAndMarkets predicts that the global AI governance market will reach a valuation of $1,016 million by the end of 2026, growing at a CAGR of 65.5 per cent between 2020 and 2026. Factors including rising awareness, government initiatives and the demand for trustworthy AI systems are likely to contribute to this growth.
AIM has curated a list of ten women from across the globe who are leading the fight for ethical AI against the world’s big tech companies.
Timnit Gebru is one of the most well-known and respected personalities in the AI ethics space. She is the former co-lead of Google’s Ethical Artificial Intelligence team and was fired for allegedly co-authoring a paper on large language models and their ethical issues. Her work has been instrumental in important discoveries, including the detection of bias in facial recognition systems and the documentation of the lack of diversity in the tech industry.
Gebru has long been vocal about the control that big tech companies hold over the AI landscape. Recently, she launched the Distributed Artificial Intelligence Research Institute (DAIR), an independent, background-agnostic space for researchers to come together and set an AI research agenda focused on communities and lived experiences.
Kay Firth-Butterfield is a barrister and former judge, professor and technologist. At present, she is the Head of AI and Machine Learning and a member of the Executive Committee at the World Economic Forum. Kay works at the intersection of AI, international relations, policy, business and AI ethics.
She is a part of the Technical Advisory Group at the Foundation for Responsible Robotics and is recognised as being at the forefront of AI governance. She is also the co-founder of AI Global. Kay regularly speaks at international forums about the benefits and challenges of the technological, economic and social changes arising from the use of AI.
Kay started the popular Twitter hashtag #AIEthics and became the world’s first Chief AI Ethics Officer in 2014. Additionally, she was part of the team that created the Asilomar AI Principles.
Joy Buolamwini, founder of the Algorithmic Justice League (AJL), is focused on bringing together art and research to highlight the social implications and possible harms of AI. A PhD candidate at the MIT Media Lab, Joy’s discovery of racial bias in facial recognition systems is documented in the Netflix documentary Coded Bias. Her most significant work has been researching the inaccuracies of facial recognition technology sold by tech giants Microsoft, IBM and Amazon.
Joy has been prominent in the ethical AI scene since 2016, when she began using her research to raise awareness about the harmful impacts of AI.
Kate Crawford, principal researcher at Microsoft Research, is also the co-founder of the AI Now Institute at New York University. Her research focuses on AI in the context of politics, culture, labour and the environment, and she studies large-scale data systems.
In the past, Kate has written and spoken extensively on topics including government control over media content, social media, big data, young people and gender, and mobile devices.
Technology and social media scholar Danah Boyd is a Partner Researcher at Microsoft Research. She is also the founder of the Data & Society Research Institute and a visiting professor at New York University. Danah is also an author, and her work focuses on the sociality, identity and culture of young people on social networks.
Furthermore, Danah connects with researchers to work on the future of work, accountability in ML, cultural dynamics of AI, media manipulation, and combating bias.
Saishruthi Swaminathan is a Technical Lead and Advisory Data Scientist at IBM. She is an active ethical AI practitioner and has been one of the top contributors in the field. An electrical engineering graduate of San Jose State University, Swaminathan was a semi-finalist in this year’s Silicon Valley Business Plan Competition for democratising recruiting using AI. She is also a co-creator of the R code for AI Fairness 360.
Deborah Raji, Founder and Executive Director of the non-profit Project Include, is focused on providing access to engineering education for underserved and immigrant communities. A computer scientist, activist and fellow at Mozilla, Deborah’s work focuses on AI accountability, algorithmic auditing and bias.
Deborah has previously worked with Joy Buolamwini at the Algorithmic Justice League, at New York University and with Google’s Ethical AI team.
Rachel Thomas is the co-founder of fast.ai and, at present, serves as Professor of Practice at the Queensland University of Technology’s Centre for Data Science. The computer scientist is also the Founding Director of the Center for Applied Data Ethics at the University of San Francisco. She has studied unconscious bias in ML, and her work focuses on race and gender in datasets and algorithms.
Rachel was named one of Forbes’ 20 Incredible Women in AI in 2017; she also serves on the Board of Directors of Women in Machine Learning.
Olivia Gambelin is an AI ethicist and the Founder and CEO of Ethical Intelligence (EI) Associates. She promotes the democratisation of ethical and affordable AI and encourages the use of human-centric AI. According to her, ethics is nothing more than the study of the difference between good and bad.
Olivia is also a member of the editorial board of Springer Nature’s AI and Ethics journal. Her Twitter bio reads, “Making sure the robots don’t take over the world.”
Renee DiResta, Director of Research at New Knowledge, is also the Head of Policy at Data for Democracy. Renee investigates the spread of harmful content across social networks and helps policymakers understand and respond to it. She is a Research Manager at the Stanford Internet Observatory and regularly speaks and writes about influence operations and technology policy.