Just a few years ago, Will Smith, playing detective Del Spooner, said, “I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody’s baby – 11% is more than enough. A human being would’ve known that.” While it was only in the 2004 film ‘I, Robot’ that an AI decided to rescue Will Smith from the crash while leaving a schoolgirl to drown, such scenarios may well arise in the future.
If you were to search for ‘top writers in the world’ or ‘best leaders through history’ in any popular search engine, you would see a list of mostly men, complemented by one Jane Austen or Virginia Woolf. You would also see that the list comprises mainly ‘white’ authors. Even on the most basic search queries, the internet is biased. The internet, an echo chamber of 21st-century human biases, and the AI products built on it are far from neutral.
Human morality is variable and not always correct. AI systems are trained on data that encodes existing biases, leaving them far from having a moral conscience. But it is upon us to create a safer future by ensuring machines behave ethically. A wealth of content addressing ethical AI has been published in the past few years. To ease navigation, we have compiled a treasure trove of top AI guidelines and measurement tools to refer to when building an AI product or service.
UNESCO’s guidelines were adopted during the 41st session of UNESCO’s General Conference. The agency embarked on a two-year programme to create what it claims is the ‘first global standard-setting instrument on the ethics of artificial intelligence in the form of a Recommendation’. The recommendations outline values and principles of sustainability, transparency, explainability, human oversight, accountability, literacy, government collaboration and more. The document also outlines ten policy action areas and suggests monitoring and evaluation techniques.
The UK-based research centre’s eight principles are aimed at individual adoption when designing, building and operating ML systems. Designed by expert technologists, the principles take the form of eight pledges covering bias evaluation, reproducible operations, data risk awareness and more.
Created by experts in ethics, digital technology, law and human rights, along with leaders from Ministries of Health, WHO’s guidelines took close to two years in the making. The final published report outlines the ethical challenges and risks of AI in the health field. It also includes six principles and a set of recommendations and frameworks for governments across the globe to ensure AI’s benefits in healthcare are realised safely.
IBM has published two sets of AI guidelines. The first is ‘Everyday Ethics for AI’, focusing on the daily use of AI and the core areas of ethical focus for designers and AI developers. The paper outlines five key areas: accountability, value alignment, explainability, fairness and user data rights.
AI Fairness 360 is a Python toolkit aimed at an industrial audience. The accompanying paper examines AI fairness and describes a toolkit that brings fairness research algorithms into industrial settings. It consists of a framework and varied metrics for researchers to evaluate their algorithms.
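To make the idea of a fairness metric concrete, here is a minimal sketch in plain Python of two group-fairness measures of the kind such toolkits implement: disparate impact and statistical parity difference. The function names, the data layout and the sample predictions below are illustrative assumptions for this sketch, not the toolkit’s actual API.

```python
# Illustrative group-fairness metrics computed on hypothetical data.
# Each record is (protected_attribute, predicted_label), both binary:
# protected_attribute 1 = privileged group, 0 = unprivileged group;
# predicted_label 1 = favourable outcome (e.g. loan approved).

def favourable_rate(records, group):
    """Fraction of a group that received the favourable outcome."""
    outcomes = [label for attr, label in records if attr == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(records):
    """Ratio of favourable rates, unprivileged over privileged.

    Values near 1.0 suggest parity; the common '80% rule' flags
    ratios below 0.8 as a potential fairness concern.
    """
    return favourable_rate(records, 0) / favourable_rate(records, 1)

def statistical_parity_difference(records):
    """Difference of favourable rates; 0.0 indicates parity."""
    return favourable_rate(records, 0) - favourable_rate(records, 1)

# Hypothetical model decisions for eight individuals.
predictions = [(1, 1), (1, 1), (1, 0), (1, 1),
               (0, 1), (0, 0), (0, 0), (0, 1)]

print(disparate_impact(predictions))               # 0.5 / 0.75
print(statistical_parity_difference(predictions))  # 0.5 - 0.75
```

In this toy example the unprivileged group’s favourable rate is 0.5 against the privileged group’s 0.75, giving a disparate impact of about 0.67, below the 0.8 threshold. Real toolkits compute many such metrics over datasets and model outputs, and also offer mitigation algorithms.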
Oxford’s 2019 technical report proposes international standards to enable global coordination in AI R&D. The report outlines standards for the global governance of AI and provides recommendations for how governments around the globe can participate in AI research, focusing on the infrastructure needed for better design, development and research on AI standards.
The Singapore Government has published a Model AI Governance Framework that aims to increase trust in AI. The guiding principles cater to the private sector, with readily implementable guidelines addressing ethical and governance issues when deploying AI solutions. The report explains AI systems and guiding principles, and suggests data accountability practices and transparent communication.
The Institute of Electrical and Electronics Engineers has several works on the ethical considerations of AI and autonomous systems (AS). Their works cover governance during the various stages of data collection, privacy, putting principles into practice and algorithmic bias. Their paper on AI in business is catered towards businesses launching their first AI products or services. It sets out the AI ethics values businesses need and recommendations to ensure a sustainable culture, and includes an AI Ethics Readiness Framework to plot a business’s AI ethics preparedness. Additionally, IEEE’s ‘Measurement’ consists of podcasts, webinars and reports that define the various aspects of the ‘algorithmic age’.
8. The Alan Turing Institute’s Understanding Artificial Intelligence Ethics and Safety
The Alan Turing Institute’s guide is a close-to-100-page report on the responsible design and implementation of AI in the public sector. The guide, created by Dr David Leslie of the Institute’s public policy programme, is claimed to be the ‘most comprehensive guidance on AI ethics and safety in the public sector to date’. It provides the three building blocks of an ethical AI platform under acronyms such as the SUM values, the FAST track principles and the PBG framework. In addition, it looks at creating transparency in designing, producing and deploying AI projects, and at creating safety-first AI projects.
The Indian government’s think tank NITI Aayog has published an approach document for India in collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution. The two approach papers identify ‘broad ethics principles for design, development, and deployment of AI in India’ and present seven principles to ensure the same. NITI Aayog has also published papers showcasing the use of AI in various sectors, and its pilot study aims to assess the usability and usefulness of AI solutions and their adherence to Standard Treatment Guidelines in Indian healthcare.
While most guidelines address the pre-creation or pre-deployment stages of AI, ensuring trustworthy AI is a continuous process. With that in mind, organisations have also published metrics and checklists to measure AI ethics. Let’s explore some.
The UK government maintains a resource page outlining the various frameworks for the governance of AI in the public sector. Building on these principles, the City of London Police’s data ethics workbook looks comprehensively at each principle through a set of questions developers can ask themselves to determine the level of AI ethics their product meets.
Microsoft recently released a checklist describing fairness in AI as a ‘socio-technical’ challenge. The list consists of actions for developers to take during the various stages of the AI implementation process: prototyping, defining, building, launching and evolving. It also lists considerations to be wary of and to ensure at each of these stages.
The San Francisco Government has collaborated with Harvard to create a practical toolkit for cities. It aims to create awareness about algorithmic implications and to list possible risks along with suggestions to mitigate them. The risk management toolkit consists of a two-part assessment for identifying and evaluating algorithmic risks.
- Reference for further resources: Awesome production machine learning