Artificial intelligence is advancing at a rapid pace, to the point where it now makes important decisions for us. While this can be beneficial, AI algorithms that discriminate or carry bias into the decision-making process can have serious repercussions for individuals or entire sections of society.
Algorithms are, in the end, developed by human beings, and humans carry biases that can seep into those algorithms. This has happened in the past. As the tech enterprises developing these algorithms come under fire, many are taking initiatives to address the issue.
Analytics India Magazine collated a list of some of the top initiatives taken by tech firms.
Launch of Institute for Ethics in AI By Facebook
In January 2019, Facebook announced $7.5 million in funding for its new AI-ethics research centre, the Institute for Ethics in Artificial Intelligence, created in collaboration with the Technical University of Munich (TUM).
Drawing on interdisciplinary expertise from academia at TUM and industry at Facebook, the institute researches the practical application of ethical AI and addresses issues that come with its use, such as safety, privacy, fairness, and transparency.
Reviews Based On Defined AI Principles By Google
In 2017, Google started defining AI principles that were published in 2018 after multiple iterations.
Based on these principles, any employee of Google can approach the Fairness and Responsible AI team for an ‘AI Principle Review’. After identifying the potential benefits and harms of a particular project, the review team and product team then determine whether the project should be launched and if so, what ethical and thoughtful practices should be taken into consideration.
Diverse Hiring At Microsoft
Tech companies have also found that hiring a diverse staff can help reduce algorithmic biases. With an aim to do just that, Microsoft has gone beyond racial and gender diversity in hiring the team that develops Cortana, its AI-based virtual assistant.
The team members for Cortana development have included a poet, a playwright, a comic-book author, a philosophy major, a songwriter, a screenwriter, an essayist, and a novelist, whose professional skills equip them to write ‘upbeat language’ for the bots and anticipate diverse users’ reactions. This helps them come up with ‘pleasant’ and ‘non-judgemental’ responses for Cortana.
Addition of Responsible AI Module To Education Platforms By Salesforce
Trailhead, launched by Salesforce in 2014, is a free platform for upskilling the firm's employees and bridging skill gaps.
Last year, Salesforce came up with a new module called Responsible Creation of Artificial Intelligence, “to empower developers, designers, researchers, writers, product managers to learn how to use and build AI in a responsible and trusted way”.
An External AI Ethics Advisory Panel by SAP
The panel, consisting of experts from academia, politics, and industry, was created to ensure adoption of SAP's guiding principles and to develop them further in collaboration with the company's AI steering committee. The first such panel in Europe, the group's mandate was "to propose ethical guidelines relating to fairness, safety, transparency, the future of work, and democracy by early 2019."
An Internal AI Ethics Board By IBM
IBM has formed an internal AI Ethics Board, led by its AI Ethics Global Leader and the firm's Chief Privacy Officer.
IBM has put in place a centralised, multi-dimensional AI governance framework centred on the ethics board, which supports both technical and non-technical initiatives to operationalise the IBM principles of trust and transparency. It also advances efforts to tackle the many dimensions of trustworthy AI, including fairness, explainability, robustness, privacy, and transparency.
Science Festivals Under The FAIR Future Campaign By Samsung
Samsung UK started an initiative called the FAIR Future Campaign to organise science festivals, letting young people get hands-on experience with its latest technology and share their opinions on the ethical implications of AI.
The campaign, which reached more than 5,000 people across the country to gather their opinions on the future of AI, also planned to share the findings with the UK Government to help inform broader thinking about AI ethics.
‘Justice League for AI’ By Tech Giants
The Partnership on Artificial Intelligence to Benefit People and Society was formed in 2016 by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM; Apple joined in 2017.
The partnership defines four main goals: to develop and share best practices for researching and developing AI; to advance public understanding of AI; to open an inclusive discussion of AI that involves all key stakeholders; and to identify and foster aspirational efforts to apply AI to socially beneficial ends.
While this article highlights some of the biggest initiatives taken by tech companies to ensure fairness in their algorithms and produce ethical AI, the effectiveness of these efforts remains to be judged. Still, recognising the damage that unethical and biased algorithms can do, and taking initiatives accordingly, is a welcome step in the right direction.