Private enterprises rely on algorithms for day-to-day decision making. Many have realised, or have been made aware of, the consequences of using algorithms that are biased against individuals or sections of society.
To prevent such incidents from recurring, many tech companies are taking steps to ensure FATE – Fairness, Accountability, Transparency, and Explainability – in their algorithms. Some have also developed tools not just for internal audits but also for sale to others.
While internal audits can help, they present a conflict of interest. An external audit or a third-party initiative, on the other hand, could help ensure FATE for algorithms in its true sense. Several entrepreneurs and researchers have taken initiatives in this direction.
Auditing & Consultancy
O’Neil Risk Consulting & Algorithmic Auditing (ORCAA) is a private consultancy firm that helps companies and organisations manage and audit algorithmic risk. Its audits answer two main questions – ‘What does it mean for the algorithm to work?’ and ‘How could this algorithm fail, and for whom?’ – aiming to incorporate and address the concerns of all of an algorithm’s stakeholders.
Along with audit services that identify issues of fairness, bias, and discrimination and recommend remediation steps, the firm provides legal expertise to public agencies and law firms pursuing legal action over algorithmic discrimination and harm. It also gives talks and training on algorithmic auditing and fairness.
Much as the food industry achieves transparency through the ingredient and nutrition labels on its products, Open Ethics Label helps companies achieve transparency by labelling various aspects of their algorithms.
AI algorithm owners can voluntarily disclose three main aspects of their systems for labelling: training data, algorithms, and decision spaces. Providing this information can also help these organisations understand the inherent biases in their data, evaluate the security and privacy risks associated with their code, and assess the robustness and safety of their algorithms’ decision spaces.
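Open Ethics defines its own label format, which is not reproduced here; the sketch below is a purely hypothetical illustration of what a machine-readable disclosure covering the three aspects above might look like. All field names and values are invented and do not reflect the actual Open Ethics schema.

```python
import json

# Hypothetical disclosure covering the three labelled aspects:
# training data, algorithms, and decision spaces.
label = {
    "training_data": {
        "sources": ["public web text", "licensed datasets"],
        "contains_personal_data": False,
    },
    "algorithms": {
        "model_family": "gradient-boosted trees",
        "open_source": True,
    },
    "decision_space": {
        "automated_decisions": ["loan approval"],
        "human_in_the_loop": True,
    },
}

# Serialise the label so it can be published alongside the product.
print(json.dumps(label, indent=2))
```

Publishing such a structured document, rather than free-form text, is what would let labels be compared across products, much like nutrition facts.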
Along with this, Open Ethics provides a rating called the Open Ethics Vector, which reflects the values on which data-driven decisions are made. It is based on a set of principles covering parameters such as values, guiding behaviours, and attitudes towards religion, gender, relationships, money, food, or health.
The vector provides transparency about the information on which ethical choices were made. End-users can test an application and get a vector rating by selecting their personal ethical preferences, which helps them decide which apps suit them best.
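Open Ethics does not spell out the matching mechanics here, but if a vector is a list of numeric scores over ethical dimensions, a user's stated preferences could be compared to each app's vector with a standard similarity measure. The following is a minimal sketch under that assumption; the dimension names, scores, and function names are all invented for illustration.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors (0 when either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_apps(user_prefs, app_vectors):
    """Rank apps by how closely their ethics vector matches the user's preferences."""
    scored = [(name, cosine_similarity(user_prefs, vec))
              for name, vec in app_vectors.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical dimensions: [privacy, fairness, transparency]
user = [0.9, 0.8, 0.7]
apps = {
    "app_a": [0.95, 0.85, 0.6],  # close to the user's priorities
    "app_b": [0.2, 0.4, 0.9],    # weights transparency over privacy
}
print(rank_apps(user, apps))
```

In this toy example, `app_a` ranks first because its vector points in nearly the same direction as the user's preferences.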
Open-sourcing Audit Algorithms
Pymetrics, a predictive analytics firm, feared it might be using a biased hiring algorithm. It therefore built ‘Audit AI’ to audit its own algorithm and has made the resulting tool publicly available to encourage others to use it.
Available for download from GitHub, the tool helps measure and mitigate biases, such as those introduced through training data, that unfavourably discriminate against people underrepresented in the dataset.
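Audit AI's actual API is not shown here; the sketch below instead illustrates one standard check of the kind such bias-measurement tools are built around, the "four-fifths rule" for disparate impact, under which a group whose selection rate falls below 80% of the reference group's rate is flagged for review. All data and function names are invented for illustration.

```python
def pass_rates(outcomes):
    """Per-group selection rate: fraction of candidates with a favourable outcome (1)."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.

    Under the common 'four-fifths rule', a ratio below 0.8 flags
    potential adverse impact against that group.
    """
    rates = pass_rates(outcomes)
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Hypothetical hiring outcomes: 1 = hired, 0 = rejected
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 hired = 75%
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 hired = 37.5%
}
ratios = disparate_impact_ratios(outcomes, "group_a")
print(ratios)  # group_b's ratio of 0.5 falls below the 0.8 threshold
```

A failing ratio does not prove discrimination by itself, but it tells an auditor where to look, which is exactly the kind of signal an automated audit surfaces.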
Initiatives To Encourage Inclusivity
One of the main reasons AI algorithms developed in the US end up biased is that they are built predominantly by one demographic: white men. Hence, aiming to increase the presence of Black people in the field of AI, Rediet Abebe and Timnit Gebru, both computer science academics, started a community called Black in AI. What began as a small group of individuals now has more than 1,200 members on its Facebook group.
Similarly, another initiative, AI4ALL, runs summer programs at prestigious universities to expose underrepresented groups to the possibilities of AI. The organisation believes that ‘when people of all identities and backgrounds work together to build AI systems, the results better reflect society at large.’
Research In AI Algorithms
AlgorithmWatch is a non-profit research and advocacy organisation supported by organisations like Bertelsmann Stiftung. It conducts in-depth research to shed light on ‘algorithms that have social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.’
It publishes stories and studies that describe automated systems and their social and technical impact in depth. The algorithms examined may be owned by public organisations or private enterprises.
Third-party auditing can remove the conflict of interest for firms that want to bring transparency and explainability to their algorithms, or to ensure their algorithms are fair and ethical in the true sense. However, private enterprises are not legally bound to have their algorithms audited or certified to a given level of fairness or transparency, or to maintain a certain threshold of diversity in their teams.
Hence, incentives to ensure the FATE of algorithms could increase the number of enterprises that get their algorithms audited, or encourage them to make their AI teams more inclusive. This would encourage others to follow as well.