How ‘Bias Bounties’ May Put Ethics Principles Into Practice

Bias bounties

In a recently published paper titled ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’, a team of researchers from Google Brain, Intel, OpenAI and other top labs in the US and Europe has proposed a toolbox for turning AI ethics principles into practice. The kit, aimed at organisations developing AI models, includes the idea of rewarding developers for successfully detecting bias in AI, much as security researchers are rewarded with bug bounties for finding software vulnerabilities. As per the authors of the paper, the bias bounty hunting community is still at a nascent stage, but it can be useful in discovering biases.
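The paper treats a bias bounty as a payout for a reproducible demonstration of bias and does not prescribe any tooling. Purely as an illustration of the kind of evidence a bounty hunter might submit, the sketch below (all names and data are hypothetical, not from the paper) measures the demographic parity gap between two groups in a model's decisions:

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Absolute difference in favourable-outcome rates between two groups.

    A large gap, together with the inputs that produced it, is the kind
    of reproducible evidence a bias bounty submission might contain.
    """
    preds = np.asarray(predictions)
    grp = np.asarray(groups)
    rate_a = preds[grp == 0].mean()  # favourable rate for group 0
    rate_b = preds[grp == 1].mean()  # favourable rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = favourable) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.50
```

A submission built around a statistic like this would let the developer reproduce the disparity before paying out the bounty.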

The idea of bias bounties was first suggested in 2018 by co-author JB Rubinovitz. The recently published paper suggests ten different approaches for turning AI ethics principles into practice. More than 80 organisations have already drawn up AI ethics principles of their own, yet the authors of the paper firmly believe that the present set of norms and regulations is insufficient for developing responsible AI. The team also advises ‘red-teaming’ to detect vulnerabilities, along with third-party auditing and government policies that create new regulations specific to market needs. The team makes several other recommendations, such as:

  • Create a centralised incident database by sharing AI incidents as a community
  • Maintain an audit trail during the development and deployment of AI systems for safety-critical applications
  • Apply stringent scrutiny to commercial models and support open-source alternatives to commercial AI systems
  • Better support privacy-centric techniques such as federated learning, differential privacy, and encrypted computation (see the sketch after this list)
  • Increase government funding so that researchers can verify hardware performance claims
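Of these techniques, differential privacy lends itself to a compact illustration. The paper itself contains no code, so the following is only a minimal sketch of the standard Laplace mechanism for releasing a count with a formal privacy guarantee; the function name and the epsilon value are illustrative assumptions, not drawn from the paper:

```python
import numpy as np

def laplace_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.

    The Laplace mechanism adds noise scaled to sensitivity / epsilon.
    A counting query changes by at most 1 when any single individual
    is added or removed, so its sensitivity is 1.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Each release is independently noised, bounding what any
# single record in the underlying data can reveal.
print(laplace_count(true_count=1234, epsilon=0.5))  # e.g. 1236.8
print(laplace_count(true_count=1234, epsilon=0.5))  # e.g. 1231.1
```

Smaller values of epsilon mean stronger privacy and noisier answers; federated learning and encrypted computation address the complementary problem of computing on data without centralising it.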

This paper amalgamates ideas from a workshop held in April 2019 in San Francisco, which brought together more than 35 representatives from industry labs, civil society organisations, and academia. The authors made the recommendations listed above after realising that the workshop had left open the question of how to verify certain claims made by AI practitioners.

As the use of AI has rapidly increased across businesses and institutions in recent years, concern and activism have grown around issues such as bias amplification, ethics washing, loss of privacy, digital addiction, facial recognition misuse, and disinformation. AI systems have also been shown to reinforce existing racial and gender biases, resulting in biased facial recognition by the police and poor healthcare services for millions around the world. As per a report by The Leadership Conference on Civil and Human Rights, the US Department of Justice's use of the PATTERN risk assessment tool, deployed to decide which prisoners to send home early as a way to enforce social distancing amid COVID-19, was heavily denounced after the tool was found to be racially biased.

The authors press the need to move beyond toothless principles that are incapable of holding developers accountable. The paper notes that with the rapid technological progress in AI and the spread of AI-based applications over the past several years, there is growing concern about how to ensure that the development and deployment of AI is beneficial, and not detrimental, to humanity.

“Artificial intelligence has the potential to transform society in ways that are both beneficial and harmful. Beneficial applications are more likely to be realised, and risks more likely to be avoided if AI developers earn the trust of society and one another. This report has fleshed out one way of earning such trust, namely the making and assessment of verifiable claims about AI development through a variety of mechanisms. If the widespread articulation of ethical principles can be seen as a first step toward ensuring responsible AI development, insofar as it helped to establish a standard against which behaviour can be judged, then the adoption of mechanisms to make verifiable claims represents a second,” the authors conclude.
