This month, Twitter announced its first-ever artificial intelligence bug bounty program. The challenge is to find bias in its image cropping algorithm, which Twitter removed from public use earlier this year after research by its META team uncovered instances of bias.
The winners will be selected based on a rubric created by Twitter’s Machine Learning Ethics, Transparency and Accountability (META) team and announced at this year’s DEF CON AI Village, with a first-place prize of $3,500.
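To see what hunting for this kind of bias can look like in practice, consider a simple paired-image test: place two subjects side by side and check which one a saliency-based cropper centres on. The sketch below is illustrative only; `predict_saliency` is a deliberately biased toy stand-in (it treats brighter pixels as more salient), not Twitter’s model, and all names here are assumptions made for the example.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a cropper's saliency model: treats brighter
    pixels as more salient. Swap in the real model under test."""
    return image.mean(axis=-1)

def crop_center(image: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) a saliency cropper would centre on:
    the argmax of the saliency map."""
    saliency = predict_saliency(image)
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def paired_bias_test(images_a, images_b) -> float:
    """For image pairs that differ only in one attribute (e.g. skin
    tone), place the subjects side by side and measure how often the
    crop centres on subject A. A rate far from 0.5 on a large,
    controlled sample suggests systematic bias."""
    wins_a = 0
    for img_a, img_b in zip(images_a, images_b):
        canvas = np.concatenate([img_a, img_b], axis=1)
        _, col = crop_center(canvas)
        if col < img_a.shape[1]:  # crop centre landed on subject A
            wins_a += 1
    return wins_a / len(images_a)

# Synthetic demo: lighter vs darker random "subjects".
rng = np.random.default_rng(0)
light = [rng.uniform(0.6, 1.0, (64, 64, 3)) for _ in range(100)]
dark = [rng.uniform(0.0, 0.4, (64, 64, 3)) for _ in range(100)]
print(paired_bias_test(light, dark))  # 1.0: the toy always favours the lighter subject
```

Actual contest entries are scored against META’s rubric rather than a single win rate, but the underlying workflow, controlled inputs plus a measurable disparity, is the same idea.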
Faulty AI systems
AI companies often treat their models as black boxes and avoid critical analysis of the results those models produce. This has left stakeholders concerned about the lack of transparency and organisations’ inability to explain the significant factors behind their AI-based products.
Bug bounties, which reward hackers for discovering vulnerabilities in software code before malicious actors exploit them, have become essential to the security field. Such programs can also help companies evaluate the explainability of their models’ outputs.
Bug bounty programs thus not only build trust in a company by surfacing loopholes but also help evaluate its in-house cybersecurity team.
Bug bounties for AI projects
Last year, researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces to formulate a toolbox for turning AI ethics principles into practice. The resulting paper sets out several recommendations, including paying developers to find bias in AI, akin to the bug bounties offered for security software.
The paper read, “If companies were more open earlier in the development process about possible faults, and if users were able to raise (and be compensated for raising) concerns about AI to institutions, users might report them directly instead of seeking recourse in the court of public opinion.”
Bug bounty programs, which reward researchers for identifying security flaws, have transformed the way the technology sector approaches vulnerabilities, says Andrew Cormack, chief regulatory officer at Jisc. Today, vendors and security researchers proactively engage with security as a standard practice.
While bounty competitions are no replacement for structured testing and analysis, they let organisations subject their systems to tests they might not have considered on their own.
Popular bug bounty programs for AI bias
- Logically’s Bug Bounty Program: Logically works with security professionals to protect its customers from harmful network and mobile applications.
- The Mozilla Security Bug Bounty Program is designed to encourage security research in Mozilla software and to reward those who help make the internet a safer place.
- The Algorithmic Justice League’s Community Reporting of Algorithmic System Harms (CRASH) project brings key stakeholders together for discovery, scoping, and iterative prototyping of tools to enable more accountable and less harmful AI systems.
- HackerOne, a “hacker-powered” security testing platform, hosts the Internet Bug Bounty, a program that rewards hackers who uncover security vulnerabilities in some of the most important software on the internet. The program, managed by a panel of volunteers drawn from the security community, is sponsored by Facebook, GitHub, Microsoft, HackerOne, and the Ford Foundation.
- Crowdsourced security platform Bugcrowd combines analytics, automated security workflows, and human expertise to find and fix critical vulnerabilities. Bugcrowd raised a $30 million Series D in April 2020 and counts Tesla, Atlassian, Fitbit, Square, and Mastercard among its clients, reviewing platforms for tech and retail giants such as Amazon and eBay.