AI Transparency: Let’s Talk About AI Accountability

In recent years, academics and industry professionals have called for greater transparency into the inner workings of artificial intelligence (AI) models, and for many good reasons.

In a Harvard Business Review article, Andrew Burt, Immuta’s chief legal officer, points out that transparency can help mitigate problems of fairness, discrimination and trust. Apple’s new credit card, for example, has been accused of relying on sexist lending models, while Amazon scrapped an AI hiring tool after discovering it discriminated against women.

At the same time, it is becoming clear that disclosing information about AI poses its own risks: greater disclosure can make models more vulnerable to attack, and the more information a company reports, the more susceptible it may be to lawsuits or regulatory action.

“Let’s call it the AI transparency paradox: while generating more information about AI could bring real benefits, it could also create new risks. To navigate this paradox, organizations will need to think carefully about how they handle AI risks, the information they generate about these risks, and how that information is shared and protected,” says Burt.

Some recent studies illustrate these trends. A paper by researchers at Harvard and the University of California, Irvine, shows how LIME and SHAP, two popular techniques used to explain so-called black-box algorithms, could be hacked.

To illustrate the power of LIME, the 2016 paper introducing the tool showed how an otherwise incomprehensible image classifier recognized different objects in a single image: an acoustic guitar was identified by its bridge and parts of the fingerboard, while a Labrador retriever was identified by specific facial features on the right side of the dog’s face.
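For a sense of what this looks like in practice, here is a minimal sketch of querying the open-source `lime` package for an image explanation. The random image and the stand-in classifier are placeholders for illustration only; they are not the models or data from the 2016 paper.

```python
# A minimal sketch of asking LIME to explain an image prediction, assuming the
# open-source `lime` and `scikit-image` packages are installed. In practice,
# predict_fn would wrap the real black-box model being explained.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # placeholder RGB image, values in [0, 1]

def predict_fn(images):
    """Stand-in black-box classifier: returns per-class probabilities of
    shape (n_images, n_classes) for a batch of RGB images."""
    scores = rng.random((len(images), 3))
    return scores / scores.sum(axis=1, keepdims=True)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,            # the instance being explained
    predict_fn,       # the model whose prediction is being explained
    top_labels=1,     # explain only the highest-scoring class
    num_samples=200,  # perturbed samples used to fit the local surrogate
)

# Superpixels that most support the top class -- in the paper's example,
# regions like the bridge and fingerboard for "acoustic guitar".
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)
```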

LIME, and the explainable AI movement more broadly, has been praised as an advance capable of making opaque algorithms more transparent. Indeed, the benefit of explainable AI has become a widely accepted precept, promoted by academics and technology experts alike. The potential for attacks on LIME and SHAP, however, highlights an overlooked drawback: as the study illustrates, explanations can be intentionally manipulated, leading to a loss of confidence not only in a model but also in its explanations.
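To make the manipulation concrete, below is a toy sketch of the scaffolding idea described in that study: a wrapped classifier behaves in a biased way on real inputs but routes the synthetic perturbations that explanation tools generate to an innocuous model, so the resulting explanation hides the bias. The models, features and crude out-of-distribution check are invented for illustration and are not the authors’ code.

```python
# A toy sketch of a "scaffolded" classifier that fools post hoc explanations:
# biased behaviour on real inputs, innocuous behaviour on the perturbed
# samples an explainer generates. Everything here is made up for illustration.
import numpy as np

def biased_model(x):
    # The decision actually hinges on a sensitive feature (column 0).
    return (x[:, 0] > 0.5).astype(float)

def innocuous_model(x):
    # A harmless-looking decision based on an unrelated feature (column 1).
    return (x[:, 1] > 0.5).astype(float)

def looks_like_real_data(x):
    # Stand-in out-of-distribution detector: real inputs in this toy setup
    # are exactly 0/1 valued, while sampled perturbations rarely are.
    return np.all((x == 0) | (x == 1), axis=1)

def scaffolded_model(x):
    # Route real inputs to the biased model and explanation-time perturbations
    # to the innocuous one; only the latter shapes the explanation.
    real = looks_like_real_data(x)
    out = np.where(real, biased_model(x), innocuous_model(x))
    return np.stack([1.0 - out, out], axis=1)  # two-class "probabilities"
```

In the actual study the routing is done by a trained out-of-distribution classifier rather than a hard-coded check, but the effect is the same: the explainer only ever sees the innocuous behaviour.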

This research is not the only work demonstrating the potential dangers of transparency in AI. Recently, the researcher Reza Shokri and his colleagues illustrated how exposing information about machine learning algorithms can make them more vulnerable to attack. Meanwhile, researchers at the University of California, Berkeley, have shown that entire models can be stolen based solely on their explanations.

Burt says that as security and privacy researchers focus more energy on AI, these studies, along with many others, point to the same conclusion: the more model-makers reveal about an algorithm, the more damage a malicious actor can cause. This means that disclosing information about a model’s internal workings can actually decrease its security or expose a company to greater liability. In short, all disclosed data carries risk.

What is the positive side?

Burt notes that the good news is that organizations have long confronted the transparency paradox in privacy, security and other areas; they simply need to update their approaches for AI.

“To start, companies trying to use artificial intelligence must recognize that there are costs associated with transparency. Of course, this does not suggest that transparency is not worth achieving; it simply poses disadvantages that must be fully understood. These costs must be incorporated into a broader risk model that governs how to interact with explainable models and to what extent the information on the model is available to others,” he points out.

The expert says that organizations must also recognize that security is becoming a growing concern in the AI world. As AI is adopted more widely, more security vulnerabilities and bugs will surely be discovered. Indeed, security may prove to be one of the biggest long-term barriers to AI adoption.

Lastly, Burt notes that it is important to engage with attorneys as early and as often as possible when creating and deploying AI. He asserts that involving legal departments can help create an open and legally protected environment.


References

Burt, A. (2019). “The AI Transparency Paradox.” Harvard Business Review. https://hbr.org/2019/12/the-ai-transparency-paradox

Shokri, R., Strobel, M., & Zick, Y. (2019). “Privacy Risks of Explaining Machine Learning Models.” arXiv. https://arxiv.org/abs/1907.00164

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier.” arXiv. https://arxiv.org/pdf/1602.04938v1.pdf

Dr. Raul V. Rodriguez
Dean at Woxsen School of Business. He is a registered expert in Artificial Intelligence, Intelligent Systems and Multi-agent Systems at the European Commission, and has been nominated for the Forbes 30 Under 30 Europe 2020 list.
