
Will The Latest IBM Proposal For Supplier’s Declaration Improve Transparency in AI Algorithms?


Deep learning has had an enormous impact on computer vision, natural language processing and many other fields. But deep learning models have also been plagued by a lack of explainability and transparency. The black-box nature of DL models is the chief cause of this non-interpretability. To overcome these shortcomings, researchers are now focusing on ‘Explainable AI’, through which scientists can understand DL models and trace how an output was arrived at. So far, DL models have achieved near-human accuracy in image recognition, but through brute-force techniques in which they are fed terabytes of data.

This is a cause for concern, particularly for two audiences: AI vendors, who care about the accuracy and reliability of their models, and consumers, who want to understand that accuracy and reliability. Both also care about parameters such as safety, transparency and security.

Now, researchers from IBM have put forth a proposal that would state how AI solutions and algorithms fare in terms of purpose, performance, safety, fairness and risk factors. IBM has proposed a supplier’s declaration of conformity (SDoC) for AI services, with the aim of increasing trust in AI services and their providers. According to the researchers, “We envision an SDoC for AI services to contain purpose, performance, safety, security, and provenance information to be completed and voluntarily released by AI service providers for examination by consumers. Importantly, it conveys product level rather than component-level functional testing. We suggest a set of declaration items tailored to AI and provide examples for two fictitious AI services.”

Naftali Tishby, one of the foremost thinkers and scientists working on understanding deep learning models, strongly believes that the black box of deep learning should be opened. He says, “The most important part of learning is actually forgetting.” This is quite apt, since his ideas suggest that leaving some details behind can help us build models that learn better and are easier to understand.

Supplier’s Declarations of Conformity

An SDoC is a declaration that asks suppliers to explain the steps they have taken to ensure performance, safety and security in their machine learning models. Each aspect is well defined (a rough sketch of what such reporting could look like follows the list below). As the researchers put it:

  1. Performance covers the issues of appropriate accuracy or risk measures along with timing information.
  2. Safety, entailing the minimization of both risk and epistemic uncertainty, will include explainability, algorithmic fairness, and robustness to concept drift.
  3. Security will cover robustness to adversarial attacks.
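To make these headings concrete, here is a minimal sketch, in Python, of the kinds of figures a supplier might gather for performance, safety and security. It is purely illustrative: the toy data, the stand-in model and the specific metrics (accuracy, per-sample latency, a demographic parity difference, and accuracy under random input perturbation) are assumptions for the sake of the example, not a format prescribed by IBM's proposal.

```python
# Hypothetical sketch (not IBM's actual SDoC format): the kinds of figures a
# supplier might compute and report under the performance, safety and
# security headings. Data, model and metric choices here are illustrative.
import time
import numpy as np

rng = np.random.default_rng(0)

# Toy "test set": features, labels, a sensitive attribute, and a stand-in model.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
group = (X[:, 1] > 0).astype(int)        # e.g. a protected attribute


def predict(features):
    # Stand-in for the deployed AI service being declared.
    return (features[:, 0] > 0).astype(int)


# Performance: accuracy plus timing information.
start = time.perf_counter()
pred = predict(X)
latency_ms = 1000 * (time.perf_counter() - start) / len(X)
accuracy = float(np.mean(pred == y))

# Safety (fairness): demographic parity difference between the two groups.
dp_diff = float(abs(pred[group == 0].mean() - pred[group == 1].mean()))

# Security (robustness): accuracy under small random input perturbations,
# a crude stand-in for a proper adversarial-robustness evaluation.
pred_noisy = predict(X + 0.05 * rng.normal(size=X.shape))
robust_accuracy = float(np.mean(pred_noisy == y))

print({"accuracy": accuracy,
       "latency_ms_per_sample": latency_ms,
       "demographic_parity_diff": dp_diff,
       "robust_accuracy": robust_accuracy})
```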

IBM researchers also propose that clients should know how the supplier created the machine learning model. The supplier should also provide information about the training data used and about how the model may behave in scenarios it has not encountered. The researchers further propose that the declaration be voluntary, but peer and market pressure would naturally attach greater trust to suppliers who undertake it.

One of the main effects would be that enterprises get clear information about an AI service while vendors improve their business prospects. The researchers also compare the proposal with similar regulations and declarations in other markets and industries. They point to standardisation organisations such as IEEE and ISO, which define standards for many products and services. In the US, the researchers cite the example of the Consumer Product Safety Commission (CPSC), which requires a manufacturer to make a declaration, in written or electronic form, stating that the product complies with all applicable standards.

Trust And AI Systems

According to the researchers, specific measures can help improve trust in AI:

  1. Applying safety and reliability engineering methods in AI services.
  2. Identifying AI-specific issues and solving them quickly.
  3. Building good tests and transparent reporting mechanisms.

The researchers also list specific questions that can be answered in an SDoC:

  1. What is the intended use of the AI service?
  2. What algorithms or techniques does this AI service use?
  3. Which datasets is the AI service tested on?
  4. Describe the testing methodology.
  5. Describe the results in detail.
  6. Are you aware of possible biases or ethical problems with the AI service?

These questions are by no means exhaustive, and many more can be added to the list; a hypothetical sketch of such a declaration is shown below.
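As an illustration only, the questions above could be captured in a simple machine-readable record that a supplier publishes alongside the service. The field names and the fictitious answers below are assumptions made for this example, in the spirit of the fictitious services IBM's researchers use; the proposal itself does not prescribe this format.

```python
# Hypothetical sketch of an SDoC answering the questions above, represented
# as a plain data structure. Field names and values are illustrative only.
import json

sdoc = {
    "intended_use": "Flag potentially fraudulent card transactions for human review.",
    "algorithms_or_techniques": ["gradient-boosted trees", "logistic-regression baseline"],
    "test_datasets": ["held-out transaction sample", "synthetic stress-test set"],
    "testing_methodology": "Time-based train/test split; metrics averaged over 5 runs.",
    "results": {"accuracy": 0.94, "false_positive_rate": 0.03},
    "known_biases_or_ethical_issues": (
        "Higher false-positive rate for low-volume merchant categories; "
        "not evaluated for use outside card-present transactions."
    ),
}

# Serialise the declaration so consumers can examine it alongside the service.
print(json.dumps(sdoc, indent=2))
```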

Conclusion

It remains to be seen whether AI suppliers take notice of and implement this kind of voluntary declaration, and how they cope with the effort and investment needed to produce it. The declaration may also keep growing in size, and demanding consumers and clients might discourage many AI suppliers. On the other hand, it is important to think ahead and put such policies in place so that AI becomes more understandable. Without such requirements, AI suppliers would be happy to build black-box AI and ship it to their customers. IBM’s proposal is therefore a good step in this conversation, with AI service SDoCs potentially ushering in a new era of trusted AI endpoints and bootstrapping broader adoption.

 

PS: The story was written using a keyboard.