
Understanding AI: Google Brain Scientist Been Kim Is Developing An AI Translator


As human beings, we want a reason for everything we do; we are wired to seek out the mechanics behind whatever we can. Why should we expect any less from the technology we build?



Been Kim, a research scientist at Google Brain, argues that we should expect nothing less from artificial intelligence. An expert in interpretable machine learning, her current goal is to build AI software that can explain itself to anyone.

Kim believes AI is at a critical moment, one where humankind is trying to decide whether this technology is good for us or not. “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology; we will be forced to drop AI as a technology in the future,” she said.

The Google Brain researcher approaches this problem as one of translation, using interpretability research to bridge the gap between AI and humans. Testing with Concept Activation Vectors (TCAV), a system she developed with her team, is what she describes as a translator for humans: it lets a user ask a black-box AI how much a specific, high-level concept factored into its final decision.

For example, if a machine-learning model is trained to identify zebras in images, a person can use TCAV to determine how much weight the system gives to the concept of “stripes” when making that call. Although TCAV was originally designed and tested on image-recognition AI, it can be adapted to other kinds of models too.
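At its core, TCAV represents a concept such as “stripes” as a direction in one layer’s activation space: a simple linear classifier is trained to separate that layer’s activations for concept examples from activations for random images, and the resulting coefficient vector is the concept activation vector (CAV). The sketch below illustrates that step with NumPy and scikit-learn; the get_layer_activations helper and the example image sets are assumptions for illustration, not part of any official TCAV release.

```python
# Hedged sketch: deriving a concept activation vector (CAV).
# Assumes a hypothetical get_layer_activations(images) helper that returns
# a (n_examples, n_features) array of activations from one layer of the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Train a linear classifier to separate concept vs. random activations.

    The coefficient vector of that classifier points in the direction of the
    concept inside the layer's activation space.
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)  # unit-length concept direction

# Usage sketch (all inputs hypothetical):
# stripes_acts = get_layer_activations(striped_texture_images)
# random_acts  = get_layer_activations(random_images)
# stripes_cav  = compute_cav(stripes_acts, random_acts)
```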

Key Features Of TCAV

  • TCAV was first tested on machine-learning models trained to recognize images, but it also works with models trained on text and on certain kinds of data visualizations, such as EEG waveforms.
  • TCAV can be plugged into machine-learning algorithms to extract how much weight they gave to different factors or types of data before producing their results.
  • Tools like TCAV are in high demand as AI comes under greater scrutiny for the racial and gender bias that plagues both the models and the training data used to develop them.
  • With TCAV, people using a facial-recognition algorithm could also determine how much it factored in race when, say, matching people against a database of known criminals or evaluating their job applications.
  • TCAV thus gives people the option to question, reject and even fix a neural network’s conclusions rather than blindly trusting the machine to be objective and fair.

 

How TCAV Works

TCAV can also be used to ask a trained model about concepts that should be irrelevant. For example, doctors using AI to make cancer predictions might notice that the machine seems to give positive predictions for a lot of images that contain a kind of bluish-coloured spot, which, to them, looks like an absurd basis for a diagnosis.


A careful doctor would want to know how much those bluish spots mattered to the model when it made positive cancer predictions. To find out, they collect a set of example images, say 20, that contain the bluish spots, and feed those labelled examples into the model.

  • TCAV then runs an internal process called sensitivity testing, producing a score between zero and one. The score indicates how often the concept (here, bluish spots) increased the probability of a positive cancer prediction.
  • That’s your TCAV score. If the score is high, the doctors have identified a problem in their machine-learning model; a minimal sketch of how such a score can be computed follows below.
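For concreteness, here is a hedged sketch of that scoring step. It assumes a hypothetical get_logit_gradients helper that returns the gradient of the “cancer positive” logit with respect to the same layer’s activations for each of the doctor’s example images; the TCAV score is then the fraction of examples whose directional derivative along the concept vector is positive. The names and data are illustrative, not the official TCAV implementation.

```python
# Hedged sketch: computing a TCAV score from directional derivatives.
import numpy as np

def tcav_score(logit_grads: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of examples whose prediction is pushed up by the concept.

    logit_grads: (n_examples, n_features) gradients of the class logit
                 with respect to the layer activations.
    cav:         (n_features,) unit-length concept activation vector.
    """
    directional_derivs = logit_grads @ cav      # sensitivity of each example
    return float(np.mean(directional_derivs > 0))

# Usage sketch (all inputs hypothetical):
# grads = get_logit_gradients(bluish_spot_images, target_class="cancer_positive")
# score = tcav_score(grads, bluish_spot_cav)  # near 1.0 => the spots matter a lot
```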

Outlook

Interpretability has two major branches. One is interpretability for science; the other is interpretability for responsible AI. The goal of the second branch is simply to understand a tool well enough to use it safely, and that understanding exists only once it is confirmed that relevant, useful human knowledge is reflected in the tool being used.

This is what the Google Brain scientist Been Kim has accomplished by building a tool that can help artificial-intelligence systems explain how they arrived at their conclusions, something that is otherwise nearly impossible to glean from ML algorithms.

Though the project is still in development, Kim is of the opinion that such a tool does not need to explain an AI’s decision-making process completely. For now, it is enough to have something that can flag potential issues and give human beings much-needed insight into where something may have gone wrong. For her, the goal of interpretability in machine learning is to tell whether a system is safe to use or not; ultimately, it is about revealing the truth.

 

