Making sense of ‘Black Box’ in Artificial Intelligence — should we trust AI completely?

Neural networks, machine learning algorithms, and other subsets of AI are finding their way into several critical domains, including healthcare, transportation, and law. These algorithms already affect people’s lives in more ways than one, from credit scoring and loan disbursal to skewed image matching.

With newer developments in the field, researchers are encountering a brand-new set of challenges. As AI algorithms grow more advanced, making sense of their inner workings becomes harder. Moreover, the companies that develop them rarely allow scrutiny of their proprietary algorithms, which is another reason AI is becoming increasingly opaque and complex.

Instances where AI has had an alarming impact:

  • An AI-powered opponent in the game Elite Dangerous went berserk and started creating super-weapons to hunt players.
  • Microsoft’s AI chatbot Tay started spewing out racist comments within a day of its launch.
  • Google’s image recognition produced offensive labels for people’s photos.
  • More notorious is the COMPAS recidivism algorithm, used to inform decisions about the freedom or incarceration of defendants passing through the US criminal justice system, which the investigative journalism site ProPublica alleged was biased against African Americans.

At the end of the day, the concern mostly stems from the lack of human control. When humans make a mistake, they can explain it or take responsibility for it. The same cannot be said for AI. Therefore, we need to make sense of how the underlying algorithms work. Writer Cathy O’Neil, author of Weapons of Math Destruction, discusses the downside of living in an algorithmic world where mathematical models have invaded our lives and spawned unaccountability. The underlying question is how far people should go in trusting neural networks and deep learning.

What is the ‘Black Box’ in AI?

Neural networks, the key components of AI applications such as image recognition, natural language processing, speech recognition, and machine translation, have long been regarded as a “black box” because it is hard to understand how their results are generated. Deployed in many real-world applications, neural networks loosely mimic the human brain but, unlike rules-based systems, are not transparent.

Engineers often cannot explain why their algorithms make certain decisions. Google’s Go-playing AI program AlphaGo is a brilliant instance: the program stunned the world when it executed moves even professionals couldn’t think of. The network organizes itself to do what it is instructed to do, but that doesn’t necessarily mean it can tell you how it was done.

Decoding the ‘Black Box’ technique

Training a neural network is never an easy task; it can take hours to get one ready, no matter how much computing power you use. To address this problem, researchers at OpenAI came up with the ‘Black Box’ technique, which promises more robust AI systems.

The technique uses so-called black-box optimization in place of standard reinforcement learning. The approach entails deliberately ignoring the fact that an environment and a neural network sit inside the ‘black box’. Essentially, the technique involves optimizing a given function in isolation and sharing results as necessary.

The approach starts with several random parameters, makes guesses, and then tweaks follow-up guesses to favor the more successful candidates, gradually narrowing things down to the best answer.
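
To make this concrete, here is a minimal sketch of that loop in the style of evolution strategies, the family of black-box methods the OpenAI work draws on. The toy objective, population size, and learning rate are illustrative assumptions, not OpenAI’s actual setup; the point is that the optimizer only ever sees a score for each parameter guess, never a gradient.

```python
import numpy as np

# Hypothetical black-box objective: we only get a score for a guess.
# The reward peaks when the parameters hit a hidden target.
TARGET = np.array([0.5, 0.1, -0.3])

def reward(params):
    return -np.sum((params - TARGET) ** 2)

npop, sigma, alpha = 50, 0.1, 0.03     # population size, noise scale, step size
theta = np.random.randn(3)             # start from random parameters

for step in range(300):
    noise = np.random.randn(npop, theta.size)                      # random guesses
    rewards = np.array([reward(theta + sigma * n) for n in noise]) # score each one
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)      # rank the guesses
    theta += alpha / (npop * sigma) * noise.T @ adv                # favor the winners

print(theta)  # converges toward TARGET
```

Each iteration simply shifts the parameters toward the perturbations that scored well, which is why no backpropagation through the network or the environment is needed.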

Benefits:

  • It eliminates a lot of the traditional craft in training neural networks, making the code both easier to implement and roughly two to three times faster.
  • The method scales elegantly as you throw more processor cores at a problem, because the ‘workers’ in this scheme only need to share tiny bits of data with each other (see the sketch after this list).
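
One way such workers can get away with sharing so little: if each perturbation is generated from a known random seed, any worker can reconstruct it locally, so only seeds and scalar rewards ever need to cross the wire. The function names below are hypothetical; this is a sketch of the idea under that assumption, not any specific library’s API.

```python
import numpy as np

SIGMA, ALPHA = 0.1, 0.03

def noise_from_seed(seed, dim):
    # Any worker can regenerate identical noise from just the seed.
    return np.random.default_rng(seed).standard_normal(dim)

# Worker side: evaluate one perturbed candidate, report only
# (seed, scalar reward) -- a few bytes, regardless of model size.
def evaluate(theta, seed, reward_fn):
    return seed, reward_fn(theta + SIGMA * noise_from_seed(seed, theta.size))

# Coordinator side: rebuild each perturbation from its seed and fold
# the scalar rewards into a single parameter update.
def combine(theta, results):
    seeds, rewards = zip(*results)
    rewards = np.asarray(rewards, dtype=float)
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    for seed, a in zip(seeds, adv):
        theta = theta + ALPHA / (len(seeds) * SIGMA) * a * noise_from_seed(seed, theta.size)
    return theta
```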

Besides, this technique has other far-reaching advantages. With the ‘Black Box’ technique, neural network operators can spend more time using their systems and less time training them.

It will be quite some time before the technique is used extensively in real-world AI applications. However, as computers get faster, the chances of such learning occurring in real time only increase. The technique will also assist in building robots that adapt quickly to new tasks and learn from mistakes.

Need for AI transparency

To make artificial intelligence more transparent as a technology, firms have to spread awareness about it and educate people about its applications. OpenAI is a great example of such a firm. The nonprofit research company, founded by Tesla’s Elon Musk and Y Combinator’s Sam Altman, aims to open AI research and development to everyone, independent of commercial interests.

Partnership on AI is another great example. Founded by tech giants including Microsoft, IBM, and Google, the organization aims to raise awareness of and address AI challenges such as bias, and will also focus on AI ethics and best practices.

Can we trust Artificial Intelligence?

Mistakes made in non-critical tasks, such as advertising, games, and Netflix suggestions, are tolerable, so AI can be used for such applications without fear of any major error. However, if such a mistake happens in the social, legal, economic, or political domain, the consequences can be heavy.

Thus, AI by itself cannot be completely trusted at the moment with applications like enterprise customer service, where transactions are involved, or computer-assisted clinical documentation improvement. In such scenarios, the AI works alongside a human being instead of working in isolation.

As we move into the future, we will have to embrace AI and develop trust in the technology. That trust will become necessary as AI diagnoses deadly diseases, makes million-dollar trading decisions, and does countless other things to transform whole industries. But to really trust the technology, we must better comprehend techniques such as deep learning and make them accountable to end users. The Black Box technique is a positive step in this direction, showing researchers a way to understand neural networks and how they work.

 

Amit Paul Chowdhury

With a background in engineering, Amit has assumed the mantle of content analyst at Analytics India Magazine. An audiophile most of the time, with a soul consumed by wanderlust, he strives ahead in the disruptive technology space. In another life, he would invest his time in comics, football, and movies.
