
Deep Learning Is Not So Much A Black Box Anymore, And That’s A Great Development

Image Source: MIT Technology Review

Although deep learning models have made remarkable progress on a range of tasks such as image recognition, speech recognition and language translation, the interpretability of these models has been the subject of various research papers. Netherlands-based company Riscure, which specialises in security services for connected and embedded devices, defines DL as an intelligent algorithm that analyses large data sets and identifies patterns using a deep neural network. Here, the results are achieved by training a network on a data set with a known result (generally a set of classes that has to be identified in the data). The trained neural network is then applied to a new data set to extract unknown features and classify them.

One of the most common industrial examples of a DL algorithm today is an image recognition system, trained on a large set of photos, that identifies objects. In this case, the network learns the relevant properties of the images on its own. According to one DL practitioner, deep networks perform better than expected on a wide range of tasks, and neural networks are powerful because they can be arbitrarily extended.

The networks define an enormous space of possible functions, and gradient descent finds a suitable choice within that space. So, with enough data and computing power, these models can approximate human-understandable logic or procedures and make accurate predictions. However, model interpretability, that is, identifying how a model arrives at a prediction, is an area that has gained considerable interest among researchers.
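To make the "function space plus gradient descent" intuition concrete, here is a minimal sketch, not taken from the article, that fits a tiny two-layer network to a toy regression problem with plain mini-batch gradient descent in NumPy; all sizes and hyperparameters are illustrative.

```python
import numpy as np

# Toy data: learn y = sin(x) from noisy samples (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# A tiny two-layer network: its weights parameterise a large space of functions.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Sample a mini-batch.
    idx = rng.choice(len(X), size=32, replace=False)
    xb, yb = X[idx], y[idx]

    # Forward pass.
    h = np.tanh(xb @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - yb) ** 2)

    # Backward pass: manual gradients of the squared error.
    g_pred = 2 * (pred - yb) / len(xb)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = xb.T @ g_h; gb1 = g_h.sum(0)

    # Gradient descent picks one function out of the huge space the network can represent.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training loss:", loss)
```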

With DL finding more commercial applications, many academicians have turned their attention to decoding its black box nature. Even though there are many methods that explain interpretability in terms of predictive features, developers often want to isolate a small set of training examples that have a high impact on a prediction. But as one Redditor points out, not every training example necessarily contributes to a given prediction.
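One simple way to approximate which training examples influenced a prediction is to score each training point by how similar its loss gradient is to the loss gradient at the test point, the idea behind gradient-based influence methods such as TracIn. The sketch below assumes a differentiable PyTorch model; the function names and calling convention are hypothetical, not from the article.

```python
import torch

def loss_grad(model, x, y, loss_fn):
    """Flattened gradient of the loss at a single example (illustrative helper)."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(model, train_set, x_test, y_test, loss_fn):
    """Rank training examples by gradient similarity to the test example."""
    g_test = loss_grad(model, x_test, y_test, loss_fn)
    scores = []
    for x_tr, y_tr in train_set:
        g_tr = loss_grad(model, x_tr, y_tr, loss_fn)
        # A large dot product means this training point pushed the model in the
        # same direction as the gradient at the test point.
        scores.append(torch.dot(g_tr, g_test).item())
    return scores
```

Sorting these scores surfaces the handful of training points most aligned with the test prediction, which is exactly the "small set of high-impact examples" developers look for.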

There has been considerable progress in several key areas, such as understanding what features neural nets learn, why GANs or autoencoders can learn general features rather than only label-dependent ones, and why networks generalise well even without explicit regularisation (it has already been shown that SGD acts as an implicit regulariser via its inductive bias).

A Look At Recent Research Papers That Break Down Algorithmic Explainability

DeepBase: Another brick in the wall towards unravelling the black box conundrum, DeepBase is a system that inspects neural network behaviours through a query-based interface. The paper presents DeepBase as a way to analyse recurrent neural network models and proposes a set of simple and effective optimisations that speed up existing analysis approaches by up to 413 times. As cited in the paper, the researchers grouped and analysed different portions of a real-world neural translation model and showed that it learns syntactic structure, which is consistent with prior NLP studies, and that the analysis can be performed with only three DeepBase queries.
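DeepBase's actual query interface is not reproduced here; the sketch below only illustrates the underlying idea of relating hidden units to a hypothesis, by scoring how strongly each RNN unit correlates with a syntactic signal such as "this token is a noun". The array names and the parser producing the tags are assumed inputs, not part of the system.

```python
import numpy as np

def unit_hypothesis_affinity(hidden_states, hypothesis):
    """
    hidden_states: array of shape (num_tokens, num_units), RNN activations per token.
    hypothesis:    array of shape (num_tokens,), e.g. 1.0 if the token is a noun else 0.0.
    Returns one affinity (Pearson correlation) score per hidden unit.
    """
    h = hidden_states - hidden_states.mean(axis=0)
    z = hypothesis - hypothesis.mean()
    cov = h.T @ z / len(z)
    std = h.std(axis=0) * z.std() + 1e-12  # guard against division by zero
    return cov / std

# Hypothetical usage: activations from a translation model, tags from a parser.
# scores = unit_hypothesis_affinity(states, is_noun)
# top_units = np.argsort(-np.abs(scores))[:10]   # units most aligned with "noun-ness"
```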

CapsNet Proposed By Hinton: According to O'Reilly, Geoffrey Hinton of the University of Toronto first introduced CapsNets in 2011 in his research paper titled Transforming Auto-encoders, which proposed to overcome the shortcomings of traditional Convolutional Neural Networks (CNNs). CNNs are trained on huge amounts of data and, in the process, the deep networks discover patterns but not hierarchies, whereas CapsNets are designed to incorporate hierarchies and train on less data. Hinton's paper argued that CNNs are misguided and that neural networks should use local "capsules" that perform complicated internal computations on their inputs and then encapsulate the results of these computations into a small vector of highly informative outputs. CapsNet is also more transparent and interpretable than CNNs, indicating how a feature was identified at each layer.
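The "small vector of highly informative outputs" can be made concrete with the squash non-linearity used in the later dynamic-routing formulation of capsules (Sabour et al., 2017): it keeps a capsule's output direction but compresses its length into [0, 1), so the length can be read as the probability that the entity is present. A minimal NumPy sketch, offered only as an illustration of the vector-output idea:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Squash capsule vectors: preserve direction, compress length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)          # long vectors -> length near 1, short -> near 0
    return scale * s / np.sqrt(sq_norm + eps)  # unit direction times the squashed length

# Example: a batch of 3 capsules, each an 8-dimensional pose vector.
capsules = np.random.randn(3, 8)
out = squash(capsules)
print(np.linalg.norm(out, axis=-1))  # lengths now lie strictly between 0 and 1
```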


Classifying Research in Explainability of AI

While the volume of research in explainable AI has expanded exponentially, the work has largely concentrated on two areas: explaining the representation of data inside a network, and explaining how the network processes data. Another key approach has been designing explanation-producing systems with architectures that simplify the interpretation of their behaviour.
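As an example of the second area (explaining how the network processes an input), a common technique is a gradient-based saliency map: the gradient of the predicted class score with respect to the input pixels highlights which pixels most affect the decision. A minimal PyTorch sketch, where `model` is any trained image classifier (an assumption, not a system from the article):

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient of the class score w.r.t. the input highlights influential pixels."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)  # add a batch dimension
    score = model(x)[0, target_class]                    # score of the class being explained
    score.backward()
    # Max over colour channels gives one importance value per pixel.
    return x.grad.detach().abs().squeeze(0).max(dim=0).values

# Hypothetical usage with a trained classifier and a preprocessed image tensor:
# sal = saliency_map(model, preprocessed_image, predicted_label)
```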

So far the research can be classified under these sections:

  • Research done to explain black box models
  • Research carried out to explain the black box outcomes
  • New methods designed for transparent systems  

Outlook

For AI systems to gain wider acceptance, trust and commercial use, it is imperative to provide satisfactory explanations of their decisions. There has been continuous research into establishing trust in DL systems, understanding the need for transparent explanations, and gaining insight into the decision-making process of neural networks. The DL community has also made considerable headway in the development of FAT (Fair, Accountable and Transparent) algorithms, which focus on the interpretability of the methods used to achieve the output.
