
To Err Is AI, Too: Researchers Have Built Neural Networks That Tell When They Make A Mistake

“Neural networks are really good at knowing the right answer 99% of the time. But 99% won’t cut it when lives are on the line.”

Deep neural networks are good at recognising patterns in large, complex datasets to aid decision-making. But can we trust them to be correct all the time? Their successes are matched by high-profile mistakes and by the inherent inexplicability of the algorithms. So what if neural networks could be trained to work out for themselves when they are wrong? Alexander Amini and his colleagues at MIT and Harvard University explore this question in their latest work, “Deep Evidential Regression”, which was accepted at NeurIPS 2020.

The idea here is to estimate uncertainty in neural networks. The methods currently used to estimate this uncertainty are computationally expensive and too slow for split-second decisions. According to the authors, deep evidential regression accelerates the process and could lead to safer outcomes.


Overview Of Evidential Regression

(Source: Deep Evidential Regression, Alexander Amini et al.)

The researchers have developed a quick way for a neural network to crunch data and output not just a prediction but also the model’s confidence level, based on the quality of the available data. The network is designed to produce not only a decision but also a probabilistic distribution capturing the evidence in support of that decision. These distributions are called evidential distributions.

Evidential regression, the authors write, simultaneously learns a continuous target along with aleatoric (data) and epistemic (model) uncertainty. Given an input, the network is trained to predict the parameters of an evidential distribution, which models a higher-order probability distribution over the individual likelihood parameters. The authors highlight three properties of the approach:

  • First, the method enables simultaneous learning of the desired regression task, along with aleatoric and epistemic uncertainty estimation, by enforcing evidential priors and without leveraging any out-of-distribution data during training. 
  • Second, since the evidential prior is a higher-order Normal Inverse-Gamma (NIG) distribution, the maximum likelihood Gaussian can be computed analytically from the expected values of the (µ, σ²) parameters, without sampling. 
  • Third, the epistemic (model) uncertainty associated with the network’s prediction can be estimated by simply evaluating the variance of the inferred evidential distribution (a short sketch follows this list).
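To make the second and third points concrete, here is a minimal sketch (in PyTorch, assuming a generic upstream feature extractor; an illustration, not the authors’ released code) of an evidential head that emits the four NIG parameters (γ, ν, α, β). The prediction and both uncertainty terms then follow from those parameters in closed form, without sampling.

```python
# A minimal, hypothetical evidential-regression head: the network outputs the
# four Normal Inverse-Gamma parameters, and the prediction plus both
# uncertainties are read off analytically (no sampling needed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)  # one raw output per NIG parameter

    def forward(self, x):
        gamma, raw_nu, raw_alpha, raw_beta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)             # nu > 0
        alpha = F.softplus(raw_alpha) + 1   # alpha > 1 keeps the moments finite
        beta = F.softplus(raw_beta)         # beta > 0
        return gamma, nu, alpha, beta

def nig_moments(gamma, nu, alpha, beta):
    """Closed-form moments of the NIG evidential distribution."""
    prediction = gamma                      # E[mu]
    aleatoric = beta / (alpha - 1)          # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1))   # Var[mu]: uncertainty in the model
    return prediction, aleatoric, epistemic

# Usage on dummy features from some upstream backbone:
head = EvidentialHead(in_features=16)
features = torch.randn(8, 16)
prediction, aleatoric, epistemic = nig_moments(*head(features))
```

The softplus transforms are one common way to keep ν and β positive and α above one so the closed-form expressions stay finite; the exact parameterisation and loss used in the paper may differ.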

According to the authors, the method directly captures the uncertainty present in the underlying input data as well as in the model’s final decision. This can be used to work out whether uncertainty can be reduced by improving the neural network itself, or whether the input data are simply noisy.
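As a toy illustration of that diagnostic (the function name and the tolerance value are hypothetical, not taken from the paper), one could compare the two uncertainty terms to decide where improvement effort should go:

```python
# Hypothetical helper: decide whether high uncertainty stems from the model or the data.
def diagnose(aleatoric: float, epistemic: float, tolerance: float = 0.1) -> str:
    if epistemic > tolerance:
        return "model uncertainty dominates: more data or a better model may help"
    if aleatoric > tolerance:
        return "data noise dominates: cleaner inputs would help more than retraining"
    return "both terms are small: the prediction can be acted on"
```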


To show that the network can flag its own mistakes, the researchers checked whether it assigned higher uncertainty to “out-of-distribution” data, that is, entirely new types of images never included in training. The network was first trained on indoor home scenes and then fed a batch of outdoor driving scenes. The authors report that the network consistently warned that its responses to the novel outdoor scenes were uncertain. The test, they wrote, highlights the network’s ability to flag cases where users should not place full trust in its decisions. 
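One hedged sketch of how such a warning might be consumed downstream: calibrate a threshold on the epistemic uncertainty of held-out in-distribution data, then flag any input whose uncertainty exceeds it. The percentile and the stand-in scores below are illustrative assumptions, not values from the experiment.

```python
# Illustrative only: threshold epistemic uncertainty to flag out-of-distribution inputs.
import torch

def flag_uncertain(epistemic: torch.Tensor, threshold: float) -> torch.Tensor:
    """Boolean mask marking predictions the model itself does not trust."""
    return epistemic > threshold

# Calibrate the threshold on held-out in-distribution scores (stand-in values here).
indoor_epistemic = torch.rand(1000)
threshold = torch.quantile(indoor_epistemic, 0.99).item()

# Apply it to scores from novel scenes (again, stand-in values).
outdoor_epistemic = torch.rand(100) * 5
print(flag_uncertain(outdoor_epistemic, threshold).float().mean())  # fraction flagged
```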


Key Takeaways

According to the authors, the contributions of their work can be summarised as follows:

  • The method is widely applicable across regression tasks, including temporal forecasting, property prediction, and control learning.
  • Its ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.

The authors believe it is also important to recognise the potential societal challenges that may arise: with increased performance and uncertainty estimation capabilities, humans will inevitably place greater trust in a model’s predictions and in its ability to catch dangerous or uncertain decisions before they are executed.

Know more here.

