Are Easy-To-Interpret Neurons Necessary? New Findings By Facebook AI Researchers

“Are easy-to-interpret neurons actually necessary? It might be like studying automobile exhaust to understand automobile propulsion.”

Understanding the underlying mechanisms of deep neural networks (DNNs) typically relies on building intuition by emphasising sensory or semantic features of individual examples. Knowing what a model understands and why it does so is crucial for reproducing and improving AI systems.

Class selectivity is commonly used to analyse the properties of individual neurons and, through them, to understand neural networks. The researchers describe class selectivity as the preference of an individual neuron for a specific class of inputs, say, a neuron that activates for images of cats but not for other types of images. Selectivity is intuitive and easy to understand. However, it remains unclear whether class selectivity in individual units is necessary and/or sufficient for learning. To explore whether easy-to-interpret neurons actually hinder learning in deep learning models, the researchers at Facebook AI published three works covering interpretability and selectivity.
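
To make the idea concrete, below is a minimal sketch (in PyTorch, not the authors' code) of a commonly used class selectivity index: it compares a unit's mean activation on its most-activating class with its mean activation on the remaining classes, giving a value between 0 (no class preference) and 1 (responds to a single class only). The function name and tensor shapes are illustrative assumptions.

```python
import torch

def class_selectivity_index(activations: torch.Tensor,
                            labels: torch.Tensor,
                            num_classes: int,
                            eps: float = 1e-7) -> torch.Tensor:
    """Per-unit class selectivity.

    activations: (num_samples, num_units) non-negative (e.g. post-ReLU) activations.
    labels:      (num_samples,) integer class labels; every class should appear at least once.
    Returns a (num_units,) tensor of selectivity indices in [0, 1].
    """
    # Class-conditional mean activation for every unit: (num_classes, num_units)
    class_means = torch.stack([activations[labels == c].mean(dim=0)
                               for c in range(num_classes)])
    mu_max, _ = class_means.max(dim=0)                               # most-activating class, per unit
    mu_rest = (class_means.sum(dim=0) - mu_max) / (num_classes - 1)  # mean over all other classes
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)
```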

Why Class Selectivity Can Be Bad For AI

Despite the widespread association of class selectivity with interpretability in deep neural networks, the researchers noted, there has been surprisingly little research into whether easy-to-interpret neurons are actually necessary. Such research has only recently begun, and the results so far are conflicting, wrote the researchers at Facebook AI.

“…class selectivity in individual units is neither necessary nor sufficient for convolutional neural networks (CNNs).” 

The AI community has been swept by a new wave of interest in interpretability, with more researchers focusing on making AI understandable and intuitive. This has led, the Facebook AI team argues, to an over-reliance on intuition-based methods, which they warn can be misleading if not rigorously tested and verified. Models should be not just intuitive but also empirically grounded.

To probe the role of class selectivity directly, the researchers developed a technique that acts like a knob to increase or decrease the class selectivity of a network's neurons. While training a network to classify images, they added an incentive to decrease (or increase) the amount of class selectivity in its neurons. They did this by adding a class selectivity term to the loss function used to train the network, with a single parameter controlling how much weight the network gives to class selectivity.
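
A hedged sketch of how such a "knob" might look is given below: a mean selectivity term (reusing the class_selectivity_index helper sketched earlier) is added to the usual cross-entropy loss, and a single scalar alpha rewards selectivity when positive and penalizes it when negative. The function and argument names are illustrative assumptions, not the authors' exact implementation.

```python
import torch.nn.functional as F

def selectivity_regularized_loss(logits, hidden_activations, labels,
                                 num_classes, alpha):
    """Cross-entropy plus a tunable class selectivity term.

    alpha > 0 encourages class selectivity, alpha < 0 discourages it,
    and alpha = 0 recovers ordinary training.
    """
    ce = F.cross_entropy(logits, labels)
    # Mean selectivity across all units of the regularized layer
    mean_si = class_selectivity_index(hidden_activations, labels, num_classes).mean()
    # Subtracting alpha * selectivity makes higher selectivity lower the loss
    # when alpha > 0, and raise it when alpha < 0.
    return ce - alpha * mean_si
```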

This work underlines the importance of falsifiability, the foundation of all scientific endeavour, in AI research. The researchers showed how we can falter even while trying to make things more understandable, and why the intent behind ML interpretability must itself be interpreted correctly.

Key Findings

In one of the related papers, the authors discussed how interpretability research suffers from an over-reliance on intuition-based approaches that in some cases have led people to an illusion of progress.

Furthermore, to keep interpretability research falsifiable, the authors recommend remembering the "human" in "human explainability".

The experiments conducted by the Facebook AI team led to the following results:

  • Class selectivity is not integral to DNN function and can sometimes have a negative effect. 
  • Reduced class selectivity can make neural networks more robust against noise. 
  • Decreasing class selectivity also makes neural networks more vulnerable to targeted attacks in which images are intentionally manipulated in order to fool the networks.

More broadly, the researchers concluded, these results warn against focusing on the properties of single neurons as the key to understanding how DNNs function. The researchers also hope that future work will address the question: why do networks learn class selectivity if it is not necessary for good performance?

Read the paper on class selectivity here.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.