IIT Hyderabad Researchers Develop New Method To Understand Causality In Deep Learning

Researchers from the Indian Institute of Technology, Hyderabad (IIT-H) have developed a method by which the inner workings of artificial intelligence models can be understood in terms of causal attributes.

Artificial Neural Networks (ANNs) are AI models and programs that mimic the working of the human brain so that machines can learn to make decisions in a more human-like manner. Modern ANNs, often also called Deep Learning, have grown tremendously in complexity: machines can train themselves to process and learn from the data supplied to them as input, and nearly match human performance in many tasks. However, how they arrive at their decisions is unknown, making them less useful in applications where the reasons behind a decision must be known.

This work was performed by Dr Vineeth N Balasubramanian, Associate Professor, Department of Computer Science and Engineering, IIT Hyderabad, and his students Aditya Chattopadhyay, Piyushi Manupriya, and Anirban Sarkar. Their work was recently published in the Proceedings of the 36th International Conference on Machine Learning (ICML), considered worldwide to be one of the highest-rated conferences in the area of Artificial Intelligence and Machine Learning.

Speaking about this research, Dr Vineeth Balasubramanian said, “The simplest applications that we know of Deep Learning (DL) are in machine translation, speech recognition or face detection. It enables voice-based control in consumer devices such as phones, tablets, television sets and hands-free speakers. New algorithms are being used in a variety of disciplines including engineering, finance, artificial perception and control, and simulation. Much as the achievements have wowed everyone, there are challenges to be met.”

A key bottleneck in accepting such Deep Learning models in real-life applications, especially risk-sensitive ones, is the ‘interpretability problem.’ Because of their complexity and multiple layers, DL models become virtual black boxes that cannot be deciphered easily. Thus, when a problem arises in the running of a DL algorithm, troubleshooting becomes difficult, if not impossible, said Dr Vineeth Balasubramanian.

DL algorithms are trained on a limited amount of data that is most often different from real-world data. Furthermore, human error during training and spurious correlations in the data can introduce errors that are hard to detect and correct. “If treated as black boxes, there is no way of knowing whether the model actually learned a concept or whether its high accuracy was just fortuitous,” added Dr Vineeth Balasubramanian.

The practical implication of this lack of transparency in DL models is that end-users can lose trust in the system. There is thus a need for methods that can peer into the inner workings of AI programs and unravel their structure and functions. The IIT Hyderabad team approached this problem by analysing ANN architectures through causal inference, using what is known in the field as a ‘Structural Causal Model.’

Explaining this area of work, Dr Balasubramanian said, “Thanks to our students’ efforts and hard work, we have proposed a new method to compute the Average Causal Effect of an input neuron on an output neuron. It is important to understand which input parameter is ‘causally’ responsible for a given output; for example, in the field of medicine, how does one know which patient attribute was causally responsible for a heart attack? Our (IIT Hyderabad researchers’) method provides a tool to analyze such causal effects.”
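To make the idea concrete, the sketch below shows, in a few lines of NumPy, how an interventional (do-operator) estimate of the Average Causal Effect of one input feature on a model’s output could be computed: fix the chosen feature to an intervention value, average the model’s output over background samples, and subtract a baseline averaged over all intervention values considered. This is only an illustrative approximation under simplifying assumptions (a hypothetical trained `model` function, independent input features, plain Monte Carlo averaging); it is not the authors’ actual algorithm, which builds on a Structural Causal Model view of the network and uses more careful estimators.

```python
import numpy as np

def interventional_expectation(model, X, feature_idx, alpha, n_samples=1000, rng=None):
    """Monte Carlo estimate of E[y | do(x_i = alpha)].

    Draws background rows from the data X, fixes feature `feature_idx`
    to the intervention value `alpha`, and averages the model output.
    Treating the remaining features as independent is a simplifying
    assumption made only for this sketch.
    """
    rng = rng or np.random.default_rng(0)
    rows = X[rng.integers(0, len(X), size=n_samples)].copy()
    rows[:, feature_idx] = alpha
    return float(np.mean([model(r) for r in rows]))

def average_causal_effect(model, X, feature_idx, alphas):
    """ACE of input feature `feature_idx` on the model output.

    For each intervention value alpha, the ACE is the interventional
    expectation minus a baseline; here the baseline is the mean of the
    interventional expectations over all alphas considered.
    """
    expectations = np.array(
        [interventional_expectation(model, X, feature_idx, a) for a in alphas]
    )
    return expectations - expectations.mean()

# Toy usage: a hypothetical 'model' where only the first input matters causally.
if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(500, 3))
    model = lambda x: 2.0 * x[0] + 0.0 * x[1]   # stand-in for a trained network
    alphas = np.linspace(-2, 2, 9)              # intervention values for x_0
    print(average_causal_effect(model, X, feature_idx=0, alphas=alphas))
```

In this toy example the estimated causal effect grows linearly with the intervention value on the first feature and stays near zero for the others, which is the kind of attribution the method is meant to surface.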

Transparency and understandability of the workings of DL models are gaining importance as discussions around the ethics of Artificial Intelligence grow, added Dr Balasubramanian on the importance of his team’s work on ‘explainable machine learning.’ This matters because the European Union’s General Data Protection Regulation (GDPR) requires that an explanation be provided when a machine learning model is used for any decision made about its citizens, in any domain, be it banking, security or health.
