Causal Representation Is Now Getting Its Due Importance In Machine Learning

Researchers propose causal learning as a way for AI systems to overcome some of their most persistent challenges.

Bernhard Schölkopf and Stefan Bauer of the Max Planck Institute for Intelligent Systems; Francesco Locatello and Nal Kalchbrenner of Google Research; and Yoshua Bengio, Nan Rosemary Ke, and Anirudh Goyal of the Montreal Institute for Learning Algorithms (Mila) collaborated on the research.

The research paper, titled “Towards Causal Representation Learning”, describes how artificial intelligence systems can learn causal representations, and how the absence of such representations in today’s machine learning models gives rise to many of the challenges the field faces.

Consider an Example

Let’s look at the causal relations between different elements while observing a girl on a horse trying to jump over a barrier.

We can clearly observe that the girl, the horse, and the motion of their bodies are in unison: the girl pulls on the reins to guide the horse over the jump. As humans, we also naturally reason about alternatives: what would happen if the horse’s legs hit the barrier? What if the reins slipped out of the girl’s hands? These are counterfactuals, and it is natural for us to think this way. From childhood we have observed the world around us, learned from nature, and considered the other possibilities associated with an event. This intuition is basic to human cognition.

Machine learning algorithms can execute complex tasks, identify patterns in huge databases, play chess, and discover new molecules at lightning speed. Yet they fail to make the simple causal inferences we drew effortlessly while observing the scene above.

Researchers Pitch For Causal Models

As per the researchers of the paper, “Machine learning often disregards information that animals use heavily: interventions in the world, domain shifts, temporal structure — by and large, we consider these factors a nuisance and try to engineer them away. In accordance with this, the majority of current successes of machine learning boil down to large scale pattern recognition on suitably collected independent and identically distributed (i.i.d.) data.”

ML models are trained on predefined datasets, and engineers feed the system many examples to push its accuracy higher. A convolutional neural network (CNN), for instance, may be shown millions of images to learn to identify a particular object. But the moment the lighting conditions change, or a completely new background is introduced, it tends to make inaccurate predictions.

A simple intervention can change the statistical distribution of a problem. When the pandemic hit, it altered the lifestyles, tastes, and preferences of millions of people, and many machine learning systems began to fail because they lacked causal understanding. Causal models, by contrast, can remain robust after an intervention and respond to unexpected situations far more effectively.
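The way an intervention breaks a learned correlation can be sketched in a few lines of Python. The variables and numbers below are purely hypothetical, for illustration only: a hidden confounder C drives both X and Y, so they correlate strongly in observational data; once we intervene and set X from outside, the correlation that a purely pattern-based model relies on vanishes, even though Y’s own causal mechanism is unchanged.

```python
import random

random.seed(0)

def sample_observational(n):
    # Confounder C drives both X and Y, so X and Y correlate
    data = []
    for _ in range(n):
        c = random.gauss(0, 1)
        x = c + random.gauss(0, 0.1)
        y = c + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def sample_interventional(n):
    # do(X = x): X is set from outside, severing its link to C;
    # Y still follows the unchanged mechanism Y = C + noise
    data = []
    for _ in range(n):
        c = random.gauss(0, 1)
        x = random.gauss(0, 1)  # intervention replaces X's mechanism
        y = c + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def correlation(data):
    # Pearson correlation coefficient, computed from scratch
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data) / n
    vx = sum((x - mx) ** 2 for x, _ in data) / n
    vy = sum((y - my) ** 2 for _, y in data) / n
    return cov / (vx * vy) ** 0.5

obs = correlation(sample_observational(10_000))
intv = correlation(sample_interventional(10_000))
print(f"observational corr(X, Y): {obs:.2f}")    # strong (~0.99)
print(f"after do(X):  corr(X, Y): {intv:.2f}")   # near zero
```

A model that learned to predict Y from X on the observational data would fail after the intervention; a model that learned Y’s causal mechanism (Y depends on C, not X) would keep working.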

Better AI Systems Through Causal Models

In their paper, the researchers compile a list of concepts and principles that can help develop causal machine learning models, chief among them structural causal models (SCMs) and the principle of independent causal mechanisms (ICM). The basic idea is that instead of relying on fixed correlations in the training data, or on instructions fed to them, AI systems should be able to register causal variables and understand their effects on the environment separately.

As per the authors of the research paper, “Once a causal model is available, either by external human knowledge or a learning process, causal reasoning allows drawing conclusions on the effect of interventions, counterfactuals, and potential outcomes.” Moreover, by combining causal graphs and machine learning, AI agents would be able to build modules that can be applied to a variety of tasks without extensive retraining.
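Counterfactual reasoning with a structural causal model is often described as three steps: abduction, action, prediction. A minimal sketch, assuming a toy linear SCM Y = 2·T + U in which T is a treatment and U is exogenous noise (the mechanism and all numbers here are hypothetical, chosen only to make the steps concrete):

```python
# Hypothetical linear SCM: Y = 2*T + U, with exogenous noise U.
def outcome(t, u):
    return 2 * t + u

# Observed fact: treatment T=0 produced outcome Y=1.5.
t_obs, y_obs = 0, 1.5

# Step 1 (abduction): infer the noise consistent with the observation.
u = y_obs - 2 * t_obs          # U = 1.5

# Step 2 (action): intervene on the model, do(T = 1).
t_cf = 1

# Step 3 (prediction): re-run the unchanged mechanism with the inferred noise.
y_cf = outcome(t_cf, u)
print(y_cf)  # 3.5 -- "Y would have been 3.5 had T been 1"
```

The key point is that the mechanism itself never changes; the counterfactual answer comes from replaying the same mechanism under a different intervention, with the noise abduced from what was actually observed.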

The idea is for AI systems to understand the object in the data, such as an image, without being confused by lighting, design, or other background noise. This would eventually help systems handle variable inputs and produce more precise outputs.

Wrapping Up

Although the idea presented in this paper is still at a conceptual level, it is often contending ideas like these that lead to better models in the future. Causal learning can reduce adversarial vulnerability, strengthen reinforcement learning mechanisms, and bring machines closer to the way the human mind copes with unexpected circumstances.


Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.

