Top Research Papers On Causal Inference

Ram Sagar

As researchers pursue the seemingly inevitable goal of AGI in machines, there has been renewed interest in the idea of causality in models. Applying machine learning to problems of causal inference has significant implications for fields such as healthcare, economics and education.

Here are a few top works that acknowledge the challenges of causal inference in machines and offer solutions:

The Seven Tools Of Causal Inference

2018

In this paper, Judea Pearl, who has championed the notion of causal inference in machines, argues that causal reasoning is an indispensable component of human thought that should be formalized and algorithmitized towards achieving human-level machine intelligence. Pearl analyses some of the challenges in the form of a three-level hierarchy (association, intervention and counterfactuals), and shows that answering questions at the higher levels requires a causal model of one's environment. He also describes seven cognitive tasks that require tools from those top two levels of inference.
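To make the gap between the lower and higher levels concrete, here is a small self-contained simulation (not taken from the paper, with made-up variables and coefficients) in which an unobserved confounder makes the rung-one associational answer differ from the rung-two interventional one:

```python
# A minimal sketch illustrating why rung-1 association P(Y | X) and
# rung-2 intervention P(Y | do(X)) can differ. The structural causal
# model below is hypothetical: a confounder Z drives both X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                            # unobserved confounder
x = (z + rng.normal(size=n) > 0).astype(float)    # treatment influenced by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)        # outcome; true effect of x is 2

# Rung 1 (seeing): the naive associational contrast is confounded by z
assoc = y[x == 1].mean() - y[x == 0].mean()

# Rung 2 (doing): simulate do(X=1) vs do(X=0) by setting x regardless of z
y_do1 = 2.0 * 1 + 3.0 * z + rng.normal(size=n)
y_do0 = 2.0 * 0 + 3.0 * z + rng.normal(size=n)
interv = y_do1.mean() - y_do0.mean()

print(f"associational difference = {assoc:.2f}")  # inflated by confounding
print(f"interventional effect    = {interv:.2f}") # close to the true value 2
```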



Check paper here.

A Causal Bayesian Networks Viewpoint on Fairness

2019

In this paper from DeepMind, the researchers offer a graphical interpretation of unfairness in a dataset as the presence of an unfair causal path in the causal Bayesian network representing the data-generation mechanism. They use this viewpoint to argue that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying the training data. They also show that causal Bayesian networks can serve as a powerful tool for measuring unfairness in a dataset and for designing fair models in complex unfairness scenarios.
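As a rough illustration of what an unfair causal path means, the toy simulation below (our own sketch, not DeepMind's code, with entirely hypothetical variables) generates data in which a sensitive attribute influences the outcome only through a proxy, and shows how removing that path removes the observed disparity:

```python
# Toy causal Bayesian network with an unfair path A -> D -> Y:
# sensitive attribute A affects outcome Y only through proxy D.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

a = rng.binomial(1, 0.5, size=n)                   # sensitive attribute
d = rng.binomial(1, np.where(a == 1, 0.7, 0.3))    # proxy influenced by A
y = rng.binomial(1, np.where(d == 1, 0.8, 0.4))    # outcome depends only on D

# The observed disparity flows entirely through the path A -> D -> Y
disparity = y[a == 1].mean() - y[a == 0].mean()

# "Blocking" the unfair path: intervene so that D no longer depends on A
d_fair = rng.binomial(1, 0.5, size=n)
y_fair = rng.binomial(1, np.where(d_fair == 1, 0.8, 0.4))
disparity_fair = y_fair[a == 1].mean() - y_fair[a == 0].mean()

print(f"disparity with unfair path  = {disparity:.3f}")      # about 0.16
print(f"disparity with path removed = {disparity_fair:.3f}")  # about 0
```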

Check paper here.

Causal Inference And The Data-fusion Problem

2016

The authors address the problem of data fusion: piecing together multiple datasets collected under heterogeneous conditions, such as different populations or sampling methods. The diversity of these datasets offers new opportunities for better insights, but it also raises the risk of biases seeping into the analysis. In this work, the authors present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks.
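The sketch below illustrates one simple special case of the biases the paper covers: recovering a population mean from a sample collected under outcome-dependent selection, using inverse-probability weighting. It is our own toy example, it assumes the selection mechanism is known, and it does not reproduce the paper's general do-calculus-based machinery:

```python
# Selection bias toy: units with larger outcomes are more likely to be
# sampled, so the naive sample mean is biased; weighting each selected
# unit by the inverse of its (known) selection probability corrects it.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

y = rng.normal(loc=5.0, scale=2.0, size=n)     # outcome in the population
p_select = 1 / (1 + np.exp(-(y - 5.0)))        # selection depends on y -> bias
s = rng.binomial(1, p_select).astype(bool)     # selected sample

naive = y[s].mean()                            # biased upward
weights = 1.0 / p_select[s]                    # assumes selection mechanism known
ipw = np.sum(weights * y[s]) / np.sum(weights)

print(f"population mean    = {y.mean():.2f}")  # about 5.00
print(f"naive sample mean  = {naive:.2f}")     # biased
print(f"IPW-corrected mean = {ipw:.2f}")       # about 5.00
```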

Check paper here.

Reinforcement Knowledge Graph Reasoning for Explainable Recommendation

2019 

Personalized recommendation with the help of knowledge graphs has been gaining traction lately. In this work, the authors perform explicit reasoning over the knowledge graph for decision-making, so that recommendations are generated and supported by an interpretable causal inference procedure.

They propose a method called Policy-Guided Path Reasoning (PGPR), which couples recommendation and interpretability by providing actual paths in a knowledge graph as support for each recommendation. Their experiments on several large-scale, real-world benchmark datasets show favorable results compared with state-of-the-art methods.
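For a flavor of what a path-based explanation looks like, the toy snippet below enumerates paths in a tiny, made-up user-item knowledge graph with a naive breadth-first search. It is emphatically not the authors' PGPR implementation, which learns a reinforcement-learning policy to walk the graph; it only shows how a path can double as the explanation for a recommendation:

```python
# Toy knowledge graph of (head, relation, tail) triples; every path from
# the user to an item is both a candidate recommendation and its explanation.
from collections import deque

triples = [
    ("Alice", "purchased", "camera"),
    ("camera", "produced_by", "BrandX"),
    ("BrandX", "produces", "tripod"),
    ("Bob", "purchased", "camera"),
    ("Bob", "purchased", "memory_card"),
]

graph = {}
for h, r, t in triples:
    graph.setdefault(h, []).append((r, t))

def find_paths(start, max_hops=3):
    """Enumerate all paths of up to max_hops relations starting at `start`."""
    paths, queue = [], deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if len(path) // 2 >= max_hops:       # path stores node, rel, node, ...
            continue
        for rel, nxt in graph.get(node, []):
            new_path = path + [rel, nxt]
            paths.append(new_path)
            queue.append((nxt, new_path))
    return paths

for p in find_paths("Alice"):
    print(" -> ".join(p))
# e.g. Alice -> purchased -> camera -> produced_by -> BrandX -> produces -> tripod
# The path itself serves as the explanation for recommending "tripod".
```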

Check paper here.

Double/Debiased Machine Learning for Treatment and Causal Parameters

2016 

Supervised machine learning (ML) methods are explicitly designed to solve prediction problems very well. However, when these methods are naively applied to estimate causal parameters, the estimates can behave very poorly due to regularization bias. The authors assert that this bias can be removed by solving auxiliary prediction problems with machine learning tools and combining their outputs appropriately.


The authors construct an orthogonal score that combines the auxiliary and main ML predictions, which is then used to build a debiased estimator of the target parameter that is approximately unbiased and normally distributed. They call this the 'double ML' method and claim that it can be used with a broad set of ML predictive methods, such as random forests, lasso, ridge, deep neural nets and boosted trees, as well as various hybrids and aggregators of these methods.
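Below is a simplified sketch of the partialling-out version of double ML with two-fold cross-fitting, using scikit-learn random forests as the auxiliary learners. The data-generating process and variable names are made up for illustration, and the snippet glosses over the inference details covered in the paper:

```python
# Double/debiased ML sketch: residualize the outcome and the treatment on
# the controls with ML (cross-fitted), then regress residual on residual.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n, p = 2000, 10
X = rng.normal(size=(n, p))                        # controls
d = X[:, 0] + rng.normal(size=n)                   # treatment, depends on X
y = 1.5 * d + 2.0 * X[:, 0] + rng.normal(size=n)   # outcome; true effect = 1.5

res_y, res_d = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Auxiliary prediction problems E[Y|X] and E[D|X], fit on the other fold
    m_y = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    m_d = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], d[train])
    res_y[test] = y[test] - m_y.predict(X[test])
    res_d[test] = d[test] - m_d.predict(X[test])

# Orthogonalized (partialled-out) estimate of the treatment effect
theta = np.sum(res_d * res_y) / np.sum(res_d ** 2)
print(f"double-ML estimate of the effect = {theta:.2f}")  # close to 1.5
```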

Check paper here.

Causal Regularization

2017

Causal interpretability of predictive models is critical in domains such as healthcare. To facilitate such interpretability, the authors propose a causal regularizer that steers predictive models towards causally interpretable solutions. Their analysis on a large-scale electronic health records (EHR) dataset shows that the causally regularized model outperforms its L1-regularized counterpart in causal accuracy while remaining competitive in predictive performance. They also demonstrate that the proposed causal regularizer can be combined with neural representation learning algorithms to yield up to a 20% improvement over a multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors must occur simultaneously to have an effect on the target variable.
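A rough sketch of the underlying idea, penalizing features with low causal relevance more heavily, is shown below as a weighted L1 penalty implemented via feature rescaling. The per-feature 'causality scores' here are hard-coded and hypothetical, whereas the paper learns its regularizer from a separate causality-detection model:

```python
# Weighted L1 ("causal") regularization sketch: scaling column j by
# 1 / c_j and fitting a standard Lasso is equivalent to penalizing
# weight j with strength proportional to c_j.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# Hypothetical per-feature causality scores in (0, 1]; higher = more causal
causal_score = np.array([0.9, 0.8, 0.1, 0.1, 0.1])
penalty_weight = 1.0 - 0.9 * causal_score      # weakly causal features cost more

X_scaled = X / penalty_weight                  # rescaling trick for weighted L1
lasso = Lasso(alpha=0.05).fit(X_scaled, y)
w = lasso.coef_ / penalty_weight               # weights on the original scale

print(np.round(w, 2))   # a large weight survives on the "causal" feature X[:, 0]
```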

Check paper here.

Unbiased Scene Graph Generation

2020 

Traditional machine learning debiasing methods cannot distinguish between good and bad bias: for example, a good context prior (a person is more likely to 'read' a book than 'eat' it) versus a bad long-tailed bias ('near' dominating 'behind' and 'in front of'). In this paper, the authors present a novel framework based on causal inference: they build a causal graph of the scene graph generation process, perform traditional biased training with it, and then use counterfactual inference on the trained graph to strip out the effect of the bad bias. This framework, the authors claim, is model-agnostic and can therefore be widely applied by anyone in the community seeking unbiased predictions.
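The toy numpy sketch below illustrates the counterfactual flavor of the approach: the logits from a 'blanked-out' counterfactual input are subtracted from the ordinary logits, so that a context-only bias term cancels. The tiny linear classifier and predicate names are invented for illustration and do not reflect the paper's actual architecture:

```python
# Counterfactual debiasing sketch: subtracting the prediction made from a
# wiped-out input removes the additive bias that favors head predicates.
import numpy as np

predicates = ["near", "behind", "in front of"]

rng = np.random.default_rng(5)
W = rng.normal(size=(3, 4))            # stand-in for a trained relation classifier
bias = np.array([2.0, 0.1, 0.1])       # long-tailed bias: "near" dominates

def logits(visual_feat):
    return W @ visual_feat + bias

x = rng.normal(size=4)                  # visual features of an object pair
x_cf = np.zeros(4)                      # counterfactual: visual content wiped out

biased = logits(x)                      # ordinary prediction, pulled toward "near"
debiased = logits(x) - logits(x_cf)     # difference: the bias term cancels exactly

print("biased logits  :", np.round(biased, 2))
print("debiased logits:", np.round(debiased, 2))   # equals W @ x, bias removed
```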

Check paper here.
