“Causal reasoning is an indispensable component of human thought that should be formalized and algorithmitized towards achieving human-level machine intelligence.” – Judea Pearl
Incorporating insights from psychology research into algorithms is tricky, since human behavior is not exactly a quantifiable metric. But it can be quite useful, as algorithms are venturing into a world full of “trolley problems” in the form of self-driving cars and medical diagnosis.
Tobias Gerstenberg, assistant professor of psychology at Stanford, believes that by providing a more quantitative characterisation of a theory of human behavior and instantiating it in a computer program, we can make it easier for computer scientists to incorporate such insights into an AI system. Gerstenberg and his colleagues at Stanford have developed a computational model to understand how humans judge causation in dynamic physical situations.
About the model
Billiard board simulation experiment (Image credits: Paper by Gerstenberg et al.)
In their paper on the counterfactual simulation model (CSM) of causal judgment, the researchers begin by making three key assumptions:
- Causal judgments are about difference-making.
- Difference-making for particular events is best expressed in terms of counterfactual contrasts over causal models.
- There are multiple aspects of causation, corresponding to different ways of making a difference to the outcome, that jointly determine people’s causal judgments.
As a case study, the researchers first applied the CSM to explain people’s causal judgments about dynamic collision events. They considered a simulated billiard ball B, as shown above, that enters from the right, headed straight for an open gate in the opposite wall. Blocking the path, they placed a brick. Ball A then enters from the upper right corner and collides with ball B, which bounces off the bottom wall and back up through the gate.
So, now the question is: did ball A cause ball B to go through the gate? It’s obvious that without ball A, ball B would have run into the brick rather than go through the gate.
Without the brick in ball B’s path, however, it would have gone through the gate anyway, without any assistance from ball A. What is being checked here is the causal relationship between ball A and ball B in the presence and absence of an external factor. Gerstenberg and his colleagues ran such scenarios through a computer model designed to predict how a human evaluates causation. The idea is that people judge causation by comparing what actually happened with what would have happened in relevant counterfactual situations. Indeed, as the billiards example demonstrates, people’s sense of causation differs when the counterfactuals are different – even when the actual events are unchanged.
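The counterfactual comparison at the heart of this example can be sketched in a few lines of Python. This is a minimal toy, not the authors' implementation: `simulate` is a hypothetical stand-in for a physics engine, hard-coded to reproduce the billiard scenario described above, and `made_a_difference` performs the "whether" contrast of actual versus counterfactual outcome.

```python
def simulate(ball_a_present: bool, brick_present: bool) -> bool:
    """Toy stand-in for a physics engine: does ball B go through the gate?"""
    if ball_a_present:
        # A's collision deflects B off the bottom wall, around the brick,
        # and back up through the gate.
        return True
    # Without A, B travels straight: it scores unless the brick blocks it.
    return not brick_present

def made_a_difference(brick_present: bool) -> bool:
    """Counterfactual test: did ball A make a difference to the outcome,
    holding the rest of the scene fixed?"""
    actual = simulate(ball_a_present=True, brick_present=brick_present)
    counterfactual = simulate(ball_a_present=False, brick_present=brick_present)
    return actual != counterfactual

print(made_a_difference(brick_present=True))   # True: without A, B hits the brick
print(made_a_difference(brick_present=False))  # False: B would have scored anyway
```

Note that the actual events (A hits B, B scores) are identical in both calls; only the counterfactual changes, and with it the causal verdict.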
Extending CSM to AI
Now the researchers are working on extending the same counterfactual simulation model of causation to AI systems. The goal is to develop AI systems that understand causal explanations the way humans do. One envisioned demonstration is an AI system that analyses a soccer game and picks out the key events causally relevant to the final outcome: whether it was the goals that caused the win, or whether counterfactuals such as the goalkeeper’s saves contributed more. This is a task that would require the AI system to mimic the smartest of team managers. However, Gerstenberg admits that their research is still at a nascent stage. “We can’t do that yet, but at least in principle, the kind of analysis that we propose should be applicable to these sorts of situations,” he added.
In the SEE (the science and engineering of explanation) project, funded by Stanford HAI, the researchers are using natural language processing to develop a more refined linguistic understanding of how humans think about causation. Through their study of the CSM, the researchers have tried to answer the fundamental question: how do people make causal judgments? The results revealed that people’s judgments are influenced by different aspects of causation, such as whether the candidate cause was necessary and sufficient for the outcome to occur, as well as whether it affected how the outcome came about. By modeling these aspects in terms of counterfactual contrasts, the CSM accurately captures participants’ judgments in a wide variety of physical scenes involving single and multiple causes. The researchers believe that the CSM can be of great significance in many subfields of AI, including robotics, where AI is required to exhibit more common sense to collaborate with humans intuitively and appropriately.
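The necessity and sufficiency aspects mentioned above can themselves be framed as counterfactual probabilities, estimated by repeatedly sampling a noisy world model. The sketch below is a hypothetical illustration of that framing, not the paper's implementation: the world model `outcome`, its probabilities, and the function names are all invented for this example. Necessity asks how likely the outcome would have been absent had the cause been removed; sufficiency asks how likely the cause alone would have produced the outcome.

```python
import random

def outcome(cause: bool, background: bool, rng: random.Random) -> bool:
    """Hypothetical noisy world model: the outcome can be produced by the
    candidate cause or by a background alternative, each unreliably."""
    via_cause = cause and rng.random() < 0.9
    via_background = background and rng.random() < 0.3
    return via_cause or via_background

def necessity(background: bool, n: int = 20_000, seed: int = 1) -> float:
    """Estimate P(outcome would NOT have occurred had the cause been absent),
    holding the background fixed, via Monte Carlo counterfactual sampling."""
    rng = random.Random(seed)
    return sum(not outcome(False, background, rng) for _ in range(n)) / n

def sufficiency(n: int = 20_000, seed: int = 2) -> float:
    """Estimate P(the cause alone produces the outcome, alternatives removed)."""
    rng = random.Random(seed)
    return sum(outcome(True, False, rng) for _ in range(n)) / n
```

With these invented numbers, `necessity(background=True)` comes out near 0.7 (the background could still have produced the outcome), while `necessity(background=False)` approaches 1.0 and `sufficiency()` sits near 0.9, showing how the two aspects can dissociate and jointly shape a causal judgment.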