How to model uncertainty with Dempster-Shafer theory?

One of the main advantages of Dempster-Shafer theory is that it generates a degree of belief by taking all the available evidence into account.

Almost every model in machine learning is built from data, and uncertainty in the data propagates into the model itself. This uncertainty lowers our confidence in the predictions the model generates, so whenever it is present we need a way to model it in order to build a robust system. Various theories help improve results, but Dempster-Shafer theory is specifically designed to model the uncertainty of the model. In this article, we discuss Dempster-Shafer theory and see how it can be implemented in Python. The major points to be discussed in the article are listed below. 

Table of contents 

  1. What is Dempster-Shafer theory?
  2. Implementing Dempster-Shafer theory
    1. Importing package 
    2. Creating the conditions 
    3. Defining the mass 
    4. Making lattice 
    5. Calculating plausibility 
    6. Calculating belief

Let’s start by understanding Dempster-Shafer theory.    



What is Dempster-Shafer theory?

In most machine learning modelling, uncertainty plays an important role: it teaches us not to trust the model blindly. When a trained machine learning algorithm makes a decision, we cannot trust it completely; there is always some uncertainty in the decision. This uncertainty can be measured using Dempster-Shafer theory. 


We can also refer to this theory as the theory of belief functions or evidence theory. Historically, it was first introduced by Arthur P. Dempster and later developed by Glenn Shafer. Dempster's original work concerned statistical inference, while Shafer's work framed it as a general model of uncertainty, which is what we now mainly know as the theory of evidence.

One of the main advantages of this theory is that we can use it to generate a degree of belief that takes all the available evidence into account. This evidence can come from different sources, and the degree of belief is computed by a mathematical function called the belief function.   

We can also think of this theory as a generalization of the Bayesian theory of subjective probability. Degrees of belief may or may not have the mathematical properties of probabilities, and the theory gives us a way to answer questions that would otherwise be posed in terms of probability theory alone. 

This theory rests on two fundamentals: the degree of belief and the plausibility. We can understand them with an example.

Let’s say we have a person showing covid-19 symptoms, and we hold a belief of 0.5 in the proposition that the person is suffering from covid-19. This means the evidence makes us think the proposition is true with a confidence of 0.5. At the same time, there is contradicting evidence that the person is not suffering from covid, with a confidence of 0.2. The remaining 0.3 is indeterminate: it is assigned to the proposition that the person is either suffering from covid or not, without committing to either side. This remainder is what represents the uncertainty of the system given the evidence. In this theory, we need to calculate the following quantities: 

  • Mass: the subjective probability assigned to each subset of hypotheses 
  • Belief: the amount of support for a hypothesis. In the example, 0.5 is the belief in the proposition that the person has covid-19.
  • Plausibility: an upper bound on the possibility that the hypothesis could be true 
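These three quantities can be sketched in plain Python. The code below is a minimal illustration, independent of any package; the set names and mass values follow the covid example above, and belief and plausibility are computed directly from their definitions (mass of all subsets of a hypothesis, and mass of all sets intersecting it, respectively):

```python
# Masses assigned to the focal sets of the covid example.
masses = {
    frozenset({"covid"}): 0.5,              # evidence for covid
    frozenset({"no_covid"}): 0.2,           # evidence against covid
    frozenset({"covid", "no_covid"}): 0.3,  # "either" (uncommitted mass)
}

def belief(hypothesis, masses):
    # Bel(A): total mass of focal sets entirely contained in A.
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(hypothesis, masses):
    # Pl(A): total mass of focal sets that intersect A at all.
    return sum(m for s, m in masses.items() if s & hypothesis)

covid = frozenset({"covid"})
print(belief(covid, masses))        # only {covid} is a subset: 0.5
print(plausibility(covid, masses))  # {covid} and the "either" set: 0.5 + 0.3
```

Note that plausibility is always at least as large as belief: every focal set contained in a hypothesis also intersects it.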

Let’s refer to the table below: 

Hypothesis                    Mass    Belief    Plausibility
Suffering                     0.5     0.5       0.8
Not suffering                 0.2     0.2       0.5
Either (suffering or not)     0.3     1.0       1.0

In the above table, we can see the two fundamentals of Dempster-Shafer theory computed from the masses of the example. The image below is a representation of the belief function.


In the image, we can see that we have 3 elements, with the mass, belief and plausibility indicated for every subset. The notations in the image are as follows: 

  • Q = set of elements 
  • M = mass
  • Pl = plausibility 
  • Bel = belief 

This image represents an observation of evidence about conditions a and b: the evidence states that a is true, while b may or may not be true, so some mass remains on the uncertain subsets. 
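The theory's mechanism for fusing evidence from multiple independent sources is Dempster's rule of combination: products of masses from the two sources land on the intersections of their focal sets, the mass falling on empty intersections (the conflict) is discarded, and the rest is renormalised. Below is a minimal plain-Python sketch of the rule; the frame and the two mass functions are made-up values for illustration only:

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule: combine two mass functions over the same frame.
    # Mass on conflicting (disjoint) pairs is discarded and the rest is
    # renormalised. Assumes the total conflict K is strictly less than 1.
    combined = {}
    conflict = 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two independent sources of evidence over the frame {a, b}.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("a"): 0.5, frozenset("b"): 0.3, frozenset("ab"): 0.2}
fused = combine(m1, m2)
# The fused masses again sum to 1, with most of the mass now on {a}.
```

Since both sources put substantial mass on {a}, the combined mass function concentrates there, which is exactly the "taking all the evidence into account" behaviour described earlier.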

Implementing Dempster-Shafer theory

Above, we discussed the Dempster-Shafer theory, which can be implemented in Python using the dempster_shafer package. The package is written entirely in Python and can be found here. We can install it using the following line of code: 

!pip install dempster_shafer

After installation, we are ready to use this package.  

Importing the package: 

We can import this package using the following lines of codes.

import dempster_shafer as ds

Creating the conditions: 

In this section, we are going to create a frame of discernment for the items a, b, c and d. 

discernment = ds.FrameOfDiscernment(['a', 'b', 'c', 'd'])



Defining the mass:

We can define masses based on the outputs of a classifier; since this is just a demo, we define them manually. 

mass = ds.FocalSet(discernment, {
        "abc": 0.4,
        "abcd": 0.3,
        "a": 0.3
})


Making lattice 

Let’s create a lattice using the above frame of discernment and masses.

lat = ds.Lattice(discernment, mass)

Now using this lattice we are able to calculate the plausibility and belief.

Calculating plausibility: 

Using the lattice, we can now calculate the plausibility of any hypothesis in the frame.


Calculating belief    

Similarly, the lattice gives us the degree of belief for a hypothesis given the evidence.
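In case the exact query methods the package exposes on the lattice differ across versions, the same two quantities can be computed by hand from the focal-set masses defined earlier (0.4 on {a, b, c}, 0.3 on {a, b, c, d}, 0.3 on {a}). This minimal sketch uses plain Python sets and no external package:

```python
# Focal-set masses from the earlier step, written as plain Python sets.
masses = {
    frozenset("abc"): 0.4,
    frozenset("abcd"): 0.3,
    frozenset("a"): 0.3,
}

def belief(hypothesis, masses):
    # Bel(A): total mass of focal sets contained in A.
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(hypothesis, masses):
    # Pl(A): total mass of focal sets intersecting A.
    return sum(m for s, m in masses.items() if s & hypothesis)

print(belief(frozenset("abc"), masses))      # {a,b,c} and {a}: 0.4 + 0.3
print(plausibility(frozenset("d"), masses))  # only {a,b,c,d} contains d: 0.3
```

For example, the belief in {a, b, c} is 0.7 while its plausibility is 1.0, because every focal set intersects {a, b, c}; the gap between the two numbers is the uncertainty about that hypothesis.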



Here we have seen how we can implement the Dempster-Shafer theory of evidence. 

Final words 

In this article, we have discussed the Dempster-Shafer theory, which can be used for quantifying the uncertainty of the results produced by machine learning models. Along with this, we have discussed how we can implement the theory in Python.


Yugesh Verma
Yugesh is a graduate in automobile engineering and worked as a data analyst intern. He completed several Data Science projects. He has a strong interest in Deep Learning and writing blogs on data science and machine learning.
