
How to model uncertainty with Dempster-Shafer’s theory?

One of the main advantages of Dempster-Shafer theory is that we can use it to generate a degree of belief by taking all the available evidence into account.


Almost every model in machine learning is built from data, and uncertainty in that data propagates into the model itself. This uncertainty reduces how much we can trust the predictions the model generates, so whenever it is present we need a way to model it in order to build robust systems. Various theories help improve results, but Dempster-Shafer's theory is specifically designed to model the uncertainty of a model. In this article, we are going to discuss Dempster-Shafer's theory and see how we can implement it in Python. The major points to be discussed in the article are listed below.

Table of contents

  1. What is Dempster-Shafer’s theory?
  2. Implementing Dempster-Shafer’s theory
    1. Importing package 
    2. Creating the conditions 
    3. Defining the mass 
    4. Making lattice 
    5. Calculating plausibility 
    6. Calculating belief

Let’s start with understanding Dempster-Shafer’s theory.    

What is Dempster-Shafer’s theory?

In most machine learning modelling, uncertainty plays an important role: it teaches us not to trust the model blindly. When a trained machine learning algorithm makes a decision, we cannot trust that decision completely, because there is always some uncertainty in it. This uncertainty can be measured using Dempster-Shafer's theory.


We can also refer to this theory as the theory of belief functions or evidence theory. Historically, it was first proposed by Arthur P. Dempster and later developed by Glenn Shafer. Dempster's original work was in the context of statistical inference, while Shafer's work was about modelling uncertainty, which is what we mainly know as the theory of evidence.

One of the main advantages of this theory is that we can utilize it for generating a degree of belief by taking all the evidence into account. This evidence can be obtained from different sources. The degree of belief using this theory can be calculated by a mathematical function called the belief function.   
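Evidence from different sources is merged with Dempster's rule of combination, which multiplies the masses of every pair of focal sets, keeps the mass of each non-empty intersection, and renormalises away the conflicting (empty-intersection) mass. Below is a minimal pure-Python sketch of the rule; the set names and mass values are made up for illustration and do not use the package discussed later.

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule: multiply masses pairwise, accumulate the mass of
    # each non-empty intersection, and renormalise by 1 - K (K = conflict)
    combined, conflict = {}, 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# two hypothetical sources of evidence about the same two hypotheses
covid = frozenset(["covid"])
no_covid = frozenset(["no_covid"])
either = covid | no_covid
m1 = {covid: 0.5, no_covid: 0.2, either: 0.3}
m2 = {covid: 0.6, either: 0.4}

m12 = combine(m1, m2)
print(round(m12[covid], 3))  # 0.773: two agreeing sources strengthen belief
```

Notice that the combined mass on "covid" (about 0.77) is higher than either source gave it alone, which is exactly the point of pooling evidence.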

We can also think of this theory as a generalization of the Bayesian theory of subjective probability. Degrees of belief sometimes behave like mathematical probabilities and sometimes do not. Using this theory, we can answer questions that arise from probability theory but that probabilities alone cannot express.

This theory mainly rests on two fundamentals: degree of belief and plausibility. We can understand them using an example.

Let's say a person has been diagnosed with covid-19 symptoms and we hold a belief of 0.5 in the proposition that the person is suffering from covid-19. This means we have evidence that makes us think the proposition is true with a confidence of 0.5. At the same time, we believe with a confidence of 0.2 that the person is not suffering from covid. The remaining 0.3 is indeterminate: it is assigned to the "either" case, meaning the person is either suffering from covid or not, and we cannot commit that mass to one side. This residual mass is what represents the uncertainty of the system given the evidence. In this theory, we need to calculate the following quantities:

  • Mass: the subjective probability assigned to each subset of hypotheses
  • Belief: the total support for a hypothesis; in the example, 0.5 is the belief in the proposition that the person has covid-19
  • Plausibility: an upper bound on the possibility that the hypothesis could be true

Let’s refer to the below table, 

Hypothesis | Mass | Belief | Plausibility
null | 0 | 0 | 0
Suffering | 0.5 | 0.5 | 0.8
Not suffering | 0.2 | 0.2 | 0.5
Either (suffering or not) | 0.3 | 1.0 | 1.0
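The Belief and Plausibility columns of the table follow directly from the Mass column: belief sums the masses of all focal sets contained in a hypothesis, while plausibility sums the masses of all focal sets that intersect it. A short pure-Python sketch recomputing the table (the set names are illustrative, not a library API):

```python
# mass function for the covid example: frozensets of hypotheses -> mass
suffering = frozenset(["suffering"])
not_suffering = frozenset(["not_suffering"])
either = suffering | not_suffering
mass = {suffering: 0.5, not_suffering: 0.2, either: 0.3}

def bel(m, A):
    # belief: total mass of focal sets fully contained in A
    return sum(w for s, w in m.items() if s <= A)

def pl(m, A):
    # plausibility: total mass of focal sets that intersect A
    return sum(w for s, w in m.items() if s & A)

print(bel(mass, suffering), pl(mass, suffering))          # 0.5 0.8
print(bel(mass, not_suffering), pl(mass, not_suffering))  # 0.2 0.5
print(bel(mass, either), pl(mass, either))                # 1.0 1.0
```

The printed pairs match the table row by row, and for every hypothesis the belief is a lower bound and the plausibility an upper bound on its probability.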

The above table shows the two fundamentals of Dempster-Shafer's theory worked out. The below image is a representation of the belief function.


In the image, we have 3 elements, with the mass of every subset and the resulting belief values indicated. The notations in the image are as follows:

  • Q = set of elements 
  • M = mass
  • Pl = plausibility 
  • Bel = belief 

The image represents evidence observed about two conditions, a and b: the evidence supports a being right, while b may or may not be right, and neither may hold.

Implementing Dempster-Shafer’s theory

Above, we discussed the Dempster-Shafer theory; it can also be implemented in Python using the dempster_shafer package. This package is written in Python and can be found here. We can install it using the following line of code:

!pip install dempster_shafer

After installation, we are ready to use this package.  

Importing package: 

We can import this package using the following lines of codes.

import dempster_shafer as ds

Creating the conditions: 

In this section, we are going to create a frame of discernment for the items a, b, c and d.

discernment = ds.FrameOfDiscernment(['a', 'b', 'c', 'd'])

discernment

Output:

Defining the mass:

We can define masses based on the results of a classifier; since this is just a demo, we define them manually.

mass = ds.FocalSet(discernment,
    {
        "abc": 0.4,
        "abdc": 0.3,
        "a": 0.3
    })
mass

Output:

Making lattice 

Let’s create a lattice using the above frame of discernment and masses.

lat = ds.Lattice(discernment, mass)

Now using this lattice we are able to calculate the plausibility and belief.

Calculating plausibility: 

We can calculate plausibility using the following code

lat.pl()

Output:

Calculating belief    

We can calculate the degree of belief for evidence using the following code.

lat.bel()

Output:

Here we can see how we can implement the Dempster-Shafer theory of evidence. 
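As a sanity check on the package's output, the same plausibility and belief values can be recomputed in plain Python from the demo masses, reading "abdc" as the whole frame {a, b, c, d} (the letters, not their order, define the set). This is an independent sketch, not the package's internals:

```python
# same focal masses as the package demo, written as frozensets
mass = {frozenset("abc"): 0.4, frozenset("abcd"): 0.3, frozenset("a"): 0.3}

def bel(A):
    # belief: total mass of focal sets contained in A
    return sum(w for s, w in mass.items() if s <= A)

def pl(A):
    # plausibility: total mass of focal sets intersecting A
    return sum(w for s, w in mass.items() if s & A)

print(pl(frozenset("a")))     # 1.0: every focal set contains 'a'
print(pl(frozenset("d")))     # 0.3: only the whole frame contains 'd'
print(bel(frozenset("abc")))  # 0.7: masses on {a,b,c} and {a}
```

These hand-computed numbers should agree with what `lat.pl()` and `lat.bel()` report for the corresponding subsets.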

Final words 

In this article, we discussed the Dempster-Shafer theory, which can be used to quantify the uncertainty of results from machine learning models. Along with this, we saw how to implement the theory in Python.



Yugesh Verma

Yugesh is a graduate in automobile engineering and worked as a data analyst intern. He completed several Data Science projects. He has a strong interest in Deep Learning and writing blogs on data science and machine learning.