Almost every machine learning model is built from data, and uncertainty in that data propagates into the model itself. Such uncertainty reduces our confidence in the predictions the model generates, so whenever it is present we need to model it explicitly in order to build robust systems. Various theories help improve results, but Dempster-Shafer theory exists specifically to model the uncertainty of a model. In this article, we are going to discuss Dempster-Shafer theory and see how we can implement it in Python. The major points to be discussed in the article are listed below.

**Table of contents**

- What is Dempster-Shafer’s theory?
- Implementing Dempster-Shafer’s theory
- Importing package
- Creating the conditions
- Defining the mass
- Making lattice
- Calculating plausibility
- Calculating belief

Let’s start with understanding Dempster-Shafer’s theory.


**What is Dempster-Shafer’s theory?**

In most machine learning modelling, uncertainty plays an important role: it teaches us not to trust the model's output blindly. When a trained machine learning algorithm makes a decision, we cannot trust it completely, because there is always some degree of uncertainty in that decision. This uncertainty can be measured using Dempster-Shafer theory.


We can also refer to this theory as the theory of belief functions, or evidence theory. Historically, it was first introduced by Arthur P. Dempster and later developed by Glenn Shafer. Dempster's original work was set in the context of statistical inference, while Shafer's formulation focused on modelling uncertainty, which is what we mainly know as the theory of evidence.

One of the main advantages of this theory is that we can use it to compute a degree of belief that takes all the available evidence into account, even when that evidence comes from different sources. The degree of belief is calculated by a mathematical function called the belief function.
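The standard way of pooling evidence from different sources in this theory is Dempster's rule of combination. The following is a minimal plain-Python sketch of that rule, with made-up mass values over a hypothetical frame {covid, flu}; it is an illustration only, not part of the package used later in this article.

```python
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of
    hypotheses to masses) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory pairs
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    # Normalise the surviving mass by 1 - K
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two hypothetical sources of evidence over the frame {covid, flu}
m1 = {frozenset({'covid'}): 0.6, frozenset({'covid', 'flu'}): 0.4}
m2 = {frozenset({'covid'}): 0.5, frozenset({'flu'}): 0.2,
      frozenset({'covid', 'flu'}): 0.3}

m12 = combine(m1, m2)
for hypothesis, m in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(sorted(hypothesis), round(m, 3))
```

After combination the masses still sum to one: the normalisation step redistributes the conflicting mass K across the hypotheses the two sources agree can be true.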

We can also think of this theory as a generalization of the Bayesian theory of subjective probability. Degrees of belief sometimes have the mathematical properties of probabilities, but in other cases they do not. Using this theory, we can answer questions that would otherwise be framed purely in terms of probability theory.

This theory rests on two fundamental quantities: the degree of belief and the plausibility. We can understand them using an example.

Let’s say a person presents with covid-19 symptoms, and we assign a belief of 0.5 to the proposition that the person is suffering from covid-19. This means we have evidence that makes us think the proposition is true with a confidence of 0.5. At the same time, there is contradicting evidence, with a confidence of 0.2, that the person is not suffering from covid-19. The remaining 0.3 is indeterminate: it is the gap between the supporting and contradicting evidence, and it covers the either/or case, i.e. the person is either suffering from covid-19 or not. This residual mass is what represents the uncertainty of the system given the evidence. In this theory, we need to calculate the following quantities:

- **Mass:** the subjective probability assigned to each subset of hypotheses.
- **Belief:** the level of support for a hypothesis. In the example above, 0.5 is the belief in the proposition that the person has covid-19.
- **Plausibility:** an upper bound on the possibility that the hypothesis could be true.

Let’s refer to the table below.

| Hypothesis | Mass | Belief | Plausibility |
|---|---|---|---|
| Null | 0 | 0 | 0 |
| Suffering | 0.5 | 0.5 | 0.8 |
| Not suffering | 0.2 | 0.2 | 0.5 |
| Either (suffering or not) | 0.3 | 1.0 | 1.0 |
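The belief and plausibility columns follow directly from the mass column, and we can verify this with a short plain-Python sketch (independent of any package; the set names here are just labels): Bel(A) sums the masses of all subsets of A, while Pl(A) sums the masses of all sets that overlap A.

```python
def belief(masses, hypothesis):
    # Bel(A): total mass committed to subsets of A
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(masses, hypothesis):
    # Pl(A): total mass of every set that overlaps A
    return sum(m for s, m in masses.items() if s & hypothesis)

suffering = frozenset({'suffering'})
not_suffering = frozenset({'not suffering'})
either = suffering | not_suffering

masses = {suffering: 0.5, not_suffering: 0.2, either: 0.3}

# Matches the table rows, up to float rounding: (0.5, 0.8), (0.2, 0.5), (1.0, 1.0)
print(belief(masses, suffering), plausibility(masses, suffering))
print(belief(masses, not_suffering), plausibility(masses, not_suffering))
print(belief(masses, either), plausibility(masses, either))
```

Note how Pl(suffering) = 1 − Bel(not suffering): the plausibility of a hypothesis is everything not committed against it.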

In the above table, we can see the two fundamental quantities of Dempster-Shafer theory side by side. The image below is a representation of the belief function.

In the image, we can see three elements, and it indicates the mass assigned to every subset. The notations used in the image are as follows:

- Q = set of elements
- M = mass
- Pl = plausibility
- Bel = belief

This image represents the observation of some evidence about two conditions a and b: the evidence states that a holds, while b may hold, or neither of them may hold.

**Implementing Dempster-Shafer’s theory**

Above, we discussed the Dempster-Shafer theory; it can also be implemented in Python using the dempster_shafer package. The package is written in pure Python and can be found here. We can install it using the following line of code:

!pip install dempster_shafer

After installation, we are ready to use this package.

**Importing the package:**

We can import this package using the following lines of codes.

import dempster_shafer as ds

**Creating the conditions:**

In this section, we are going to make a discernment frame for the items a, b, c, d.

discernment = ds.FrameOfDiscernment(['a', 'b', 'c', 'd'])

discernment

Output:

**Defining the mass:**

We can define masses based on the outputs of a classifier; since this is just a demo, we define them by hand here.

mass = ds.FocalSet(discernment, {
    "abc": 0.4,
    "abdc": 0.3,
    "a": 0.3
})

mass

mass

Output:

**Making the lattice:**

Let’s create a lattice using the above frame of discernment and masses.

lat = ds.Lattice(discernment, mass)

Now using this lattice we are able to calculate the plausibility and belief.

**Calculating plausibility:**

We can calculate the plausibility using the following code.

lat.pl()

Output:

**Calculating belief:**

We can calculate the degree of belief for evidence using the following code.

lat.bel()

Output:

This is how we can implement the Dempster-Shafer theory of evidence.
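Since the package's printed outputs are not reproduced above, a plain-Python sanity check on the same focal sets tells us roughly what to expect. This is an independent re-computation, not the package's API; it also uses the identity Pl(A) = 1 − Bel(Ā), where Ā is the complement of A within the frame.

```python
FRAME = frozenset('abcd')

# Same focal sets and masses as passed to ds.FocalSet above;
# note that, read as a set of elements, "abdc" is the whole frame {a, b, c, d}
masses = {
    frozenset('abc'): 0.4,
    frozenset('abdc'): 0.3,
    frozenset('a'): 0.3,
}

def bel(hypothesis):
    # total mass committed exactly to subsets of the hypothesis
    return sum(m for s, m in masses.items() if s <= hypothesis)

def pl(hypothesis):
    # plausibility via the identity Pl(A) = 1 - Bel(complement of A)
    return 1.0 - bel(FRAME - hypothesis)

print(bel(frozenset('a')), pl(frozenset('a')))      # 0.3 1.0
print(bel(frozenset('abc')), pl(frozenset('abc')))  # ~0.7 1.0
```

Every focal set here contains 'a', so Pl({a}) is 1.0 even though the committed belief Bel({a}) is only 0.3; the exact formatting of the package's own output may differ.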

**Final words**

In this article, we discussed the Dempster-Shafer theory, which can be used to quantify the uncertainty in the results produced by machine learning models. Along with this, we showed how to implement the theory in Python.