# A complete tutorial on visualizing probability distributions in Python

In mathematics, especially in probability theory and statistics, a probability distribution represents the values a variable can take together with the probabilities of those values occurring in an experiment.

The probability distribution is one of the major concepts in data science and machine learning, and it has great significance in data analytics, especially when it comes to knowing the properties of data. In theory, we may have gone through these distributions multiple times, but there is always curiosity about how to demonstrate them in Python. In this article, we will go through the popular probability distributions, try to understand the differences between them, and learn how to visualize them in Python. The major points to be discussed in this article are listed below.

1. What is a probability distribution?
2. Types of data
3. Elements of the probability distribution
   1. Probability mass function
   2. Probability density function
4. Discrete probability distribution
   1. Binomial distribution
   2. Poisson distribution
5. Continuous probability distribution
   1. Normal distribution
   2. Uniform distribution

Let’s start by understanding what a probability distribution is.


## What is a probability distribution?

As defined above, a probability distribution represents the values a variable can take together with their probabilities in an experiment. Probability distributions see heavy use in machine learning and data science: we are required to deal with a lot of data, and the process of finding patterns in that data relies heavily on studying its probability distribution.

Most machine learning models need to account for uncertainty in the data and in their outcomes, and this is what makes probability theory so relevant to the process. To explore probability distributions in machine learning further, we first need to categorize them, which in turn follows from categorizing the data. Let’s start by understanding the categories of data.

## Types of data

In machine learning, most of the time we find ourselves working with different formats of data. A dataset can be considered a sample drawn from a larger population. By recognizing patterns in the sample, we can build predictions for the whole dataset or whole population.

For example, suppose we want to predict the price of vehicles given a certain set of features of one company’s vehicles. After some statistical analysis on a few samples of the vehicle dataset, we can predict the vehicle prices of different companies (our population).

By looking at the above scenario, we can say a dataset can consist of two types of data elements:

• Numerical (integers, floats, etc.): This type of data can be further categorized into two types:
1. Discrete: This type of numerical data can take only certain values, like the number of apples in a basket or the number of people in a team.
2. Continuous: This type of numerical data can take real (fractional) values, for example, the height of a tree or the width of a tree.
• Categorical (names, labels, etc.): Data that represents categories such as gender, state, etc.

Using the discrete random variables from a dataset we can compute a probability mass function, and using the continuous random variables we can compute a probability density function.
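As a quick illustration, an empirical probability mass function can be computed from a discrete sample simply by counting how often each value occurs. The sample below is made up for the example:

```python
import numpy as np

# Hypothetical discrete sample: number of apples counted in ten baskets
apples = np.array([3, 1, 2, 3, 3, 2, 1, 3, 2, 2])

# Count how often each distinct value occurs
values, counts = np.unique(apples, return_counts=True)

# Normalize the counts to get an empirical probability mass function
pmf = counts / counts.sum()
print(dict(zip(values.tolist(), pmf.tolist())))  # → {1: 0.2, 2: 0.4, 3: 0.4}
```

The probabilities are non-negative and sum to 1, which is exactly what the probability mass function discussed below requires.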

Here, we have seen how data types can be categorized. Now we can easily understand that a probability distribution represents the probabilities of the different possible outcomes of an experiment. Let’s explore this further by categorizing probability distributions.

## Elements of the probability distribution

The following functions are used to define probability distributions:

• Probability mass function: This function gives the probability that a discrete random variable is exactly equal to some value. We can also call the resulting distribution a discrete probability distribution.

A valid probability mass function must satisfy two conditions: every value it assigns must be non-negative, and all the values must sum to 1.
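We can check both conditions numerically with `scipy.stats`. Here, as a sketch, we use a binomial pmf (covered later in this article) with arbitrarily chosen parameters n = 20 and p = 0.3:

```python
import numpy as np
import scipy.stats as stats

# pmf values of a binomial distribution over its whole support {0, ..., 20}
x = np.arange(0, 21)
pmf = stats.binom.pmf(x, 20, 0.3)

print(np.all(pmf >= 0))          # all values are non-negative → True
print(np.isclose(pmf.sum(), 1))  # the values sum to 1 → True
```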

• Probability density function: This function represents the density of a continuous random variable, from which we obtain the probability of the variable lying within a specific range of values. We can also call the resulting distribution a continuous probability distribution.

The familiar bell curve of the normal distribution is the graph of its probability density function. These two functions correspond to the two main types of probability distribution.
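Unlike a pmf, a density can exceed 1 at a point; what must equal 1 is the total area under the curve. A minimal sketch using `scipy.integrate.quad`, with arbitrarily chosen normal parameters:

```python
import numpy as np
import scipy.stats as stats
from scipy.integrate import quad

# A density value can exceed 1 pointwise (it is not a probability):
print(stats.norm.pdf(0, loc=0, scale=0.1))  # ≈ 3.99

# ...but the total area under any pdf is 1:
area, _ = quad(lambda t: stats.norm.pdf(t, loc=0, scale=1), -np.inf, np.inf)
print(round(area, 6))  # → 1.0
```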

When we categorize probability distributions by their nature, they fall into two families: discrete and continuous probability distributions.

In the above sections, we have seen what discrete and continuous probability distributions are. In the next sections, we will describe the popular sub-categories of these two main categories.

## Discrete probability distribution

The popular distributions under the discrete probability distribution category are listed below, along with how they can be used in Python.

### Binomial distribution

This distribution is a function that summarizes the likelihood that a variable will take one of two values under a pre-assumed set of parameters. We mainly use this distribution for sequences of experiments whose outcomes take the form of yes/no, positive/negative, and so on. Each such experiment is called a Bernoulli trial. The probability mass function of the binomial distribution is:

P(X = k) = (n choose k) · pᵏ · (1 − p)ⁿ⁻ᵏ

where k ∈ {0, 1, …, n} and 0 ≤ p ≤ 1.

Using the below lines of code in Python, we can generate and plot binomial discrete random variables.

```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Plot the binomial pmf for n = 20 and several values of p
for prob in range(3, 10, 3):
    x = np.arange(0, 25)
    binom = stats.binom.pmf(x, 20, 0.1 * prob)
    plt.plot(x, binom, '-o', label="p = {:.1f}".format(0.1 * prob))

plt.xlabel('Random Variable', fontsize=12)
plt.ylabel('Probability', fontsize=12)
plt.title("Binomial Distribution varying p")
plt.legend()
plt.show()
```

Output:

In the graph, we can see a visualization of the binomial distribution.
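As a sanity check, `stats.binom.pmf` matches the formula above computed by hand with `math.comb`, here for arbitrarily chosen n = 20, p = 0.3, k = 5:

```python
import math
import scipy.stats as stats

n, p, k = 20, 0.3, 5

# (n choose k) * p^k * (1 - p)^(n - k), directly from the formula
manual = math.comb(n, k) * p**k * (1 - p)**(n - k)

print(math.isclose(manual, stats.binom.pmf(k, n, p)))  # → True
```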

### Poisson distribution

It is a sub-category of discrete probability distribution that represents the probability of a given number of events occurring in a fixed interval of time. More formally, it represents how many times an event can occur over a specific time period. This distribution is named after the mathematician Siméon Denis Poisson. We mainly use it when the variable of interest in the data is a discrete count. The probability mass function of the Poisson distribution is:

P(X = k) = λᵏ · e⁻ᵏ⁺⁽ᵏ⁻λ⁾ / k!  →  P(X = k) = λᵏ · e⁻λ / k!

for k ≥ 0, where λ is the average number of events per interval.

Using the below lines of code, we can represent the Poisson distribution.

```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Plot the Poisson pmf for several values of λ
for lambd in range(2, 8, 2):
    n = np.arange(0, 10)
    poisson = stats.poisson.pmf(n, lambd)
    plt.plot(n, poisson, '-o', label="λ = {}".format(lambd))

plt.xlabel('Number of Events', fontsize=12)
plt.ylabel('Probability', fontsize=12)
plt.title("Poisson Distribution varying λ")
plt.legend()
plt.show()
```

Output:
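As with the binomial case, `stats.poisson.pmf` agrees with the formula computed by hand, here for arbitrarily chosen λ = 4 and k = 2:

```python
import math
import scipy.stats as stats

lam, k = 4, 2

# λ^k * e^(-λ) / k!, directly from the formula
manual = lam**k * math.exp(-lam) / math.factorial(k)

print(math.isclose(manual, stats.poisson.pmf(k, lam)))  # → True
```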

## Continuous probability distribution

The popular distributions under the continuous probability distribution category are listed below, along with how they can be used in Python.

### Normal distribution

This is a sub-category of continuous probability distribution, which is also called the Gaussian distribution. It represents a probability distribution for a real-valued random variable. The probability density function of the normal distribution is:

f(x) = 1 / (σ√(2π)) · e^(−(x − μ)² / (2σ²))

for a real number x, where μ is the mean and σ is the standard deviation.

Using the below lines of code, we represent the normal distribution of a real-valued variable.

```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Normal pdf with mean 0 and standard deviation 10
n = np.arange(-70, 70)
norm = stats.norm.pdf(n, 0, 10)
plt.plot(n, norm)
plt.xlabel('Distribution', fontsize=12)
plt.ylabel('Probability density', fontsize=12)
plt.title("Normal Distribution of x")
plt.show()
```

Output:
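Here too, `stats.norm.pdf` matches the density formula computed by hand, using the same parameters as the plot above (μ = 0, σ = 10) at an arbitrary point x = 5:

```python
import math
import scipy.stats as stats

x, mu, sigma = 5.0, 0.0, 10.0

# (1 / (σ√(2π))) * e^(-(x - μ)² / (2σ²)), directly from the formula
manual = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(math.isclose(manual, stats.norm.pdf(x, mu, sigma)))  # → True
```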

### Uniform distribution

This distribution is a sub-category of continuous distribution that assigns the same probability density to every value in its range. The probability density function of the uniform distribution on an interval [a, b] is:

f(x) = 1 / (b − a) for a ≤ x ≤ b, and 0 otherwise.

We can understand it by the example of rolling a fair die, where the occurrence of any face has the same probability (strictly speaking, a die gives a discrete uniform distribution, but the equal-probability idea is the same).

Using the following lines of code, we can represent the distribution of probabilities of rolling a fair die.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each of the six faces has the same probability 1/6
probs = np.full(6, 1/6)
face = [1, 2, 3, 4, 5, 6]
plt.bar(face, probs)
plt.ylabel('Probability', fontsize=12)
plt.xlabel('Dice Roll Outcome', fontsize=12)
plt.title('Fair Die Uniform Distribution', fontsize=12)
axes = plt.gca()
axes.set_ylim([0, 1])
plt.show()
```

Output:
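The die example is discrete; for the continuous uniform distribution, `scipy.stats.uniform` can be used. Note its parameterization: `loc` is the lower bound a and `scale` is the width b − a. The interval [2, 8] below is chosen arbitrarily for illustration:

```python
import numpy as np
import scipy.stats as stats

a, b = 2, 8  # arbitrary interval [a, b]

# Points outside, inside, and again outside the interval
x = np.array([0.0, 3.0, 5.0, 7.0, 10.0])
pdf = stats.uniform.pdf(x, loc=a, scale=b - a)

# inside [a, b] the density is 1/(b - a) = 1/6; outside it is 0
print(pdf)
```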

## Final words

In this article, we have discussed probability distributions and understood how to categorize them. There are many probability distributions, classified by the nature of the underlying variable, and we covered some of the most important ones along with their visualizations in Python.


Yugesh is a graduate in automobile engineering and worked as a data analyst intern. He completed several Data Science projects. He has a strong interest in Deep Learning and writing blogs on data science and machine learning.
