
What Is The Bootstrap Method In Statistical Machine Learning?


Resampling is the method of repeatedly drawing samples from an original data sample. Resampling is a non-parametric approach to statistical inference, meaning it avoids parametric assumptions about the nature of the underlying data distribution.

Commonly used resampling methods:

  • Sampling with and without replacement
  • Bootstrap (using sampling with replacement)
  • Jackknife (using subsets)
  • Cross-validation and LOOCV (using subsets)
  • Permutation resampling (switching labels)
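
As a quick, minimal sketch of these flavours in NumPy (the toy array, sizes, and seed below are arbitrary illustration choices):

import numpy as np

rng = np.random.default_rng(0)
data = np.array([1, 2, 3, 4, 5])

print(rng.choice(data, size=5, replace=True))   # sampling with replacement (Bootstrap-style)
print(rng.choice(data, size=3, replace=False))  # sampling without replacement
print([np.delete(data, i) for i in range(len(data))])  # Jackknife: leave-one-out subsets
print(rng.permutation(data))                    # permutation resampling: shuffle the values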

The Bootstrap method is a technique for making estimations by averaging the estimates obtained from multiple smaller data samples.

A dataset is resampled with replacement, and this is done repeatedly. The method can be used to estimate the efficacy of a machine learning model, particularly its performance on data that is not part of the training dataset. Bootstrap methods are generally superior to ANOVA for small datasets or where sample distributions are non-normal.

How Is It Done

This method is extremely useful for quantifying the uncertainty of an estimator. The basic procedure (sketched in code after the next paragraph) is:

  • Select the sample size
  • Randomly select an observation from the training data, with replacement
  • Add this observation to the sample, and repeat until the sample reaches the chosen size
  • Repeat the whole process to obtain many Bootstrap samples

The samples not selected are usually referred to as the “out-of-bag” samples. For a given iteration of Bootstrap resampling, a model is built on the selected samples and is used to predict the out-of-bag samples.
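
A minimal sketch of this loop, assuming a toy regression task and scikit-learn's LinearRegression (both arbitrary choices made for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.random((50, 1))
y = 3 * X.ravel() + rng.normal(scale=0.1, size=50)

scores = []
for _ in range(100):                             # 100 Bootstrap iterations
    idx = rng.integers(0, len(X), size=len(X))   # draw indices with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)   # indices never drawn are out-of-bag
    if oob.size == 0:
        continue
    model = LinearRegression().fit(X[idx], y[idx])
    scores.append(mean_squared_error(y[oob], model.predict(X[oob])))

print('mean OOB MSE: %.4f' % np.mean(scores))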

The resulting sample of estimates often follows an approximately Gaussian distribution, and a confidence interval can be calculated to bound the estimator.

To obtain more reliable results, such as estimates of the mean and standard deviation, it is always better to increase the number of repetitions.
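
For instance, a percentile confidence interval can be read directly off the Bootstrap estimates; the sketch below bounds the mean of a toy skewed sample (data, repetition count, and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=200)

reps = 5000
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(reps)
])

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print('95%% CI for the mean: (%.3f, %.3f)' % (lower, upper))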

The Bootstrap may also be used to construct hypothesis tests. It is often employed as an alternative to statistical inference based on the assumption of a parametric model, when that assumption is in doubt or when parametric inference is impossible or requires complicated formulas for the calculation of standard errors.
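
As a sketch of one such test, the one-sample Bootstrap test below shifts the data so the null hypothesis holds exactly, then asks how extreme the observed mean is under resampling (the sample, mu0, and repetition count are toy assumptions):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=40)
mu0 = 0.0                             # null hypothesis: the true mean is mu0

shifted = data - data.mean() + mu0    # force the null to hold in the resampled world

reps = 10000
boot_means = np.array([
    rng.choice(shifted, size=len(shifted), replace=True).mean()
    for _ in range(reps)
])

# Two-sided p-value: fraction of null-world means at least as far from mu0
# as the observed mean.
observed = abs(data.mean() - mu0)
p_value = np.mean(np.abs(boot_means - mu0) >= observed)
print('Bootstrap p-value: %.4f' % p_value)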

When Should One Use It

  • When the sample on which null-hypothesis tests have to be run is small.
  • To account for distortions caused by a sample that may be a poor representation of the overall population.
  • To indirectly assess the properties of the distribution underlying the sample data.

Bootstrapping In Python

Example 1 (via source): using scikit-learn's resample()

from sklearn.utils import resample

data = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]

# Draw a Bootstrap sample of 4 observations, with replacement.
boot = resample(data, replace=True, n_samples=4, random_state=1)
print('Bootstrap Sample: %s' % boot)

# Observations that never appeared in the Bootstrap sample are out-of-bag.
oob = [x for x in data if x not in boot]
print('OOB Sample: %s' % oob)
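
Note that this example identifies out-of-bag points by value (x not in boot), which only behaves as expected when all data values are distinct; with duplicate values, tracking indices, as in the earlier sketch, is the safer approach.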

Example 2: Visualisation of the Bootstrap method for convergence in Monte Carlo integration

import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return x * np.cos(60*x) + np.sin(10*x)

n = 100
x = f(np.random.random(n))   # n evaluations of f at random points

reps = 1000
# reps Bootstrap resamples of the n values, drawn with replacement.
xb = np.random.choice(x, (n, reps), replace=True)
# Running mean down each column: the Monte Carlo estimate as the sample grows.
yb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)
# 95% band across the Bootstrap replicates at each sample size.
lower, upper = np.percentile(yb, [2.5, 97.5], axis=1)

plt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
plt.show()

If one performs the naive Bootstrap on the sample mean of a distribution that lacks a finite variance, then the Bootstrap distribution will not converge to the same limit as the sample mean.

So in cases where there is uncertainty about the underlying distribution, or where it is heavy-tailed, Monte Carlo simulation of the Bootstrap could be misleading.
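
A small illustration of this failure mode, using a Cauchy sample, which has no finite mean or variance (sample size, repetition count, and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
sample = rng.standard_cauchy(1000)    # heavy-tailed: no finite mean or variance

reps = 2000
boot_means = np.array([
    rng.choice(sample, size=len(sample), replace=True).mean()
    for _ in range(reps)
])

# Unlike the finite-variance case, this spread does not stabilise: rerunning
# with a fresh sample gives wildly different intervals.
print('spread of Bootstrap means: %.2f' % boot_means.std())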

Conclusion

Over the years, the Bootstrap method has seen tremendous improvement in accuracy as computational power has grown, since the number of samples used for estimation can be increased, and a larger sample size has substantial real-world consequences for the accuracy of error estimates. There is also evidence of successful Bootstrap deployments for sample sizes as small as n = 50.

Statisticians like Tibshirani define Bootstrapping as a computer-based method for assigning measures of accuracy to sample estimates, whereas other definitions say that the technique allows estimation of the sampling distribution of almost any statistic using only very simple methods.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.