
Here’s The ML Paper That Received The Test Of Time Award

Ambika Choudhury

Recently, the committee of the International Conference on Machine Learning (ICML) 2020 announced its Test of Time Award. The recipients are a team of researchers from the California Institute of Technology, the University of Pennsylvania and Saarland University.

The Test of Time Award is bestowed on a paper from ICML ten years earlier that has made a substantial impact on the field of machine learning, spanning both research and practice. This year, the award goes to the paper “Gaussian Process Optimisation in the Bandit Setting: No Regret and Experimental Design.”

The award recipient was selected by the ICML 2020 Conference General Chair and Program Co-Chairs in discussion with the current Test of Time Award committee, including the General and Program Chairs from ICML 2010. 



The ICML committee stated that the findings of this paper have shaped the prevailing methods of hyperparameter search in modern deep learning systems, and that the algorithm remains a successful acquisition strategy even after ten years. The theorems and lemmas of the paper are cited by researchers in recent work, and its proof technique has been borrowed in several follow-up papers. In the combination of technical depth and impact, the paper clearly stands out.

The committee added, “Bayesian optimisation using Gaussian processes is an important technique for black-box hyperparameter tuning and AutoML. However, there was no explicit finite-sample convergence theory for this method. This paper treats Bayesian optimisation as a bandit problem and obtains a cumulative regret bound for GP-UCB in terms of the information gain. This, as a result, leads to the first finite-sample theoretical analysis of Bayesian optimisation, and the paper will have significant long-term impact both theoretically and practically.”

About the Paper

In most stochastic optimisation settings, evaluating an unknown function is expensive and sampling must be minimised. Examples include choosing advertisements in sponsored search to maximise profit in a click-through model, and learning optimal control strategies for robots.

Many applications require optimising an unknown, noisy function that is expensive to evaluate. This is where the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm comes into play.

The researchers at the California Institute of Technology, the University of Pennsylvania and Saarland University formalised this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space (RKHS).

In this work, the researchers analysed a simple and intuitive sampling rule based on Bayesian upper confidence bounds: the GP-UCB algorithm.
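To make the sampling rule concrete: in each round, GP-UCB fits a GP posterior to the observations gathered so far and queries the point that maximises the posterior mean plus a scaled posterior standard deviation. Below is a minimal, illustrative Python sketch of that loop; the toy objective f, the finite candidate grid, the noise level and the use of scikit-learn's GaussianProcessRegressor are all assumptions for demonstration, not the authors' code.

```python
# Minimal GP-UCB sketch (illustrative assumptions: scikit-learn's
# GaussianProcessRegressor, a finite candidate grid, a toy objective f).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):                                   # hypothetical black-box function
    return -(x - 0.6) ** 2

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # finite decision set D
X, y = [], []
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-2)

delta = 0.1                                 # confidence parameter
for t in range(1, 31):
    if X:                                   # posterior given data so far
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
    else:                                   # prior before any observation
        mu, sigma = np.zeros(len(candidates)), np.ones(len(candidates))
    # beta_t schedule for a finite decision set, as the paper's analysis suggests
    beta_t = 2 * np.log(len(candidates) * t**2 * np.pi**2 / (6 * delta))
    ucb = mu + np.sqrt(beta_t) * sigma      # upper confidence bound per point
    x_t = candidates[np.argmax(ucb)]        # query the most promising point
    X.append(x_t)
    y.append(f(x_t[0]) + 0.1 * rng.standard_normal())  # noisy evaluation

print("best observed point:", X[int(np.argmax(y))][0])
```

In practice, the beta_t schedule is often replaced by a small tuned exploration constant; the exact form above is only what the finite-set analysis prescribes.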


The main contributions of the paper are as follows:

  • The researchers analysed the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm, an intuitive method for GP optimisation when the function is either sampled from a known GP or has low RKHS norm.
  • They bounded the cumulative regret of GP-UCB in terms of the information gain due to sampling, establishing a novel connection between experimental design and GP optimisation (the bound is restated after this list).
  • By bounding the information gain for popular classes of kernels, they established sublinear regret bounds for GP optimisation for the first time.
  • Further, they evaluated GP-UCB on real sensor network data, demonstrating that it compares favourably to existing GP optimisation approaches.
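For reference, the paper's main guarantee in the finite decision-set case, where f is sampled from a known GP, can be restated as follows. This is a paraphrase of its Theorem 1, not a verbatim quotation, with σ² the observation noise variance and δ a confidence parameter:

```latex
% Cumulative regret of GP-UCB and the high-probability bound of Theorem 1
% (finite decision set D, f sampled from a known GP); restated, not verbatim.
\[
  R_T = \sum_{t=1}^{T}\bigl(f(x^\ast) - f(x_t)\bigr),
  \qquad
  \Pr\Bigl\{\,R_T \le \sqrt{C_1\,T\,\beta_T\,\gamma_T}\ \ \forall T \ge 1\Bigr\} \ge 1-\delta,
\]
where $C_1 = 8/\log(1+\sigma^{-2})$, $\beta_t = 2\log\bigl(|D|\,t^2\pi^2/6\delta\bigr)$, and
\[
  \gamma_T = \max_{A \subseteq D,\ |A| \le T} I(\mathbf{y}_A;\,\mathbf{f}_A)
\]
is the maximum information gain after $T$ rounds.
```

Since the information gain γ_T grows only sublinearly in T for popular kernels (polylogarithmically for the squared exponential kernel, for instance), the average regret R_T/T vanishes, which is the “no regret” of the title.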

Wrapping Up

Bayesian optimisation is one of the most useful approaches for objective functions that are noisy and expensive to evaluate, and it has become an important tool in machine learning projects in recent years. Bayesian optimisation using Gaussian processes, in particular, is considered a crucial technique for black-box hyperparameter tuning and AutoML.
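As a hedged illustration of that use case, the sketch below tunes a single hyperparameter with scikit-optimize's gp_minimize; its "LCB" acquisition function is the minimisation counterpart of the UCB rule. The objective, the search range and the library choice are assumptions for illustration, not part of the paper or this article.

```python
# Hypothetical hyperparameter tuning with GP-based Bayesian optimisation
# via scikit-optimize; "LCB" is the lower-confidence-bound acquisition,
# i.e. the minimisation analogue of UCB.
from skopt import gp_minimize

def validation_loss(params):
    (learning_rate,) = params
    # Stand-in for training a model and returning its validation loss.
    return (learning_rate - 0.01) ** 2

result = gp_minimize(
    validation_loss,
    dimensions=[(1e-4, 1.0)],   # search range for the learning rate
    acq_func="LCB",             # lower confidence bound acquisition
    n_calls=20,
    random_state=0,
)
print("best learning rate:", result.x[0], "with loss:", result.fun)
```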

According to the ICML committee, the paper provides the first finite-sample theoretical analysis of Bayesian optimisation and has influenced subsequent work on experimental design and hyperparameter tuning, among other areas.

Read the paper here.
