Top 8 Approaches For Tuning Hyperparameters Of Machine Learning Models

Hyperparameter tuning, also known as hyperparameter optimisation, is one of the fundamental steps in the machine learning workflow. It entails searching for the configuration of user-defined hyperparameters that enables optimal performance, balancing accuracy against generalisability. Various tools and approaches are available to tune hyperparameters.

We have curated a list of top eight approaches for tuning hyperparameters of machine learning models.

(The list is in alphabetical order)


1| Bayesian Optimisation

About: Bayesian Optimisation has emerged as an efficient tool for hyperparameter tuning of machine learning algorithms, more specifically, for complex models like deep neural networks. It offers an efficient framework for optimising the highly expensive black-box functions without knowing its form. It has been applied in several fields including learning optimal robot mechanics, sequential experimental design, and synthetic gene design. 

Know more here.
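The surrogate-plus-acquisition loop at the heart of Bayesian optimisation can be sketched in a few lines. The example below is a minimal illustration, not any particular library's implementation: it tunes a single hypothetical learning rate against a made-up validation-loss function, using a Gaussian process surrogate (via scikit-learn) and a lower-confidence-bound acquisition rule.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "expensive" black-box: validation loss as a function of one
# hyperparameter (hypothetical; minimised at lr = 1e-2).
def validation_loss(lr):
    return (np.log10(lr) + 2.0) ** 2

candidates = np.logspace(-5, 0, 200).reshape(-1, 1)

# Start with a few random evaluations of the black box.
rng = np.random.default_rng(0)
X = candidates[rng.choice(len(candidates), 3, replace=False)]
y = np.array([validation_loss(x[0]) for x in X])

for _ in range(10):
    # Fit the surrogate model to all observations so far.
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(np.log10(X), y)
    mu, sigma = gp.predict(np.log10(candidates), return_std=True)
    # Lower-confidence-bound acquisition: favour low predicted loss
    # and high uncertainty, then evaluate the most promising candidate.
    acq = mu - 1.96 * sigma
    x_next = candidates[np.argmin(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_loss(x_next[0]))

best_lr = X[np.argmin(y)][0]
```

Production libraries such as scikit-optimize, GPyOpt or Optuna implement the same loop with more refined acquisition functions (e.g. expected improvement).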


2| Evolutionary Algorithms

About: Evolutionary algorithms (EA) are optimisation algorithms that work by modifying a set of candidate solutions (a population) according to certain rules called operators. One of the main advantages of EAs is their generality: they can be used under a broad range of conditions due to their simplicity and independence from the underlying problem. In hyperparameter tuning problems, evolutionary algorithms have been shown to outperform grid search techniques in terms of accuracy-speed ratio.

Know more here.
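As a concrete illustration of the select-and-mutate loop, here is a minimal evolutionary search over two hypothetical hyperparameters (a learning rate and a dropout rate) against a made-up accuracy function; real use would replace `accuracy` with an actual training-and-validation run.

```python
import random

random.seed(0)

# Hypothetical objective: validation accuracy as a function of two
# hyperparameters, peaking at lr=0.1, dropout=0.5.
def accuracy(lr, dropout):
    return 1.0 - (lr - 0.1) ** 2 - (dropout - 0.5) ** 2

# Variation operator: perturb a candidate, clipped to valid ranges.
def mutate(ind, scale=0.05):
    lr, dropout = ind
    return (min(max(lr + random.gauss(0, scale), 1e-4), 1.0),
            min(max(dropout + random.gauss(0, scale), 0.0), 0.9))

# Initial random population of candidate configurations.
population = [(random.uniform(1e-4, 1.0), random.uniform(0.0, 0.9))
              for _ in range(20)]

for generation in range(30):
    # Selection operator: keep the fittest half.
    population.sort(key=lambda ind: accuracy(*ind), reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=lambda ind: accuracy(*ind))
```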

3| Gradient-Based Optimisation

About: Gradient-based optimisation is a methodology for optimising several hyperparameters based on computing the gradient of a machine learning model selection criterion with respect to the hyperparameters. It can be applied when certain differentiability and continuity conditions of the training criterion are satisfied.

Know more here.
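The idea can be illustrated with a toy one-dimensional example: a smooth stand-in for a validation criterion is minimised by following its (numerically estimated) gradient with respect to the hyperparameter. In practice the hypergradient is obtained by differentiating through the training procedure rather than by finite differences.

```python
# Stand-in for a smooth model selection criterion as a function of one
# hyperparameter (hypothetical; minimised at lam = 0.3).
def val_loss(lam):
    return (lam - 0.3) ** 2 + 0.05

# Central finite-difference estimate of the gradient.
def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Plain gradient descent on the hyperparameter itself.
lam, lr = 1.0, 0.1
for step in range(200):
    lam -= lr * numeric_grad(val_loss, lam)
```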

4| Grid Search

About: Grid search is a basic method for hyperparameter tuning. It performs an exhaustive search over the hyperparameter set specified by the user, so it is guaranteed to find the optimal combination within that grid. However, because the number of combinations grows exponentially with the number of hyper-parameters, grid search is only practical for a few hyper-parameters with a limited search space.

Know more here.
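A minimal sketch of exhaustive grid search, using a hypothetical `evaluate` function and made-up hyperparameter values; every combination in the user-specified grid is tried:

```python
from itertools import product

# Hypothetical model evaluation: returns a validation score for a
# configuration (best at lr=0.01, batch_size=64).
def evaluate(lr, batch_size):
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 64) / 1000

grid = {
    "lr": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

best_score, best_config = float("-inf"), None
# Exhaustively evaluate every combination in the grid.
for lr, batch_size in product(grid["lr"], grid["batch_size"]):
    score = evaluate(lr, batch_size)
    if score > best_score:
        best_score, best_config = score, {"lr": lr, "batch_size": batch_size}
```

With k values per hyper-parameter and n hyper-parameters, this loop runs k^n evaluations, which is why the method only scales to small search spaces.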

5| Keras Tuner

About: Keras Tuner is a library that allows users to find optimal hyperparameters for machine learning or deep learning models. The library helps to find the best kernel sizes, learning rate for optimisation, and other hyper-parameters, returning the configuration that yields the highest accuracy for a given deep learning model.

Know more here.

6| Population-based Optimisation

About: Population-based methods are essentially a series of random search methods based on genetic algorithms, such as evolutionary algorithms and particle swarm optimisation. One of the most widely used population-based methods is population-based training (PBT), proposed by DeepMind. PBT is a unique method in two aspects:

  • It allows for adaptive hyper-parameters during training
  • It combines parallel search and sequential optimisation

Know more here.
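The two aspects above can be sketched with a toy population-based training loop: each worker trains its own model state in parallel with its own hyperparameter, and periodically the weakest workers copy (exploit) the state of the strongest and perturb (explore) the inherited hyperparameter. All names and the training objective here are illustrative, not DeepMind's implementation.

```python
import random

random.seed(1)

# Toy training objective: minimise (theta - 5)^2 with SGD; the learning
# rate is the hyperparameter adapted online during training.
def train_step(theta, lr):
    grad = 2 * (theta - 5.0)
    return theta - lr * grad

def score(theta):
    return -(theta - 5.0) ** 2  # higher is better

# Each worker carries model state (theta) and a hyperparameter (lr).
workers = [{"theta": 0.0, "lr": random.uniform(0.001, 0.5)} for _ in range(8)]

for interval in range(20):
    # Parallel phase: each worker trains for a few steps with its own lr.
    for w in workers:
        for _ in range(5):
            w["theta"] = train_step(w["theta"], w["lr"])
    # Exploit: the bottom workers copy state and lr from a top worker.
    workers.sort(key=lambda w: score(w["theta"]), reverse=True)
    for loser in workers[-2:]:
        winner = random.choice(workers[:2])
        loser["theta"] = winner["theta"]
        # Explore: perturb the inherited hyperparameter.
        loser["lr"] = min(max(winner["lr"] * random.choice([0.8, 1.2]), 1e-4), 0.5)

best = max(workers, key=lambda w: score(w["theta"]))
```

This combines parallel search (the population trains simultaneously) with sequential optimisation (hyperparameters are refined across successive exploit/explore intervals).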

7| ParamILS  

About: ParamILS (Iterated Local Search in Parameter Configuration Space) is a versatile stochastic local search approach for automated algorithm configuration, helping developers build high-performance algorithms and their applications.

ParamILS uses default and random settings for initialisation and employs iterative first improvement as a subsidiary local search procedure. It also uses a fixed number of random moves for perturbation and always accepts better or equally good parameter configurations, but re-initialises the search at random with a fixed probability.

Know more here.
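The procedure described above — first-improvement local search, perturbation by random moves, acceptance of better-or-equal configurations, and probabilistic random restarts — can be sketched as follows. The configuration space and cost function are made up for illustration; the real ParamILS measures actual runs of the target algorithm.

```python
import random

random.seed(2)

# Discrete parameter configuration space (hypothetical algorithm parameters).
space = {"a": list(range(10)), "b": list(range(10))}

# Stand-in for running the target algorithm and measuring its cost.
def cost(cfg):
    return (cfg["a"] - 7) ** 2 + (cfg["b"] - 3) ** 2

def neighbours(cfg):
    # One-exchange neighbourhood: change a single parameter's value.
    for p, values in space.items():
        for v in values:
            if v != cfg[p]:
                yield {**cfg, p: v}

def first_improvement(cfg):
    # Iterative first improvement: move to the first better neighbour found.
    improved = True
    while improved:
        improved = False
        for n in neighbours(cfg):
            if cost(n) < cost(cfg):
                cfg, improved = n, True
                break
    return cfg

def random_config():
    return {p: random.choice(v) for p, v in space.items()}

incumbent = first_improvement(random_config())
for _ in range(50):
    if random.random() < 0.05:
        # Re-initialise the search at random with a small probability.
        current = random_config()
    else:
        # Perturbation: a fixed number of random moves from the incumbent.
        current = dict(incumbent)
        for _ in range(3):
            p = random.choice(list(space))
            current[p] = random.choice(space[p])
    current = first_improvement(current)
    if cost(current) <= cost(incumbent):  # accept better or equally good
        incumbent = current
```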

8| Random Search

About: Random search can be seen as a basic improvement on grid search. The method samples hyper-parameters at random from specified distributions over possible parameter values, and the search continues until the desired accuracy is reached or the evaluation budget is exhausted. Random search has been shown to produce better results than grid search and is often used as the baseline in HPO to measure the efficiency of newly designed algorithms. Though more effective than grid search, it remains a computationally intensive method.

Know more here.
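A minimal sketch of random search, drawing a hypothetical learning rate log-uniformly and a weight decay uniformly for a fixed number of trials against a made-up score function:

```python
import random

random.seed(3)

# Hypothetical validation score over two hyperparameters, drawn from
# continuous distributions rather than a fixed grid.
def evaluate(lr, weight_decay):
    return 1.0 - (lr - 0.05) ** 2 - (weight_decay - 0.01) ** 2

best_score, best_config = float("-inf"), None
for trial in range(100):
    config = {
        "lr": 10 ** random.uniform(-4, 0),         # log-uniform over [1e-4, 1]
        "weight_decay": random.uniform(0.0, 0.1),  # uniform over [0, 0.1]
    }
    score = evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config
```

Unlike grid search, the trial budget here is independent of the number of hyper-parameters, which is one reason random search covers high-dimensional spaces more efficiently.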


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
