
Top 8 Approaches For Tuning Hyperparameters Of Machine Learning Models

Hyperparameter tuning is one of the fundamental steps in the machine learning workflow. Also known as hyperparameter optimisation, it entails searching for the configuration of user-defined inputs that lets a model achieve the best balance between accuracy and generalisability. There are various tools and approaches available to tune hyperparameters.

We have curated a list of top eight approaches for tuning hyperparameters of machine learning models.

(The list is in alphabetical order)

1| Bayesian Optimisation

About: Bayesian optimisation has emerged as an efficient tool for hyperparameter tuning of machine learning algorithms, more specifically for complex models like deep neural networks. It offers an efficient framework for optimising highly expensive black-box functions without knowing their form. It has been applied in several fields, including learning optimal robot mechanics, sequential experimental design, and synthetic gene design.
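
Below is a minimal sketch using the scikit-optimize library (an assumption; the article does not prescribe a tool), tuning two SVM hyperparameters with `gp_minimize`; the dataset and search space are illustrative choices:

```python
# A minimal Bayesian optimisation sketch with scikit-optimize (skopt).
# Assumes scikit-optimize and scikit-learn are installed; the dataset,
# model, and search space are illustrative, not prescribed by the article.
from skopt import gp_minimize
from skopt.space import Real
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(params):
    C, gamma = params
    # Negate accuracy because gp_minimize minimises its objective.
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

result = gp_minimize(
    objective,
    dimensions=[Real(1e-3, 1e3, prior="log-uniform"),   # C
                Real(1e-4, 1e1, prior="log-uniform")],  # gamma
    n_calls=30,          # number of expensive evaluations
    random_state=0,
)
print("Best (C, gamma):", result.x, "accuracy:", -result.fun)
```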

Know more here.

2| Evolutionary Algorithms

About: Evolutionary algorithms (EA) are optimisation algorithms that work by modifying a set of candidate solutions (a population) according to rules called operators. One of their main advantages is generality: EAs can be used under a broad range of conditions because they are simple and independent of the underlying problem. In hyperparameter tuning, evolutionary algorithms have been shown to outperform grid search techniques in terms of the accuracy-to-speed ratio.
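
As an illustration, here is a toy mutation-and-selection loop over two hyperparameters; the search space, fitness stand-in, and mutation scheme are assumptions made for this sketch:

```python
# A toy evolutionary search over two hyperparameters (assumed search space).
# Mutation and selection only; real EAs add crossover and other operators.
import random

def fitness(config):
    # Stand-in for cross-validated model accuracy; replace with a real
    # training/evaluation call in practice.
    lr, depth = config["lr"], config["depth"]
    return -(abs(lr - 0.01) * 100 + abs(depth - 6))

def mutate(config):
    child = dict(config)
    child["lr"] = max(1e-5, child["lr"] * random.uniform(0.5, 2.0))
    child["depth"] = max(1, child["depth"] + random.choice([-1, 0, 1]))
    return child

population = [{"lr": random.uniform(1e-4, 1e-1),
               "depth": random.randint(1, 12)} for _ in range(10)]

for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(5)]    # mutation

print("Best configuration:", max(population, key=fitness))
```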

Know more here.

3| Gradient-Based Optimisation

About: Gradient-based optimisation is a methodology for optimising several hyperparameters based on computing the gradient of a machine learning model selection criterion with respect to the hyperparameters. It can be applied when certain differentiability and continuity conditions of the training criterion are satisfied.
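
A toy sketch: if the validation criterion is differentiable in a hyperparameter, the hyperparameter itself can be updated by gradient descent. The quadratic "validation loss" below is synthetic, an assumption standing in for a real criterion:

```python
# Toy gradient-based tuning of a single hyperparameter (log regularisation
# strength). The quadratic "validation loss" is synthetic, standing in for
# a model selection criterion that is differentiable in the hyperparameter.
def val_loss(log_lam):
    return (log_lam + 2.0) ** 2 + 0.5   # minimised at log_lam = -2

def grad(log_lam):
    return 2.0 * (log_lam + 2.0)        # analytic derivative of val_loss

log_lam, step = 0.0, 0.1
for _ in range(100):
    log_lam -= step * grad(log_lam)     # gradient descent on the hyperparameter

print(f"lambda = {10 ** log_lam:.4f}, val loss = {val_loss(log_lam):.4f}")
```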

Know more here.

4| Grid Search

About: Grid search is a basic method for hyperparameter tuning. It performs an exhaustive search over the set of hyperparameter values specified by the user, making it the most straightforward approach and one that is guaranteed to find the best combination within the grid. Grid search can handle several hyperparameters, but only with a limited search space, since the number of combinations grows multiplicatively.
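
A minimal sketch with scikit-learn's GridSearchCV (one common implementation, assumed here for illustration; the estimator, grid, and dataset are illustrative):

```python
# A minimal grid search sketch using scikit-learn's GridSearchCV.
# The estimator, grid, and dataset are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10, 100],            # 4 values
    "gamma": [1e-3, 1e-2, 1e-1],       # x 3 values = 12 combinations
}

search = GridSearchCV(SVC(), param_grid, cv=5)  # exhaustive over the grid
search.fit(X, y)
print(search.best_params_, search.best_score_)
```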

Know more here.

5| Keras Tuner

About: Keras Tuner is a library that allows users to find optimal hyperparameters for machine learning or deep learning models. The library helps search over kernel sizes, learning rates for optimisation, and other hyperparameters, and can be used to get the best parameters for various deep learning models for the highest accuracy.
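
A minimal sketch with the keras_tuner package; the architecture, search space, and training settings below are illustrative assumptions, not prescriptions from the article:

```python
# A minimal KerasTuner sketch. The architecture, search space, and training
# settings are illustrative assumptions.
import keras_tuner as kt
from tensorflow import keras

(x_train, y_train), (x_val, y_val) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_val = x_val.reshape(-1, 784) / 255.0

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(
            units=hp.Int("units", min_value=32, max_value=256, step=32),
            activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
best_hps = tuner.get_best_hyperparameters(1)[0]
print(best_hps.get("units"), best_hps.get("learning_rate"))
```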

Know more here.

6| Population-based Optimisation

About: Population-based methods are essentially a family of random search methods built on genetic algorithms, such as evolutionary algorithms and particle swarm optimisation. One of the most widely used population-based methods is population-based training (PBT), proposed by DeepMind. PBT is a unique method in two aspects (see the sketch after this list):

  • It allows for adaptive hyper-parameters during training
  • It combines parallel search and sequential optimisation
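
Here is a toy, pure-Python PBT-style loop (not DeepMind's implementation; the perturbation rule and the synthetic "training" step are assumptions for illustration):

```python
# A toy population-based training (PBT) loop. Each worker trains with its
# own hyperparameters; periodically, weak workers copy (exploit) a strong
# worker's state and perturb (explore) its hyperparameters. The "training
# step" is synthetic, standing in for real gradient updates.
import random

def train_step(worker):
    # Synthetic progress: learning rates near 0.01 improve fastest.
    worker["score"] += 1.0 - min(1.0, abs(worker["lr"] - 0.01) * 50)

workers = [{"lr": random.uniform(1e-4, 1e-1), "score": 0.0}
           for _ in range(8)]

for step in range(1, 101):
    for w in workers:
        train_step(w)
    if step % 10 == 0:                  # exploit/explore interval
        workers.sort(key=lambda w: w["score"])
        for weak, strong in zip(workers[:2], workers[-2:]):
            weak["score"] = strong["score"]                      # exploit
            weak["lr"] = strong["lr"] * random.choice([0.8, 1.25])  # explore

best = max(workers, key=lambda w: w["score"])
print(f"best lr = {best['lr']:.4f}, score = {best['score']:.1f}")
```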

Know more here.

7| ParamILS  

About: ParamILS (Iterated Local Search in Parameter Configuration Space) is a versatile stochastic local search approach for automated algorithm configuration, helping developers build high-performance algorithms and their applications.

ParamILS uses default and random settings for initialisation and employs iterative first improvement as its subsidiary local search procedure. It uses a fixed number of random moves for perturbation and always accepts better or equally good parameter configurations, but re-initialises the search at random with a fixed probability.
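
A schematic pure-Python sketch of the iterated local search idea behind ParamILS (not the actual tool; the configuration space and the cost function are assumptions for illustration):

```python
# Schematic iterated local search over a discrete parameter configuration
# space, in the spirit of ParamILS (not the actual tool). The space and the
# cost function are illustrative assumptions.
import random

SPACE = {"a": [1, 2, 4, 8], "b": [0.1, 0.5, 1.0], "c": ["x", "y"]}

def cost(cfg):
    # Stand-in for running the target algorithm and measuring performance.
    return abs(cfg["a"] - 4) + abs(cfg["b"] - 0.5) + (cfg["c"] != "y")

def neighbours(cfg):
    for key, values in SPACE.items():
        for v in values:
            if v != cfg[key]:
                yield {**cfg, key: v}   # change one parameter at a time

def local_search(cfg):
    improved = True
    while improved:                      # iterative first improvement
        improved = False
        for n in neighbours(cfg):
            if cost(n) < cost(cfg):
                cfg, improved = n, True
                break
    return cfg

best = local_search({k: random.choice(v) for k, v in SPACE.items()})
for _ in range(50):
    cfg = dict(best)
    for _ in range(3):                   # perturbation: fixed random moves
        key = random.choice(list(SPACE))
        cfg[key] = random.choice(SPACE[key])
    cfg = local_search(cfg)
    if cost(cfg) <= cost(best):          # accept better or equally good
        best = cfg
    if random.random() < 0.01:           # occasional random restart
        best = local_search({k: random.choice(v) for k, v in SPACE.items()})

print("best configuration:", best, "cost:", cost(best))
```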

Know more here.

8| Random Search

About: Random search is a basic improvement on grid search: instead of enumerating a grid, it samples hyperparameter values from specified distributions over the possible values, continuing until the desired accuracy or evaluation budget is reached. Although similar in spirit to grid search, random search has been shown to produce better results, and it is often used as the baseline in hyperparameter optimisation (HPO) research to measure the efficiency of newly designed algorithms. Though more effective than grid search, it is still computationally intensive.
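
A minimal sketch with scikit-learn's RandomizedSearchCV (an assumed implementation choice; the estimator, distributions, and dataset are illustrative):

```python
# A minimal random search sketch using scikit-learn's RandomizedSearchCV.
# The estimator, distributions, and dataset are illustrative assumptions.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_distributions = {
    "C": loguniform(1e-3, 1e3),        # sampled, not enumerated
    "gamma": loguniform(1e-4, 1e1),
}

search = RandomizedSearchCV(SVC(), param_distributions,
                            n_iter=25, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```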

Know more here.


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
