How to use Hyperopt for Distributed Hyperparameter Optimisation?

Hyperopt is a tool for hyperparameter optimization. It helps in finding the best value, over a set of possible arguments, of a function that may be scalar-valued and stochastic.

In machine learning, finding the best-fit model and the hyperparameters for it to fit the data is a crucial part of the whole modelling procedure. Various hyperparameter optimizers are available for this task, such as BayesianOptimization, GPyOpt, Hyperopt, and many more. In this article, we are going to discuss the Hyperopt optimization package in Python, which performs hyperparameter optimization using the Bayesian optimization technique. The major points that we will discuss here are listed below.

Table of Contents

  1. What is Hyperopt?
  2. Simple Implementation of Hyperopt
  3. Model Selection using Hyperopt

Let’s start by understanding what Hyperopt is.

What is Hyperopt?

Hyperopt is a tool for hyperparameter optimization. It helps in finding the best value, over a set of possible arguments, of a function that may be scalar-valued and stochastic. One of the major differences between Hyperopt and other optimizers is that other optimizers assume the input vectors are drawn from a generic vector space, whereas Hyperopt lets us describe the search space in a more expressive way. This allows us to encode more information about the space over which the function is defined and about where we think the best values lie, and with that information Hyperopt’s algorithms can search more efficiently.

We can use the various packages under the hyperopt library for different purposes. The list of these packages is as follows:

  • Hyperopt: Distributed asynchronous hyperparameter optimization in Python.
  • Hyperopt-sklearn: Hyperparameter optimization for sklearn models.
  • Hyperopt-convnet: Convolutional computer vision architectures that can be tuned by hyperopt.
  • Hyperopt-nnet: Hyperparameter optimization for neural networks.
  • Hyperopt-gpsmbo: Gaussian process optimization algorithm for Hyperopt.

In this article, we will discuss how we can perform hyperparameter optimization using it. Let’s start by looking at the calling conventions that define the communication between hyperopt, the search space, and an objective function.

Using the following line of code, we can install hyperopt.

!pip install hyperopt

Since I am using Google Colab for this article, hyperopt is already available in that environment. Let’s start with a simple implementation.

Simple Implementation of Hyperopt

Using the following lines of code, we can define a search space.

from hyperopt import hp

# A one-dimensional search space labelled 'x': uniform over [-10, 10]
space = hp.uniform('x', -10, 10)

Using the above code snippet, we have defined a search space labelled ‘x’ and bounded between -10 and 10.
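As a quick sanity check (an optional step, not part of the original walkthrough), we can draw a few samples from the space with hyperopt’s pyll sampling utility to see what kind of values it produces:

import hyperopt.pyll.stochastic

# Draw a few example values from the search space defined above
for _ in range(3):
    print(hyperopt.pyll.stochastic.sample(space))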

As we have seen above, we have defined a space where Hyperopt’s optimization algorithm can search for an optimal value, so that the objective function always receives a valid point. Let’s see, in the simplest way, how we can perform the optimization.

from hyperopt import fmin, tpe

# Minimize x ** 2 over the space defined above, using the Tree-structured
# Parzen Estimator (TPE) search algorithm for 100 evaluations
best = fmin(
    fn=lambda x: x ** 2,
    space=space,
    algo=tpe.suggest,
    max_evals=100,
)
print(best)

As the above code shows, the optimization is easy to set up: we just need to supply a function, a search space, an algorithm, and an evaluation budget. The output, best, is a dictionary mapping the parameter label to the best value found, here a floating-point value of ‘x’ close to 0, since x ** 2 is minimized at x = 0.

The above example is the simplest case of finding an optimal value for our objective function. To make the process more transparent, hyperopt provides the Trials object, which records statistics and diagnostic information for every evaluation. To feed it, the objective function can return a nested dictionary instead of a bare number, with some special keys that fmin understands. Two important keys are:

  • status: the completion status of the trial, ‘ok’ when the process completed successfully and ‘fail’ when it failed or the function turned out to be undefined.
  • loss: the float value that is to be minimized; it is required when the status is ‘ok’.

There are also many optional keys that can be used like:

  • attachments
  • loss_variance 
  • true_loss
  • true_loss_variance 

How to use these keys and the Trials object, including how to save and inspect the recorded information and diagnostics, is covered in the hyperopt documentation; to keep this article compact we do not discuss all of it here. A minimal sketch is given below.
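The following is a minimal sketch (an illustration added here, not from the original walkthrough) of an objective function that returns a dictionary with the loss and status keys, together with a Trials object that collects the per-trial results:

from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

def objective(x):
    # 'loss' is the float value to minimize; 'status' marks the trial as successful
    return {'loss': x ** 2, 'status': STATUS_OK}

trials = Trials()
best = fmin(
    fn=objective,
    space=hp.uniform('x', -10, 10),
    algo=tpe.suggest,
    max_evals=50,
    trials=trials,
)
print(best)                # the best parameter found, a dictionary with key 'x'
print(trials.results[:3])  # the dictionaries returned by the objective function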

Since our main motive here is to perform hyperparameter optimization using this tool, in the next section we will see an approach to doing that. Before we do, we need to know about the parameter expressions for defining a space, which can be used with hyperopt’s optimization algorithms. Some of these expressions are listed below, followed by a small example space.

  • hp.choice(label, options): Returns one of the options, which should be a list or tuple.
  • hp.randint(label, upper): Provides a random integer in the range [0, upper).
  • hp.uniform(label, low, high): Provides a value uniformly distributed between low and high.
  • hp.quniform(label, low, high, q): Provides a value like round(uniform(low, high) / q) * q.
  • hp.loguniform(label, low, high): Provides a value drawn according to exp(uniform(low, high)), so that the logarithm of the return value is uniformly distributed.
  • hp.qloguniform(label, low, high, q): Provides a value like round(exp(uniform(low, high)) / q) * q.
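For illustration, a small search space combining several of these expressions might look like the following (a sketch; the parameter labels and ranges here are arbitrary, not taken from the original article):

from hyperopt import hp

# A hypothetical search space for a classifier; labels and ranges are arbitrary
space = {
    'classifier': hp.choice('classifier', ['svm', 'random_forest']),
    'C': hp.loguniform('C', -5, 5),                        # exp(uniform(-5, 5))
    'n_estimators': hp.quniform('n_estimators', 10, 200, 10),
    'max_depth': hp.randint('max_depth', 20),              # integer in [0, 20)
}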

With these expressions, we can build a search space for our objective function. Now we can move towards an implementation of a simple modelling procedure where we will perform hyperparameter optimization using hyperopt-sklearn.

Note: hyperopt-sklearn is a library built on the hyperopt tool that lets us perform model selection over the machine learning algorithms of scikit-learn.

Model Selection using Hyperopt

In this article, we use hyperopt-sklearn to perform classification model selection on the iris dataset. This dataset is available in the sklearn library, so we will import it from there. Let’s start by installing and importing the necessary libraries. We only need to install hyperopt-sklearn, which can be done with the following command.

pip install git+https://github.com/hyperopt/hyperopt-sklearn

Now we are ready to use the library. We also require the sklearn, NumPy, and pandas libraries for this implementation.

Importing libraries

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import hyperopt.tpe
import hpsklearn
import hpsklearn.demo_support

Importing the data

# Load the iris data into a DataFrame and attach the species names
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['species_name'] = pd.Categorical.from_codes(iris.target, iris.target_names)
df

Splitting the data

# Hold out 20% of the data for testing
y = df['species_name']
X = df.drop(['species_name'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

Defining the estimator using HyperoptEstimator

# Search over any sklearn preprocessing pipeline and any sklearn classifier,
# using TPE with at most 20 trials of 15 seconds each
estimator = hpsklearn.HyperoptEstimator(
    preprocessing=hpsklearn.components.any_preprocessing('pp'),
    classifier=hpsklearn.components.any_classifier('clf'),
    algo=hyperopt.tpe.suggest,
    trial_timeout=15.0,  # seconds
    max_evals=20,
)
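As an aside, the whole search can also be run with a single call to the estimator’s fit method; the iterator-based version below is just a demo variant of it that lets us plot progress after every trial:

# One-shot alternative to the iterative demo below (same search, no plotting)
estimator.fit(X_train, y_train)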

Performing model selection using the estimator on the training data, trying one model per iteration:

# Demo version of estimator.fit(): try one model at a time and plot progress
fit_iterator = estimator.fit_iter(X_train, y_train)
fit_iterator.__next__()
plot_helper = hpsklearn.demo_support.PlotHelper(estimator,
                                                mintodate_ylim=(-.01, .10))
while len(estimator.trials.trials) < estimator.max_evals:
    fit_iterator.send(1)  # -- try one more model
    plot_helper.post_iter()
plot_helper.post_loop()

Training the best model on the whole training data

estimator.retrain_best_model_on_full_data(X_train, y_train)

Now we can inspect the results of the model selection process:

# Report the best preprocessing steps, the best classifier, and test accuracy
print('Best preprocessing pipeline:')
for pp in estimator._best_preprocs:
    print(pp)
print('\n')
print('Best classifier:\n', estimator._best_learner)
test_predictions = estimator.predict(X_test)
acc_in_percent = 100 * np.mean(test_predictions == y_test)
print('\n')
print('Prediction accuracy in generalization is %.1f%%' % acc_in_percent)

In the printed results, we have the best preprocessing pipeline, the best classifier with its parameters, and the prediction accuracy of the model on the held-out test set.

Final Words 

In this article, we introduced the hyperopt tool for hyperparameter optimization. Along with that, we discussed some of the features of this tool, and we successfully implemented an example of model selection using the hyperopt-sklearn tool, which is provided by hyperopt for the models of the scikit-learn library.
