
# Hyperparameter Tuning With TensorBoard In 6 Steps

Visualization helps us make sense of big data with ease. It helps us identify patterns and gain deeper insights, or at the very least makes the process easier.

In machine learning and data science, we often emphasise the importance of visualisation: it helps uncover the truth by making important facts and figures easier to understand.

TensorBoard is a tool from Tensorflow that helps in understanding a neural network through interactive graphs and statistics.


In this tutorial, we are interested specifically in hyperparameter tuning, which is by itself a big deal in machine learning.

So let’s begin:

### Data Set

FEATURES:

• Name: The brand and model of the car.
• Location: The location in which the car is being sold or is available for purchase.
• Year: The year or edition of the model.
• Kilometers_Driven: The total distance driven in the car by the previous owner(s), in kilometres.
• Fuel_Type: The type of fuel used by the car.
• Transmission: The type of transmission used by the car.
• Owner_Type: Whether the ownership is first-hand, second-hand or other.
• Mileage: The standard mileage offered by the car company, in kmpl or km/kg.
• Engine: The displacement volume of the engine in cc.
• Power: The maximum power of the engine in bhp.
• Seats: The number of seats in the car.
• New_Price: The price of a new car of the same model.
• Price: The price of the used car in INR Lakhs.

We will train a neural network to predict the price of a used car based on the above list of features.

The data preparation and preprocessing part has already been taken care of and you can find it in the complete code at the end of this article.

### Hyperparameter Tuning With TensorBoard

Let us assume that we have an initial Keras sequential model for the given problem as follows:
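The original code block is not included in this extract. A minimal sketch consistent with the description below might look like this (the layer sizes, dropout fraction and optimizer come from the article; the loss function and metric are assumptions):

```python
import tensorflow as tf

# Initial Keras sequential model: 26 input features, one hidden layer
# with 100 ReLU units, 20% dropout, and a single-node regression output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation='relu', input_shape=(26,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),  # single node for regression
])

# MSE loss and RMSE metric are assumptions; the article only names Adam.
model.compile(
    optimizer='adam',
    loss='mse',
    metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
```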

Here we have an input layer with 26 nodes, a hidden layer with 100 nodes and a ReLU activation function, a dropout layer with a dropout fraction of 0.2, an output layer with a single node for regression, and the Adam optimizer.

Note:

The input shape is 26 since after preprocessing the dataset there are 26 independent features.

We notice that there are a few hyperparameters we chose arbitrarily, such as the number of nodes in the hidden layer, the dropout ratio and the optimizer function.

There are a number of possible values we can assign to each of these parameters, and their effect can only be assessed by training the model. Manually trying out every value and combination of the parameters is therefore not feasible.

We will use the power of TensorBoard to visualize the performance of the network for each of the different parameters and all in one go.

Note:

The following example was done on Google Colab with Tensorflow 2.0.

So let’s begin!

#### Importing Tensorboard Plugin

`from tensorboard.plugins.hparams import api as hp`

We will start by importing the hparams plugin available in the tensorboard.plugins module.

#### Initializing HyperParameters
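The original code block is missing from this extract. A sketch of the initialisation step, assuming a hypothetical search space (the article does not list the exact values tried):

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Hypothetical search space -- the exact values are assumptions.
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([64, 100, 128]))
HP_DROPOUT = hp.HParam('dropout', hp.Discrete([0.1, 0.2, 0.3]))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_RMSE = 'root_mean_squared_error'

# Record the experiment configuration once, in the log directory
# that TensorBoard will later read from.
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_RMSE, display_name='RMSE')],
    )
```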

In the above code block, we initialise values for the hyperparameters that need to be assessed and set the model's metric to RMSE. Since TensorBoard works with log files created during training, we also create a log directory that records the losses, metrics and other measures of the training process.

#### A function To Train And Validate

Now we create a function to train and validate the model, which takes the hyperparameters as an argument. Each combination of hyperparameters will run for 6 epochs; the hyperparameters are provided in an hparams dictionary and used throughout the training function.

#### A function to log the training process

The following function initiates the training process for the hyperparameters to be assessed, creates a summary from the RMSE value returned by the train_test_model function, and writes that summary, together with the hyperparameters and the final accuracy (RMSE), to the logs.

#### Training The Model

We will now train the model for each combination of the hyperparameters.

#### Launching TensorBoard

It’s time to launch TensorBoard. Use the following commands to launch tensorboard.

```
%load_ext tensorboard
%tensorboard --logdir logs/hparam_tuning
```

### Table View

Once it is launched, you will see a beautiful dashboard. Click on the HPARAMS tab to see the hyperparameter logs.

In Table View, all the hyperparameter combinations and their respective accuracies are displayed in a neat table.

The left side of the dashboard provides a number of filtering capabilities such as sorting based on the metric, filtering based on specific type or value of hyperparameter, filtering based on status etc.

### Parallel Coordinates View

The Parallel Coordinates View shows each run as a line passing through an axis for each hyperparameter and metric. The interactive plot allows us to mark a region on an axis, which highlights only the runs that pass through it. The scale of each hyperparameter axis can also be switched between linear, logarithmic and quantile.

This is extremely useful in understanding the relationships between the hyperparameters.

We can select the optimum hyperparameters simply by selecting the run with the least RMSE.

### Scatter Plot Matrix View

The Scatter Plot Matrix View plots each hyperparameter against the given metric.

This helps us understand how different values of each parameter correlate with the metric.

### Complete Code

Thus for complex networks, TensorBoard can give us valuable insights to optimize the model for better performance and accuracy.

A Computer Science Engineer turned Data Scientist who is passionate about AI and all related technologies. Contact: amal.nair@analyticsindiamag.com
