
What Does Freezing A Layer Mean And How Does It Help In Fine Tuning Neural Networks


Freezing a layer, in the context of neural networks, is about controlling the way weights are updated. When a layer is frozen, its weights are no longer modified during training.

This technique, as obvious as it may sound, cuts down on the computational time of training while giving up little in accuracy.

Techniques like dropout and stochastic depth have already demonstrated that networks can be trained efficiently without updating every unit or layer at every step.

Layer freezing is another such technique: it accelerates neural network training by progressively freezing hidden layers.

For instance, during transfer learning, the first layers of the network are frozen while the final layers are left open to modification.

This means that if a machine learning model is tasked with object detection, passing an image through a frozen layer during the first epoch and passing the same image through it again during the second epoch yields the same output from that layer.

In other words, consider a network with two layers, where the first layer is frozen and the second is not. If we train for 100 epochs, we perform an identical computation through the first layer in each of those 100 epochs.

The same images are run through the same layer without its weights being updated. For every epoch, the inputs to the first layer are the same (the images), the weights in the first layer are the same, and therefore the outputs from the first layer are the same (images * weights + bias).
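
As a minimal sketch of this two-layer case (the layer sizes, random data and optimiser below are arbitrary stand-ins, and a recent TensorFlow/Keras installation is assumed), the frozen first layer returns exactly the same activations before and after 100 epochs of training:

import numpy as np
from keras.layers import Dense, Input
from keras.models import Sequential

# Toy stand-ins for images: 32 samples with 64 features, 10 classes
x = np.random.rand(32, 64).astype("float32")
y = np.random.randint(0, 10, size=(32,))

frozen = Dense(128, activation="relu", trainable=False)  # first layer: frozen
head = Dense(10, activation="softmax")                   # second layer: trainable

model = Sequential([Input(shape=(64,)), frozen, head])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

out_epoch_0 = frozen(x)              # frozen layer's output before training
model.fit(x, y, epochs=100, verbose=0)
out_epoch_100 = frozen(x)            # same inputs * same weights + same bias

# The frozen layer's output is identical after 100 epochs of training
assert np.allclose(out_epoch_0, out_epoch_100)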

In a typical transfer-learning setup, the pre-trained part is frozen and only the last layers are trained; how big the change to the weights in a trainable layer is at each step is governed by the learning rate.
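
A minimal sketch of such a setup, with VGG16 standing in for the pre-trained part (the base model, input shape, class count and learning rate are illustrative assumptions): the whole base is frozen, only the new head is trained, and the learning rate passed to the optimiser governs how large each weight update in the head is.

from keras.applications import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Sequential
from keras.optimizers import Adam

# Pre-trained convolutional base without its original classifier
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze every layer in the pre-trained part

model = Sequential([
    base,
    GlobalAveragePooling2D(),
    Dense(10, activation="softmax"),  # new head: the only trainable weights
])

# The learning rate governs how big each update to the unfrozen weights is
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])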

Accelerated Training By Freezing

In the FreezeOut paper, the authors use learning rate annealing layer by layer, i.e., each layer's learning rate follows its own schedule instead of a single schedule for the whole model.

Once a layer’s learning rate reaches zero, it gets set to inference mode and excluded from all future backward passes, resulting in an immediate per-iteration speedup proportional to the computational cost of the layer.
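
A rough sketch of that layer-wise annealing in plain Python (the function name and signature are made up for illustration; the cosine form and the optional division by t_i reflect our reading of the paper's annealing and learning-rate-scaling variants): each layer i gets its own freeze fraction t_i, its learning rate is annealed to zero as training reaches t_i, and from then on the layer can be put in inference mode and skipped in the backward pass.

import math

def freezeout_layer_lr(base_lr, t, t_i, scale=True):
    # t   : fraction of total training completed, in [0, 1]
    # t_i : fraction of training at which this layer is frozen
    if t >= t_i:
        return 0.0  # frozen: inference mode, excluded from backward passes
    lr = 0.5 * base_lr * (1.0 + math.cos(math.pi * t / t_i))
    # "Learning rate scaling": layers that train for less time start higher
    return lr / t_i if scale else lr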

The results of experiments on popular models show a promising speedup-versus-accuracy tradeoff.

“For every strategy, there was a speedup of up to 20%, with a maximum relative 3% increase in test error. Lower speedup levels perform better and occasionally outperform the baseline, though given the inherent level of non-determinism in training a network, we consider this margin insignificant,” say the authors in their paper titled FreezeOut.

Whether this tradeoff is acceptable is up to the user.  If one is prototyping many different designs and simply wants to observe how they rank relative to one another, then employing higher levels of FreezeOut may be tenable.

If, however, one has set one’s network design and hyperparameters and simply wants to maximize performance on a test set, then a reduction in training time is likely of no value, and FreezeOut is not a desirable technique to use.

Based on these experiments, the authors recommend a default strategy of cubic scheduling with learning rate scaling, using a t_0 value of 0.8 before cubing (so t_0 = 0.512 after cubing) for maximizing speed while remaining within an envelope of 3% relative error.
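
A short sketch of how those freeze fractions could be laid out (spacing the t_i values linearly from t_0 to 1 across layers is our assumption; the cubing step and the 0.8 → 0.512 figure come from the recommendation above):

def freeze_fractions(num_layers, t_0=0.8, cubic=True):
    # Spread per-layer freeze times linearly from t_0 (first layer)
    # to 1.0 (last layer, trained until the end)
    ts = [t_0 + (1.0 - t_0) * i / (num_layers - 1) for i in range(num_layers)]
    # Cubic scheduling: cube each value, so t_0 = 0.8 becomes 0.8 ** 3 = 0.512
    return [t ** 3 for t in ts] if cubic else ts

print(freeze_fractions(5))  # [0.512, 0.614..., 0.729, 0.857..., 1.0]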

Key Takeaways

  • Freezing reduces training time because frozen layers no longer take part in the backward pass.
  • Freezing a layer too early in training is not advisable.
  • If you freeze all the layers but the last five, you only need to backpropagate gradients through, and update the weights of, the last five layers, which results in a large decrease in computation time (see the sketch below).
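
As a minimal Keras sketch of the last point (here `model` is a placeholder for whatever network you have already built or loaded): freeze everything except the last five layers, then recompile so the new trainable flags take effect.

# Freeze every layer except the last five
for layer in model.layers[:-5]:
    layer.trainable = False
for layer in model.layers[-5:]:
    layer.trainable = True

# Changes to trainable only take effect after re-compiling the model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])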

Here is a sample code snippet showing how freezing is done with Keras:

from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D, GlobalAveragePooling2D
from keras.layers import BatchNormalization
from keras.models import Sequential

model = Sequential()

# Setting trainable=False freezes the layer: its weights are not updated during training
model.add(Conv2D(64, (3, 3), trainable=False))

Check the full code here
