Does Neural Network Compression Impact Transfer Learning?

Compression techniques have grown in popularity alongside the increasing size of machine learning models, whose parameter counts now run into the billions. Compression comes in handy when a model has to be downloaded and run on a smartphone or another device with limited memory and compute. Compression usually involves discarding what the model does not need. Compression techniques mainly fall into the following types:

1. Parameter Pruning and Sharing: Redundancies in the model parameters are explored, and redundant parameters that are not sensitive to performance are removed. The approach is robust to various settings (see the sketch after this list).
2. Low-Rank Factorisation: Uses matrix decomposition to estimate the informative parameters of deep convolutional neural networks (also illustrated in the sketch below).
3. Transferred/Compact Convolutional Filters: Special structural convolutional filters are designed to reduce the parameter space and save storage and computation.
4. Knowledge Distillation: A compact student model is trained to reproduce the output of a larger teacher model.
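To make the first two categories concrete, here is a minimal sketch assuming only NumPy; the 512x256 weight matrix, sparsity level and target rank are arbitrary illustrations, not taken from any particular model. It shows magnitude-based pruning and low-rank factorisation applied to a single dense layer.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # stand-in for a dense layer's weight matrix

# 1. Parameter pruning: zero out the smallest-magnitude weights.
sparsity = 0.7                                # fraction of weights to drop (illustrative)
threshold = np.quantile(np.abs(W), sparsity)  # magnitude cut-off
mask = np.abs(W) >= threshold                 # keep only the larger, "informative" weights
W_pruned = W * mask
print(f"pruned away {100 * (1 - mask.mean()):.1f}% of parameters")

# 2. Low-rank factorisation: approximate W with two thin matrices A @ B.
rank = 32                                     # target rank, much smaller than min(512, 256)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]                    # 512 x 32
B = Vt[:rank, :]                              # 32 x 256
W_lowrank = A @ B                             # used in place of W at inference time

print(f"parameters: {W.size} -> {A.size + B.size} "
      f"({(A.size + B.size) / W.size:.1%} of the original)")
print(f"relative reconstruction error: "
      f"{np.linalg.norm(W - W_lowrank) / np.linalg.norm(W):.3f}")
```

In both cases the compressed layer trades a small amount of accuracy for a large reduction in storage and compute, which is the trade-off at stake when such a compressed model is later fine-tuned on a new task.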