Google Introduces A New Algorithm For Training Sparse Neural Networks

Since the AlexNet paper burst onto the computer vision scene, machine learning results have improved greatly thanks to deeper, more complex models, increased computational power and the availability of large-scale labelled data. There is still room to refine neural networks, however, as they often fall back on brute-force dense computation even for lightweight tasks. To address this, researchers at Google have come up with RigL, an algorithm for training sparse neural networks that uses a fixed parameter count and computational cost throughout training, without sacrificing accuracy.

So, what does sparse mean in the context of neural networks? Each layer of a network can be represented by a matrix, where each entry stands for the connection between two neurons. A matrix in which most entries are 0 is called a sparse matrix. When a matrix is large and sparse, storing it becomes more efficient, and so do the computations that use it.
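The storage and compute savings are easy to see with a small sketch. The snippet below uses NumPy and SciPy with illustrative layer sizes and a simple magnitude-based mask to impose sparsity; this masking heuristic is only an illustration of sparse weights, not the RigL update rule itself.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Dense weight matrix for a hypothetical fully connected layer (illustrative size).
dense_w = rng.standard_normal((1024, 1024)).astype(np.float32)

# Impose 90% sparsity by keeping only the largest-magnitude 10% of weights.
# (A common pruning heuristic, used here just to produce a sparse matrix.)
keep = int(0.1 * dense_w.size)
threshold = np.sort(np.abs(dense_w), axis=None)[-keep]
mask = np.abs(dense_w) >= threshold
sparse_w = sparse.csr_matrix(dense_w * mask)

print(f"Non-zero fraction: {sparse_w.nnz / dense_w.size:.2%}")
print(f"Dense storage: {dense_w.nbytes / 1e6:.1f} MB")
csr_bytes = sparse_w.data.nbytes + sparse_w.indices.nbytes + sparse_w.indptr.nbytes
print(f"CSR storage:   {csr_bytes / 1e6:.1f} MB")

# A sparse matrix-vector product only touches the stored entries,
# so compute scales with the number of non-zeros rather than the full layer size.
x = rng.standard_normal(1024).astype(np.float32)
y = sparse_w @ x
```

Because only the non-zero connections are stored and multiplied, both memory and the cost of a forward pass shrink roughly in proportion to the sparsity level.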
