
How Synthetic Gradients Are Used To Optimise Training of Large Neural Networks

Developments in neural networks are accelerating at such a pace that a beginner may find it confounding to settle on just one area of expertise. As the field moves forward, researchers keep looking for ways to steadily improve the science of neural networks. In this article we explore one such recent improvement, called ‘synthetic gradients’. The research comes out of Google’s DeepMind and offers a different and interesting take on backpropagation in neural networks. Synthetic gradients have been shown to improve communication between multiple neural networks, which is a welcome development because it eases some of the complications involved in deep learning.
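To make the idea concrete, here is a minimal sketch in PyTorch of how a synthetic-gradient update might look. This is an illustration rather than code from the DeepMind paper; the names, layer sizes and learning rates (layer_a, layer_b, synth_grad, 784/256/10 and so on) are arbitrary assumptions. A small auxiliary model predicts the gradient of the loss with respect to a layer's output, so that layer can update its weights immediately instead of waiting for the true error signal to arrive from the layers above it.

# A minimal sketch of a synthetic-gradient update in PyTorch. The names and
# sizes used here (layer_a, layer_b, synth_grad, 784/256/10, the SGD learning
# rates) are illustrative assumptions, not the configuration from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

layer_a = nn.Sequential(nn.Linear(784, 256), nn.ReLU())  # lower half of the network
layer_b = nn.Linear(256, 10)                              # upper half of the network
synth_grad = nn.Linear(256, 256)                          # predicts dLoss/d(layer_a output)

opt_a = torch.optim.SGD(layer_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(layer_b.parameters(), lr=0.01)
opt_sg = torch.optim.SGD(synth_grad.parameters(), lr=0.001)

def train_step(x, y):
    h = layer_a(x)                           # forward pass through the lower half

    # Update layer_a immediately using the *predicted* gradient of the loss
    # with respect to h, instead of waiting for layer_b's backward pass.
    h_in = h.detach()                        # keeps synth_grad's graph separate from layer_a's
    predicted_grad = synth_grad(h_in)
    opt_a.zero_grad()
    h.backward(predicted_grad.detach())
    opt_a.step()

    # Run the upper half on a detached copy of h to obtain the true loss
    # and the true gradient dLoss/dh.
    h_true = h_in.clone().requires_grad_(True)
    loss = F.cross_entropy(layer_b(h_true), y)
    opt_b.zero_grad()
    loss.backward()
    opt_b.step()

    # Train the synthetic-gradient model to match the true gradient.
    sg_loss = F.mse_loss(predicted_grad, h_true.grad)
    opt_sg.zero_grad()
    sg_loss.backward()
    opt_sg.step()
    return loss.item()

Because the lower layers never have to wait for the real error signal, the parts of the model can in principle be updated in a decoupled, even asynchronous, fashion; this is the improved "communication" between networks that makes synthetic gradients attractive for training large models.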
Backpropagation: How neural networks learn from data


Abhishek Sharma
I research and cover the latest happenings in data science. My fervent interests are the latest technology and humor/comedy (an odd combination!). When I'm not busy reading about these subjects, you'll find me watching movies or playing badminton.