How To Optimise Deep Learning Models

The increasing number of parameters, rising latency, and the resources required for training have made working with deep learning tricky. In an extensive survey, Google researchers identified common challenge areas for deep learning practitioners and suggested key checkpoints to mitigate them. For instance, a practitioner deploying a model might face the following challenges:

- Training may be a one-time cost, but deploying a model and letting inference run over a long period can still prove expensive in terms of server-side RAM, CPU, and other resources.
- Using as little data as possible for training is critical when the user data might be sensitive.
- New applications come with new constraints (around model quality or footprint) that existing off-the-shelf models might not be able to address.
- Deploying multiple models on the same infrastructure for different applications might end up exhausting the available resources.

Most of these challenges…
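One family of mitigations such surveys discuss for the inference-footprint problem is post-training quantization: storing weights in 8-bit integers instead of 32-bit floats. The sketch below is an illustrative affine-quantization example in plain NumPy, not any particular library's API; the function names and the int8 range are assumptions made for the illustration.

```python
# Minimal sketch of post-training affine quantization of a weight tensor
# to int8, one common technique for shrinking inference footprint.
# Illustrative only; real toolkits (e.g. framework quantizers) differ.
import numpy as np

def quantize_int8(w):
    """Map a float array onto int8, returning (q, scale, zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in weight matrix
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)

# int8 storage is 4x smaller than float32 for the same tensor,
# and the worst-case reconstruction error stays within ~1.5 steps.
assert q.nbytes * 4 == w.nbytes
max_err = float(np.abs(w - w_hat).max())
```

The trade-off is exactly the one the survey frames: a 4x reduction in weight memory in exchange for a bounded per-weight rounding error, which for most layers translates into little or no accuracy loss.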


Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.