
Workshop Alert: Accelerating Deep Learning Inference Workloads at Scale

Learn how to scale and accelerate deep learning workloads with NVIDIA’s comprehensive tech stack.



Deep learning is having something of an iPhone moment with the launch of ChatGPT, and enterprises are eager to jump on board with the generative AI trend. However, to build a solid product on new AI technologies, developers need to stay up to date with the latest advancements in the field as well as frequently brush up on the basics of deep learning. 

To upskill hungry software engineers, NVIDIA, along with Analytics India Magazine, is holding a webinar on the 25th of April from 3-4 PM. This webinar will tackle the subject of accelerating deep learning inference workloads at scale, a must-know set of skills for developers in the generative AI boom. The webinar will also have a Q&A session hosted on the AI Forum, a community for AI developers and data scientists that aims to empower AI professionals across India.

Who Should Attend?

  • Programmers looking to learn how to scale AI workloads efficiently 
  • Software developers building AI products for their companies
  • AI engineers looking to interact with India’s growing data science community
  • Enthusiasts looking to brush up on the basics of AI inference in the cloud
  • Professionals looking to upskill themselves and accelerate their AI workloads

Apart from tackling scalable inference across multiple GPUs, the webinar will also cover topics like model orchestration and management, large language model inference, and optimal model configuration. The session will delve deeper into the technicalities of interfacing with the Triton Inference Server, especially through the FIL backend, along with the potential performance benefits of dynamic batching. 
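For readers unfamiliar with dynamic batching, Triton enables it declaratively in a model's `config.pbtxt` file rather than in application code. A minimal sketch is shown below; the model name and batch-size values are illustrative assumptions, not details taken from the webinar:

```protobuf
name: "example_fil_model"   # hypothetical model name
backend: "fil"              # the FIL backend discussed in the session
max_batch_size: 32

dynamic_batching {
  # Server-side batching: Triton gathers concurrent individual
  # requests into larger batches, waiting at most 100 microseconds
  # for a batch to fill before dispatching it.
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 100
}
```

With a stanza like this in place, Triton coalesces single-request traffic into larger batches on the server side, which typically improves GPU utilisation without any changes to client code.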

Date and Time: 25th April | 3:00 – 4:00 PM

Register Now on Discord

About the webinar

This webinar will illustrate how to accelerate and streamline AI inference workloads on any framework using Triton. NVIDIA’s open-source Triton Inference Server software has the potential to supercharge deep learning workloads, enabling developers to deliver high-performance inference across a multitude of cloud architectures. 

Whether it’s cloud service providers, on-premises servers or edge and embedded devices, Triton allows inference workloads to scale according to the available compute. Over the course of the webinar, the speaker will provide an in-depth tutorial on Triton’s capabilities, as well as some example deployments that show them in practice. 

This webinar will also include a question-and-answer session hosted on the AI Forum for India, powered by NVIDIA. The AI Forum is a groundbreaking community for AI developers and data scientists hosted on Discord that aims to empower AI professionals across India. Beyond this webinar, the AI Forum is a one-stop spot for any AI enthusiast or professional to interact with like-minded individuals and explore the world of artificial intelligence, machine learning and data analytics. 

What to expect?

  • Illustration on how to accelerate and streamline AI inference workloads on any framework
  • How NVIDIA Triton allows developers to deliver high-performance inference across cloud, on-prem, edge, and embedded devices
  • 10,000-foot overview of industrial adoption of neural networks, from GPT-4 to Transformers
  • Inferencing Server Platform: Triton Server
  • Compiler Optimization: TensorRT 
  • Hands-on & Demos
  • Q&A


About the speaker: 

This webinar will be conducted by Megh Makwana, Manager Solution Architect, Applied Deep Learning at NVIDIA. He has six years of experience in the machine learning and artificial intelligence space, including research work in AI workload optimisation, two of those years at NVIDIA, a global leader in distributed computing and AI accelerators. His session is sure to provide unique insights into how large deep learning workloads can be scaled efficiently. 

Using the skills from this webinar, developers will be able to leverage NVIDIA’s open-source Triton Inference Server software to take their deep learning workloads to the next level. Sign up and join the Discord server to make sure you don’t miss this informative session. 

The webinar will give you a chance to join the AI revolution and embrace the future with our AI Forum on Discord. Register Now on Discord



Anirudh VK

I am an AI enthusiast and love keeping up with the latest events in the space. I love video games and pizza.
