Workshop Alert: Accelerating Deep Learning Inference Workloads at Scale

Learn how to scale and accelerate deep learning workloads with NVIDIA’s comprehensive tech stack.

Deep learning is having something of an iPhone moment with the launch of ChatGPT, and enterprises are eager to jump on the generative AI trend. However, to build solid products on new AI technologies, developers need to stay up to date with the latest advancements in the field and frequently brush up on the basics of deep learning. 

To upskill hungry software engineers, NVIDIA, along with Analytics India Magazine, is holding a webinar on the 25th of April from 3-4 PM. This webinar will tackle the subject of accelerating deep learning inference workloads at scale, a must-know set of skills for developers in the generative AI boom. The webinar will also have a Q&A session hosted on the AI Forum, a groundbreaking community for AI developers and data scientists that aims to empower AI professionals across India.

Who Should Attend?

  • Programmers looking to learn how to scale AI workloads efficiently 
  • Software developers building AI products for their companies
  • AI engineers looking to interact with India’s growing data science community
  • Enthusiasts looking to brush up on the basics of AI inference in the cloud
  • Professionals looking to upskill themselves and accelerate their AI workloads

Apart from tackling scalable inference across multiple GPUs, the webinar will also cover topics like model orchestration and management, large language model inference, and optimal model configuration. The session will delve deeper into the technicalities of interfacing with the Triton Inference Server, especially through the FIL backend, along with the potential performance benefits of dynamic batching. 
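Dynamic batching, one of the topics above, is the idea of grouping individual inference requests that arrive close together into a single batched forward pass, trading a small queueing delay for much better GPU utilisation. Triton implements this server-side; the sketch below is only a minimal, framework-independent illustration of the grouping logic (the request format and the size/delay limits are illustrative assumptions, not Triton's API):

```python
from dataclasses import dataclass

@dataclass
class Request:
    request_id: int
    arrival_ms: float  # arrival time in milliseconds (hypothetical field)

def form_batches(requests, max_batch_size=8, max_queue_delay_ms=100.0):
    """Group requests into batches: a batch is closed when it reaches
    max_batch_size, or when the next request arrived more than
    max_queue_delay_ms after the batch was opened."""
    batches, current = [], []
    for req in sorted(requests, key=lambda r: r.arrival_ms):
        if current and (
            len(current) == max_batch_size
            or req.arrival_ms - current[0].arrival_ms > max_queue_delay_ms
        ):
            batches.append(current)
            current = []
        current.append(req)
    if current:
        batches.append(current)
    return batches

# Ten requests arriving 30 ms apart: with a 100 ms queue-delay window and a
# batch cap of 4, they are served in three batched passes instead of ten
# single-item passes.
reqs = [Request(i, i * 30.0) for i in range(10)]
print([len(b) for b in form_batches(reqs, max_batch_size=4)])  # → [4, 4, 2]
```

Larger batches amortise per-inference overhead on the GPU, which is why a small `max_queue_delay` often improves throughput with little impact on latency.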

Date and Time: 25th April | 3:00 – 4:00 PM

Register Now on Discord

About the webinar

This webinar will illustrate how to accelerate and streamline AI inference workloads on any framework using Triton. NVIDIA’s open-source Triton Inference Server software has the potential to supercharge deep learning workloads, enabling developers to deliver high-performance inference across a multitude of cloud architectures. 

Whether it’s cloud service providers, on-premises servers or edge and embedded devices, Triton allows inference workloads to scale according to the available compute. Over the course of the webinar, the speaker will provide an in-depth tutorial on Triton’s capabilities, along with some example deployments to show them in practice. 
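Concretely, much of Triton's behaviour is driven by a per-model `config.pbtxt` file. The fragment below sketches what enabling dynamic batching for a served model typically looks like; the model name, backend choice, tensor names and dimensions are illustrative assumptions, not from the source:

```protobuf
name: "example_model"      # hypothetical model name
backend: "onnxruntime"     # other supported backends are configured similarly
max_batch_size: 8
input [
  { name: "input__0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output__0", data_type: TYPE_FP32, dims: [ 1000 ] }
]
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

With this in place, Triton itself groups concurrent requests into batches of the preferred sizes, waiting at most the configured queue delay before dispatching whatever has accumulated.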

This webinar will also include a question-and-answer session hosted on the AI Forum for India, powered by NVIDIA. Hosted on Discord, the AI Forum is a community for AI developers and data scientists that aims to empower AI professionals across India. Beyond this webinar, the AI Forum is a one-stop destination for any AI enthusiast or professional to interact with like-minded individuals and explore the world of artificial intelligence, machine learning and data analytics. 

What to expect?

  • A walkthrough of how to accelerate and streamline AI inference workloads on any framework
  • How NVIDIA Triton lets developers deliver high-performance inference across cloud, on-prem, edge, and embedded devices
  • A 10,000-foot overview of the industrial adoption of neural networks, from Transformers to GPT-4
  • Inference serving platform: Triton Inference Server
  • Compiler optimization: TensorRT
  • Hands-on sessions and demos
  • Q&A

About the speaker: 

This webinar will be conducted by Megh Makwana, Manager, Solution Architect, Applied Deep Learning at NVIDIA. He has six years of experience in the machine learning and artificial intelligence space, including research work in AI workload optimisation, and two years at NVIDIA, a global leader in distributed computing and AI accelerators. His session promises unique insight into how large deep learning workloads can be scaled efficiently. 

Using the skills from this webinar, developers will be able to leverage NVIDIA’s open-source Triton Inference Server software to take their deep learning workloads to the next level. Sign up and join the Discord server to make sure you don’t miss this informative session. 

The webinar is also your chance to join the AI revolution and embrace the future with the AI Forum on Discord. Register Now on Discord

Anirudh VK

I am an AI enthusiast and love keeping up with the latest events in the space. I love video games and pizza.