
Join this masterclass on ‘Speed up deep learning inference with Intel® Neural Compressor’

The workshop will introduce Intel® Optimization for TensorFlow to help TensorFlow users get better performance on Intel platforms.


Intel®, in collaboration with Analytics India Magazine, is organising a oneAPI workshop on Intel® Neural Compressor on May 13th from 3:00 PM to 5:00 PM.

Quantization is an important acceleration method, and to support it in hardware, Intel developed Intel® Deep Learning Boost. Additionally, to help developers quantize AI models easily and quickly, Intel has developed a tool called Intel® Neural Compressor.
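For intuition, quantization maps a tensor's FP32 values onto a small integer range such as INT8 using a scale and a zero point. The toy NumPy snippet below is only a conceptual illustration of that mapping, not Intel® Neural Compressor's internal algorithm:

```python
import numpy as np

# Toy affine (asymmetric) quantization of FP32 values to INT8 - conceptual only.
x = np.array([-1.2, 0.0, 0.37, 2.5], dtype=np.float32)

scale = (x.max() - x.min()) / 255.0                 # size of one INT8 step
zero_point = np.round(-x.min() / scale) - 128       # integer that represents 0.0

q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
x_dequant = (q.astype(np.float32) - zero_point) * scale   # approximate recovery

print(q)          # INT8 representation
print(x_dequant)  # close to the original FP32 values
```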

The workshop will introduce Intel® Optimization for TensorFlow to help TensorFlow users get better performance on Intel platforms. Intel® Optimization for TensorFlow is the binary distribution of TensorFlow built with Intel® oneAPI Deep Neural Network Library (oneDNN) primitives.

oneDNN is an open-source, cross-platform performance library for deep learning applications. The optimisations are directly upstreamed and made available in the official TensorFlow release via a simple flag update, enabling developers to benefit from the Intel® optimisations seamlessly. 
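A minimal sketch of that flag update, assuming a recent stock TensorFlow 2.x build where the TF_ENABLE_ONEDNN_OPTS environment variable controls the oneDNN code path:

```python
import os

# Ask TensorFlow to use the oneDNN-optimised kernels.
# The variable must be set before TensorFlow is imported; on recent releases
# the optimisations are already enabled by default on supported CPUs.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

print(tf.__version__)  # the startup log typically notes when oneDNN custom operations are on
```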

Intel® has already released Intel® Optimization for TensorFlow. 

Click here to view the installation guide. 

Register for the workshop

The workshop will cover:

  • oneAPI AI Analytics Toolkit overview
  • Introduction to Intel® Optimization for TensorFlow
  • Intel optimisations for TensorFlow 
  • Intel Neural Compressor
  • Hands-on demo to showcase usage and performance boost on DevCloud

Please create an Intel® DevCloud Account here.

The workshop will also include a demo on speeding up an AI model through quantization with Intel® Neural Compressor, and on comparing the resulting performance gain live on hardware with Intel® Deep Learning Boost.

The demo will walk you through an end-to-end pipeline to train a TensorFlow model on a small custom dataset and speed it up through quantization with Intel® Neural Compressor (see the sketch after this list), including:

  • Train a model with Keras and Intel® Optimization for TensorFlow.
  • Generate an INT8 model with Intel® Neural Compressor.
  • Compare the performance of the FP32 and INT8 models using the same script.
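
A minimal sketch of that flow, under stated assumptions: it uses the Intel® Neural Compressor 2.x post-training quantization API (PostTrainingQuantConfig plus quantization.fit) and the library's built-in dummy dataset for calibration, with random training data standing in for the workshop's custom dataset. The notebook demonstrated on DevCloud may use a different API version or data pipeline.

```python
import numpy as np
import tensorflow as tf
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# 1. Train a small Keras model with Intel Optimization for TensorFlow
#    (random data stands in for the workshop's custom dataset).
x = np.random.rand(256, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=1, batch_size=32)
model.save("fp32_model")          # FP32 baseline, SavedModel format (TF < 2.16)

# 2. Post-training quantization to INT8 with Intel Neural Compressor.
#    A built-in dummy dataset is used purely for calibration here;
#    a real run would calibrate on representative data.
calib_dataset = Datasets("tensorflow")["dummy"](shape=(32, 32, 32, 3))
calib_loader = DataLoader(framework="tensorflow", dataset=calib_dataset)
q_model = fit(model="fp32_model",
              conf=PostTrainingQuantConfig(),
              calib_dataloader=calib_loader)
q_model.save("int8_model")

# 3. Compare FP32 vs INT8 by running the same evaluation/inference script
#    against "fp32_model" and "int8_model", measuring accuracy and latency.
```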

Register for the workshop 

Agenda:

Session | Content | Duration | Owner
Introduction | Developer Ecosystem Program | 3:00 – 3:10 PM | Kavita Aroor
Overview | oneAPI AI Analytics Toolkit overview, DevCloud setup | 3:10 – 3:25 PM | Aditya
Intel® Extension for TensorFlow | Introduction to Intel® Optimization for TensorFlow + Intel® Extension for TensorFlow hands-on | 3:30 – 4:10 PM | Aditya
Intel Neural Compressor hands-on + Q&A | Introduction to Intel Neural Compressor; hands-on with a quantization workload | 4:10 – 5:00 PM | Jianyu Zhang

Who should attend 

  • Software Developers
  • IT managers 
  • ML developers 
  • AI engineers 
  • Data science professionals 
  • IT, technology, and software architects 
  • Senior managers of technology/engineering/software

Register for the workshop

___________________________________________________

Exclusive Contests — Participate & Win!

Lucky Draw Contest

  • Analytics India Magazine is running a lucky draw: at the end of the workshop, 10 lucky participants will get a chance to win Amazon vouchers worth INR 2,000 each. 

Note: The winners will be selected based on their engagement on Discord throughout the workshop.

_______________________________________________________

Speaker details 

Zhang Jianyu 

Zhang Jianyu (Neo) is a senior AI software solution engineer (SSE) with SATG AIA in the PRC, focusing on optimising, consulting on and supporting AI frameworks on the Intel® platform. He graduated from Northwestern Polytechnical University (China) with a master's in Pattern Recognition and Intelligent Systems, and is a senior software engineer with rich experience in AI, virtualisation, high-concurrency communication and embedded systems.

Kavita Aroor

Kavita Aroor is a developer marketing lead for Asia Pacific & Japan at Intel. She has over 16 years of experience in marketing. 

Aditya Sirvaiya 

Aditya Sirvaiya is an AI Software Solutions Engineer at Intel. He specialises in Intel-optimised AI frameworks and the OpenVINO™ toolkit.

Register now 


Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.