Join this masterclass on ‘Speed up deep learning inference with Intel® Neural Compressor’


Intel®, in collaboration with Analytics India Magazine, is organising a oneAPI workshop on Intel® Neural Compressor on May 13th from 3:00 PM to 5:00 PM.

Quantization is an important acceleration method, and to support it in hardware, Intel developed Intel® Deep Learning Boost. Additionally, to help developers quantize AI models easily and quickly, Intel has developed a tool called Intel® Neural Compressor.
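
For intuition on what quantization does, the short sketch below (illustrative only, not taken from the workshop material) maps an FP32 tensor to INT8 with a single scale and measures the round-trip error:

    import numpy as np

    def quantize_int8(x):
        """Symmetric INT8 quantization: map the largest magnitude to 127."""
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover an approximate FP32 tensor from the INT8 values."""
        return q.astype(np.float32) * scale

    weights = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(weights)
    print("max round-trip error:", np.abs(dequantize(q, scale) - weights).max())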

The workshop will introduce Intel® Optimization for TensorFlow to help TensorFlow users get better performance on Intel platforms. Intel® Optimization for TensorFlow is the binary distribution of TensorFlow with Intel® oneAPI Deep Neural Network Library (oneDNN) primitives.

oneDNN is an open-source, cross-platform performance library for deep learning applications. The optimisations are directly upstreamed and made available in the official TensorFlow release via a simple flag update, enabling developers to benefit from the Intel® optimisations seamlessly. 
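
As an illustration of that flag update: on recent stock TensorFlow builds, the oneDNN optimisations can be toggled through the TF_ENABLE_ONEDNN_OPTS environment variable (exact behaviour depends on the TensorFlow version; Intel® Optimization for TensorFlow ships with them enabled by default):

    import os

    # Request oneDNN optimisations before TensorFlow is imported
    # (behaviour depends on the TensorFlow version/build).
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

    import tensorflow as tf
    print("TensorFlow", tf.__version__, "- oneDNN optimisations requested")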

Intel® has already released Intel® Optimization for TensorFlow.

Click here to view the installation guide. 
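
For reference, the Intel-optimised binary is also distributed on PyPI; a typical install (see the guide above for the versions currently supported) looks like:

    pip install intel-tensorflow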

Register for the workshop

The workshop will cover 

  • oneAPI AI Analytics Toolkit overview
  • Introduction to Intel® Optimization for TensorFlow
  • Intel® optimisations for TensorFlow
  • Intel Neural Compressor
  • Hands-on demo to showcase usage and performance boost on DevCloud

Please create an Intel® DevCloud Account here.

The workshop will also offer a demo of how to speed up an AI model through quantization with Intel® Neural Compressor, and compare the resulting performance gain on hardware with Intel® Deep Learning Boost.

The demo will walk you through an end-to-end pipeline that trains a TensorFlow model on a small custom dataset and speeds it up through quantization with Intel® Neural Compressor; the steps, sketched in code after this list, are:

  • Train a model with Keras and Intel® Optimization for TensorFlow.
  • Get an INT8 model with Intel® Neural Compressor.
  • Compare the performance of the FP32 and INT8 models using the same script.
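
A rough sketch of that pipeline is shown below. It assumes the YAML-driven 1.x Intel® Neural Compressor API and uses MNIST as a stand-in dataset; "quant.yaml", "fp32_model" and "int8_model" are placeholder names, and the exact API surface varies between INC releases.

    import tensorflow as tf
    from neural_compressor.experimental import Quantization, common

    # 1. Train a small Keras model with Intel Optimization for TensorFlow.
    #    MNIST stands in for the workshop's own dataset.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)
    model.save("fp32_model")

    # 2. Post-training quantization to INT8. "quant.yaml" holds the INC
    #    settings (framework, accuracy criterion, etc.) - see the INC docs.
    quantizer = Quantization("quant.yaml")
    quantizer.model = common.Model("fp32_model")
    quantizer.calib_dataloader = common.DataLoader(list(zip(x_test, y_test)))
    int8_model = quantizer.fit()
    int8_model.save("int8_model")

    # 3. Benchmark: run the same evaluation script on "fp32_model" and
    #    "int8_model" and compare throughput, latency and accuracy.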

Register for the workshop 

Agenda:

Session | Content | Duration | Owner
Introduction | Developer Ecosystem Program | 3:00 – 3:10 PM | Kavita Aroor
Overview | oneAPI AI Analytics Toolkit overview, DevCloud setup | 3:10 – 3:25 PM | Aditya
Intel® Extension for TensorFlow | Introduction of Intel® Optimization for TensorFlow + Intel® Extension for TensorFlow hands-on | 3:30 – 4:10 PM | Aditya
Intel® Neural Compressor hands-on + Q&A | Introduction of Intel® Neural Compressor, hands-on with a quantization workload | 4:10 – 5:00 PM | Jianyu Zhang

Who should attend 

  • Software Developers
  • IT managers 
  • ML developers 
  • AI engineers 
  • Data science professionals 
  • IT, technology, and software architects 
  • Senior managers of technology/engineering/software

Register for the workshop

___________________________________________________

Exclusive Contests — Participate & Win!

Lucky Draw Contest

  • Analytics India Magazine is running a Lucky Draw in which, at the end of the workshop, 10 lucky participants will get a chance to win Amazon vouchers worth INR 2,000 each.

Note: The winners will be selected based on their engagement on Discord throughout the workshop.

_______________________________________________________

Speaker details 

Zhang Jianyu 

Zhang Jianyu (Neo) is a senior AI software solutions engineer (SSE) with SATG AIA in the PRC, focusing on optimising, consulting on and supporting AI frameworks on Intel® platforms. He graduated from Northwestern Polytechnical University (China) with a master’s in Pattern Recognition and Intelligent Systems, and is a senior software engineer with rich experience in AI, virtualisation, high-concurrency communication and embedded systems.

Kavita Aroor

Kavita Aroor is a developer marketing lead – Asia Pacific & Japan at Intel. She has over 16 years of experience in marketing.

Aditya Sirvaiya 

Aditya Sirvaiya is an AI Software Solutions Engineer at Intel. He specialises in Intel Optimized AI frameworks and OpenVINO Toolkit.

Register now 


Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.
