The volume of data generated in today's world is growing exponentially. AI inference is the process of using a trained neural network model to predict an outcome on new data. In a typical AI workflow, the workloads at each stage behave very differently, and a single GPU or CPU rarely handles the entire pipeline smoothly.
To this end, Intel is organising this webinar to help attendees understand how to optimise a deep learning neural network model and achieve fast AI inference on a CPU.
The session will also introduce Intel's OpenVINO™ toolkit, an open-source toolkit that enables neural network model optimisation and easy deployment across multiple hardware platforms. The webinar will also include a live demo of setting up and running OpenVINO to achieve real-time AI inference on a CPU.
What can you expect?
Participants will learn three core things:
- How to run fast AI inference on a CPU
- The set of tools OpenVINO provides for optimising and running inference on deep learning models
- How to set up and run OpenVINO in just 5 minutes
Zhuo Wu, AI evangelist, Intel
Zhuo works as an AI evangelist at Intel in the PRC. She received her PhD from the University of York, UK, in 2006. From 2006 to 2014, she worked as an associate professor at Shanghai University, where she led research in next-generation wireless communications and supervised graduate students. From 2014 to 2018, she was a research scientist at Bell Labs (China), responsible for 5G system standardisation and AI-related research for industrial applications. She then joined Accenture (China) as a data scientist, responsible for designing and delivering AI-based solutions.
Agenda
| Time | Session |
| --- | --- |
| 05:00 – 05:05 PM | AIM introduction to the session |
| 05:05 – 05:45 PM | Presentation introducing OpenVINO |
| 05:45 – 06:25 PM | Hands-on session: running OpenVINO for object detection and OCR with a webcam (pre-work needed) |
| 06:25 – 06:30 PM | Vote of thanks by AIM |
Who should attend?
- ML developers
- GPU & CPU programmers
- Data scientists
- AI & ML practitioners
- AI & ML enthusiasts