Voted one of the best developer tools, Intel's OpenVINO™ toolkit has become a go-to tool for vision tasks. Earlier known as the Computer Vision SDK, OpenVINO™ gives developers a single, unified software layer across hardware for building AI solutions. The end goal is to take away the complexity of working on computer vision applications by providing data scientists and developers a way to accelerate performance on a range of hardware: CPU, GPU, FPGA and VPU. Well, you read that right!
The top-to-bottom optimised support covers not just Intel's® CPUs but its GPUs, FPGAs and VPUs as well, enabling developers to build high-performance computer vision and deep learning based solutions. With simplicity at its core, the OpenVINO™ toolkit, a free download, allows developers to deploy computer vision without needing to know much about neural networks.
The Intel Distribution of #OpenVINO Toolkit was awarded the @EmbVisionSummit Vision Product of the Year Award for Developer Tool of Year. Discover the #computervision capabilities here: https://t.co/EnNamVY9uk #intelAI
— Intel AI (@IntelAI) May 21, 2019
This toolkit allows developers to deploy pre-trained Deep Learning models through a high-level C++ or Python* inference engine API integrated with application logic. It supports multiple Intel® platforms and is included in the Intel® Distribution of OpenVINO™ toolkit.
Short for Open Visual Inference & Neural Network Optimization, the Intel® Distribution of OpenVINO™ toolkit (formerly Intel® CV SDK) contains optimised OpenCV™ and OpenVX™ libraries, Deep Learning code samples, and pre-trained models to enhance computer vision development. It’s validated on 100+ open source and custom models, and is absolutely free.
Under The Hood Of OpenVINO™
Computer vision grew out of modelling image processing with the techniques of machine learning: it applies machine learning to recognise patterns and interpret images. Much like the visual reasoning of human vision, it can distinguish between objects, classify them, sort them according to their size, and so forth.
A typical computer vision pipeline with deep learning may consist of regular vision functions (like image preprocessing) and convolutional neural networks (CNNs). The CNN graphs are accelerated on an FPGA add-on card or Intel® Movidius™ Neural Compute Sticks (NCS), while the rest of the vision pipeline runs on a host processor.
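As a rough sketch of this split, assuming the OpenVINO™ Python API and its HETERO plugin (the device names here are illustrative), the accelerator/host assignment is expressed through the device string passed when loading a network:

```python
def hetero_device(*devices):
    """Build an OpenVINO HETERO device string, e.g. ("FPGA", "CPU") -> "HETERO:FPGA,CPU".
    Layers unsupported on the first device fall back to the next one in the list."""
    return "HETERO:" + ",".join(devices)

def load_on_accelerator(ie, net, primary="MYRIAD"):
    # Offload the CNN layers to the accelerator (e.g. an NCS via the MYRIAD plugin),
    # letting the remaining layers fall back to the host CPU.
    return ie.load_network(network=net, device_name=hetero_device(primary, "CPU"))
```

The fallback order matters: the plugin assigns each layer to the first device in the list that supports it.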
The main components of the OpenVINO™ toolkit include:
- Deep Learning Model Optimizer, which supports converting models from Caffe*, TensorFlow*, MXNet*, Kaldi* and ONNX*
- Deep Learning Inference Engine — a unified API to allow high-performance inference on many hardware types including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Movidius™ Neural Compute Stick, and Intel® Neural Compute Stick 2
- OpenCV™ library
From surveillance in airports to object detection for driverless cars, the applications are vast. A plug-and-play kind of framework gives developers an extra edge while encouraging newcomers by sparing them the burden of dealing with the intricacies of neural networks and building them from scratch.
The Intel® Distribution of OpenVINO™ toolkit delivers access to OpenCV™ and OpenVX™ vision functions, complementary libraries that provide access to software algorithms and accelerate capabilities of CPUs and GPUs from Intel®.
Users only have to write code once and can leverage all the options available in the toolkit.
This toolkit also includes code samples in C++ and Python along with pre-trained models, and has been validated on more than 100 open source and custom models to experiment with.
When pitted against popular frameworks like Caffe, OpenVINO™ looks promising, performing well on machine vision tasks like single shot detection (SSD).
Can a trained model be used without deploying the entire framework? Can a small part of the framework be used just for inference? These are two common challenges faced by software developers and data scientists when deploying models, and addressing them was the primary objective behind developing the OpenVINO™ toolkit.
Accelerate computer vision tasks with OpenVINO™ toolkit in 3 steps:
- Convert the model from original framework format using the Model Optimizer tool. This will output the model in Intermediate Representation (IR) format
- Perform model calibration using the calibration tool within the Intel® Distribution of OpenVINO™ toolkit. It accepts the model in IR format and is framework-agnostic
- Use the updated model in IR format to perform inference
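The three steps above can be sketched roughly as follows, assuming the ~2020-era OpenVINO™ Python API (`IECore`); the model paths are hypothetical, and the conversion and calibration steps are shown as the shell commands they would typically be:

```python
import numpy as np

# Step 1 (shell): convert the original model to IR with the Model Optimizer, e.g.
#   python3 mo.py --input_model model.onnx --output_dir ir/
# Step 2 (shell): optionally calibrate the IR with the toolkit's calibration tool.

def prepare_input(image_hwc):
    """Reshape an HxWxC image into the NCHW float32 batch the Inference Engine expects."""
    chw = image_hwc.transpose(2, 0, 1)                 # HWC -> CHW
    return np.expand_dims(chw, 0).astype(np.float32)   # add batch dim -> NCHW

def infer_ir(xml_path, bin_path, image_hwc, device="CPU"):
    # Step 3: load the IR pair (.xml topology + .bin weights) and run inference.
    from openvino.inference_engine import IECore  # requires the openvino package
    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    input_blob = next(iter(net.input_info))
    exec_net = ie.load_network(network=net, device_name=device)
    return exec_net.infer({input_blob: prepare_input(image_hwc)})
```

Because the IR format is framework-agnostic, the same `infer_ir` call works whether the model started life in Caffe, TensorFlow or ONNX.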
Upon start-up, the sample application reads command line parameters and loads a network and an image to the Inference Engine plugin. When inference is done, the application creates an output image and outputs data to the standard output stream.
Running the application with the -h option yields the usage message:
python3 classification_sample.py -h
To run the sample, users can use AlexNet, GoogLeNet or other image classification models, and can download pre-trained models with the OpenVINO™ Model Downloader.
Here is an example of a pre-trained model in OpenVINO™ for human pose estimation:
This is a multi-person 2D pose estimation network (based on the OpenPose approach) with tuned MobileNet v1 as a feature extractor.
It finds a human pose, a body skeleton consisting of keypoints and the connections between them, for every person in the image. The pose may contain up to 18 keypoints: ears, eyes, nose, neck, shoulders, elbows, wrists, hips, knees and ankles.
The network input, named "input", has shape [1x3x256x456] in [BxCxHxW] format, where B is the batch size, C the number of channels, H the image height and W the image width. The expected colour order is BGR.
The net outputs two blobs with shapes: [1, 38, 32, 57] and [1, 19, 32, 57]. The first blob contains keypoint pairwise relations (part affinity fields), the second one contains keypoint heatmaps.
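To make those blob shapes concrete, here is a hedged numpy sketch of decoding the second blob (single-person decoding only; the real demo additionally uses the part affinity fields in the first blob to group keypoints across multiple people):

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Take the [1, 19, H, W] heatmap blob (18 keypoints plus one background
    channel) and return the (row, col) grid position of each channel's peak."""
    _, c, h, w = heatmaps.shape
    flat = heatmaps.reshape(c, h * w)          # flatten each heatmap
    idx = flat.argmax(axis=1)                  # index of the strongest response
    return np.stack([idx // w, idx % w], axis=1)  # shape (19, 2) of (row, col)
```

To map the grid positions back to image pixels, the coordinates would be scaled by the network's input-to-output stride (456 / 57 = 8 for this model).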
The toolkit supports a range of hardware and operating systems, including:
- 6th-8th Generation Intel® Core™
- Intel® Xeon® v5 family
- Intel® Xeon® v6 family
- Microsoft Windows* 10 64-bit
With computer vision tasks growing, Intel's plug-and-play OpenVINO™ toolkit helps developers and machine learning enthusiasts experiment readily. With a growing number of computer vision use cases, for example self-driving cars, becoming more realistic than ever before, better tools will ensure better outcomes, and ease of operation will encourage more players to participate in this domain.
Know more about OpenVINO™ here
Download the toolkit here