Intel Unveils oneAPI: What Is It?

At the recently concluded Supercomputing 2019 event, Intel made its vision for AI loud and clear. The unveiling of oneAPI, a lot of talk about the convergence of high-performance computing (HPC) and artificial intelligence, and the laying of foundations for exascale computing were the key takeaways from the event.

Intel’s ambition to have a unified programming model is steered by the recent paradigm shift in the way hardware is being used for deep learning applications.

With oneAPI, Intel marks a game-changing evolution from today’s limiting, proprietary programming approaches to an open standards-based model for cross-architecture developer engagement and innovation.

Overview Of oneAPI

The oneAPI programming model simplifies the programming of CPUs and accelerators using modern C++ features to express parallelism with a programming language called Data Parallel C++ (DPC++). 

The DPC++ language enables code reuse for the host (such as a CPU) and accelerators (such as a GPU) using a single source language, with execution and memory dependencies clearly communicated.
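
DPC++ builds on C++ and SYCL, so a single source file carries both the host logic and the device kernel. The sketch below shows what that single-source style looks like for a simple vector addition; it assumes a SYCL 2020-capable DPC++ compiler (such as Intel’s dpcpp, or more recently icpx -fsycl) and is an illustration rather than code from Intel’s documentation.

```cpp
// Minimal DPC++/SYCL sketch: one source file, host code plus a device kernel.
// Assumes a SYCL 2020-capable DPC++ compiler (e.g. Intel icpx -fsycl).
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The queue targets whatever device the default selector finds:
    // a CPU, an integrated GPU, or another accelerator.
    sycl::queue q;

    {
        // Buffers make the host/device memory dependencies explicit.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler &h) {
            sycl::accessor in_a(buf_a, h, sycl::read_only);
            sycl::accessor in_b(buf_b, h, sycl::read_only);
            sycl::accessor out_c(buf_c, h, sycl::write_only);

            // The same kernel source is reused across architectures.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                out_c[i] = in_a[i] + in_b[i];
            });
        });
    }   // Leaving the scope destroys the buffers and copies results back.

    std::cout << "c[0] = " << c[0] << '\n';   // expect 3
    return 0;
}
```

The kernel source does not change when the code is pointed at a different device; only the device selected by the queue does.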

The diversity of modern workloads necessitates architectural diversity; no single architecture is best for every workload. A mix of scalar, vector, matrix, and spatial (SVMS) architectures, deployed in CPUs, GPUs, AI accelerators, FPGAs, and other accelerators, is required to extract high performance.

Intel oneAPI products are aimed at delivering tools that deploy applications and solutions across SVMS architectures. They contain a base kit and speciality add-ons that simplify programming and help developers improve productivity.

The features include:

  • oneAPI includes both an industry initiative based on open specifications and an Intel beta product. 
  • oneAPI preserves existing software investments with support for existing languages while delivering flexibility for developers to create versatile applications.
  • The oneAPI specification includes a direct programming language, powerful APIs and a low-level hardware interface. 
  • Intel’s oneAPI beta software provides developers with a comprehensive portfolio of developer tools that include compilers, libraries and analysers, packaged into domain-focused toolkits. 
  • The initial oneAPI beta release targets Intel® Xeon® Scalable processors, Intel® Core™ processors with integrated graphics, and Intel® FPGAs, with additional hardware support to follow in future releases. 

How Significant Is It For Deep Learning?

oneAPI, as the name suggests, is aimed at unifying programming models and libraries and simplifying cross-architecture development. It also ships with three important libraries tailor-made for data science and deep learning applications:

oneDNN

Intel oneAPI’s Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. It is optimised for Intel Architecture processors and Intel Processor Graphics, and is aimed at deep learning applications and framework developers who want better performance on Intel hardware.
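
As a rough illustration of what using oneDNN looks like, the sketch below runs a single in-place ReLU through the library’s C++ API on the CPU engine. It follows the v1.x/v2.x API that shipped around the oneAPI beta (primitive-descriptor construction changed in later releases), so treat it as indicative rather than definitive.

```cpp
// Hypothetical oneDNN (DNNL) sketch: apply an in-place ReLU on the CPU engine.
// Follows the v1.x/v2.x C++ API; later releases changed descriptor creation.
#include "dnnl.hpp"
#include <algorithm>
#include <vector>

int main() {
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream s(eng);

    // Describe a 1x3x8x8 float tensor in NCHW layout.
    dnnl::memory::dims dims = {1, 3, 8, 8};
    auto md = dnnl::memory::desc(dims, dnnl::memory::data_type::f32,
                                 dnnl::memory::format_tag::nchw);
    auto mem = dnnl::memory(md, eng);

    // Fill the tensor with test data (direct handle access works on CPU).
    std::vector<float> host(1 * 3 * 8 * 8, -1.0f);
    std::copy(host.begin(), host.end(),
              static_cast<float *>(mem.get_data_handle()));

    // Build and execute a forward-inference ReLU primitive, in place.
    auto relu_d = dnnl::eltwise_forward::desc(
        dnnl::prop_kind::forward_inference,
        dnnl::algorithm::eltwise_relu, md, 0.0f);
    auto relu_pd = dnnl::eltwise_forward::primitive_desc(relu_d, eng);
    auto relu = dnnl::eltwise_forward(relu_pd);

    relu.execute(s, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    s.wait();  // all negative values are now clamped to zero

    return 0;
}
```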

oneCCL

oneAPI’s Collective Communications Library (oneCCL), meanwhile, is a scalable, high-performance communication library for Deep Learning (DL) and Machine Learning (ML) workloads.

oneDAL

The Intel oneAPI Data Analytics Library (oneDAL) is designed to speed up big data analysis by providing highly optimised algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modelling, validation, and decision making) in batch, online, and distributed processing modes.

Worthy Idea, But A Tall Order

oneAPI provides a low-level common interface to heterogeneous hardware, so that HPC developers can code directly to the hardware through languages and libraries shared across architectures and across vendors, while middleware and frameworks are powered by oneAPI and fully optimised for the developers who live on top of those abstractions.

The launch was applauded by industry figures from many corners of the world. Speaking of the significance of oneAPI, Federico Carminati of CERN said that this model would make hardware transitions considerably less risky and error-prone, and that it is well suited to high-energy physics workloads.

So far, it has been a challenge to have a single programming environment that can run code across multiple hardware types without sacrificing performance, which has discouraged developers from reusing code. oneAPI’s promise of code portability allows performance tuning across CPUs and accelerators without that compromise.
