OneAPI (compute acceleration)

Open standard for parallel computing

oneAPI is an open standard, adopted by Intel,[1] for a unified application programming interface (API) intended to be used across different computing accelerator (coprocessor) architectures, including GPUs, AI accelerators and field-programmable gate arrays. It is intended to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture.[2][3][4][5]


oneAPI competes with other GPU computing stacks: CUDA by Nvidia and ROCm by AMD.

Specification

The oneAPI specification extends existing developer programming models to enable multiple hardware architectures through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.[6][7]

Data Parallel C++

DPC++[8][9] is a programming language implementation of oneAPI, built upon the ISO C++ and Khronos Group SYCL standards.[10] DPC++ is an implementation of SYCL with extensions that are proposed for inclusion in future revisions of the SYCL standard, including: unified shared memory, group algorithms, and sub-groups.[11][12][13]

Libraries

The set of APIs[6] spans several domains, including libraries for linear algebra, deep learning, machine learning, video processing, and others.

Library Name (Short Name) – Description
oneAPI DPC++ Library (oneDPL) – Algorithms and functions to speed DPC++ kernel programming
oneAPI Math Kernel Library (oneMKL) – Math routines including matrix algebra, FFT, and vector math
oneAPI Data Analytics Library (oneDAL) – Machine learning and data analytics functions
oneAPI Deep Neural Network Library (oneDNN) – Neural network functions for deep learning training and inference
oneAPI Collective Communications Library (oneCCL) – Communication patterns for distributed deep learning
oneAPI Threading Building Blocks (oneTBB) – Threading and memory management template library
oneAPI Video Processing Library (oneVPL) – Real-time video encode, decode, transcode, and processing

The source code of parts of the above libraries is available on GitHub.[14]

The oneAPI documentation also lists the "Level Zero" API defining the low-level direct-to-metal interfaces and a set of ray tracing components with its own APIs.[6]

Licensing

The licensing of oneAPI components falls into three major categories: open‑source permissive licences, proprietary vendor licences, and hybrid models that combine elements of both. Here is an overview of some components:

Component – Typical license / notes – Ref
oneAPI Threading Building Blocks (oneTBB) – Apache 2.0, open‑source project under UXL Foundation [15]
oneAPI Data Analytics Library (oneDAL) – Apache 2.0, open‑source; Intel toolkit binaries may use Intel EULA [16]
oneAPI Deep Neural Network Library (oneDNN) – Apache 2.0, open‑source under UXL Foundation [17]
oneAPI DPC++ Library (oneDPL) – Apache 2.0, open‑source data‑parallel algorithms library [18]
oneAPI Math Library (oneMath) – Apache 2.0, unified math interface library [19]
oneAPI Math Kernel Library (oneMKL) – Intel Simplified Software License (ISSL), binary redistribution under Intel terms [20]
oneAPI Collective Communications Library (oneCCL) – Apache 2.0, open‑source communication layer [21]
oneAPI Video Processing Library (oneVPL) – Apache 2.0, open‑source media‑processing interface [22]
oneAPI DPC++/C++ Compiler – Open‑source front‑end under Apache 2.0 with LLVM exceptions; Intel binaries under Intel EULA [23]
oneAPI Level Zero Loader & Runtime – MIT License, open‑source GPU/accelerator runtime [24]
Intel Integrated Performance Primitives (IPP) – Intel Simplified Software License (ISSL), closed‑source library in oneAPI toolkit [25]
Intel oneAPI Base Toolkit (bundle) – Commercial license (Intel EULA), free download but subject to Intel's terms [26]
Intel oneAPI Base & IoT Toolkit (bundle) – Named‑user or seat‑based commercial license under Intel EULA [27]
Intel oneAPI HPC Toolkit (bundle) – Commercial binaries under Intel EULA/ISSL, not fully open‑source [28]
Intel oneAPI IoT Toolkit (bundle) – Commercial license for embedded/IoT workflows (Intel EULA) [29]
Intel oneAPI Rendering Toolkit (bundle) – Some sub‑components open‑source (e.g., Embree/OSPRay) under Apache 2.0; toolkit packaging commercial [30]

Permissive open‑source licences

These licences (for example the Apache License 2.0) grant broad rights to use, modify, and distribute software in source or binary form, and are OSI‑approved. Many oneAPI libraries use such licences, enabling community contribution and redistribution with minimal restrictions.

Proprietary vendor licences

Some oneAPI components may be distributed under proprietary or commercial licences (for example Intel's End‑User Licence Agreement (EULA) or the Intel Simplified Software Licence (ISSL)). Software released under the ISSL is not considered to fully comply with standard open‑source definitions.[31]

Hardware abstraction layer

oneAPI Level Zero,[32][33][34] the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs to interface with compiler runtimes and other developer tools.

Implementations

Intel has released oneAPI production toolkits that implement the specification and add CUDA code migration, analysis, and debug tools.[35][36][37] These include the Intel oneAPI DPC++/C++ Compiler,[38] Intel Fortran Compiler, Intel VTune Profiler[39] and multiple performance libraries.

Codeplay has released an open-source layer[40][41][42] to allow oneAPI and SYCL/DPC++ to run atop Nvidia GPUs via CUDA.

The University of Heidelberg has developed a SYCL/DPC++ implementation for both AMD and Nvidia GPUs.[43]

Huawei released a DPC++ compiler for their Ascend AI chipset.[44]

Fujitsu has created an open-source Arm version of the oneAPI Deep Neural Network Library (oneDNN)[45] for the A64FX CPU that powers their Fugaku supercomputer.

Unified Acceleration Foundation (UXL) and the future of oneAPI

The Unified Acceleration Foundation (UXL) is a technology consortium formed to continue the oneAPI initiative. Its goal is to build an open standard accelerator software ecosystem, developing related open standards and specification projects through working groups and special interest groups (SIGs), as an alternative to Nvidia's CUDA. The main companies behind it are Intel, Google, Arm, Qualcomm, Samsung, Imagination, and VMware.[46]

References
