
oneAPI: A Unified Cross-Architecture, High-Performance Programming Model Designed to Help Shape the Future of Application Development


In this article, we’ll dive into
the newly announced oneAPI, a single, unified
programming model that aims to simplify development across multiple
architectures, such as CPUs, GPUs, FPGAs and other accelerators. The long-term journey is
represented by two important first steps – the industry initiative and the Intel beta
product, as described below:

  • The oneAPI cross-architecture development model is based on industry standards coupled with an open specification, enabling broad adoption of technologies that will help inspire evolutionary progress in application development for data-centric workloads.
  • The Intel® oneAPI Base Toolkit is Intel’s implementation of oneAPI that contains the oneAPI specification components with direct programming (Data Parallel C++), API-based programming with a set of performance libraries, advanced analysis and debug tools, and other components. Developers can test their code and workloads in the Intel DevCloud for oneAPI on multiple types of Intel architectures today, including Intel® Xeon® Scalable processors, Intel® Core™ processors with integrated graphics and Intel FPGAs (Intel Arria®, Stratix®).

Supporting Data-centric Workloads with Multiple Architectures

oneAPI represents Intel’s commitment to a
“software-first” strategy that will define and lead programming for an
increasingly AI-infused, heterogeneous and multi-architecture world.

The ability to program across diverse architectures (CPU, GPU, FPGA and other accelerators) is critical in supporting data-centric workloads where multiple architectures are required and will become the future norm. Today, each hardware platform typically requires developers to maintain separate code bases that must be programmed using different languages, libraries and software tools. This process is complex and time-consuming for developers, slowing progress and innovation.

oneAPI addresses this challenge by delivering a unified, open programming experience to developers on the architecture of their choice without compromising performance. oneAPI will also eliminate the complexity of separate code bases, multiple programming languages and different tools and workflows. The result: a compelling, modern alternative to today’s proprietary programming environments based on single-vendor architectures. Further, it enables developers to preserve their existing software investments, while delivering a seamless bridge to create versatile applications for the heterogeneous world of the future.
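
As a rough illustration of what a single code base across architectures looks like, the sketch below uses standard SYCL (which DPC++ builds on) to enumerate the devices the runtime can see and open a queue on whichever one a selector picks. The selectors and info queries shown are part of the SYCL 1.2.1 specification; the program itself is illustrative rather than taken from the oneAPI toolkits.

```cpp
// A minimal sketch, assuming a DPC++/SYCL compiler (e.g. dpcpp) and the
// standard SYCL 1.2.1 API that DPC++ builds on; names are illustrative.
#include <CL/sycl.hpp>
#include <iostream>

using namespace cl::sycl;

int main() {
  // Enumerate every device the runtime exposes: CPUs, GPUs, accelerators.
  for (const auto& dev : device::get_devices()) {
    std::cout << "Found device: "
              << dev.get_info<info::device::name>() << "\n";
  }

  // The same kernel code can be submitted to this queue no matter which
  // architecture the selector resolves to.
  queue q{default_selector{}};
  std::cout << "Submitting to: "
            << q.get_device().get_info<info::device::name>() << "\n";
  return 0;
}
```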

The oneAPI Open Specification

The open specification includes a
cross-architecture language, Data Parallel C++ (DPC++) for direct programming, a
set of libraries for API-based programming and a low-level interface to
hardware, oneAPI Level Zero. Together,
those components allow Intel and other companies to build their own
implementations of oneAPI to support their own products or create new products
based on oneAPI.

Intel has a decades-long history of working
with standards groups and industry/academia initiatives, such as the ISO
C++/Fortran groups, OpenMP* ARB, MPI Forum and The Khronos Group, to create and
define specifications in an open and collaborative process to achieve
interoperability and interchangeability. Intel’s efforts with the oneAPI
project are a continuation of this precedent. oneAPI will support and be
interoperable with existing industry standards.

The oneAPI specification is designed to
support a broad range of CPUs, GPUs, FPGAs and other accelerators from multiple
vendors. Intel’s oneAPI beta reference implementation currently supports Intel
CPUs (Intel Xeon, Core and Atom®), Intel Arria FPGAs and Gen9/Intel Processor
Graphics as a proxy development platform for future Intel discrete data center
GPUs. Additional Intel accelerator architectures will be added over time.

Many libraries and components are already open source, or may soon be. Visit the
oneAPI initiative site to view the latest oneAPI specification and to see which
elements are available as open source.

Intel® oneAPI Toolkits (Beta)

Intel® oneAPI toolkits deliver the tools to program
across multiple architectures. The Intel oneAPI Base Toolkit (Beta) is a core
set of tools and libraries for building and deploying high-performance,
data-centric applications across diverse architectures. It contains the oneAPI
open specification technologies (DPC++ language, domain-specific libraries) as
well as the Intel® Distribution for Python* for drop-in acceleration of
standard Python code. Enhanced profiling, design assistance, debug tools and
other components round out the kit.

For specialized HPC, AI and other workloads,
additional toolkits are available that complement the Intel oneAPI Base
Toolkit. These include:

  • Intel oneAPI HPC Toolkit (Beta) to deliver fast C++, Fortran and OpenMP applications that scale
  • Intel oneAPI DL Framework Developer Toolkit (Beta) to build deep learning frameworks or customize existing ones
  • Intel oneAPI Rendering Toolkit (Beta) to create high-performance, high-fidelity visualization applications (including scientific visualization)
  • Intel AI Analytics Toolkit (Beta), powered by oneAPI, for AI developers and data scientists building applications that leverage machine learning and deep learning models

There are also two other complementary toolkits powered by oneAPI: the Intel
System Bring-Up Toolkit for system engineers and the production-level Intel
Distribution of OpenVINO™ Toolkit for deep learning inference and computer
vision. Developers can download the Intel oneAPI Beta Toolkits for local use
from the Intel Developer Zone.

Data Parallel C++

Data Parallel C++ (DPC++) is a high-level
language designed for data parallel programming productivity.

Based on familiar C and C++ constructs, DPC++
is the primary language for oneAPI and incorporates SYCL* from The Khronos
Group to support data parallelism and heterogeneous programming for performance
across CPUs, GPUs, FPGAs and other accelerators. The goal is to simplify
programming and enable code reuse across hardware targets, while allowing for
tuning to specific accelerators.
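
To make that concrete, here is a minimal sketch of a DPC++ kernel using the SYCL constructs the language builds on: buffers wrap host data, a queue targets a device, and parallel_for expresses the data-parallel work. The vector-add example and its names are illustrative, assuming a SYCL 1.2.1-style toolchain such as dpcpp, and are not drawn from Intel's documentation.

```cpp
// Minimal DPC++/SYCL vector-addition sketch; buffer and variable names
// are illustrative, assuming a SYCL 1.2.1-style compiler such as dpcpp.
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

using namespace cl::sycl;

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  queue q{default_selector{}};  // CPU, GPU, or other available device

  {
    // Buffers manage data movement between host and device automatically.
    buffer<float, 1> bufA(a.data(), range<1>(N));
    buffer<float, 1> bufB(b.data(), range<1>(N));
    buffer<float, 1> bufC(c.data(), range<1>(N));

    q.submit([&](handler& h) {
      auto A = bufA.get_access<access::mode::read>(h);
      auto B = bufB.get_access<access::mode::read>(h);
      auto C = bufC.get_access<access::mode::write>(h);
      // One work-item per element; the runtime maps this onto the device.
      h.parallel_for<class vec_add>(range<1>(N), [=](id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // Buffers go out of scope here, so results are copied back into c.

  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
  return 0;
}
```

Because the kernel is expressed against these portable constructs, the same source can in principle be retargeted at a different CPU, GPU or FPGA by changing only the device selector and recompiling.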

DPC++ language enhancements will be driven through a community project with extensions to simplify data parallel programming. The project is open with collaborative development for continued evolution.

The oneAPI language (DPC++) and library
specifications are publicly available for use by other hardware and software vendors.
Vendors and others in the industry can create their own oneAPI implementations
and optimize for specific hardware.

Conclusion

Modern workloads are
incredibly diverse—and so are architectures. No single architecture is best for
every workload. Maximizing performance takes a mix of scalar, vector, matrix,
and spatial (SVMS) architectures deployed in CPU, GPU, FPGA, and other future
accelerators.

Intel® oneAPI products will
deliver the tools needed to deploy applications and solutions across SVMS
architectures. Its set of complementary toolkits, a base kit and specialty
add-ons, simplifies programming and helps improve efficiency and innovation.

Use it for:

  • High-performance computing (HPC)
  • Machine learning and analytics
  • IoT applications
  • Video processing
  • Rendering

Try your code in the Intel® DevCloud, or download the toolkits that best fit your needs.


