oneAPI Deep Neural Network Library (oneDNN)

oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. The oneDNN project is part of the UXL Foundation and is an implementation of the oneAPI specification for the oneDNN component.

The library is optimized for Intel(R) Architecture Processors, Intel Graphics, and Arm(R) 64-bit Architecture (AArch64)-based processors. oneDNN has experimental support for the following architectures: NVIDIA* GPU, AMD* GPU, OpenPOWER* Power ISA (PPC64), IBMz* (s390x), and RISC-V.

oneDNN is intended for deep learning applications and framework developers interested in improving application performance on CPUs and GPUs.

Deep learning practitioners should use one of the applications enabled with oneDNN:

oneDNN supports platforms based on the following architectures:

WARNING

Power ISA (PPC64), IBMz (s390x), and RISC-V (RV64) support is experimental with limited testing validation.

The library is optimized for the following CPUs:

On a CPU based on Intel 64 or on AMD64 architecture, oneDNN detects the instruction set architecture (ISA) at runtime and uses just-in-time (JIT) code generation to deploy the code optimized for the latest supported ISA. Future ISAs may have initial support in the library disabled by default and require the use of run-time controls to enable them. See CPU dispatcher control for more details.
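As a sketch of how the run-time dispatcher control can be used from the command line (`ONEDNN_MAX_CPU_ISA` is the environment variable described in the CPU dispatcher control documentation; the binary name `./my_app` is a placeholder):

```sh
# Cap the JIT code generator at AVX2 even on machines that support
# newer ISAs (useful for reproducibility or debugging):
ONEDNN_MAX_CPU_ISA=AVX2 ./my_app
```

See the CPU dispatcher control page in the Developer Guide for the full list of accepted ISA values.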

WARNING

On macOS, applications that use oneDNN may need to request special entitlements if they use the hardened runtime. See the Linking Guide for more details.

The library is optimized for the following GPUs:

Requirements for Building from Source

oneDNN supports systems meeting the following requirements:

The following tools are required to build oneDNN documentation:

Configurations of CPU and GPU engines may introduce additional build-time dependencies.

The oneDNN CPU engine executes primitives on Intel Architecture Processors, 64-bit Arm Architecture (AArch64) processors, 64-bit Power ISA (PPC64) processors, IBMz (s390x) processors, and compatible devices.

The CPU engine is built by default but can be disabled at build time by setting ONEDNN_CPU_RUNTIME to NONE. In this case, the GPU engine must be enabled. The CPU engine can be configured to use the OpenMP, TBB, or SYCL runtime. The following additional requirements apply:
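A minimal CMake invocation selecting the threading runtime might look like the following (ONEDNN_CPU_RUNTIME and ONEDNN_GPU_RUNTIME are the build options named in this section; directory layout is a placeholder):

```sh
# Default configuration: CPU engine with OpenMP threading
cmake -S . -B build -DONEDNN_CPU_RUNTIME=OMP

# CPU engine with TBB threading instead
cmake -S . -B build -DONEDNN_CPU_RUNTIME=TBB

# GPU-only build: disable the CPU engine, which requires
# enabling a GPU runtime
cmake -S . -B build -DONEDNN_CPU_RUNTIME=NONE -DONEDNN_GPU_RUNTIME=OCL
```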

Some implementations rely on OpenMP 4.0 SIMD extensions. For the best performance on Intel Architecture Processors, we recommend using the Intel C++ Compiler.

On a CPU based on the Arm AArch64 architecture, the oneDNN CPU engine can be built with Arm Compute Library (ACL) integration. ACL is an open-source library for machine learning applications that provides AArch64-optimized implementations of core functions. This functionality currently requires that ACL be downloaded and built separately. See the Build from Source section of the Developer Guide for details. The minimum supported version of ACL is 52.2.0.
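A sketch of an ACL-enabled build, assuming ACL has already been built separately (the option name ONEDNN_AARCH64_USE_ACL and the ACL_ROOT_DIR variable follow the Build from Source guide; the path is a placeholder):

```sh
# Point the build at a separately built copy of Arm Compute Library
export ACL_ROOT_DIR=/path/to/ComputeLibrary
cmake -S . -B build -DONEDNN_AARCH64_USE_ACL=ON
cmake --build build
```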

Intel Processor Graphics and Xe Architecture graphics are supported by the oneDNN GPU engine. The GPU engine is disabled in the default build configuration. The following additional requirements apply when the GPU engine is enabled:
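A sketch of enabling the GPU engine at configure time (compiler names icx/icpx are an assumption based on the Intel oneAPI DPC++ Compiler; adjust to your toolchain):

```sh
# Enable the GPU engine with the OpenCL runtime
cmake -S . -B build -DONEDNN_GPU_RUNTIME=OCL

# Or with SYCL, which requires the Intel oneAPI DPC++ Compiler
cmake -S . -B build -DONEDNN_GPU_RUNTIME=SYCL \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
```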

WARNING

Linux will reset the GPU when a kernel's runtime exceeds several seconds. You can prevent this behavior by disabling hangcheck for the Intel GPU driver. Windows has a built-in timeout detection and recovery mechanism that results in similar behavior; you can prevent it by increasing the TdrDelay value.
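On Linux, disabling hangcheck for the in-tree Intel GPU driver can be sketched as follows (the module parameter path assumes the i915 driver; this setting lasts until the next reboot):

```sh
# Disable GPU hangcheck so long-running kernels are not reset
echo N | sudo tee /sys/module/i915/parameters/enable_hangcheck
```

On Windows, TdrDelay is a registry value under the GraphicsDrivers key; consult Microsoft's timeout detection and recovery (TDR) documentation before changing it.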

WARNING

NVIDIA GPU support is experimental. General information, build instructions, and implementation limitations are available in the NVIDIA backend readme.

WARNING

AMD GPU support is experimental. General information, build instructions, and implementation limitations are available in the AMD backend readme.

When oneDNN is built from source, the library runtime dependencies and specific versions are defined by the build environment.

Common dependencies on Linux:

Runtime-specific dependencies:

| Runtime configuration | Compiler | Dependency |
| --- | --- | --- |
| ONEDNN_CPU_RUNTIME=OMP | GCC | GNU OpenMP runtime (libgomp.so) |
| ONEDNN_CPU_RUNTIME=OMP | Intel C/C++ Compiler | Intel OpenMP runtime (libiomp5.so) |
| ONEDNN_CPU_RUNTIME=OMP | Clang | Intel OpenMP runtime (libiomp5.so) |
| ONEDNN_CPU_RUNTIME=TBB | any | TBB (libtbb.so) |
| ONEDNN_CPU_RUNTIME=SYCL | Intel oneAPI DPC++ Compiler | Intel oneAPI DPC++ Compiler runtime (libsycl.so), TBB (libtbb.so), OpenCL loader (libOpenCL.so) |
| ONEDNN_GPU_RUNTIME=OCL | any | OpenCL loader (libOpenCL.so) |
| ONEDNN_GPU_RUNTIME=SYCL | Intel oneAPI DPC++ Compiler | Intel oneAPI DPC++ Compiler runtime (libsycl.so), OpenCL loader (libOpenCL.so), oneAPI Level Zero loader (libze_loader.so) |
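To check which of these runtime libraries a built copy of oneDNN actually links against on Linux, you can inspect the shared object with ldd (the library path below is a placeholder for your build output):

```sh
ldd /path/to/build/src/libdnnl.so | grep -E 'gomp|iomp5|tbb|sycl|OpenCL|ze_loader'
```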

Common dependencies on Windows:

Runtime-specific dependencies:

| Runtime configuration | Compiler | Dependency |
| --- | --- | --- |
| ONEDNN_CPU_RUNTIME=OMP | Microsoft Visual C++ Compiler | No additional requirements |
| ONEDNN_CPU_RUNTIME=OMP | Intel C/C++ Compiler | Intel OpenMP runtime (iomp5.dll) |
| ONEDNN_CPU_RUNTIME=TBB | any | TBB (tbb.dll) |
| ONEDNN_CPU_RUNTIME=SYCL | Intel oneAPI DPC++ Compiler | Intel oneAPI DPC++ Compiler runtime (sycl.dll), TBB (tbb.dll), OpenCL loader (OpenCL.dll) |
| ONEDNN_GPU_RUNTIME=OCL | any | OpenCL loader (OpenCL.dll) |
| ONEDNN_GPU_RUNTIME=SYCL | Intel oneAPI DPC++ Compiler | Intel oneAPI DPC++ Compiler runtime (sycl.dll), OpenCL loader (OpenCL.dll), oneAPI Level Zero loader (ze_loader.dll) |

Common dependencies on macOS:

Runtime-specific dependencies:

| Runtime configuration | Compiler | Dependency |
| --- | --- | --- |
| ONEDNN_CPU_RUNTIME=OMP | Intel C/C++ Compiler | Intel OpenMP runtime (libiomp5.dylib) |
| ONEDNN_CPU_RUNTIME=TBB | any | TBB (libtbb.dylib) |

You can download and install the oneDNN library using one of the following options:

The x86-64 CPU engine was validated on Red Hat* Enterprise Linux 8 with

on Windows Server* 2019 with

on macOS 14 (Sonoma) with

AArch64 CPU engine was validated on Ubuntu 22.04 with

on macOS 14 (Sonoma) with

GPU engine was validated on Ubuntu* 22.04 with

on Windows Server* 2019 with

Submit questions, feature requests, and bug reports on the GitHub issues page.

You can also contact oneDNN developers via UXL Foundation Slack using the #onednn channel.

The oneDNN project is governed by the UXL Foundation, and you can get involved in this project in multiple ways. You can join the AI Special Interest Group (SIG) meetings, where the group discusses and demonstrates work using this project. Members can also join the Open Source and Specification Working Group meetings.

You can also join the mailing lists for the UXL Foundation to be informed of when meetings are happening and receive the latest information and discussions.

We welcome community contributions to oneDNN. You can find the oneDNN release schedule and work already in progress towards future milestones in GitHub's Milestones section. If you are looking for a specific task to start with, consider selecting from issues marked with the help wanted label.

See the contribution guidelines to start contributing to oneDNN. You can also contact oneDNN developers and maintainers via UXL Foundation Slack using the #onednn channel.

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

oneDNN is licensed under Apache License Version 2.0. Refer to the "LICENSE" file for the full license text and copyright notice.

This distribution includes third party software governed by separate license terms.

3-clause BSD license:

2-clause BSD license:

Apache License Version 2.0:

Boost Software License, Version 1.0:

MIT License:

This third-party software, even if included with the distribution of the Intel software, may be governed by separate license terms, including, without limitation, third-party license terms, other Intel software license terms, and open source software license terms. These separate license terms govern your use of the third-party programs as set forth in the "THIRD-PARTY-PROGRAMS" file.

The Security Policy outlines our guidelines and procedures for ensuring the highest level of security and trust for users who consume oneDNN.

Intel, the Intel logo, Arc, Intel Atom, Intel Core, Iris, OpenVINO, the OpenVINO logo, Pentium, VTune, and Xeon are trademarks of Intel Corporation or its subsidiaries.

Arm and Neoverse are trademarks, or registered trademarks of Arm Ltd.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

(C) Intel Corporation

