
Compiling DALI from Source#

Using Docker builder - recommended#

Following these steps, you can build Python wheels similar to the official prebuilt binaries that we provide.

Prerequisites#

Building Python Wheel#

Change directory (cd) into the docker directory and run ./build.sh. If needed, set environment variables first (for example, CUDA_VERSION, as shown below).

It is worth mentioning that build.sh accepts the same set of environment variables as the project's CMake.

The recommended command line is:

CUDA_VERSION=Z ./build.sh

For example:

CUDA_VERSION=12.1 ./build.sh

This builds a CUDA 12.1 based DALI for Python 3 and places the resulting Python wheel inside DALI_root/wheelhouse. The produced DALI wheel and TensorFlow plugin are compatible with all Python versions supported by DALI.
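
To install the freshly built wheel, a minimal sketch (the nvidia_dali* glob is an assumption; the exact filename varies with the DALI and CUDA versions):

pip install DALI_root/wheelhouse/nvidia_dali*.whl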

Bare Metal build#

Prerequisites#

DALI has several open-source dependencies, which we keep in two locations. First, the main DALI repository contains a third_party directory, which lists the source-code-based dependencies. Second, we maintain a separate DALI_deps repository with links to the remaining dependencies. Please refer to the DALI_deps README for instructions on how to install the dependencies from that repository.

The SHA of the currently used version of DALI_deps can be found in DALI_PROJECT_ROOT/DALI_DEPS_VERSION.

The nvJPEG library, GPUDirect Storage, libjpeg-turbo, and libtiff can each be disabled via an unofficial option.

Note

TensorFlow installation is required to build the TensorFlow plugin for DALI.

Note

Items marked “unofficial” are community contributions that are believed to work but are not officially tested or maintained by NVIDIA.

Note

This software uses FFmpeg code licensed under the LGPLv2.1. Its source can be downloaded from https://github.com/NVIDIA/DALI_deps.

FFmpeg was compiled using the following command line:

./configure \
--prefix=/usr/local \
--disable-static \
--disable-programs \
--disable-doc \
--disable-avdevice \
--disable-swresample \
--disable-postproc \
--disable-w32threads \
--disable-os2threads \
--disable-dct \
--disable-dwt \
--disable-error-resilience \
--disable-lsp \
--disable-mdct \
--disable-rdft \
--disable-fft \
--disable-faan \
--disable-pixelutils \
--disable-autodetect \
--disable-iconv \
--enable-shared \
--enable-avformat \
--enable-avcodec \
--enable-avfilter \
--disable-encoders \
--disable-hwaccels \
--disable-muxers \
--disable-protocols \
--enable-protocol=file \
--disable-indevs \
--disable-outdevs  \
--disable-devices \
--disable-filters \
--disable-bsfs \
--disable-decoder=ipu \
--enable-bsf=h264_mp4toannexb,hevc_mp4toannexb,mpeg4_unpack_bframes && \
# adds "| sed 's/\(.*{\)/DALI_\1/' |" to the version file generation command - it prepends "DALI_" to the symbol version
sed -i 's/\$\$(M)sed '\''s\/MAJOR\/\$(lib$(NAME)_VERSION_MAJOR)\/'\'' \$\$< | \$(VERSION_SCRIPT_POSTPROCESS_CMD) > \$\$\@/\$\$(M)sed '\''s\/MAJOR\/\$(lib$(NAME)_VERSION_MAJOR)\/'\'' \$\$< | sed '\''s\/\\(\.*{\\)\/DALI_\\1\/'\'' | \$(VERSION_SCRIPT_POSTPROCESS_CMD) > \$\$\@/' ffbuild/library.mak && \
make

Note

This software uses libsnd code licensed under the LGPLv2.1. Its source can be downloaded from https://github.com/NVIDIA/DALI_deps.

libsnd was compiled using the following command line:

Build DALI#
  1. Get DALI source code:

    git clone --recursive https://github.com/NVIDIA/DALI
    cd DALI
    
  2. Create a directory for CMake-generated Makefiles. This is the directory that DALI will be built in, for example:
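
    # the name "build" matches the cd build steps used later on this page
    mkdir build
    cd build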

  3. Run CMake. For additional options you can pass to CMake, refer to Optional CMake Build Parameters.

    cmake -D CMAKE_BUILD_TYPE=Release ..
    
  4. Build. You can use the -j option to run the build on several threads, for example:
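
    # -j"$(nproc)" runs the build on all available CPU cores
    make -j"$(nproc)"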

Install Python Bindings#

In order to run DALI using the Python API, you need to install the Python bindings:

cd build
pip install dali/python
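
To quickly check that the bindings are importable, a minimal sketch (reading __version__ is an assumption about the installed package):

python -c "import nvidia.dali as dali; print(dali.__version__)"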

Note

Although you can create a wheel here by calling pip wheel dali/python, we do not recommend doing so. Such a wheel is not self-contained (it does not bundle all the dependencies) and will work only on the system where you built DALI bare metal. To build a wheel that contains the dependencies, and can therefore be used on other systems, follow the Docker builder approach described at the top of this page.

Verify the Build (Optional)#

Obtain Test Data#

You can verify the build by running the GTest and Nose tests. To do so, you will need the DALI_extra repository (https://github.com/NVIDIA/DALI_extra), which contains the test data. To download it, follow the DALI_extra README. Keep in mind that you need git-lfs to properly clone the DALI_extra repo; to install git-lfs, follow its installation instructions.

Set Test Data Path#

DALI uses the DALI_EXTRA_PATH environment variable to locate the test data. You can set it by invoking:

export DALI_EXTRA_PATH=PATH_TO_YOUR_DALI_EXTRA
# e.g. export DALI_EXTRA_PATH=/home/yourname/workspace/DALI_extra
Run Tests#

DALI tests consist of two parts: C++ (GTest) and Python (usually Nose, though not exclusively). Convenient Make targets let you run the tests after the build has finished:

cd <path_to_DALI>/build
make check-gtest check-python
Building DALI with Clang (Experimental)#

Note

This build is experimental. It is neither maintained nor tested. It is not guaranteed to work. We recommend using GCC for production builds.

cmake -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang  ..
make -j"$(nproc)"
Optional CMake Build Parameters#

Warning

Enabling this option effectively results in only the most basic parts of DALI being compiled (the C++ core and kernels libraries). It is useful when you want to use DALI's processing primitives (kernels) directly, without DALI's executor infrastructure.

To run with sanitizers enabled, issue:

LD_LIBRARY_PATH=. ASAN_OPTIONS=symbolize=1:protect_shadow_gap=0 ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer) \
LD_PRELOAD="PATH_TO_LIB_ASAN/libasan.so.X PATH_TO_LIB_STDC/libstdc++.so.STDC_VERSION" PATH_TO_BINARY

Where X depends on the compiler version in use; for example, GCC 10.x uses 6. This setup was tested with GCC 10.2.1, CUDA 12.0, and libasan.6; earlier versions may not work.

STDC_VERSION is the libstdc++ version used by the system, usually 6.
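
As a concrete illustration only (a hypothetical invocation assuming GCC 10 on an x86-64 Debian/Ubuntu-style layout; adjust the library paths and the binary name to your system):

LD_LIBRARY_PATH=. ASAN_OPTIONS=symbolize=1:protect_shadow_gap=0 ASAN_SYMBOLIZER_PATH=$(which llvm-symbolizer) \
LD_PRELOAD="/usr/lib/gcc/x86_64-linux-gnu/10/libasan.so.6 /usr/lib/x86_64-linux-gnu/libstdc++.so.6" ./your_test_binary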

Note

DALI release packages are built with the options listed above set to ON and NVTX turned OFF. Testing is done with the same configuration. We ensure that DALI compiles with all of those options turned OFF, but there may exist cross-dependencies between some of those features.

The following CMake parameters can be helpful in setting the right paths:

Cross-compiling for aarch64 Jetson Linux (Docker)#

Note

Support for the aarch64 Jetson Linux platform is experimental. Some features are available only for the x86-64 target and are turned off in this build.

Build the aarch64 Jetson Linux Build Container#
docker build -t nvidia/dali:builder_aarch64-linux -f docker/Dockerfile.build.aarch64-linux .
Compile#

From the root of the DALI source tree, run:

docker run -v $(pwd):/dali nvidia/dali:builder_aarch64-linux

The resulting Python wheel will be in dali_root_dir/wheelhouse.
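
After copying the wheel to the Jetson device, a minimal sketch of installing it (the nvidia_dali* glob is an assumption about the wheel filename):

# on the Jetson device, from the directory containing the wheel:
pip install nvidia_dali*.whl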

