This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in PyTorch. Some of the code here will be included in upstream PyTorch eventually. The intent of Apex is to make up-to-date utilities available to users as quickly as possible.
Each `apex.contrib` module requires one or more install options other than `--cpp_ext` and `--cuda_ext`. Note that contrib modules do not necessarily support stable PyTorch releases; some of them may only be compatible with nightlies.
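For example, building the kernels behind `apex.contrib.multihead_attn` alongside the core extensions might look like the following (a sketch using the environment-variable style shown later in this README; `APEX_FAST_MULTIHEAD_ATTN` is the install option assumed for this module):

```bash
# Sketch: core extensions plus the fast multihead attention contrib kernels
git clone https://github.com/NVIDIA/apex
cd apex
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_FAST_MULTIHEAD_ATTN=1 pip install -v --no-build-isolation .
```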
NVIDIA PyTorch Containers are available on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch. The containers come with all the custom extensions available at the moment.
See the NGC documentation for details such as how to pull and run a container, and the release notes.
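If you prefer the container route, a typical pull-and-run looks like the following (a sketch; the tag `24.07-py3` is only an example, check NGC for current tags, and `--gpus all` assumes the NVIDIA Container Toolkit is installed):

```bash
# Pull an NVIDIA PyTorch container (tag is an example; see NGC for current tags)
docker pull nvcr.io/nvidia/pytorch:24.07-py3
# Run it with GPU access; Apex and its extensions are preinstalled inside
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.07-py3
```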
To install Apex from source, we recommend using the nightly PyTorch obtainable from https://github.com/pytorch/pytorch. The latest stable release obtainable from https://pytorch.org should also work.
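For instance, a nightly build can be installed from the PyTorch nightly wheel index (a sketch; the CUDA suffix in the index URL, here `cu124`, is an assumption and should match your system):

```bash
# Install a PyTorch nightly wheel (cu124 is an example CUDA version)
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
```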
We recommend installing Ninja to make compilation faster.
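Ninja is available from PyPI, so a plain pip install is usually enough:

```bash
pip install ninja
```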
For performance and full functionality, we recommend installing Apex with CUDA and C++ extensions using environment variables:
```bash
git clone https://github.com/NVIDIA/apex
cd apex

# Build with core extensions (cpp and cuda)
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --no-build-isolation .

# To build with additional extensions, specify them with environment variables
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_FAST_MULTIHEAD_ATTN=1 APEX_FUSED_CONV_BIAS_RELU=1 pip install -v --no-build-isolation .

# To build all contrib extensions at once
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_ALL_CONTRIB_EXT=1 pip install -v --no-build-isolation .
```
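After a successful build, you can sanity-check that the compiled extensions are importable (a sketch; `amp_C` and `fused_layer_norm_cuda` are internal extension module names assumed here):

```bash
# Should print OK if the C++/CUDA extensions were built and installed
python -c "import amp_C, fused_layer_norm_cuda; print('OK')"
```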
To reduce the build time, parallel building can be enabled:
```bash
NVCC_APPEND_FLAGS="--threads 4" APEX_PARALLEL_BUILD=8 APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --no-build-isolation .
```
When CPU cores or memory are limited, the `--parallel` option is generally preferred over `--threads`. See pull request #1882 for more details.
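A reasonable starting point is to match the number of parallel build jobs to the machine (a sketch; `nproc` is the standard Linux way to count available cores):

```bash
# Scale the number of parallel build jobs to the available cores
APEX_PARALLEL_BUILD="$(nproc)" APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --no-build-isolation .
```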
The traditional command-line flags are still supported:
```bash
# Using pip config-settings (pip >= 23.1)
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./

# For older pip versions
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./

# To build with additional extensions
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_multihead_attn" ./
```
Apex also supports a Python-only build via:

```bash
pip install -v --disable-pip-version-check --no-build-isolation --no-cache-dir ./
```
A Python-only build omits:

- `apex.optimizers.FusedAdam`.
- `apex.normalization.FusedLayerNorm` and `apex.normalization.FusedRMSNorm`.
- `apex.parallel.SyncBatchNorm`.
- `apex.parallel.DistributedDataParallel` and `apex.amp`. (`DistributedDataParallel`, `amp`, and `SyncBatchNorm` will still be usable, but they may be slower.)

The full extension build

```bash
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" .
```

may work if you were able to build PyTorch from source on your system. A Python-only build via `pip install -v --no-cache-dir .` is more likely to work.
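To check which kind of build you ended up with, you can probe for one of the fused-kernel modules (a sketch; `fused_layer_norm_cuda` is an internal extension module name assumed here):

```bash
# Exits quietly if CUDA extensions are present; otherwise reports a Python-only build
python -c "import fused_layer_norm_cuda" 2>/dev/null \
  && echo "CUDA extensions present" \
  || echo "Python-only build: fused kernels unavailable"
```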
If you installed PyTorch in a Conda environment, make sure to install Apex in that same environment.
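One quick way to confirm you are installing into the right environment (the environment name `myenv` is only a placeholder):

```bash
conda activate myenv
# torch should resolve to a path inside the active environment
python -c "import torch; print(torch.__file__)"
pip install -v --no-build-isolation .
```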
If a module's requirements are not met, that module will not be built. You can also build all contrib extensions at once by setting `APEX_ALL_CONTRIB_EXT=1`.