
NVIDIA Deep Learning Examples for Tensor Cores

This repository provides State-of-the-Art Deep Learning examples that are easy to train and deploy, achieving the best reproducible accuracy and performance with the NVIDIA CUDA-X software stack running on NVIDIA Volta, Turing, and Ampere GPUs.

NVIDIA GPU Cloud (NGC) Container Registry

These examples, along with our NVIDIA deep learning software stack, are provided in a monthly updated Docker container on the NGC container registry (https://ngc.nvidia.com). These containers include the latest NVIDIA examples from this repository together with the latest NVIDIA deep learning software libraries.

Natural Language Processing

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
|--------|-----------|-----|-----------|------------|----------|------|--------|-----|----|
| BERT | PyTorch | Yes | Yes | Yes | Example | - | Example | Yes | - |
| GNMT | PyTorch | Yes | Yes | - | Supported | - | Supported | - | - |
| ELECTRA | TensorFlow2 | Yes | Yes | Yes | Supported | - | Supported | Yes | - |
| BERT | TensorFlow | Yes | Yes | Yes | Example | - | Example | Yes | Yes |
| BERT | TensorFlow2 | Yes | Yes | Yes | Supported | - | Supported | Yes | - |
| GNMT | TensorFlow | Yes | Yes | - | Supported | - | Supported | - | - |
| Faster Transformer | TensorFlow | - | - | - | Example | - | Supported | - | - |

| Models | Framework | AMP | Multi-GPU | Multi-Node | ONNX | Triton | DLC | NB |
|--------|-----------|-----|-----------|------------|------|--------|-----|----|
| DLRM | PyTorch | Yes | Yes | - | Yes | Example | Yes | Yes |
| DLRM | TensorFlow2 | Yes | Yes | Yes | - | Supported | Yes | - |
| NCF | PyTorch | Yes | Yes | - | - | Supported | - | - |
| Wide&Deep | TensorFlow | Yes | Yes | - | - | Supported | Yes | - |
| Wide&Deep | TensorFlow2 | Yes | Yes | - | - | Supported | Yes | - |
| NCF | TensorFlow | Yes | Yes | - | - | Supported | Yes | - |
| VAE-CF | TensorFlow | Yes | Yes | - | - | Supported | - | - |
| SIM | TensorFlow2 | Yes | Yes | - | - | Supported | Yes | - |

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
|--------|-----------|-----|-----------|------------|----------|------|--------|-----|----|
| Jasper | PyTorch | Yes | Yes | - | Example | Yes | Example | Yes | Yes |
| QuartzNet | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |

| Models | Framework | AMP | Multi-GPU | Multi-Node | ONNX | Triton | DLC | NB |
|--------|-----------|-----|-----------|------------|------|--------|-----|----|
| SE(3)-Transformer | PyTorch | Yes | Yes | - | - | Supported | - | - |
| MoFlow | PyTorch | Yes | Yes | - | - | Supported | - | - |

In each of the network READMEs, we indicate the level of support that will be provided. The range is from ongoing updates and improvements to a point-in-time release for thought leadership.

Multinode Training: Supported on a pyxis/enroot Slurm cluster.

Deep Learning Compiler (DLC): TensorFlow XLA and PyTorch JIT and/or TorchScript.

Accelerated Linear Algebra (XLA): XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. The results are improvements in speed and memory usage.
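
As an illustration only (not code from this repository), the sketch below shows one common way to opt a TensorFlow 2 training step into XLA compilation by passing `jit_compile=True` to `tf.function`; the model, optimizer, and data shapes are placeholders.

```python
# Minimal XLA sketch: compile a single training step with jit_compile=True.
# The model, optimizer, and shapes here are illustrative placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

@tf.function(jit_compile=True)  # ask TensorFlow to compile this step with XLA
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Example call with random data:
loss = train_step(tf.random.normal([32, 20]),
                  tf.random.uniform([32], maxval=10, dtype=tf.int32))
```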

PyTorch JIT and/or TorchScript: TorchScript is a way to create serializable and optimizable models from PyTorch code. It is an intermediate representation of a PyTorch model (a subclass of nn.Module) that can then be run in a high-performance environment such as C++.
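
For illustration (not taken from any model in this repository), here is a minimal sketch of scripting a small PyTorch module and saving it; the saved file can later be loaded from C++ with torch::jit::load. TinyClassifier is a hypothetical placeholder model.

```python
# Minimal TorchScript sketch: compile a toy module and save it for deployment.
# TinyClassifier is a hypothetical placeholder, not a model from this repo.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
scripted = torch.jit.script(model)          # convert to the TorchScript IR
scripted.save("tiny_classifier.pt")         # loadable from C++ via torch::jit::load
print(scripted(torch.randn(1, 784)).shape)  # torch.Size([1, 10])
```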

Automatic Mixed Precision (AMP): AMP automatically enables mixed precision training on Volta, Turing, and NVIDIA Ampere GPU architectures.
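
A minimal sketch of an AMP training step in PyTorch is shown below; it uses the standard torch.cuda.amp autocast/GradScaler pattern, assumes a CUDA-capable GPU, and is not code from this repository.

```python
# Minimal AMP sketch: mixed-precision forward pass plus loss scaling.
# Assumes a CUDA-capable GPU; the model and optimizer are placeholders.
import torch

model = torch.nn.Linear(784, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()   # scales the loss to avoid FP16 gradient underflow

def train_step(x, y):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward pass runs in mixed precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()      # backward pass on the scaled loss
    scaler.step(optimizer)             # unscales gradients, then steps the optimizer
    scaler.update()
    return loss.item()
```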

TensorFloat-32 (TF32): TF32 is the new math mode in NVIDIA A100 GPUs for handling matrix math, also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs. TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
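
As a hedged illustration (defaults have varied across PyTorch releases, so this is not a statement about any particular container), the PyTorch switches that control TF32 can be set explicitly as follows.

```python
# Minimal TF32 sketch: enable TF32 Tensor Core math explicitly in PyTorch.
# Only takes effect on Ampere (or newer) GPUs; a no-op elsewhere.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # TF32 inside cuDNN convolutions

x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
z = x @ y   # executed on Tensor Cores in TF32 when the flags above are enabled
```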

Jupyter Notebooks (NB): The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.

We're posting these examples on GitHub to better support the community, facilitate feedback, and collect and implement contributions using GitHub Issues and pull requests. We welcome all contributions!

In each of the network READMEs, we indicate any known issues and encourage the community to provide feedback.

