CUDA

From Wikipedia, the free encyclopedia

Parallel computing platform and programming model

In computing, CUDA (Compute Unified Device Architecture) is a proprietary[2] parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs. CUDA was created by Nvidia in 2006.[3] When it was first introduced, the name was an acronym for Compute Unified Device Architecture,[4] but Nvidia later dropped the common use of the acronym and now rarely expands it.[5]

CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels.[6] In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.

CUDA is designed to work with programming languages such as C, C++, Fortran, Python and Julia. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which require advanced skills in graphics programming.[7] CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL.[8][6]

The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time high-resolution 3D graphics compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than general-purpose central processing units (CPUs) for algorithms in situations where processing of large blocks of data is done in parallel, such as cryptographic hash functions, machine learning, molecular dynamics simulations, physics engines, and sort algorithms.

Ian Buck, while at Stanford in 2000, created an 8K gaming rig using 32 GeForce cards, then obtained a DARPA grant to perform general-purpose parallel programming on GPUs. He then joined Nvidia, where since 2004 he has overseen CUDA development. In pushing for CUDA, Jensen Huang aimed for Nvidia GPUs to become general-purpose hardware for scientific computing. CUDA was released in 2007. Around 2015, the focus of CUDA shifted to neural networks.[9]

The following table offers a non-exact description of the ontology of the CUDA framework.

The ontology of the CUDA framework
memory (hardware) | memory (code, or variable scoping) | computation (hardware) | computation (code syntax) | computation (code semantics)
RAM | non-CUDA variables | host | program | one routine call
VRAM, GPU L2 cache | global, const, texture | device | grid | simultaneous call of the same subroutine on many processors
GPU L1 cache | local, shared | SM ("streaming multiprocessor") | block | individual subroutine call
— | — | warp = 32 threads | — | SIMD instructions
GPU L0 cache, register | — | thread (aka. "SP", "streaming processor", "cuda core", but these names are now deprecated) | — | analogous to individual scalar ops within a vector op

Programming abilities[edit]

Example of CUDA processing flow:
  1. Copy data from main memory to GPU memory
  2. CPU initiates the GPU compute kernel
  3. GPU's CUDA cores execute the kernel in parallel
  4. Copy the resulting data from GPU memory to main memory
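A minimal sketch of this four-step flow follows; the kernel and variable names (scale, h_data, d_data) are illustrative, not from the article:

#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel: each thread scales one element of the array.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float* h_data = new float[n];                 // main (host) memory
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));       // GPU memory

    // 1. Copy data from main memory to GPU memory
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);

    // 2.-3. CPU initiates the kernel; the GPU's CUDA cores execute it in parallel
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

    // 4. Copy the resulting data from GPU memory back to main memory
    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("%f\n", h_data[0]);                    // prints 2.000000
    cudaFree(d_data);
    delete[] h_data;
    return 0;
}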

The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives such as OpenACC, and extensions to industry-standard programming languages including C, C++, Fortran and Python. C/C++ programmers can use 'CUDA C/C++', compiled to PTX with nvcc, Nvidia's LLVM-based C/C++ compiler, or with clang itself.[10] Fortran programmers can use 'CUDA Fortran', compiled with the PGI CUDA Fortran compiler from The Portland Group.[needs update] Python programmers can use the cuNumeric library to accelerate applications on Nvidia GPUs.

In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including the Khronos Group's OpenCL,[11] Microsoft's DirectCompute, OpenGL Compute Shader and C++ AMP.[12] Third party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Common Lisp, Haskell, R, MATLAB, IDL, Julia, and native support in Mathematica.

In the computer game industry, GPUs are used for graphics rendering, and for game physics calculations (physical effects such as debris, smoke, fire, fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.[13][14][15][16][17]

CUDA provides both a low-level API (the CUDA Driver API, which is not single-source) and a higher-level API (the CUDA Runtime API, which is single-source). The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0,[18] superseding the beta released February 14, 2008.[19] CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce, Quadro and the Tesla line. CUDA is compatible with most standard operating systems.
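For illustration, launching a kernel through the Driver API means loading compiled PTX and managing contexts explicitly, in contrast to the Runtime API's single-source <<<...>>> syntax used elsewhere in this article. A minimal sketch, assuming a separately compiled file vecAdd.ptx that exports a kernel named vecAdd (both names are hypothetical):

#include <cuda.h>   // CUDA Driver API

int main()
{
    cuInit(0);
    CUdevice dev;   cuDeviceGet(&dev, 0);
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);        // explicit context management

    CUmodule mod;   cuModuleLoad(&mod, "vecAdd.ptx"); // load precompiled PTX
    CUfunction fn;  cuModuleGetFunction(&fn, mod, "vecAdd");

    int n = 256;
    CUdeviceptr d_buf;
    cuMemAlloc(&d_buf, n * sizeof(float));

    void* args[] = { &d_buf, &n };                    // kernel parameters
    cuLaunchKernel(fn, 1, 1, 1,                       // grid dimensions
                       n, 1, 1,                       // block dimensions
                       0, NULL,                       // shared memory, stream
                       args, NULL);
    cuCtxSynchronize();

    cuMemFree(d_buf);
    cuCtxDestroy(ctx);
    return 0;
}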

CUDA 8.0 comes with the following libraries (for compilation & runtime, in alphabetical order): cuBLAS, CUDART, cuFFT, cuRAND, cuSOLVER, cuSPARSE, NPP, nvGRAPH, NVRTC and Thrust.

CUDA 8.0 also comes with other software components, including nView (desktop-management software), NVWMI (enterprise-management toolkit) and GameWorks PhysX (a multi-platform game physics engine).

CUDA 9.0–9.2 added further components, including CUTLASS 1.0 (custom linear-algebra algorithms), and deprecated the NVIDIA Video Decoder.

CUDA 10 added further components, including nvJPEG (hybrid JPEG processing).

CUDA 11.0–11.8 added further components, including CUB and support for Multi-Instance GPU (MIG) partitioning.[20][21][22][23]

CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs, among them: scattered reads (code can read from arbitrary addresses in memory); unified virtual memory (CUDA 4.0 and above) and unified memory (CUDA 6.0 and above); fast shared memory that can be shared among threads; faster downloads and readbacks to and from the GPU; and full support for integer and bitwise operations, including integer texture lookups.
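Shared memory, for example, lets the threads of a block cooperate through a fast on-chip buffer that graphics APIs did not expose. A minimal sketch of a per-block sum reduction (assumes the kernel is launched with 256 threads per block; the name blockSum is illustrative):

__global__ void blockSum(const float* in, float* out)
{
    __shared__ float buf[256];            // fast on-chip memory, visible per block
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                      // wait until all threads have written

    // Tree reduction within the block
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0];   // one partial sum per block
}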

This example code in C++ loads a texture from an image into an array on the GPU, using the legacy texture reference API (deprecated in later CUDA releases and removed in CUDA 12):

texture<float, 2, cudaReadModeElementType> tex;

__global__ void kernel(float* odata, int height, int width);  // defined below

void foo()
{
  cudaArray* cu_array;

  // width, height, image and d_data are assumed to be declared and initialized elsewhere

  // Allocate array
  cudaChannelFormatDesc description = cudaCreateChannelDesc<float>();
  cudaMallocArray(&cu_array, &description, width, height);

  // Copy image data to array (offsets of 0, 0 into the destination array)
  cudaMemcpyToArray(cu_array, 0, 0, image, width*height*sizeof(float), cudaMemcpyHostToDevice);

  // Set texture parameters (default)
  tex.addressMode[0] = cudaAddressModeClamp;
  tex.addressMode[1] = cudaAddressModeClamp;
  tex.filterMode = cudaFilterModePoint;
  tex.normalized = false; // do not normalize coordinates

  // Bind the array to the texture
  cudaBindTextureToArray(tex, cu_array);

  // Run kernel
  dim3 blockDim(16, 16, 1);
  dim3 gridDim((width + blockDim.x - 1) / blockDim.x, (height + blockDim.y - 1) / blockDim.y, 1);
  kernel<<< gridDim, blockDim, 0 >>>(d_data, height, width);

  // Unbind the array from the texture and free it
  cudaUnbindTexture(tex);
  cudaFreeArray(cu_array);
} // end foo()

__global__ void kernel(float* odata, int height, int width)
{
   unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
   unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
   if (x < width && y < height) {
      float c = tex2D(tex, x, y);
      odata[y*width+x] = c;
   }
}
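Because the texture reference API was removed, current CUDA expresses the same binding with texture objects. A rough sketch of the equivalent setup for the example above, reusing cu_array (this is not the article's original code):

// Describe the resource (the CUDA array) and the sampling behaviour
cudaResourceDesc resDesc = {};
resDesc.resType = cudaResourceTypeArray;
resDesc.res.array.array = cu_array;

cudaTextureDesc texDesc = {};
texDesc.addressMode[0] = cudaAddressModeClamp;
texDesc.addressMode[1] = cudaAddressModeClamp;
texDesc.filterMode = cudaFilterModePoint;
texDesc.readMode = cudaReadModeElementType;
texDesc.normalizedCoords = 0;

cudaTextureObject_t texObj = 0;
cudaCreateTextureObject(&texObj, &resDesc, &texDesc, NULL);

// The texture object is passed to the kernel as an ordinary argument:
//   __global__ void kernel(cudaTextureObject_t t, float* odata, int height, int width)
//   { ... float c = tex2D<float>(t, x, y); ... }

cudaDestroyTextureObject(texObj);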

Below is an example, written in Python, that computes the product of two arrays on the GPU. Unofficial Python language bindings are available from PyCUDA.[36]

import pycuda.compiler as comp
import pycuda.driver as drv
import numpy
import pycuda.autoinit

mod = comp.SourceModule(
    """
__global__ void multiply_them(float *dest, float *a, float *b)
{
  const int i = threadIdx.x;
  dest[i] = a[i] * b[i];
}
"""
)

multiply_them = mod.get_function("multiply_them")

a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)

dest = numpy.zeros_like(a)
multiply_them(drv.Out(dest), drv.In(a), drv.In(b), block=(400, 1, 1))

print(dest - a * b)

Additional Python bindings to simplify matrix multiplication operations can be found in the program pycublas.[37]

 
import numpy
from pycublas import CUBLASMatrix

A = CUBLASMatrix(numpy.mat([[1, 2, 3], [4, 5, 6]], numpy.float32))
B = CUBLASMatrix(numpy.mat([[2, 3], [4, 5], [6, 7]], numpy.float32))
C = A * B
print(C.np_mat())

CuPy, by contrast, directly replaces NumPy:[38]

import cupy

a = cupy.random.randn(400)
b = cupy.random.randn(400)

dest = a * b  # computed on the GPU; replaces the explicit kernel above

print(dest - a * b)

Supported CUDA compute capability versions depend on the CUDA SDK version and the microarchitecture (by code name).

Note: CUDA SDK 10.2 is the last official release for macOS, as support will not be available for macOS in newer releases.

CUDA compute capability versions are associated with particular GPU semiconductors and GPU card models (separated by their various application areas).


Version features and specifications[edit]

Feature support by compute capability (unlisted features are supported for all compute capabilities):

Warp vote functions (__all(), __any()): 1.2 and later
Warp vote functions (__ballot()); memory fence functions (__threadfence_system()); synchronization functions (__syncthreads_count(), __syncthreads_and(), __syncthreads_or()); surface functions; 3D grid of thread blocks: 2.x and later
Warp shuffle functions; unified memory programming: 3.0 and later
Funnel shift: 3.2 and later
Dynamic parallelism: 3.5 and later
Uniform datapath:[57] 7.5 and later
Hardware-accelerated async-copy; hardware-accelerated split arrive/wait barrier; warp-level support for reduction ops; L2 cache residency management: 8.x and later
DPX instructions for accelerated dynamic programming; distributed shared memory; thread block cluster; tensor memory accelerator (TMA) unit: 9.0 and later

[58]
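A program can query the compute capability, and the device limits tied to it, at run time through the runtime API. A minimal sketch (the output format is illustrative):

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s, compute capability %d.%d\n",
               d, prop.name, prop.major, prop.minor);
        printf("  max threads per block: %d, shared memory per block: %zu bytes\n",
               prop.maxThreadsPerBlock, prop.sharedMemPerBlock);
    }
    return 0;
}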

Floating-point types[edit]

Data type | Supported vector types | Storage length bits (complete vector) | Used length bits (single value) | Sign bits | Exponent bits | Mantissa bits | Comments
E2M1 = FP4 | e2m1x2 / e2m1x4 | 8 / 16 | 4 | 1 | 2 | 1 |
E2M3 = FP6 variant | e2m3x2 / e2m3x4 | 16 / 32 | 6 | 1 | 2 | 3 |
E3M2 = FP6 variant | e3m2x2 / e3m2x4 | 16 / 32 | 6 | 1 | 3 | 2 |
UE4M3 | ue4m3 | 8 | 7 | 0 | 4 | 3 | Used for scaling (E2M1 only)
E4M3 = FP8 variant | e4m3 / e4m3x2 / e4m3x4 | 8 / 16 / 32 | 8 | 1 | 4 | 3 |
E5M2 = FP8 variant | e5m2 / e5m2x2 / e5m2x4 | 8 / 16 / 32 | 8 | 1 | 5 | 2 | Exponent/range of FP16, fits into 8 bits
UE8M0 | ue8m0x2 | 16 | 8 | 0 | 8 | 0 | Used for scaling (any FP4, FP6 or FP8 format)
FP16 | f16 / f16x2 | 16 / 32 | 16 | 1 | 5 | 10 |
BF16 | bf16 / bf16x2 | 16 / 32 | 16 | 1 | 8 | 7 | Exponent/range of FP32, fits into 16 bits
TF32 | tf32 | 32 | 19 | 1 | 8 | 10 | Exponent/range of FP32, mantissa/precision of FP16
FP32 | f32 / f32x2 | 32 / 64 | 32 | 1 | 8 | 23 |
FP64 | f64 | 64 | 64 | 1 | 11 | 52 |

Data type | Basic operations (supported since) | Atomic operations (supported since, global / shared memory)
8-bit integer, signed/unsigned | loading, storing, conversion (1.0) | —
16-bit integer, signed/unsigned | general operations (1.0) | atomicCAS() (3.5)
32-bit integer, signed/unsigned | general operations (1.0) | atomic functions (1.1 global, 1.2 shared)
64-bit integer, signed/unsigned | general operations (1.0) | atomic functions (1.2 global, 2.0 shared)
any 128-bit trivially copyable type | general operations (no) | atomicExch, atomicCAS (9.0)
16-bit floating point, FP16 | addition, subtraction, multiplication, comparison, warp shuffle functions, conversion (5.3) | half2 atomic addition (6.0); atomic addition (7.0)
16-bit floating point, BF16 | addition, subtraction, multiplication, comparison, warp shuffle functions, conversion (8.0) | atomic addition (8.0)
32-bit floating point | general operations (1.0) | atomicExch() (1.1 global, 1.2 shared); atomic addition (2.0)
32-bit floating point, float2 and float4 | general operations (no) | atomic addition (9.0)
64-bit floating point | general operations (1.3) | atomic addition (6.0)

Note: Any missing lines or empty entries reflect a lack of information on that exact item.[59]
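As an illustration of the FP16 rows above, a minimal kernel built on CUDA's half-precision intrinsics (available on compute capability 5.3 and higher per the table; the names halfMulAdd, a, b and out are illustrative):

#include <cuda_fp16.h>   // __half, __hadd, __hmul

// out[i] += a[i] * b[i], entirely in FP16 arithmetic.
__global__ void halfMulAdd(const __half* a, const __half* b, __half* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hadd(__hmul(a[i], b[i]), out[i]);
}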

FMA per cycle per tensor core[60] (fused multiply–add operations actually executed, for dense matrices), by compute capability 7.0; 7.2; 7.5 workstation; 7.5 desktop; 8.0; 8.6 workstation; 8.7; 8.6 desktop; 8.9 desktop; 8.9 workstation; 9.0; 10.0; 10.1; 12.0 (tensor core generations: 1st gen, 8/SM; 1st gen?, 8/SM; 2nd gen, 8/SM; 3rd gen, 4/SM; 4th gen, 4/SM; 5th gen, 4/SM):

Data type | Supported since (dense / sparse matrices) | FMA per cycle per tensor core, as given in the source
1-bit values (AND) | dense since 8.0 as experimental; sparse: no | no; 4096; 2048; speed tbd
1-bit values (XOR) | dense 7.5–8.9 as experimental; sparse: no | 1024; deprecated or removed?
4-bit integers | dense 8.0–8.9 as experimental | 256; 1024; 512
4-bit floating point FP4 (E2M1) | dense since 10.0; sparse: no | 4096; tbd; 512
6-bit floating point FP6 (E3M2 and E2M3) | dense since 10.0; sparse: no | 2048; tbd
8-bit integers | dense since 7.2; sparse since 8.0 | no; 128; 128; 512; 256; 1024; 2048; 256
8-bit floating point FP8 (E4M3 and E5M2) with FP16 accumulate | dense since 8.9; sparse: no | 256
8-bit floating point FP8 (E4M3 and E5M2) with FP32 accumulate | dense since 8.9; sparse: no | 128; 128
16-bit floating point FP16 with FP16 accumulate | dense since 7.0; sparse since 8.0 | 64; 64; 64; 256; 128; 512; 1024; 128
16-bit floating point FP16 with FP32 accumulate | dense since 7.0; sparse since 8.0 | 32; 64; 128; 64
16-bit floating point BF16 with FP32 accumulate | dense since 7.5;[61] sparse since 8.0 | no; 64[62]
32-bit (19 bits used) floating point TF32 | | speed tbd (32?);[62] 128; 32; 64; 256; 512; 32
64-bit floating point | dense since 8.0; sparse: no | no; 16; speed tbd; 32; 16; tbd

Note: Any missing lines or empty entries reflect a lack of information on that exact item.[63][64][65][66][67][68]

Tensor core composition, by compute capability 7.0; 7.2, 7.5; 8.0, 8.6; 8.7; 8.9; 9.0:

Dot product unit width in FP16 units (in bytes):[69][70][71][72] 4 (8); 8 (16); 4 (8); 16 (32)
Dot product units per tensor core: 16; 32
Tensor cores per SM partition: 2; 1
Full throughput (bytes/cycle)[73] per SM partition:[74] 256; 512; 256; 1024
FP tensor cores, minimum cycles for warp-wide matrix calculation: 8; 4; 8
FP tensor cores, minimum matrix shape for full throughput (bytes):[75] 2048
INT tensor cores, minimum cycles for warp-wide matrix calculation: no; 4
INT tensor cores, minimum matrix shape for full throughput (bytes): no; 1024; 2048; 1024

[76][77][78][79]
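Tensor cores are programmed in CUDA C++ through the warp matrix functions in the nvcuda::wmma namespace. A minimal sketch in which one warp multiplies 16×16 FP16 tiles with FP32 accumulation (supported on compute capability 7.0 and later; the kernel name and matrix layouts are illustrative):

#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes D = A*B + C for a 16x16x16 FP16 tile, accumulating in FP32.
__global__ void wmmaTile(const half* a, const half* b, float* d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(fa, a, 16);     // leading dimension 16
    wmma::load_matrix_sync(fb, b, 16);
    wmma::mma_sync(acc, fa, fb, acc);      // executed on the tensor cores
    wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
}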

FP64 tensor core composition, by compute capability 8.0; 8.6; 8.7; 8.9; 9.0:

Dot product unit width in FP64 units (in bytes): 4 (32); tbd; 4 (32)
Dot product units per tensor core: 4; tbd; 8
Tensor cores per SM partition: 1
Full throughput (bytes/cycle)[73] per SM partition:[74] 128; tbd; 256
Minimum cycles for warp-wide matrix calculation: 16; tbd
Minimum matrix shape for full throughput (bytes):[75] 2048

Technical specifications[edit]

Technical specifications by compute capability (version) 1.0; 1.1; 1.2; 1.3; 2.x; 3.0; 3.2; 3.5; 3.7; 5.0; 5.2; 5.3; 6.0; 6.1; 6.2; 7.0; 7.2; 7.5; 8.0; 8.6; 8.7; 8.9; 9.0; 10.x; 12.x:

Maximum number of resident grids per device (concurrent kernel execution, can be lower for specific devices): 1; 16; 4; 32; 16; 128; 32; 16; 128; 16; 128
Maximum dimensionality of grid of thread blocks: 2; 3
Maximum x-dimension of a grid of thread blocks: 65535; 2^31 − 1
Maximum y- or z-dimension of a grid of thread blocks: 65535
Maximum dimensionality of thread block: 3
Maximum x- or y-dimension of a block: 512; 1024
Maximum z-dimension of a block: 64
Maximum number of threads per block: 512; 1024
Warp size: 32
Maximum number of resident blocks per multiprocessor: 8; 16; 32; 16; 32; 16; 24; 32
Maximum number of resident warps per multiprocessor: 24; 32; 48; 64; 32; 64; 48; 64; 48
Maximum number of resident threads per multiprocessor: 768; 1024; 1536; 2048; 1024; 2048; 1536; 2048; 1536
Number of 32-bit regular registers per multiprocessor: 8 K; 16 K; 32 K; 64 K; 128 K; 64 K
Number of 32-bit uniform registers per multiprocessor: no; 2 K[80][81]
Maximum number of 32-bit registers per thread block: 8 K; 16 K; 32 K; 64 K; 32 K; 64 K; 32 K; 64 K; 32 K; 64 K
Maximum number of 32-bit regular registers per thread: 124; 63; 255
Maximum number of 32-bit uniform registers per warp: no; 63[80][82]
Amount of shared memory per multiprocessor (out of overall shared memory + L1 cache, where applicable): 16 KiB; 16 / 48 KiB (of 64 KiB); 16 / 32 / 48 KiB (of 64 KiB); 80 / 96 / 112 KiB (of 128 KiB); 64 KiB; 96 KiB; 64 KiB; 96 KiB; 64 KiB; 0 / 8 / 16 / 32 / 64 / 96 KiB (of 128 KiB); 32 / 64 KiB (of 96 KiB); 0 / 8 / 16 / 32 / 64 / 100 / 132 / 164 KiB (of 192 KiB); 0 / 8 / 16 / 32 / 64 / 100 KiB (of 128 KiB); 0 / 8 / 16 / 32 / 64 / 100 / 132 / 164 KiB (of 192 KiB); 0 / 8 / 16 / 32 / 64 / 100 KiB (of 128 KiB); 0 / 8 / 16 / 32 / 64 / 100 / 132 / 164 / 196 / 228 KiB (of 256 KiB); 0 / 8 / 16 / 32 / 64 / 100 KiB (of 128 KiB)
Maximum amount of shared memory per thread block: 16 KiB; 48 KiB; 96 KiB; 48 KiB; 64 KiB; 163 KiB; 99 KiB; 163 KiB; 99 KiB; 227 KiB; 99 KiB
Number of shared memory banks: 16; 32
Amount of local memory per thread: 16 KiB; 512 KiB
Constant memory size accessible by CUDA C/C++ (1 bank, PTX can access 11 banks, SASS can access 18 banks): 64 KiB
Cache working set per multiprocessor for constant memory: 8 KiB; 4 KiB; 8 KiB
Cache working set per multiprocessor for texture memory: 16 KiB per TPC; 24 KiB per TPC; 12 KiB; 12–48 KiB;[83] 24 KiB; 48 KiB; 32 KiB;[84] 24 KiB; 48 KiB; 24 KiB; 32–128 KiB; 32–64 KiB; 28–192 KiB; 28–128 KiB; 28–192 KiB; 28–128 KiB; 28–256 KiB
Maximum width for 1D texture reference bound to a CUDA array: 8192; 65536; 131072
Maximum width for 1D texture reference bound to linear memory: 2^27; 2^28; 2^27; 2^28; 2^27; 2^28
Maximum width and number of layers for a 1D layered texture reference: 8192 × 512; 16384 × 2048; 32768 × 2048
Maximum width and height for 2D texture reference bound to a CUDA array: 65536 × 32768; 65536 × 65535; 131072 × 65536
Maximum width and height for 2D texture reference bound to linear memory: 65000 × 65000; 65536 × 65536; 131072 × 65000
Maximum width and height for 2D texture reference bound to a CUDA array supporting texture gather: —; 16384 × 16384; 32768 × 32768
Maximum width, height, and number of layers for a 2D layered texture reference: 8192 × 8192 × 512; 16384 × 16384 × 2048; 32768 × 32768 × 2048
Maximum width, height and depth for a 3D texture reference bound to linear memory or a CUDA array: 2048^3; 4096^3; 16384^3
Maximum width (and height) for a cubemap texture reference: —; 16384; 32768
Maximum width (and height) and number of layers for a cubemap layered texture reference: —; 16384 × 2046; 32768 × 2046
Maximum number of textures that can be bound to a kernel: 128; 256
Maximum width for a 1D surface reference bound to a CUDA array: not supported; 65536; 16384; 32768
Maximum width and number of layers for a 1D layered surface reference: 65536 × 2048; 16384 × 2048; 32768 × 2048
Maximum width and height for a 2D surface reference bound to a CUDA array: 65536 × 32768; 16384 × 65536; 131072 × 65536
Maximum width, height, and number of layers for a 2D layered surface reference: 65536 × 32768 × 2048; 16384 × 16384 × 2048; 32768 × 32768 × 2048
Maximum width, height, and depth for a 3D surface reference bound to a CUDA array: 65536 × 32768 × 2048; 4096 × 4096 × 4096; 16384 × 16384 × 16384
Maximum width (and height) for a cubemap surface reference bound to a CUDA array: 32768; 16384; 32768
Maximum width and number of layers for a cubemap layered surface reference: 32768 × 2046; 16384 × 2046; 32768 × 2046
Maximum number of surfaces that can be bound to a kernel: 8; 16; 32
Maximum number of instructions per kernel: 2 million; 512 million
Maximum number of thread blocks per thread block cluster:[85] no; 16; 8

[86][87]

Multiprocessor architecture[edit]

Architecture specifications by compute capability (version) 1.0; 1.1; 1.2; 1.3; 2.0; 2.1; 3.0; 3.2; 3.5; 3.7; 5.0; 5.2; 5.3; 6.0; 6.1; 6.2; 7.0; 7.2; 7.5; 8.0; 8.6; 8.7; 8.9; 9.0; 10.x; 12.x:

Number of ALU lanes for INT32 arithmetic operations: 8; 32; 48; 192;[88] 128; 128; 64; 128; 128; 64; 64; 64; 128
Number of ALU lanes for any INT32 or FP32 arithmetic operation: —; —
Number of ALU lanes for FP32 arithmetic operations: 64; 64; 128; 128
Number of ALU lanes for FP16x2 arithmetic operations: no; 1; 128;[89] 128;[90] 64[91]
Number of ALU lanes for FP64 arithmetic operations: no; 1; 16 by FP32;[92] 4 by FP32;[93] 8; 8 / 64;[94] 64; 4;[95] 32; 4; 32; 2; 32; 2; 64; 2
Number of load/store units: 4 per 2 SM; 8 per 2 SM; 8 per 2 SM / 3 SM;[94] 8 per 3 SM; 16; 32; 16; 32; 16; 32
Number of special function units for single-precision floating-point transcendental functions: 2;[96] 4; 8; 32; 16; 32; 16
Number of texture mapping units (TMU): 4 per 2 SM; 8 per 2 SM; 8 per 2 / 3 SM;[94] 8 per 3 SM; 4; 4 / 8;[94] 16; 8; 16; 8; 4
Number of ALU lanes for uniform INT32 arithmetic operations: no; 2[97]
Number of tensor cores: no; 8 (1st gen.);[98] 0 / 8 (2nd gen.);[94] 4 (3rd gen.); 4 (4th gen.)
Number of raytracing cores: no; 0 / 1 (1st gen.);[94] no; 1 (2nd gen.); no; 1 (3rd gen.); no
Number of SM partitions = processing blocks:[99] 1; 4; 2; 4
Number of warp schedulers per SM partition: 1; 2; 4; 1
Max number of new instructions issued each cycle by a single scheduler:[100] 2;[101] 1; 2;[102] 2; 1
Size of unified memory for data cache and shared memory: 16 KiB; 16 KiB; 64 KiB; 128 KiB; 64 KiB SM + 24 KiB L1 (separate);[104] 96 KiB SM + 24 KiB L1 (separate);[104] 64 KiB SM + 24 KiB L1 (separate);[104] 64 KiB SM + 24 KiB L1 (separate);[104] 96 KiB SM + 24 KiB L1 (separate);[104] 64 KiB SM + 24 KiB L1 (separate);[104] 128 KiB; 96 KiB;[105] 192 KiB; 128 KiB; 192 KiB; 128 KiB; 256 KiB
Size of L3 instruction cache per GPU: 32 KiB;[106] use L2 data cache
Size of L2 instruction cache per texture processor cluster (TPC): 8 KiB
Size of L1.5 instruction cache per SM:[107] 4 KiB; 32 KiB; 32 KiB; 48 KiB;[84] 128 KiB; 32 KiB; 128 KiB; ~46 KiB;[108] 128 KiB[109]
Size of L1 instruction cache per SM: 8 KiB; 8 KiB
Size of L0 instruction cache per SM partition: only 1 partition per SM; no; 12 KiB; 16 KiB?;[110] 32 KiB
Instruction width:[107] 32-bit instructions and 64-bit instructions;[111] 64-bit instructions + 64-bit control logic every 7 instructions; 64-bit instructions + 64-bit control logic every 3 instructions; 128-bit combined instruction and control logic
Memory bus width per memory partition in bits: 64 ((G)DDR); 32 ((G)DDR); 512 (HBM); 32 ((G)DDR); 512 (HBM); 32 ((G)DDR); 512 (HBM); 32 ((G)DDR); 512 (HBM); 32 ((G)DDR)
L2 cache per memory partition: 16 KiB;[112] 32 KiB;[112] 128 KiB; 256 KiB; 1 MiB; 512 KiB; 128 KiB; 512 KiB; 256 KiB; 128 KiB; 768 KiB; 64 KiB; 512 KiB; 4 MiB; 512 KiB; 8 MiB;[113] 5 MiB; 6.25 MiB; 8 MiB[114]
Number of render output units (ROP) per memory partition (or per GPC in later models): 4; 8; 4; 8; 16; 8; 12; 8; 4; 16; 2; 8; 16; 16 per GPC; 3 per GPC; 16 per GPC

For more information read the Nvidia CUDA C++ Programming Guide.[115]

Usages of CUDA architecture[edit]

Comparison with competitors[edit]

CUDA competes with other GPU computing stacks: Intel oneAPI and AMD ROCm.

Whereas Nvidia's CUDA is closed-source, Intel's oneAPI and AMD's ROCm are open source.

oneAPI is an initiative based on open standards, created to support software development for multiple hardware architectures.[118] The oneAPI libraries must implement open specifications that are discussed publicly by the Special Interest Groups, which gives any developer or organization the possibility to implement their own versions of oneAPI libraries.[119][120]

oneAPI was originally created by Intel; other hardware adopters include Fujitsu and Huawei.

Unified Acceleration Foundation (UXL)[edit]

Unified Acceleration Foundation (UXL) is a technology consortium working on the continuation of the oneAPI initiative. Its goal is to create a new open-standard accelerator software ecosystem and related open standards and specification projects through working groups and special interest groups (SIGs), offering an open alternative to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[121]

ROCm[122] is an open source software stack for graphics processing unit (GPU) programming from Advanced Micro Devices (AMD).

  1. ^ "NVIDIA® CUDA™ Unleashes Power of GPU Computing - Press Release". nvidia.com. Archived from the original on 29 March 2007. Retrieved 26 January 2025.
  2. ^ a b Shah, Agam. "Nvidia not totally against third parties making CUDA chips". www.theregister.com. Retrieved 2024-04-25.
  3. ^ "Nvidia CUDA Home Page". 18 July 2017.
  4. ^ Shimpi, Anand Lal; Wilson, Derek (November 8, 2006). "Nvidia's GeForce 8800 (G80): GPUs Re-architected for DirectX 10". AnandTech. Retrieved May 16, 2015.
  5. ^ "Introduction — nsight-visual-studio-edition 12.6 documentation". docs.nvidia.com. Retrieved 2024-10-10.
  6. ^ a b Abi-Chahla, Fedy (June 18, 2008). "Nvidia's CUDA: The End of the CPU?". Tom's Hardware. Retrieved May 17, 2015.
  7. ^ Zunitch, Peter (2018-01-24). "CUDA vs. OpenCL vs. OpenGL". Videomaker. Retrieved 2018-09-16.
  8. ^ "OpenCL". NVIDIA Developer. 2013-04-24. Retrieved 2019-11-04.
  9. ^ Witt, Stephen (2023-11-27). "How Jensen Huang's Nvidia Is Powering the A.I. Revolution". The New Yorker. ISSN 0028-792X. Retrieved 2023-12-10.
  10. ^ "CUDA LLVM Compiler". 7 May 2012.
  11. ^ First OpenCL demo on a GPU on YouTube
  12. ^ DirectCompute Ocean Demo Running on Nvidia CUDA-enabled GPU on YouTube
  13. ^ Vasiliadis, Giorgos; Antonatos, Spiros; Polychronakis, Michalis; Markatos, Evangelos P.; Ioannidis, Sotiris (September 2008). "Gnort: High Performance Network Intrusion Detection Using Graphics Processors" (PDF). Recent Advances in Intrusion Detection. Lecture Notes in Computer Science. Vol. 5230. pp. 116–134. doi:10.1007/978-3-540-87403-4_7. ISBN 978-3-540-87402-7.
  14. ^ Schatz, Michael C.; Trapnell, Cole; Delcher, Arthur L.; Varshney, Amitabh (2007). "High-throughput sequence alignment using Graphics Processing Units". BMC Bioinformatics. 8: 474. doi:10.1186/1471-2105-8-474. PMC 2222658. PMID 18070356.
  15. ^ Manavski, Svetlin A.; Giorgio, Valle (2008). "CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment". BMC Bioinformatics. 10 (Suppl 2): S10. doi:10.1186/1471-2105-9-S2-S10. PMC 2323659. PMID 18387198.
  16. ^ "Pyrit – Google Code".
  17. ^ "Use your Nvidia GPU for scientific computing". BOINC. 2008-12-18. Archived from the original on 2008-12-28. Retrieved 2017-08-08.
  18. ^ "Nvidia CUDA Software Development Kit (CUDA SDK) – Release Notes Version 2.0 for MAC OS X". Archived from the original on 2009-01-06.
  19. ^ "CUDA 1.1 – Now on Mac OS X". February 14, 2008. Archived from the original on November 22, 2008.
  20. ^ "CUDA 11 Features Revealed". 14 May 2020.
  21. ^ "CUDA Toolkit 11.1 Introduces Support for GeForce RTX 30 Series and Quadro RTX Series GPUs". 23 September 2020.
  22. ^ "Enhancing Memory Allocation with New NVIDIA CUDA 11.2 Features". 16 December 2020.
  23. ^ "Exploring the New Features of CUDA 11.3". 16 April 2021.
  24. ^ Silberstein, Mark; Schuster, Assaf; Geiger, Dan; Patney, Anjul; Owens, John D. (2008). "Efficient computation of sum-products on GPUs through software-managed cache" (PDF). Proceedings of the 22nd annual international conference on Supercomputing – ICS '08 (PDF). Proceedings of the 22nd annual international conference on Supercomputing – ICS '08. pp. 309–318. doi:10.1145/1375527.1375572. ISBN 978-1-60558-158-3.
  25. ^ "CUDA C Programming Guide v8.0" (PDF). nVidia Developer Zone. January 2017. p. 19. Retrieved 22 March 2017.
  26. ^ "NVCC forces c++ compilation of .cu files". 29 November 2011.
  27. ^ Whitehead, Nathan; Fit-Florea, Alex. "Precision & Performance: Floating Point and IEEE 754 Compliance for Nvidia GPUs" (PDF). Nvidia. Retrieved November 18, 2014.
  28. ^ "CUDA-Enabled Products". CUDA Zone. Nvidia Corporation. Retrieved 2008-11-03.
  29. ^ "Coriander Project: Compile CUDA Codes To OpenCL, Run Everywhere". Phoronix.
  30. ^ Perkins, Hugh (2017). "cuda-on-cl" (PDF). IWOCL. Retrieved August 8, 2017.
  31. ^ "hughperkins/coriander: Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices". GitHub. May 6, 2019.
  32. ^ "CU2CL Documentation". chrec.cs.vt.edu.
  33. ^ "GitHub – vosen/ZLUDA". GitHub.
  34. ^ Larabel, Michael (2024-02-12), "AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source", Phoronix, retrieved 2024-02-12
  35. ^ "GitHub – chip-spv/chipStar". GitHub.
  36. ^ "PyCUDA".
  37. ^ "pycublas". Archived from the original on 2009-04-20. Retrieved 2017-08-08.
  38. ^ "CuPy". Retrieved 2020-01-08.
  39. ^ "NVIDIA CUDA Programming Guide. Version 1.0" (PDF). June 23, 2007.
  40. ^ "NVIDIA CUDA Programming Guide. Version 2.1" (PDF). December 8, 2008.
  41. ^ "NVIDIA CUDA Programming Guide. Version 2.2" (PDF). April 2, 2009.
  42. ^ "NVIDIA CUDA Programming Guide. Version 2.2.1" (PDF). May 26, 2009.
  43. ^ "NVIDIA CUDA Programming Guide. Version 2.3.1" (PDF). August 26, 2009.
  44. ^ "NVIDIA CUDA Programming Guide. Version 3.0" (PDF). February 20, 2010.
  45. ^ "NVIDIA CUDA C Programming Guide. Version 3.1.1" (PDF). July 21, 2010.
  46. ^ "NVIDIA CUDA C Programming Guide. Version 3.2" (PDF). November 9, 2010.
  47. ^ "CUDA 11.0 Release Notes". NVIDIA Developer.
  48. ^ "CUDA 11.1 Release Notes". NVIDIA Developer.
  49. ^ "CUDA 11.5 Release Notes". NVIDIA Developer.
  50. ^ "CUDA 11.8 Release Notes". NVIDIA Developer.
  51. ^ "NVIDIA Quadro NVS 420 Specs". TechPowerUp GPU Database. 25 August 2023.
  52. ^ Larabel, Michael (March 29, 2017). "NVIDIA Rolls Out Tegra X2 GPU Support In Nouveau". Phoronix. Retrieved August 8, 2017.
  53. ^ Nvidia Xavier Specs on TechPowerUp (preliminary)
  54. ^ "Welcome — Jetson Linux Developer Guide 34.1 documentation".
  55. ^ "NVIDIA Bringing up Open-Source Volta GPU Support for Their Xavier SoC".
  56. ^ "NVIDIA Ada Lovelace Architecture".
  57. ^ Dissecting the Turing GPU Architecture through Microbenchmarking
  58. ^ "H.1. Features and Technical Specifications – Table 13. Feature Support per Compute Capability". docs.nvidia.com. Retrieved 2020-09-23.
  59. ^ "CUDA C++ Programming Guide".
  60. ^ Fused-Multiply-Add, actually executed, Dense Matrix
  61. ^ as SASS since 7.5, as PTX since 8.0
  62. ^ a b unofficial support in SASS
  63. ^ "Technical brief. NVIDIA Jetson AGX Orin Series" (PDF). nvidia.com. Retrieved 5 September 2023.
  64. ^ "NVIDIA Ampere GA102 GPU Architecture" (PDF). nvidia.com. Retrieved 5 September 2023.
  65. ^ Luo, Weile; Fan, Ruibo; Li, Zeyu; Du, Dayou; Wang, Qiang; Chu, Xiaowen (2024). "Benchmarking and Dissecting the Nvidia Hopper GPU Architecture". arXiv:2402.13499v1 [cs.AR].
  66. ^ "Datasheet NVIDIA A40" (PDF). nvidia.com. Retrieved 27 April 2024.
  67. ^ "NVIDIA AMPERE GA102 GPU ARCHITECTURE" (PDF). 27 April 2024.
  68. ^ "Datasheet NVIDIA L40" (PDF). 27 April 2024.
  69. ^ In the Whitepapers the Tensor Core cube diagrams represent the Dot Product Unit Width into the height (4 FP16 for Volta and Turing, 8 FP16 for A100, 4 FP16 for GA102, 16 FP16 for GH100). The other two dimensions represent the number of Dot Product Units (4x4 = 16 for Volta and Turing, 8x4 = 32 for Ampere and Hopper). The resulting gray blocks are the FP16 FMA operations per cycle. Pascal without Tensor core is only shown for speed comparison as is Volta V100 with non-FP16 datatypes.
  70. ^ "NVIDIA Turing Architecture Whitepaper" (PDF). nvidia.com. Retrieved 5 September 2023.
  71. ^ "NVIDIA Tensor Core GPU" (PDF). nvidia.com. Retrieved 5 September 2023.
  72. ^ "NVIDIA Hopper Architecture In-Depth". 22 March 2022.
  73. ^ a b shape x converted operand size, e.g. 2 tensor cores x 4x4x4xFP16/cycle = 256 Bytes/cycle
  74. ^ a b = product first 3 table rows
  75. ^ a b = product of previous 2 table rows; shape: e.g. 8x8x4xFP16 = 512 Bytes
  76. ^ Sun, Wei; Li, Ang; Geng, Tong; Stuijk, Sander; Corporaal, Henk (2023). "Dissecting Tensor Cores via Microbenchmarks: Latency, Throughput and Numeric Behaviors". IEEE Transactions on Parallel and Distributed Systems. 34 (1): 246–261. arXiv:2206.02874. doi:10.1109/tpds.2022.3217824. S2CID 249431357.
  77. ^ "Parallel Thread Execution ISA Version 7.7".
  78. ^ Raihan, Md Aamir; Goli, Negar; Aamodt, Tor (2018). "Modeling Deep Learning Accelerator Enabled GPUs". arXiv:1811.08309 [cs.MS].
  79. ^ "NVIDIA Ada Lovelace Architecture".
  80. ^ a b Jia, Zhe; Maggioni, Marco; Smith, Jeffrey; Daniele Paolo Scarpazza (2019). "Dissecting the NVidia Turing T4 GPU via Microbenchmarking". arXiv:1903.07486 [cs.DC].
  81. ^ Burgess, John (2019). "RTX ON – The NVIDIA TURING GPU". 2019 IEEE Hot Chips 31 Symposium (HCS). pp. 1–27. doi:10.1109/HOTCHIPS.2019.8875651. ISBN 978-1-7281-2089-8. S2CID 204822166.
  82. ^ Burgess, John (2019). "RTX ON – The NVIDIA TURING GPU". 2019 IEEE Hot Chips 31 Symposium (HCS). pp. 1–27. doi:10.1109/HOTCHIPS.2019.8875651. ISBN 978-1-7281-2089-8. S2CID 204822166.
  83. ^ dependent on device
  84. ^ a b "Tegra X1". 9 January 2015.
  85. ^ NVIDIA H100 Tensor Core GPU Architecture
  86. ^ H.1. Features and Technical Specifications – Table 14. Technical Specifications per Compute Capability
  87. ^ NVIDIA Hopper Architecture In-Depth
  88. ^ can only execute 160 integer instructions according to programming guide
  89. ^ 128 according to [1]. 64 from FP32 + 64 separate units?
  90. ^ 64 by FP32 cores and 64 by flexible FP32/INT cores.
  91. ^ "CUDA C++ Programming Guide".
  92. ^ 32 FP32 lanes combine to 16 FP64 lanes. Maybe lower depending on model.
  93. ^ only supported by 16 FP32 lanes, they combine to 4 FP64 lanes
  94. ^ a b c d e f depending on model
  95. ^ Effective speed, probably over FP32 ports. No description of actual FP64 cores.
  96. ^ Can also be used for integer additions and comparisons
  97. ^ 2 clock cycles/instruction for each SM partition Burgess, John (2019). "RTX ON – The NVIDIA TURING GPU". 2019 IEEE Hot Chips 31 Symposium (HCS). pp. 1–27. doi:10.1109/HOTCHIPS.2019.8875651. ISBN 978-1-7281-2089-8. S2CID 204822166.
  98. ^ Durant, Luke; Giroux, Olivier; Harris, Mark; Stam, Nick (May 10, 2017). "Inside Volta: The World's Most Advanced Data Center GPU". Nvidia developer blog.
  99. ^ The schedulers and dispatchers have dedicated execution units unlike with Fermi and Kepler.
  100. ^ Dispatching can overlap concurrently, if it takes more than one cycle (when there are less execution units than 32/SM Partition)
  101. ^ Can dual issue MAD pipe and SFU pipe
  102. ^ No more than one scheduler can issue 2 instructions at once. The first scheduler is in charge of warps with odd IDs. The second scheduler is in charge of warps with even IDs.
  103. ^ a b c d e f shared memory separate, but L1 includes texture cache
  104. ^ "H.6.1. Architecture". docs.nvidia.com. Retrieved 2019-05-13.
  105. ^ "Demystifying GPU Microarchitecture through Microbenchmarking" (PDF).
  106. ^ a b Jia, Zhe; Maggioni, Marco; Staiger, Benjamin; Scarpazza, Daniele P. (2018). "Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking". arXiv:1804.06826 [cs.DC].
  107. ^ Jia, Zhe; Maggioni, Marco; Smith, Jeffrey; Daniele Paolo Scarpazza (2019). "Dissecting the NVidia Turing T4 GPU via Microbenchmarking". arXiv:1903.07486 [cs.DC].
  108. ^ "Dissecting the Ampere GPU Architecture through Microbenchmarking".
  109. ^ Note that Jia, Zhe; Maggioni, Marco; Smith, Jeffrey; Daniele Paolo Scarpazza (2019). "Dissecting the NVidia Turing T4 GPU via Microbenchmarking". arXiv:1903.07486 [cs.DC]. disagrees and states 2 KiB L0 instruction cache per SM partition and 16 KiB L1 instruction cache per SM
  110. ^ "asfermi Opcode". GitHub.
  111. ^ a b for access with texture engine only
  112. ^ 25% disabled on RTX 4060, RTX 4070, RTX 4070 Ti and RTX 4090
  113. ^ 25% disabled on RTX 5070 Ti and RTX 5090
  114. ^ "CUDA C++ Programming Guide, Compute Capabilities". docs.nvidia.com. Retrieved 2025-02-06.
  115. ^ "nVidia CUDA Bioinformatics: BarraCUDA". BioCentric. 2019-07-19. Retrieved 2019-10-15.
  116. ^ "Part V: Physics Simulation". NVIDIA Developer. Retrieved 2020-09-11.
  117. ^ "oneAPI Programming Model". oneAPI.io. Retrieved 2024-07-27.
  118. ^ "Specifications | oneAPI". oneAPI.io. Retrieved 2024-07-27.
  119. ^ "oneAPI Specification — oneAPI Specification 1.3-rev-1 documentation". oneapi-spec.uxlfoundation.org. Retrieved 2024-07-27.
  120. ^ "Exclusive: Behind the plot to break Nvidia's grip on AI by targeting software". Reuters. Retrieved 2024-04-05.
  121. ^ "Question: What does ROCm stand for? · Issue #1628 · RadeonOpenCompute/ROCm". Github.com. Retrieved January 18, 2022.
