Device has ECC support enabled
The maximum value of cudaAccessPolicyWindow::num_bytes.
Number of asynchronous engines
Device can map host memory with cudaHostAlloc/cudaHostGetDevicePointer
Device can access host registered memory at the same virtual address as the CPU
Indicates device supports cluster launch
Device supports Compute Preemption
Device can possibly execute multiple kernels concurrently
Device can coherently access managed memory concurrently with the CPU
Device supports launching cooperative kernels via cudaLaunchCooperativeKernel
1 if the device supports deferred mapping CUDA arrays and CUDA mipmapped arrays
NUMA configuration of a device: value is of type cudaDeviceNumaConfig enum
NUMA node ID of the GPU memory
Host can directly access managed memory on the device without migration
Device supports caching globals in L1
Bitmask to be interpreted according to the cudaFlushGPUDirectRDMAWritesOptions enum
1 if the device supports GPUDirect RDMA APIs, 0 otherwise
See the cudaGPUDirectRDMAWritesOrdering enum for numerical values
The combined 16-bit PCI device ID and 16-bit PCI vendor ID
The combined 16-bit PCI subsystem ID and 16-bit PCI subsystem vendor ID
Link between the device and the host supports native atomic operations
NUMA ID of the host node closest to the device or -1 when system does not support NUMA
1 if the device supports HostNuma location IPC between nodes in a multi-node system
Device supports using the cudaHostRegister flag cudaHostRegisterReadOnly to register memory that must be mapped as read-only to the GPU
Device supports host memory registration via cudaHostRegister
Device is integrated as opposed to discrete
Device supports IPC Events
Device is on a multi-GPU board
Size of L2 cache in bytes
Device supports caching locals in L1
8-byte locally unique identifier. Value is undefined on TCC and non-Windows platforms
LUID device node mask. Value is undefined on TCC and non-Windows platforms
Major compute capability
Device supports allocating managed memory on this system
Maximum number of resident blocks per multiprocessor
Maximum size of each dimension of a grid
Maximum 1D surface size
Maximum 1D layered surface dimensions
Maximum 2D surface dimensions
Maximum 2D layered surface dimensions
Maximum 3D surface dimensions
Maximum Cubemap surface dimensions
Maximum Cubemap layered surface dimensions
Maximum 1D texture size
Maximum 1D layered texture dimensions
Maximum 1D mipmapped texture size
Maximum 2D texture dimensions
Maximum 2D texture dimensions if texture gather operations have to be performed
Maximum 2D layered texture dimensions
Maximum dimensions (width, height, pitch) for 2D textures bound to pitched memory
Maximum 2D mipmapped texture dimensions
Maximum 3D texture dimensions
Maximum alternate 3D texture dimensions
Maximum Cubemap texture dimensions
Maximum Cubemap layered texture dimensions
Maximum size of each dimension of a block
Maximum number of threads per block
Maximum resident threads per multiprocessor
Maximum pitch in bytes allowed by memory copies
Global memory bus width in bits
Bitmask of handle types supported with mempool-based IPC
1 if the device supports using the cudaMallocAsync and cudaMemPool family of APIs, 0 otherwise
Minor compute capability
Indicates if contexts created on this device will be shared via MPS
Unique identifier for a group of devices on the same multi-GPU board
Number of multiprocessors on device
ASCII string identifying device
Device supports coherently accessing pageable memory without calling cudaHostRegister on it
Device accesses pageable memory via the host's page tables
PCI bus ID of the device
PCI device ID of the device
PCI domain ID of the device
Device's maximum L2 persisting lines capacity setting in bytes
32-bit registers available per block
32-bit registers available per multiprocessor
Reserved for future use
Shared memory reserved by CUDA driver per block in bytes
Shared memory available per block in bytes
Per-device maximum shared memory per block usable by special opt-in
Shared memory available per multiprocessor in bytes
1 if the device supports sparse CUDA arrays and sparse CUDA mipmapped arrays, 0 otherwise
Device supports stream priorities
Alignment requirements for surfaces
1 if device is a Tesla device using TCC driver, 0 otherwise
Alignment requirement for textures
Pitch alignment requirement for texture references bound to pitched memory
External timeline semaphore interop is supported on the device
Constant memory available on device in bytes
Global memory available on device in bytes
Device shares a unified address space with the host
Indicates device supports unified pointers
16-byte unique identifier
Warp size in threads
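
The attributes above are members of the cudaDeviceProp structure filled in by cudaGetDeviceProperties. A minimal sketch that enumerates the devices on a system and prints a few of these fields (all field names below are real cudaDeviceProp members; the output naturally depends on the hardware present):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // "ASCII string identifying device" and the compute capability pair
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        // "Global memory available on device in bytes"
        printf("  Global memory:      %zu bytes\n", prop.totalGlobalMem);
        // "Number of multiprocessors on device" / "Warp size in threads"
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("  Warp size:          %d\n", prop.warpSize);
        // Boolean-style capability flags from the list above
        printf("  Concurrent kernels: %s\n", prop.concurrentKernels ? "yes" : "no");
        printf("  Unified addressing: %s\n", prop.unifiedAddressing ? "yes" : "no");
    }
    return 0;
}
```

Compile with nvcc; no kernel launch is needed, so this runs even when the device is busy.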