Showing content from http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY__POOLS.html below:

CUDA Runtime API :: CUDA Toolkit Documentation

__host__ ​cudaError_t cudaFreeAsync ( void* devPtr, cudaStream_t hStream )

Frees memory with stream ordered semantics.

devPtr
- Memory to free
hStream
- The stream establishing the stream ordering promise

Inserts a free operation into hStream. The allocation must not be accessed after stream execution reaches the free. After this API returns, accessing the memory from any subsequent work launched on the GPU or querying its pointer attributes results in undefined behavior.

Note:

During stream capture, this function results in the creation of a free node and must therefore be passed the address of a graph allocation.

See also:

cuMemFreeAsync, cudaMallocAsync

__host__ ​cudaError_t cudaMallocAsync ( void** devPtr, size_t size, cudaStream_t hStream )

Allocates memory with stream ordered semantics.

devPtr
- Returned device pointer
size
- Number of bytes to allocate
hStream
- The stream establishing the stream ordering contract and the memory pool to allocate from

Inserts an allocation operation into hStream. A pointer to the allocated memory is returned immediately in *devPtr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the memory pool associated with the stream's device.

See also:

cuMemAllocAsync, cudaMallocAsync (C++ API), cudaMallocFromPoolAsync, cudaFreeAsync, cudaDeviceSetMemPool, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool, cudaMemPoolSetAccess, cudaMemPoolSetAttribute, cudaMemPoolGetAttribute
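
Taken together, cudaMallocAsync and cudaFreeAsync let allocation and deallocation be ordered on a stream alongside the work that uses the memory. A minimal sketch (error handling elided; the kernel name `scale` is a placeholder, not part of the API):

```cpp
#include <cuda_runtime.h>

__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void scaleOnStream(cudaStream_t stream, int n) {
    float* d = nullptr;
    // The allocation is ordered on the stream; d may only be used by
    // work that is stream-ordered after this point.
    cudaMallocAsync(&d, n * sizeof(float), stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d, n);
    // The free is also stream-ordered: it takes effect only after the
    // kernel above has completed on the stream.
    cudaFreeAsync(d, stream);
}
```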

__host__ ​cudaError_t cudaMallocFromPoolAsync ( void** ptr, size_t size, cudaMemPool_t memPool, cudaStream_t stream )

Allocates memory from a specified pool with stream ordered semantics.

ptr
- Returned device pointer
size
- Number of bytes to allocate
memPool
- The pool to allocate from
stream
- The stream establishing the stream ordering semantic

Inserts an allocation operation into stream. A pointer to the allocated memory is returned immediately in *ptr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the specified memory pool.

Note:

During stream capture, this function results in the creation of an allocation node. In this case, the allocation is owned by the graph instead of the memory pool. The memory pool's properties are used to set the node's creation parameters.

See also:

cuMemAllocFromPoolAsync, cudaMallocAsync (C++ API), cudaMallocAsync, cudaFreeAsync, cudaDeviceGetDefaultMemPool, cudaMemPoolCreate, cudaMemPoolSetAccess, cudaMemPoolSetAttribute
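
The usual flow for an explicit pool is to create it once and then allocate from it on any stream. A sketch assuming a pool backed by device 0 (error handling elided):

```cpp
#include <cuda_runtime.h>
#include <cstring>

void useExplicitPool(cudaStream_t stream, size_t bytes) {
    cudaMemPoolProps props;
    std::memset(&props, 0, sizeof(props));
    props.allocType     = cudaMemAllocationTypePinned;
    props.location.type = cudaMemLocationTypeDevice;
    props.location.id   = 0;  // backing device

    cudaMemPool_t pool;
    cudaMemPoolCreate(&pool, &props);

    void* p = nullptr;
    cudaMallocFromPoolAsync(&p, bytes, pool, stream);
    // ... enqueue work that uses p on the stream ...
    cudaFreeAsync(p, stream);

    // Destroying the pool is safe once its allocations have been freed.
    cudaStreamSynchronize(stream);
    cudaMemPoolDestroy(pool);
}
```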

__host__ ​cudaError_t cudaMemGetDefaultMemPool ( cudaMemPool_t* memPool, cudaMemLocation* location, cudaMemAllocationType type )

Returns the default memory pool for a given location and allocation type.

The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.

__host__ ​cudaError_t cudaMemGetMemPool ( cudaMemPool_t* memPool, cudaMemLocation* location, cudaMemAllocationType type )

Gets the current memory pool for a given memory location and allocation type.

The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.

Returns the last pool provided to cudaMemSetMemPool or cudaDeviceSetMemPool for this location and allocation type, or the location's default memory pool if cudaMemSetMemPool or cudaDeviceSetMemPool has never been called for that allocation type and location. By default, the current mempool of a location is its default mempool, which can be obtained via cudaMemGetDefaultMemPool; otherwise the returned pool must have been set with cudaMemSetMemPool or cudaDeviceSetMemPool.

__host__ ​cudaError_t cudaMemPoolCreate ( cudaMemPool_t* memPool, const cudaMemPoolProps* poolProps )

Creates a memory pool.

Creates a CUDA memory pool and returns the handle in pool. The poolProps determines the properties of the pool such as the backing device and IPC capabilities.

To create a memory pool for host memory not targeting a specific NUMA node, applications must set cudaMemPoolProps::cudaMemLocation::type to cudaMemLocationTypeHost. cudaMemPoolProps::cudaMemLocation::id is ignored for such pools. Pools created with the type cudaMemLocationTypeHost are not IPC capable, and cudaMemPoolProps::handleTypes must be 0; any other value will result in cudaErrorInvalidValue.

To create a memory pool targeting a specific host NUMA node, applications must set cudaMemPoolProps::cudaMemLocation::type to cudaMemLocationTypeHostNuma, and cudaMemPoolProps::cudaMemLocation::id must specify the NUMA ID of the host memory node. Specifying cudaMemLocationTypeHostNumaCurrent as the cudaMemPoolProps::cudaMemLocation::type will result in cudaErrorInvalidValue.

By default, the pool's memory will be accessible from the device it is allocated on. For pools created with cudaMemLocationTypeHostNuma or cudaMemLocationTypeHost, the default accessibility is from the host CPU. Applications can control the maximum size of the pool by specifying a non-zero value for cudaMemPoolProps::maxSize. If set to 0, the maximum size of the pool defaults to a system-dependent value.

Applications that intend to use CU_MEM_HANDLE_TYPE_FABRIC based memory sharing must ensure: (1) the `nvidia-caps-imex-channels` character device is created by the driver and is listed under /proc/devices, and (2) at least one IMEX channel file is accessible by the user launching the application.

When exporter and importer CUDA processes have been granted access to the same IMEX channel, they can securely share memory.

The IMEX channel security model works on a per-user basis: all processes under a user can share memory if the user has access to a valid IMEX channel. When multi-user isolation is desired, a separate IMEX channel is required for each user.

These channel files exist in /dev/nvidia-caps-imex-channels/channel* and can be created using standard OS native calls such as mknod on Linux. For example, to create channel0 with the major number from /proc/devices, users can execute the following command: `mknod /dev/nvidia-caps-imex-channels/channel0 c <major number> 0`

Note:

Specifying cudaMemHandleTypeNone creates a memory pool that will not support IPC.

See also:

cuMemPoolCreate, cudaDeviceSetMemPool, cudaMallocFromPoolAsync, cudaMemPoolExportToShareableHandle, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool
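
As described above, a host-memory pool not tied to a NUMA node sets the location type to cudaMemLocationTypeHost and leaves handleTypes at 0. A sketch (error handling elided; assumes a toolkit recent enough to expose host-location pools and cudaMemPoolProps::maxSize):

```cpp
#include <cuda_runtime.h>
#include <cstring>

cudaMemPool_t createHostPool() {
    cudaMemPoolProps props;
    std::memset(&props, 0, sizeof(props));
    props.allocType     = cudaMemAllocationTypePinned;
    props.location.type = cudaMemLocationTypeHost;   // location.id is ignored
    props.handleTypes   = cudaMemHandleTypeNone;     // host pools are not IPC capable
    props.maxSize       = 0;  // 0 selects a system-dependent maximum

    cudaMemPool_t pool;
    cudaMemPoolCreate(&pool, &props);
    return pool;
}
```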

__host__ ​cudaError_t cudaMemPoolDestroy ( cudaMemPool_t memPool )

Destroys the specified memory pool.

__host__ ​cudaError_t cudaMemPoolExportPointer ( cudaMemPoolPtrExportData* exportData, void* ptr )

Export data to share a memory pool allocation between processes.

exportData
- Returned export data
ptr
- pointer to memory being exported
__host__ ​cudaError_t cudaMemPoolExportToShareableHandle ( void* shareableHandle, cudaMemPool_t memPool, cudaMemAllocationHandleType handleType, unsigned int  flags )

Exports a memory pool to the requested handle type.

shareableHandle
- Pointer to the location in which to store the requested handle
memPool
- The pool to export
handleType
- the type of handle to create
flags
- must be 0
__host__ ​cudaError_t cudaMemPoolGetAccess ( cudaMemAccessFlags ** flags, cudaMemPool_t memPool, cudaMemLocation* location )

Returns the accessibility of a pool from a device.

flags
- the accessibility of the pool from the specified location
memPool
- the pool being queried
location
- the location accessing the pool
__host__ ​cudaError_t cudaMemPoolGetAttribute ( cudaMemPool_t memPool, cudaMemPoolAttr attr, void* value )

Gets attributes of a memory pool.

memPool
- The memory pool to get attributes of
attr
- The attribute to get
value
- Retrieved value
__host__ ​cudaError_t cudaMemPoolImportFromShareableHandle ( cudaMemPool_t* memPool, void* shareableHandle, cudaMemAllocationHandleType handleType, unsigned int  flags )

Imports a memory pool from a shareable handle.

memPool
- Returned memory pool
shareableHandle
- OS handle of the pool to import
handleType
- The type of handle being imported
flags
- must be 0
__host__ ​cudaError_t cudaMemPoolImportPointer ( void** ptr, cudaMemPool_t memPool, cudaMemPoolPtrExportData* exportData )

Import a memory pool allocation from another process.

ptr
- Pointer to the imported memory
memPool
- The pool from which to import
exportData
- Data specifying the memory to import

__host__ ​cudaError_t cudaMemPoolSetAccess ( cudaMemPool_t memPool, const cudaMemAccessDesc* descList, size_t count )

Controls visibility of pools between devices.

memPool
- The pool being modified
descList
- Array of access descriptors; each descriptor instructs the access to enable for a single location
count
- Number of descriptors in the map array.
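
For example, to let a peer device read and write allocations from a pool created on another device, one access descriptor is passed in per peer. A sketch assuming peer access between the devices involved is supported:

```cpp
#include <cuda_runtime.h>

void enablePeerAccess(cudaMemPool_t pool, int peerDevice) {
    cudaMemAccessDesc desc;
    desc.location.type = cudaMemLocationTypeDevice;
    desc.location.id   = peerDevice;  // device gaining access
    desc.flags         = cudaMemAccessFlagsProtReadWrite;
    // Grants peerDevice read/write access to allocations in the pool.
    cudaMemPoolSetAccess(pool, &desc, 1);
}
```
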
__host__ ​cudaError_t cudaMemPoolSetAttribute ( cudaMemPool_t memPool, cudaMemPoolAttr attr, void* value )

Sets attributes of a memory pool.

memPool
- The memory pool to modify
attr
- The attribute to modify
value
- Pointer to the value to assign
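
A common use of cudaMemPoolSetAttribute is raising the pool's release threshold so freed memory is retained across synchronization points instead of being returned to the OS. A sketch using the device's default pool (error handling elided):

```cpp
#include <cuda_runtime.h>
#include <cstdint>

void keepPoolMemory(int device) {
    cudaMemPool_t pool;
    cudaDeviceGetDefaultMemPool(&pool, device);
    // Hold on to up to 64 MiB of freed memory rather than releasing
    // it back to the OS at stream synchronization points.
    uint64_t threshold = 64ull * 1024 * 1024;
    cudaMemPoolSetAttribute(pool, cudaMemPoolAttrReleaseThreshold, &threshold);
}
```
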
__host__ ​cudaError_t cudaMemPoolTrimTo ( cudaMemPool_t memPool, size_t minBytesToKeep )

Tries to release memory back to the OS.

memPool
- The memory pool to trim
minBytesToKeep
- If the pool has less than minBytesToKeep reserved, the TrimTo operation is a no-op. Otherwise the pool will be guaranteed to have at least minBytesToKeep bytes reserved after the operation.

Releases memory back to the OS until the pool contains fewer than minBytesToKeep reserved bytes, or there is no more memory that the allocator can safely release. The allocator cannot release OS allocations that back outstanding asynchronous allocations. The OS allocations may happen at different granularity from the user allocations.

See also:

cuMemPoolTrimTo, cudaMallocAsync, cudaFreeAsync, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool, cudaMemPoolCreate
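
For instance, after a burst of temporary allocations an application might shrink the pool back down while keeping a working set resident. A sketch:

```cpp
#include <cuda_runtime.h>

void shrinkPool(cudaMemPool_t pool) {
    // Release reserved-but-unused memory to the OS, keeping at most
    // 16 MiB resident; a no-op if less than that is already reserved.
    cudaMemPoolTrimTo(pool, 16ull * 1024 * 1024);
}
```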

__host__ ​cudaError_t cudaMemSetMemPool ( cudaMemLocation* location, cudaMemAllocationType type, cudaMemPool_t memPool )

Sets the current memory pool for a memory location and allocation type.

The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.

When a memory pool is set as the current memory pool, the location parameter should be the same as the location of the pool; if the location type or index do not match, the call returns cudaErrorInvalidValue. The type of the memory pool should also match the type parameter; otherwise the call returns cudaErrorInvalidValue. By default, a memory location's current memory pool is its default memory pool. If the location type is cudaMemLocationTypeDevice and the allocation type is cudaMemAllocationTypePinned, this API is equivalent to calling cudaDeviceSetMemPool with the location id as the device. For further details on the implications, refer to the documentation for cudaDeviceSetMemPool.
