Frees memory with stream ordered semantics.
Inserts a free operation into hStream. The allocation must not be accessed after stream execution reaches the free. After this API returns, accessing the memory from any subsequent work launched on the GPU or querying its pointer attributes results in undefined behavior.
Note: During stream capture, this function results in the creation of a free node and must therefore be passed the address of a graph allocation.
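For illustration, a minimal sketch of a stream-ordered allocate/use/free sequence (error checking omitted; `useData` is a placeholder kernel, not part of this API):

```cpp
__global__ void useData(float *buf, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 2.0f * i;   // placeholder work
}

void streamOrderedFreeExample()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    float *d_buf = nullptr;
    cudaMallocAsync(reinterpret_cast<void **>(&d_buf), 1024 * sizeof(float), stream);

    // The kernel is ordered before the free in the same stream, so it may
    // safely access the allocation.
    useData<<<4, 256, 0, stream>>>(d_buf, 1024);

    // Work submitted to `stream` after this point must not touch d_buf.
    cudaFreeAsync(d_buf, stream);

    cudaStreamSynchronize(stream);  // the free has completed once this returns
    cudaStreamDestroy(stream);
}
```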
See also:
Allocates memory with stream ordered semantics.
Inserts an allocation operation into hStream. A pointer to the allocated memory is returned immediately in *dptr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the memory pool associated with the stream's device.
Note: The default memory pool of a device contains device memory from that device.
Basic stream ordering allows future work submitted into the same stream to use the allocation. Stream query, stream synchronize, and CUDA events can be used to guarantee that the allocation operation completes before work submitted in a separate stream runs.
During stream capture, this function results in the creation of an allocation node. In this case, the allocation is owned by the graph instead of the memory pool. The memory pool's properties are used to set the node's creation parameters.
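As a hedged sketch of the cross-stream case described above (error checking omitted; `initKernel` is a placeholder kernel): work in a second stream waits on an event recorded after the allocation, which guarantees the allocation operation has completed before that work runs.

```cpp
__global__ void initKernel(int *data, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = 0;
}

void crossStreamAllocExample()
{
    const size_t n = 1 << 20;
    cudaStream_t allocStream, workStream;
    cudaEvent_t allocDone;
    cudaStreamCreate(&allocStream);
    cudaStreamCreate(&workStream);
    cudaEventCreate(&allocDone);

    int *d_data = nullptr;
    cudaMallocAsync(reinterpret_cast<void **>(&d_data), n * sizeof(int), allocStream);
    cudaEventRecord(allocDone, allocStream);

    // The wait guarantees the allocation has completed before initKernel
    // runs in the other stream.
    cudaStreamWaitEvent(workStream, allocDone, 0);
    initKernel<<<(n + 255) / 256, 256, 0, workStream>>>(d_data, n);

    // Freeing in workStream orders the free after initKernel.
    cudaFreeAsync(d_data, workStream);

    cudaStreamSynchronize(workStream);
    cudaEventDestroy(allocDone);
    cudaStreamDestroy(allocStream);
    cudaStreamDestroy(workStream);
}
```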
See also:
cuMemAllocAsync, cudaMallocAsync ( C++ API), cudaMallocFromPoolAsync, cudaFreeAsync, cudaDeviceSetMemPool, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool, cudaMemPoolSetAccess, cudaMemPoolSetAttribute, cudaMemPoolGetAttribute
Allocates memory from a specified pool with stream ordered semantics.
Inserts an allocation operation into hStream. A pointer to the allocated memory is returned immediately in *dptr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the specified memory pool.
Note: The specified memory pool may be from a device different from that of the specified hStream.
Basic stream ordering allows future work submitted into the same stream to use the allocation. Stream query, stream synchronize, and CUDA events can be used to guarantee that the allocation operation completes before work submitted in a separate stream runs.
During stream capture, this function results in the creation of an allocation node. In this case, the allocation is owned by the graph instead of the memory pool. The memory pool's properties are used to set the node's creation parameters.
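A minimal sketch of allocating from an explicit pool (the pool is assumed to have been created earlier, e.g. with cudaMemPoolCreate; error checking omitted):

```cpp
void allocFromPoolExample(cudaMemPool_t pool, cudaStream_t stream)
{
    void *d_ptr = nullptr;

    // The allocation is drawn from `pool` rather than from the current pool
    // of the stream's device.
    cudaMallocFromPoolAsync(&d_ptr, 1 << 20, pool, stream);

    // ... enqueue work in `stream` that uses d_ptr ...

    // Stream-ordered free returns the memory to the pool it came from.
    cudaFreeAsync(d_ptr, stream);
}
```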
See also:
cuMemAllocFromPoolAsync, cudaMallocAsync ( C++ API), cudaMallocAsync, cudaFreeAsync, cudaDeviceGetDefaultMemPool, cudaMemPoolCreate, cudaMemPoolSetAccess, cudaMemPoolSetAttribute
Returns the default memory pool for a given location and allocation type.
The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.
Gets the current memory pool for a given memory location and allocation type.
The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.
Returns the last pool provided to cudaMemSetMemPool or cudaDeviceSetMemPool for this location and allocation type, or the location's default memory pool if cudaMemSetMemPool or cudaDeviceSetMemPool has never been called for that allocation type and location. By default, the current memory pool of a location is the default memory pool for a device, which can be obtained via cudaMemGetDefaultMemPool; otherwise, the returned pool must have been set with cudaDeviceSetMemPool.
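For the common device-location case, the same information can be queried through the device-oriented entry points (a minimal sketch; error checking omitted):

```cpp
void queryDevicePoolsExample(int device)
{
    cudaMemPool_t defaultPool, currentPool;

    // Default pool backing device-memory allocations on `device`.
    cudaDeviceGetDefaultMemPool(&defaultPool, device);

    // Current pool used by cudaMallocAsync on `device`; this equals the
    // default pool unless a different pool was set with cudaDeviceSetMemPool.
    cudaDeviceGetMemPool(&currentPool, device);
}
```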
Creates a memory pool.
Creates a CUDA memory pool and returns the handle in pool. The poolProps determines the properties of the pool such as the backing device and IPC capabilities.
To create a memory pool for host memory not targeting a specific NUMA node, applications must set cudaMemPoolProps::cudaMemLocation::type to cudaMemLocationTypeHost; cudaMemPoolProps::cudaMemLocation::id is ignored for such pools. Pools created with the type cudaMemLocationTypeHost are not IPC capable, and cudaMemPoolProps::handleTypes must be 0; any other value will result in cudaErrorInvalidValue. To create a memory pool targeting a specific host NUMA node, applications must set cudaMemPoolProps::cudaMemLocation::type to cudaMemLocationTypeHostNuma, and cudaMemPoolProps::cudaMemLocation::id must specify the NUMA ID of the host memory node. Specifying cudaMemLocationTypeHostNumaCurrent as the cudaMemPoolProps::cudaMemLocation::type will result in cudaErrorInvalidValue.
By default, the pool's memory will be accessible from the device it is allocated on. In the case of pools created with cudaMemLocationTypeHostNuma or cudaMemLocationTypeHost, the default accessibility is from the host CPU. Applications can control the maximum size of the pool by specifying a non-zero value for cudaMemPoolProps::maxSize. If set to 0, the maximum size of the pool will default to a system-dependent value.
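A minimal sketch of creating an explicit device pool and a host pool as described above (error checking omitted):

```cpp
void createPoolsExample()
{
    // Device pool on device 0, no IPC support, system-dependent maximum size.
    cudaMemPoolProps devProps = {};
    devProps.allocType     = cudaMemAllocationTypePinned;
    devProps.handleTypes   = cudaMemHandleTypeNone;
    devProps.location.type = cudaMemLocationTypeDevice;
    devProps.location.id   = 0;          // device ordinal
    devProps.maxSize       = 0;          // 0 = system-dependent default

    cudaMemPool_t devPool;
    cudaMemPoolCreate(&devPool, &devProps);

    // Host pool not targeting a specific NUMA node: handleTypes must be 0,
    // and location.id is ignored.
    cudaMemPoolProps hostProps = {};
    hostProps.allocType     = cudaMemAllocationTypePinned;
    hostProps.handleTypes   = cudaMemHandleTypeNone;
    hostProps.location.type = cudaMemLocationTypeHost;

    cudaMemPool_t hostPool;
    cudaMemPoolCreate(&hostPool, &hostProps);
}
```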
Applications that intend to use CU_MEM_HANDLE_TYPE_FABRIC based memory sharing must ensure that: (1) the `nvidia-caps-imex-channels` character device is created by the driver and is listed under /proc/devices, and (2) the user launching the application has access to at least one IMEX channel file.
When exporter and importer CUDA processes have been granted access to the same IMEX channel, they can securely share memory.
The IMEX channel security model works on a per-user basis: all processes under a user can share memory if the user has access to a valid IMEX channel. When multi-user isolation is desired, a separate IMEX channel is required for each user.
These channel files exist in /dev/nvidia-caps-imex-channels/channel* and can be created using standard OS native calls like mknod on Linux. For example, to create channel0 with the major number from /proc/devices, users can execute the following command: `mknod /dev/nvidia-caps-imex-channels/channel0 c <major number> 0`
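For illustration only, a hedged sketch of requesting fabric-based sharing at pool creation. It assumes the runtime enum cudaMemHandleTypeFabric (the runtime counterpart of CU_MEM_HANDLE_TYPE_FABRIC) and a system configured with an accessible IMEX channel as described above.

```cpp
void createFabricPoolExample()
{
    cudaMemPoolProps props = {};
    props.allocType     = cudaMemAllocationTypePinned;
    props.handleTypes   = cudaMemHandleTypeFabric;   // fabric-based sharing (assumed enum)
    props.location.type = cudaMemLocationTypeDevice;
    props.location.id   = 0;

    cudaMemPool_t fabricPool;
    // Creation may fail if the system is not IMEX-capable or no IMEX channel
    // is accessible to the calling user.
    cudaError_t err = cudaMemPoolCreate(&fabricPool, &props);
    (void)err;
}
```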
Note: Specifying cudaMemHandleTypeNone creates a memory pool that will not support IPC.
See also:
cuMemPoolCreate, cudaDeviceSetMemPool, cudaMallocFromPoolAsync, cudaMemPoolExportToShareableHandle, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool
Destroys the specified memory pool.
Export data to share a memory pool allocation between processes.
Exports a memory pool to the requested handle type.
Returns the accessibility of a pool from a device.
Gets attributes of a memory pool.
Imports a memory pool from a shared handle.
Import a memory pool allocation from another process.
Controls visibility of pools between devices.
Sets attributes of a memory pool.
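For illustration, a minimal sketch combining the two calls above: enabling access to a pool's allocations from a peer device, and raising the pool's release threshold so freed memory stays cached in the pool (error checking omitted).

```cpp
void configurePoolExample(cudaMemPool_t pool, int peerDevice)
{
    // Allow allocations from `pool` to be read and written from peerDevice.
    cudaMemAccessDesc accessDesc = {};
    accessDesc.location.type = cudaMemLocationTypeDevice;
    accessDesc.location.id   = peerDevice;
    accessDesc.flags         = cudaMemAccessFlagsProtReadWrite;
    cudaMemPoolSetAccess(pool, &accessDesc, 1);

    // Keep up to 64 MiB of freed memory cached in the pool instead of
    // returning it to the OS at stream synchronization points.
    unsigned long long threshold = 64ull << 20;
    cudaMemPoolSetAttribute(pool, cudaMemPoolAttrReleaseThreshold, &threshold);
}
```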
Tries to release memory back to the OS.
Releases memory back to the OS until the pool contains fewer than minBytesToKeep reserved bytes, or there is no more memory that the allocator can safely release. The allocator cannot release OS allocations that back outstanding asynchronous allocations. The OS allocations may happen at a different granularity from the user allocations.
Note: Allocations that have not been freed count as outstanding. Allocations that have been asynchronously freed but whose completion has not been observed on the host (e.g. by a synchronize) can count as outstanding.
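A minimal sketch (error checking omitted): after asynchronously freed allocations have been observed complete on the host, trim the device's default pool so that at most `bytesToKeep` reserved bytes remain.

```cpp
void trimPoolExample(cudaStream_t stream, int device, size_t bytesToKeep)
{
    // Make the asynchronous frees in `stream` observable on the host so the
    // allocator no longer counts them as outstanding.
    cudaStreamSynchronize(stream);

    cudaMemPool_t pool;
    cudaDeviceGetDefaultMemPool(&pool, device);

    // Release reserved memory back to the OS until at most bytesToKeep
    // reserved bytes remain, or nothing more can safely be released.
    cudaMemPoolTrimTo(pool, bytesToKeep);
}
```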
See also:
cuMemPoolTrimTo, cudaMallocAsync, cudaFreeAsync, cudaDeviceGetDefaultMemPool, cudaDeviceGetMemPool, cudaMemPoolCreate
Sets the current memory pool for a memory location and allocation type.
The memory location can be one of cudaMemLocationTypeDevice, cudaMemLocationTypeHost, or cudaMemLocationTypeHostNuma. The allocation type can be one of cudaMemAllocationTypePinned or cudaMemAllocationTypeManaged. When the allocation type is cudaMemAllocationTypeManaged, the location type can also be cudaMemLocationTypeNone to indicate no preferred location for the managed memory pool. In all other cases, the call returns cudaErrorInvalidValue.
When a memory pool is set as the current memory pool, the location parameter should be the same as the location of the pool; if the location type or index does not match, the call returns cudaErrorInvalidValue. The type of the memory pool should also match the allocType parameter; otherwise the call returns cudaErrorInvalidValue. By default, a memory location's current memory pool is its default memory pool. If the location type is cudaMemLocationTypeDevice and the allocation type is cudaMemAllocationTypePinned, this API is the equivalent of calling cudaDeviceSetMemPool with the location id as the device. For further details on the implications, refer to the documentation for cudaDeviceSetMemPool.
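For the device/pinned case mentioned above, the effect can be illustrated with the documented device-oriented equivalent (a minimal sketch; `pool` is assumed to be a pool created on `device`):

```cpp
void setCurrentPoolExample(int device, cudaMemPool_t pool)
{
    // Equivalent of setting the current pool for the location
    // {cudaMemLocationTypeDevice, device} with cudaMemAllocationTypePinned:
    // subsequent cudaMallocAsync calls on streams of `device` draw from `pool`.
    cudaDeviceSetMemPool(device, pool);
}
```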