CUDA Driver API :: CUDA Toolkit Documentation

CUresult cuKernelGetAttribute ( int* pi, CUfunction_attribute attrib, CUkernel kernel, CUdevice dev )

Returns information about a kernel.

pi
- Returned attribute value
attrib
- Attribute requested
kernel
- Kernel to query attribute of
dev
- Device to query attribute of

Returns in *pi the integer value of the attribute attrib for the kernel kernel for the requested device dev. The supported attributes are:

Note:

If another thread is trying to set the same attribute on the same device using cuKernelSetAttribute() simultaneously, the attribute query will give the old or new value depending on the interleavings chosen by the OS scheduler and memory consistency.

See also:

cuLibraryLoadData, cuLibraryLoadFromFile, cuLibraryUnload, cuKernelSetAttribute, cuLibraryGetKernel, cuLaunchKernel, cuKernelGetFunction, cuLibraryGetModule, cuModuleGetFunction, cuFuncGetAttribute
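
As a rough illustration (not part of the documentation above), the sketch below queries CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK for a kernel on device 0. The library handle is assumed to come from cuLibraryLoadData() or cuLibraryLoadFromFile(), and the kernel name "myKernel" is a placeholder.

    #include <cuda.h>
    #include <stdio.h>

    /* Sketch: query the maximum threads per block for a kernel on device 0.
     * Assumes cuInit() has already been called and `lib` is a loaded library;
     * "myKernel" is a placeholder kernel name. */
    void queryMaxThreads(CUlibrary lib)
    {
        CUdevice dev;
        CUkernel kernel;
        int maxThreads = 0;

        cuDeviceGet(&dev, 0);
        cuLibraryGetKernel(&kernel, lib, "myKernel");
        cuKernelGetAttribute(&maxThreads, CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK,
                             kernel, dev);
        printf("Max threads per block: %d\n", maxThreads);
    }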

CUresult cuKernelGetFunction ( CUfunction* pFunc, CUkernel kernel )

Returns a function handle.

pFunc
- Returned function handle
kernel
- Kernel to retrieve function for the requested context
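
A minimal sketch, assuming a current context is active, of resolving a CUkernel to its context-specific CUfunction; the follow-up cuFuncGetAttribute() call is illustrative only.

    #include <cuda.h>
    #include <stdio.h>

    /* Sketch: resolve the context-specific CUfunction behind a CUkernel handle
     * and query a per-context property through the CUfunction APIs. */
    void resolveFunction(CUkernel kernel)
    {
        CUfunction func;
        if (cuKernelGetFunction(&func, kernel) == CUDA_SUCCESS) {
            int numRegs = 0;
            cuFuncGetAttribute(&numRegs, CU_FUNC_ATTRIBUTE_NUM_REGS, func);
            printf("Kernel uses %d registers in the current context\n", numRegs);
        }
    }
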
CUresult cuKernelGetLibrary ( CUlibrary* pLib, CUkernel kernel )

Returns a library handle.

pLib
- Returned library handle
kernel
- Kernel to retrieve the library handle for

CUresult cuKernelGetName ( const char** name, CUkernel hfunc )

Returns the function name for a CUkernel handle.

name
- The returned name of the function
hfunc
- The function handle to retrieve the name for

Returns in **name the function name associated with the kernel handle hfunc. The function name is returned as a null-terminated string. The returned name is only valid when the kernel handle is valid. If the library is unloaded or reloaded, one must call the API again to get the updated name. This API may return a mangled name if the function is not declared as having C linkage. If either **name or hfunc is NULL, CUDA_ERROR_INVALID_VALUE is returned.

Note:

Note that this function may also return error codes from previous, asynchronous launches.
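
A small sketch that prints the (possibly mangled) name behind a kernel handle; the surrounding setup is assumed.

    #include <cuda.h>
    #include <stdio.h>

    /* Sketch: print the name associated with a kernel handle. The returned
     * string is owned by the driver and is valid only while the handle is. */
    void printKernelName(CUkernel kernel)
    {
        const char* name = NULL;
        if (cuKernelGetName(&name, kernel) == CUDA_SUCCESS) {
            printf("Kernel name: %s\n", name);
        }
    }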

CUresult cuKernelGetParamInfo ( CUkernel kernel, size_t paramIndex, size_t* paramOffset, size_t* paramSize )

Returns the offset and size of a kernel parameter in the device-side parameter layout.

kernel
- The kernel to query
paramIndex
- The parameter index to query
paramOffset
- Returns the offset into the device-side parameter layout at which the parameter resides
paramSize
- Optionally returns the size of the parameter in the device-side parameter layout

Queries the kernel parameter at paramIndex into kernel's list of parameters, and returns in paramOffset and paramSize the offset and size, respectively, where the parameter will reside in the device-side parameter layout. This information can be used to update kernel node parameters from the device via cudaGraphKernelNodeSetParam() and cudaGraphKernelNodeUpdatesApply(). paramIndex must be less than the number of parameters that kernel takes. paramSize can be set to NULL if only the parameter offset is desired.

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cuFuncGetParamInfo
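
A hedged sketch that walks a kernel's parameters and prints each device-side offset and size. The parameter count numParams is assumed to be known by the caller, since this API does not report it; the loop also stops on the first error.

    #include <cuda.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Sketch: dump the device-side parameter layout of a kernel. */
    void dumpParamLayout(CUkernel kernel, size_t numParams)
    {
        for (size_t i = 0; i < numParams; ++i) {
            size_t offset = 0, size = 0;
            if (cuKernelGetParamInfo(kernel, i, &offset, &size) != CUDA_SUCCESS)
                break;
            printf("param %zu: offset=%zu size=%zu\n", i, offset, size);
        }
    }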

CUresult cuKernelSetAttribute ( CUfunction_attribute attrib, int  val, CUkernel kernel, CUdevice dev )

Sets information about a kernel.

attrib
- Attribute requested
val
- Value to set
kernel
- Kernel to set attribute of
dev
- Device to set attribute of

This call sets the value of a specified attribute attrib on the kernel kernel for the requested device dev to an integer value specified by val. This function returns CUDA_SUCCESS if the new value of the attribute could be successfully set. If the set fails, this call will return an error. Not all attributes can have values set. Attempting to set a value on a read-only attribute will result in an error (CUDA_ERROR_INVALID_VALUE).

Note that attributes set using cuFuncSetAttribute() will override the attribute set by this API irrespective of whether the call to cuFuncSetAttribute() is made before or after this API call. However, cuKernelGetAttribute() will always return the attribute value set by this API.

Supported attributes are:

Note:

The API has stricter locking requirements in comparison to its legacy counterpart cuFuncSetAttribute() due to device-wide semantics. If multiple threads are trying to set the same attribute on the same device simultaneously, the attribute setting will depend on the interleavings chosen by the OS scheduler and memory consistency.

See also:

cuLibraryLoadData, cuLibraryLoadFromFile, cuLibraryUnload, cuKernelGetAttribute, cuLibraryGetKernel, cuLaunchKernel, cuKernelGetFunction, cuLibraryGetModule, cuModuleGetFunction, cuFuncSetAttribute
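
A sketch of a typical use: raising the dynamic shared memory limit for a kernel on device 0 before a launch that needs it. The 64 KiB value and the device ordinal are placeholders.

    #include <cuda.h>

    /* Sketch: allow up to 64 KiB of dynamic shared memory for this kernel
     * on device 0. The limit chosen here is illustrative only. */
    CUresult allowLargeSharedMem(CUkernel kernel)
    {
        CUdevice dev;
        CUresult rc = cuDeviceGet(&dev, 0);
        if (rc != CUDA_SUCCESS)
            return rc;
        return cuKernelSetAttribute(CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
                                    64 * 1024, kernel, dev);
    }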

CUresult cuKernelSetCacheConfig ( CUkernel kernel, CUfunc_cache config, CUdevice dev )

Sets the preferred cache configuration for a device kernel.

kernel
- Kernel to configure cache for
config
- Requested cache configuration
dev
- Device to set attribute of

On devices where the L1 cache and shared memory use the same hardware resources, this sets through config the preferred cache configuration for the device kernel kernel on the requested device dev. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute kernel. Any context-wide preference set via cuCtxSetCacheConfig() will be overridden by this per-kernel setting.

Note that attributes set using cuFuncSetCacheConfig() will override the attribute set by this API irrespective of whether the call to cuFuncSetCacheConfig() is made before or after this API call.

This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.

Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.

The supported cache configurations are:

- CU_FUNC_CACHE_PREFER_NONE: no preference for shared memory or L1 (default)
- CU_FUNC_CACHE_PREFER_SHARED: prefer larger shared memory and smaller L1 cache
- CU_FUNC_CACHE_PREFER_L1: prefer larger L1 cache and smaller shared memory
- CU_FUNC_CACHE_PREFER_EQUAL: prefer equal sized L1 cache and shared memory

Note:

The API has stricter locking requirements in comparison to its legacy counterpart cuFuncSetCacheConfig() due to device-wide semantics. If multiple threads are trying to set a config on the same device simultaneously, the cache config setting will depend on the interleavings chosen by the OS scheduler and memory consistency.

See also:

cuLibraryLoadData, cuLibraryLoadFromFile, cuLibraryUnload, cuLibraryGetKernel, cuKernelGetFunction, cuLibraryGetModule, cuModuleGetFunction, cuFuncSetCacheConfig, cuCtxSetCacheConfig, cuLaunchKernel
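
A sketch that requests a shared-memory-preferring cache configuration for a kernel on device 0; as noted above, this is only a preference, and the device ordinal is a placeholder.

    #include <cuda.h>

    /* Sketch: hint that this kernel prefers a larger shared memory carveout
     * on device 0. The driver may choose a different configuration. */
    CUresult preferSharedMem(CUkernel kernel)
    {
        CUdevice dev;
        CUresult rc = cuDeviceGet(&dev, 0);
        if (rc != CUDA_SUCCESS)
            return rc;
        return cuKernelSetCacheConfig(kernel, CU_FUNC_CACHE_PREFER_SHARED, dev);
    }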

CUresult cuLibraryEnumerateKernels ( CUkernel* kernels, unsigned int  numKernels, CUlibrary lib )

Retrieve the kernel handles within a library.

kernels
- Buffer where the kernel handles are returned to
numKernels
- Maximum number of kernel handles that may be returned to the buffer
lib
- Library to query from

Returns in kernels a maximum number of numKernels kernel handles within lib. The returned kernel handles become invalid when the library is unloaded.

See also:

cuLibraryGetKernelCount
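
A sketch that combines cuLibraryGetKernelCount() with cuLibraryEnumerateKernels() to list every kernel in a loaded library; the name lookup through cuKernelGetName() is optional.

    #include <cuda.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: enumerate all kernels in a loaded library and print their names. */
    void listKernels(CUlibrary lib)
    {
        unsigned int count = 0;
        if (cuLibraryGetKernelCount(&count, lib) != CUDA_SUCCESS || count == 0)
            return;

        CUkernel* kernels = (CUkernel*)malloc(count * sizeof(CUkernel));
        if (kernels == NULL)
            return;
        if (cuLibraryEnumerateKernels(kernels, count, lib) == CUDA_SUCCESS) {
            for (unsigned int i = 0; i < count; ++i) {
                const char* name = NULL;
                cuKernelGetName(&name, kernels[i]);
                printf("kernel %u: %s\n", i, name ? name : "(unknown)");
            }
        }
        free(kernels);
    }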

CUresult cuLibraryGetGlobal ( CUdeviceptr* dptr, size_t* bytes, CUlibrary library, const char* name )

Returns a global device pointer.

dptr
- Returned global device pointer for the requested context
bytes
- Returned global size in bytes
library
- Library to retrieve global from
name
- Name of global to retrieve
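
A hedged sketch that looks up a device global named "d_config" (a placeholder) in a loaded library and copies host data into it; an active context is assumed.

    #include <cuda.h>
    #include <stddef.h>

    /* Sketch: overwrite a __device__ variable in a loaded library from the host.
     * "d_config" is a placeholder symbol name. */
    CUresult writeGlobal(CUlibrary lib, const void* hostData, size_t hostSize)
    {
        CUdeviceptr dptr;
        size_t bytes = 0;
        CUresult rc = cuLibraryGetGlobal(&dptr, &bytes, lib, "d_config");
        if (rc != CUDA_SUCCESS)
            return rc;
        if (hostSize > bytes)
            return CUDA_ERROR_INVALID_VALUE;
        return cuMemcpyHtoD(dptr, hostData, hostSize);
    }
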
CUresult cuLibraryGetKernel ( CUkernel* pKernel, CUlibrary library, const char* name )

Returns a kernel handle.

pKernel
- Returned kernel handle
library
- Library to retrieve kernel from
name
- Name of kernel to retrieve

CUresult cuLibraryGetKernelCount ( unsigned int* count, CUlibrary lib )

Returns the number of kernels within a library.

count
- Number of kernels found within the library
lib
- Library to query

Returns in count the number of kernels in lib.

CUresult cuLibraryGetManaged ( CUdeviceptr* dptr, size_t* bytes, CUlibrary library, const char* name )

Returns a pointer to managed memory.

dptr
- Returned pointer to the managed memory
bytes
- Returned memory size in bytes
library
- Library to retrieve managed memory from
name
- Name of managed memory to retrieve

Returns in *dptr and *bytes the base pointer and size of the managed memory with name name for the requested library library. If no managed memory with the requested name name exists, the call returns CUDA_ERROR_NOT_FOUND. One of the parameters dptr or bytes (not both) can be NULL in which case it is ignored. Note that managed memory for library library is shared across devices and is registered when the library is loaded into at least one context.

See also:

cuLibraryLoadData, cuLibraryLoadFromFile, cuLibraryUnload
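
A sketch that zeroes a managed variable named "m_state" (a placeholder) retrieved from a loaded library; it assumes an active context and a system with managed memory support.

    #include <cuda.h>
    #include <stddef.h>

    /* Sketch: fetch a managed variable from a loaded library and zero it.
     * "m_state" is a placeholder symbol name. */
    CUresult clearManaged(CUlibrary lib)
    {
        CUdeviceptr dptr;
        size_t bytes = 0;
        CUresult rc = cuLibraryGetManaged(&dptr, &bytes, lib, "m_state");
        if (rc != CUDA_SUCCESS)
            return rc;
        return cuMemsetD8(dptr, 0, bytes);
    }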

CUresult cuLibraryGetModule ( CUmodule* pMod, CUlibrary library )

Returns a module handle.

pMod
- Returned module handle
library
- Library to retrieve module from

CUresult cuLibraryGetUnifiedFunction ( void** fptr, CUlibrary library, const char* symbol )

Returns a pointer to a unified function.

fptr
- Returned pointer to a unified function
library
- Library to retrieve function pointer memory from
symbol
- Name of function pointer to retrieve

CUresult cuLibraryLoadData ( CUlibrary* library, const void* code, CUjit_option* jitOptions, void** jitOptionsValues, unsigned int  numJitOptions, CUlibraryOption* libraryOptions, void** libraryOptionValues, unsigned int  numLibraryOptions )

Load a library with specified code and options.

library
- Returned library
code
- Code to load
jitOptions
- Options for JIT
jitOptionsValues
- Option values for JIT
numJitOptions
- Number of options
libraryOptions
- Options for loading
libraryOptionValues
- Option values for loading
numLibraryOptions
- Number of options for loading

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_PTX, CUDA_ERROR_UNSUPPORTED_PTX_VERSION, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_NO_BINARY_FOR_GPU, CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED, CUDA_ERROR_JIT_COMPILER_NOT_FOUND, CUDA_ERROR_NOT_SUPPORTED

Takes a pointer code and loads the corresponding library library based on the application-defined library loading mode:

These environment variables are described in the CUDA programming guide under the "CUDA environment variables" section.

The code may be a cubin or fatbin as output by nvcc, or a NULL-terminated PTX, either as output by nvcc or hand-written. A fatbin should also contain relocatable code when doing separate compilation.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The number of total JIT options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The number of total library load options is supplied via numLibraryOptions.

Note:

If the library contains managed variables and no device in the system supports managed variables, this call is expected to return CUDA_ERROR_NOT_SUPPORTED.

See also:

cuLibraryLoadFromFile, cuLibraryUnload, cuModuleLoad, cuModuleLoadData, cuModuleLoadDataEx
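
A sketch of loading a library from an in-memory code image (for example, a fatbin embedded in the application) with a single illustrative JIT option and no library options; cuInit() is assumed to have been called, and the cast used for the option value follows the usual driver-API convention for integer-valued JIT options.

    #include <cuda.h>
    #include <stdint.h>

    /* Sketch: load a library from a code image in memory, capping register
     * use at 32 per thread via CU_JIT_MAX_REGISTERS (illustrative only). */
    CUresult loadFromImage(const void* image, CUlibrary* lib)
    {
        CUjit_option jitOpts[] = { CU_JIT_MAX_REGISTERS };
        void*        jitVals[] = { (void*)(uintptr_t)32 };

        return cuLibraryLoadData(lib, image,
                                 jitOpts, jitVals, 1,   /* JIT options */
                                 NULL, NULL, 0);        /* no library options */
    }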

CUresult cuLibraryLoadFromFile ( CUlibrary* library, const char* fileName, CUjit_option* jitOptions, void** jitOptionsValues, unsigned int  numJitOptions, CUlibraryOption* libraryOptions, void** libraryOptionValues, unsigned int  numLibraryOptions )

Load a library with specified file and options.

library
- Returned library
fileName
- File to load from
jitOptions
- Options for JIT
jitOptionsValues
- Option values for JIT
numJitOptions
- Number of options
libraryOptions
- Options for loading
libraryOptionValues
- Option values for loading
numLibraryOptions
- Number of options for loading

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_PTX, CUDA_ERROR_UNSUPPORTED_PTX_VERSION, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_NO_BINARY_FOR_GPU, CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED, CUDA_ERROR_JIT_COMPILER_NOT_FOUND, CUDA_ERROR_NOT_SUPPORTED

Takes a filename fileName and loads the corresponding library library based on the application-defined library loading mode:

These environment variables are described in the CUDA programming guide under the "CUDA environment variables" section.

The file should be a cubin file as output by nvcc, or a PTX file either as output by nvcc or handwritten, or a fatbin file as output by nvcc. A fatbin should also contain relocatable code when doing separate compilation.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The number of total options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The number of total library load options is supplied via numLibraryOptions.

Note:

If the library contains managed variables and no device in the system supports managed variables, this call is expected to return CUDA_ERROR_NOT_SUPPORTED.

See also:

cuLibraryLoadData, cuLibraryUnload, cuModuleLoad, cuModuleLoadData, cuModuleLoadDataEx
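
A sketch of the typical load/query/unload lifecycle; the file name "kernels.fatbin" and the kernel name "myKernel" are placeholders, and no JIT or library options are passed.

    #include <cuda.h>
    #include <stdio.h>

    /* Sketch: load a library from a file, look up a kernel, then unload. */
    void loadUseUnload(void)
    {
        CUlibrary lib;
        CUkernel  kernel;

        if (cuLibraryLoadFromFile(&lib, "kernels.fatbin",
                                  NULL, NULL, 0,
                                  NULL, NULL, 0) != CUDA_SUCCESS) {
            fprintf(stderr, "failed to load library\n");
            return;
        }

        if (cuLibraryGetKernel(&kernel, lib, "myKernel") == CUDA_SUCCESS) {
            /* ... use the kernel, e.g. via cuKernelGetFunction() + cuLaunchKernel() ... */
        }

        cuLibraryUnload(lib);  /* kernel handles from lib are now invalid */
    }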

CUresult cuLibraryUnload ( CUlibrary library )

Unloads a library.

library
- Library to unload
