CUDA Runtime API :: CUDA Toolkit Documentation

__host__ ​cudaError_t cudaCtxResetPersistingL2Cache ( void )

Resets all persisting lines in cache to normal status.

Resets all persisting lines in cache to normal status. Takes effect on function return.
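
A minimal sketch of using this call after a phase that relied on L2 persistence; the helper function and the synchronization point are illustrative, not part of this API's documentation:

    #include <cuda_runtime.h>

    void endPersistencePhase(cudaStream_t stream)
    {
        // Wait for the work that relied on a persisting access policy window
        // to finish (illustrative ordering).
        cudaStreamSynchronize(stream);

        // Return all persisting L2 cache lines to normal status.
        cudaCtxResetPersistingL2Cache();
    }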

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaAccessPolicyWindow

__host__ ​cudaError_t cudaStreamAddCallback ( cudaStream_t stream, cudaStreamCallback_t callback, void* userData, unsigned int  flags )

Add a callback to a compute stream.

stream
- Stream to add callback to
callback
- The function to call once preceding stream operations are complete
userData
- User specified data to be passed to the callback function
flags
- Reserved for future use, must be 0

Adds a callback to be called on the host after all currently enqueued items in the stream have completed. For each cudaStreamAddCallback call, a callback will be executed exactly once. The callback will block later work in the stream until it is finished.

The callback may be passed cudaSuccess or an error code. In the event of a device error, all subsequently executed callbacks will receive an appropriate cudaError_t.

Callbacks must not make any CUDA API calls. Attempting to use CUDA APIs may result in cudaErrorNotPermitted. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.
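
A minimal usage sketch, assuming an existing stream; the callback body and the tag string are illustrative, and the callback itself makes no CUDA API calls:

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Host callback: runs after all preceding work in the stream has completed.
    void CUDART_CB workDone(cudaStream_t stream, cudaError_t status, void* userData)
    {
        printf("stream work finished: %s (status %d)\n",
               (const char*)userData, (int)status);
    }

    void enqueueCallback(cudaStream_t stream)
    {
        static const char tag[] = "batch-1";
        // flags is reserved and must be 0.
        cudaStreamAddCallback(stream, workDone, (void*)tag, 0);
    }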

For the purposes of Unified Memory, callback execution makes a number of guarantees. In particular, the callback stream is considered idle for the duration of the callback execution; thus, for example, a callback may always use memory attached to the callback stream.

See also:

cudaStreamCreate, cudaStreamCreateWithFlags, cudaStreamQuery, cudaStreamSynchronize, cudaStreamWaitEvent, cudaStreamDestroy, cudaMallocManaged, cudaStreamAttachMemAsync, cudaLaunchHostFunc, cuStreamAddCallback

__host__ ​cudaError_t cudaStreamAttachMemAsync ( cudaStream_t stream, void* devPtr, size_t length = 0, unsigned int  flags = cudaMemAttachSingle )

Attach memory to a stream asynchronously.

Enqueues an operation in stream to specify stream association of length bytes of memory starting from devPtr. This function is a stream-ordered operation, meaning that it is dependent on, and will only take effect when, previous work in stream has completed. Any previous association is automatically replaced.

devPtr must point to one of the following types of memories:
- managed memory declared using the __managed__ keyword or allocated with cudaMallocManaged
- a valid host-accessible region of system-allocated pageable memory. This type of memory may only be specified if the device associated with the stream reports a non-zero value for the device attribute cudaDevAttrPageableMemoryAccess.

For managed allocations, length must be either zero or the entire allocation's size. Both indicate that the entire allocation's stream association is being changed. Currently, it is not possible to change stream association for a portion of a managed allocation.

For pageable allocations, length must be non-zero.

The stream association is specified using flags which must be one of cudaMemAttachGlobal, cudaMemAttachHost or cudaMemAttachSingle. The default value for flags is cudaMemAttachSingle. If the cudaMemAttachGlobal flag is specified, the memory can be accessed by any stream on any device. If the cudaMemAttachHost flag is specified, the program makes a guarantee that it won't access the memory on the device from any stream on a device that has a zero value for the device attribute cudaDevAttrConcurrentManagedAccess. If the cudaMemAttachSingle flag is specified and stream is associated with a device that has a zero value for the device attribute cudaDevAttrConcurrentManagedAccess, the program makes a guarantee that it will only access the memory on the device from stream. It is illegal to attach singly to the NULL stream, because the NULL stream is a virtual global stream and not a specific stream. An error will be returned in this case.

When memory is associated with a single stream, the Unified Memory system will allow CPU access to this memory region so long as all operations in stream have completed, regardless of whether other streams are active. In effect, this constrains exclusive ownership of the managed memory region by an active GPU to per-stream activity instead of whole-GPU activity.

Accessing memory on the device from streams that are not associated with it will produce undefined results. No error checking is performed by the Unified Memory system to ensure that kernels launched into other streams do not access this region.

It is a program's responsibility to order calls to cudaStreamAttachMemAsync via events, synchronization or other means to ensure legal access to memory at all times. Data visibility and coherency will be changed appropriately for all kernels which follow a stream-association change.

If stream is destroyed while data is associated with it, the association is removed and the association reverts to the default visibility of the allocation as specified at cudaMallocManaged. For __managed__ variables, the default association is always cudaMemAttachGlobal. Note that destroying a stream is an asynchronous operation, and as a result, the change to default association won't happen until all work in the stream has completed.
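
A minimal sketch of attaching a managed allocation to a single stream so that the host can access it once that stream is idle, even while other streams remain active; the allocation size and the helper function are illustrative:

    #include <cuda_runtime.h>

    void attachToStream(cudaStream_t stream)
    {
        float* data = nullptr;
        size_t bytes = 1 << 20;

        // Managed allocation, initially attached to the host.
        cudaMallocManaged(&data, bytes, cudaMemAttachHost);

        // Associate the entire allocation with this stream only
        // (length 0 means the whole allocation).
        cudaStreamAttachMemAsync(stream, data, 0, cudaMemAttachSingle);

        // ... launch kernels into `stream` that use `data` ...

        // Once all work in `stream` has completed, the CPU may access `data`
        // even if other streams are still busy.
        cudaStreamSynchronize(stream);
        data[0] = 1.0f;

        cudaFree(data);
    }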

See also:

cudaStreamCreate, cudaStreamCreateWithFlags, cudaStreamWaitEvent, cudaStreamSynchronize, cudaStreamAddCallback, cudaStreamDestroy, cudaMallocManaged, cuStreamAttachMemAsync

__host__ ​cudaError_t cudaStreamBeginCapture ( cudaStream_t stream, cudaStreamCaptureMode mode )

Begins graph capture on a stream.

stream
- Stream in which to initiate capture
mode
- Controls the interaction of this capture sequence with other API calls that are potentially unsafe. For more details see cudaThreadExchangeStreamCaptureMode.

Begin graph capture on stream. When a stream is in capture mode, all operations pushed into the stream will not be executed, but will instead be captured into a graph, which will be returned via cudaStreamEndCapture. Capture may not be initiated if stream is cudaStreamLegacy. Capture must be ended on the same stream in which it was initiated, and it may only be initiated if the stream is not already in capture mode. The capture mode may be queried via cudaStreamIsCapturing. A unique id representing the capture sequence may be queried via cudaStreamGetCaptureInfo.

If mode is not cudaStreamCaptureModeRelaxed, cudaStreamEndCapture must be called on this stream from the same thread.
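
A minimal capture sketch; the kernel, launch configuration, and buffer are placeholders, and the instantiate call uses the CUDA 12.x three-argument signature:

    #include <cuda_runtime.h>

    __global__ void myKernel(float* out);   // placeholder kernel

    void captureAndLaunch(cudaStream_t stream, float* buf)
    {
        cudaGraph_t graph;
        cudaGraphExec_t graphExec;

        cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);

        // Work pushed into `stream` is recorded into the graph, not executed.
        myKernel<<<256, 256, 0, stream>>>(buf);

        cudaStreamEndCapture(stream, &graph);

        // Instantiate and launch the captured graph.
        cudaGraphInstantiate(&graphExec, graph, 0);
        cudaGraphLaunch(graphExec, stream);
    }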

Note:

Kernels captured using this API must not use texture and surface references. Reading or writing through any texture or surface reference is undefined behavior. This restriction does not apply to texture and surface objects.

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaStreamCreate, cudaStreamIsCapturing, cudaStreamEndCapture, cudaThreadExchangeStreamCaptureMode

__host__ ​cudaError_t cudaStreamBeginCaptureToGraph ( cudaStream_t stream, cudaGraph_t graph, const cudaGraphNode_t* dependencies, const cudaGraphEdgeData* dependencyData, size_t numDependencies, cudaStreamCaptureMode mode )

Begins graph capture on a stream to an existing graph.

stream
- Stream in which to initiate capture.
graph
- Graph to capture into.
dependencies
- Dependencies of the first node captured in the stream. Can be NULL if numDependencies is 0.
dependencyData
- Optional array of data associated with each dependency.
numDependencies
- Number of dependencies.
mode
- Controls the interaction of this capture sequence with other API calls that are potentially unsafe. For more details see cudaThreadExchangeStreamCaptureMode.

Begin graph capture on stream. When a stream is in capture mode, all operations pushed into the stream will not be executed, but will instead be captured into graph, which will be returned via cudaStreamEndCapture.

Capture may not be initiated if stream is cudaStreamLegacy. Capture must be ended on the same stream in which it was initiated, and it may only be initiated if the stream is not already in capture mode. The capture mode may be queried via cudaStreamIsCapturing. A unique id representing the capture sequence may be queried via cudaStreamGetCaptureInfo.

If mode is not cudaStreamCaptureModeRelaxed, cudaStreamEndCapture must be called on this stream from the same thread.
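
A minimal sketch of capturing into a caller-created graph; the kernel is a placeholder and no initial dependencies or edge data are supplied:

    #include <cuda_runtime.h>

    __global__ void myKernel();   // placeholder kernel

    void captureIntoGraph(cudaStream_t stream)
    {
        cudaGraph_t graph;
        cudaGraphCreate(&graph, 0);

        // Capture into `graph` with no initial dependencies.
        cudaStreamBeginCaptureToGraph(stream, graph,
                                      nullptr,  // dependencies
                                      nullptr,  // dependencyData
                                      0,        // numDependencies
                                      cudaStreamCaptureModeGlobal);

        myKernel<<<1, 1, 0, stream>>>();

        // End of capture returns the same graph handle.
        cudaStreamEndCapture(stream, &graph);
    }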

Note:

Kernels captured using this API must not use texture and surface references. Reading or writing through any texture or surface reference is undefined behavior. This restriction does not apply to texture and surface objects.

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaStreamCreate, cudaStreamIsCapturing, cudaStreamEndCapture, cudaThreadExchangeStreamCaptureMode

__host__ ​cudaError_t cudaStreamCopyAttributes ( cudaStream_t dst, cudaStream_t src )

Copies attributes from source stream to destination stream.

dst
- Destination stream
src
- Source stream

Copies attributes from source stream src to destination stream dst. Both streams must have the same context. For attributes, see cudaStreamAttrID.
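
A minimal sketch (the helper name is illustrative):

    #include <cuda_runtime.h>

    // Give `dst` the same attributes (e.g. the access policy window) as `src`.
    void cloneStreamAttributes(cudaStream_t dst, cudaStream_t src)
    {
        cudaStreamCopyAttributes(dst, src);
    }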

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaAccessPolicyWindow

__host__ ​cudaError_t cudaStreamCreate ( cudaStream_t* pStream )

Create an asynchronous stream.

pStream
- Pointer to new stream identifier

Creates a new asynchronous stream on the context that is current to the calling host thread. If no context is current to the calling host thread, then the primary context for a device is selected, made current to the calling thread, and initialized before creating a stream on it.
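
A minimal create/use/destroy sketch; the asynchronous copy is illustrative:

    #include <cuda_runtime.h>

    void copyOnStream(void* dst, const void* src, size_t bytes)
    {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Work issued into the stream runs asynchronously with respect to the host.
        cudaMemcpyAsync(dst, src, bytes, cudaMemcpyHostToDevice, stream);

        cudaStreamSynchronize(stream);   // wait for the copy to finish
        cudaStreamDestroy(stream);
    }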

See also:

cudaStreamCreateWithPriority, cudaStreamCreateWithFlags, cudaStreamGetPriority, cudaStreamGetFlags, cudaStreamGetDevice, cudaStreamQuery, cudaStreamSynchronize, cudaStreamWaitEvent, cudaStreamAddCallback, cudaSetDevice, cudaStreamDestroy, cuStreamCreate

__host__ ​ __device__ ​cudaError_t cudaStreamCreateWithFlags ( cudaStream_t* pStream, unsigned int  flags )

Create an asynchronous stream.

pStream
- Pointer to new stream identifier
flags
- Parameters for stream creation

Creates a new asynchronous stream on the context that is current to the calling host thread. If no context is current to the calling host thread, then the primary context for a device is selected, made current to the calling thread, and initialized before creating a stream on it. The flags argument determines the behaviors of the stream. Valid values for flags are cudaStreamDefault (default stream creation flag) and cudaStreamNonBlocking (work running in the created stream may run concurrently with work in stream 0, the NULL stream, and the created stream performs no implicit synchronization with stream 0).
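
A minimal sketch creating a non-blocking stream that does not synchronize implicitly with the NULL stream:

    #include <cuda_runtime.h>

    void createNonBlockingStream(cudaStream_t* stream)
    {
        // Work in this stream may run concurrently with work in stream 0
        // and performs no implicit synchronization with it.
        cudaStreamCreateWithFlags(stream, cudaStreamNonBlocking);
    }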

See also:

cudaStreamCreate, cudaStreamCreateWithPriority, cudaStreamGetFlags, cudaStreamGetDevice, cudaStreamQuery, cudaStreamSynchronize, cudaStreamWaitEvent, cudaStreamAddCallback, cudaSetDevice, cudaStreamDestroy, cuStreamCreate

__host__ ​cudaError_t cudaStreamCreateWithPriority ( cudaStream_t* pStream, unsigned int  flags, int  priority )

Create an asynchronous stream with the specified priority.

pStream
- Pointer to new stream identifier
flags
- Flags for stream creation. See cudaStreamCreateWithFlags for a list of valid flags that can be passed
priority
- Priority of the stream. Lower numbers represent higher priorities. See cudaDeviceGetStreamPriorityRange for more information about the meaningful stream priorities that can be passed.

Creates a stream with the specified priority and returns a handle in pStream. The stream is created on the context that is current to the calling host thread. If no context is current to the calling host thread, then the primary context for a device is selected, made current to the calling thread, and initialized before creating a stream on it. This affects the scheduling priority of work in the stream. Priorities provide a hint to preferentially run work with higher priority when possible, but do not preempt already-running work or provide any other functional guarantee on execution order.

priority follows a convention where lower numbers represent higher priorities. '0' represents default priority. The range of meaningful numerical priorities can be queried using cudaDeviceGetStreamPriorityRange. If the specified priority is outside the numerical range returned by cudaDeviceGetStreamPriorityRange, it will automatically be clamped to the lowest or the highest number in the range.
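
A minimal sketch querying the device's priority range and creating a stream with the highest (numerically lowest) priority; the helper name is illustrative:

    #include <cuda_runtime.h>

    void createHighPriorityStream(cudaStream_t* stream)
    {
        int leastPriority, greatestPriority;
        // greatestPriority is the numerically lowest value, i.e. the highest priority.
        cudaDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);

        cudaStreamCreateWithPriority(stream, cudaStreamNonBlocking, greatestPriority);
    }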

See also:

cudaStreamCreate, cudaStreamCreateWithFlags, cudaDeviceGetStreamPriorityRange, cudaStreamGetPriority, cudaStreamQuery, cudaStreamWaitEvent, cudaStreamAddCallback, cudaStreamSynchronize, cudaSetDevice, cudaStreamDestroy, cuStreamCreateWithPriority

__host__ ​ __device__ ​cudaError_t cudaStreamDestroy ( cudaStream_t stream )

Destroys and cleans up an asynchronous stream.

stream
- Stream identifier
__host__ ​cudaError_t cudaStreamEndCapture ( cudaStream_t stream, cudaGraph_t* pGraph )

Ends capture on a stream, returning the captured graph.

stream
- Stream to query
pGraph
- The captured graph
__host__ ​cudaError_t cudaStreamGetAttribute ( cudaStream_t hStream, cudaStreamAttrID attr, cudaStreamAttrValue* value_out )

Queries stream attribute.

Queries attribute attr from hStream and stores it in corresponding member of value_out.
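
A minimal sketch reading a stream's access policy window; the attribute ID used is cudaStreamAttributeAccessPolicyWindow, and the helper name is illustrative:

    #include <cuda_runtime.h>

    size_t queryWindowSize(cudaStream_t stream)
    {
        cudaStreamAttrValue value = {};
        cudaStreamGetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &value);

        // value.accessPolicyWindow now holds the window configured on the stream.
        return value.accessPolicyWindow.num_bytes;
    }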

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaAccessPolicyWindow

__host__ cudaError_t cudaStreamGetCaptureInfo ( cudaStream_t stream, cudaStreamCaptureStatus* captureStatus_out, unsigned long long* id_out = 0, cudaGraph_t* graph_out = 0, const cudaGraphNode_t** dependencies_out = 0, const cudaGraphEdgeData** edgeData_out = 0, size_t* numDependencies_out = 0 )

Query a stream's capture state.

stream
- The stream to query
captureStatus_out
- Location to return the capture status of the stream; required
id_out
- Optional location to return an id for the capture sequence, which is unique over the lifetime of the process
graph_out
- Optional location to return the graph being captured into. All operations other than destroy and node removal are permitted on the graph while the capture sequence is in progress. This API does not transfer ownership of the graph, which is transferred or destroyed at cudaStreamEndCapture. Note that the graph handle may be invalidated before end of capture for certain errors. Nodes that are or become unreachable from the original stream at cudaStreamEndCapture due to direct actions on the graph do not trigger cudaErrorStreamCaptureUnjoined.
dependencies_out
- Optional location to store a pointer to an array of nodes. The next node to be captured in the stream will depend on this set of nodes, absent operations such as event wait which modify this set. The array pointer is valid until the next API call which operates on the stream or until the capture is terminated. The node handles may be copied out and are valid until they or the graph is destroyed. The driver-owned array may also be passed directly to APIs that operate on the graph (not the stream) without copying.
edgeData_out
- Optional location to store a pointer to an array of graph edge data. This array parallels dependencies_out; the next node to be added has an edge to dependencies_out[i] with annotation edgeData_out[i] for each i. The array pointer is valid until the next API call which operates on the stream or until the capture is terminated.
numDependencies_out
- Optional location to store the size of the array returned in dependencies_out.
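
A minimal sketch querying only the capture status and sequence id; the remaining output parameters are left at their default null values, and the helper name is illustrative:

    #include <cuda_runtime.h>
    #include <stdio.h>

    void reportCaptureState(cudaStream_t stream)
    {
        cudaStreamCaptureStatus status;
        unsigned long long captureId = 0;

        cudaStreamGetCaptureInfo(stream, &status, &captureId);

        if (status == cudaStreamCaptureStatusActive)
            printf("stream is capturing, sequence id %llu\n", captureId);
    }
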
__host__ ​cudaError_t cudaStreamGetDevice ( cudaStream_t hStream, int* device )

Query the device of a stream.

hStream
- Handle to the stream to be queried
device
- Returns the device to which the stream belongs
__host__ ​cudaError_t cudaStreamGetFlags ( cudaStream_t hStream, unsigned int* flags )

Query the flags of a stream.

hStream
- Handle to the stream to be queried
flags
- Pointer to an unsigned integer in which the stream's flags are returned
__host__ ​cudaError_t cudaStreamGetId ( cudaStream_t hStream, unsigned long long* streamId )

Query the Id of a stream.

hStream
- Handle to the stream to be queried
streamId
- Pointer to an unsigned long long in which the stream Id is returned
__host__ ​cudaError_t cudaStreamGetPriority ( cudaStream_t hStream, int* priority )

Query the priority of a stream.

hStream
- Handle to the stream to be queried
priority
- Pointer to a signed integer in which the stream's priority is returned
__host__ cudaError_t cudaStreamIsCapturing ( cudaStream_t stream, cudaStreamCaptureStatus* pCaptureStatus )

Returns a stream's capture status.

stream
- Stream to query
pCaptureStatus
- Returns the stream's capture status

Return the capture status of stream via pCaptureStatus. After a successful call, *pCaptureStatus will contain one of the following:
- cudaStreamCaptureStatusNone: the stream is not capturing.
- cudaStreamCaptureStatusActive: the stream is actively capturing.
- cudaStreamCaptureStatusInvalidated: the stream was capturing but an error has invalidated the capture sequence. The capture sequence must be terminated with cudaStreamEndCapture on the stream where it was initiated in order to continue using stream.

Note that, if this is called on cudaStreamLegacy (the "null stream") while a blocking stream on the same device is capturing, it will return cudaErrorStreamCaptureImplicit and *pCaptureStatus is unspecified after the call. The blocking stream capture is not invalidated.

When a blocking stream is capturing, the legacy stream is in an unusable state until the blocking stream capture is terminated. The legacy stream is not supported for stream capture, but attempted use would have an implicit dependency on the capturing stream(s).
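
A minimal sketch checking whether a stream is currently capturing (the helper name is illustrative):

    #include <cuda_runtime.h>

    bool isCapturing(cudaStream_t stream)
    {
        cudaStreamCaptureStatus status = cudaStreamCaptureStatusNone;
        if (cudaStreamIsCapturing(stream, &status) != cudaSuccess)
            return false;
        return status == cudaStreamCaptureStatusActive;
    }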

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaStreamCreate, cudaStreamBeginCapture, cudaStreamEndCapture

__host__ ​cudaError_t cudaStreamQuery ( cudaStream_t stream )

Queries an asynchronous stream for completion status.

stream
- Stream identifier
__host__ ​cudaError_t cudaStreamSetAttribute ( cudaStream_t hStream, cudaStreamAttrID attr, const cudaStreamAttrValue* value )

Sets stream attribute.

Sets attribute attr on hStream from corresponding attribute of value. The updated attribute will be applied to subsequent work submitted to the stream. It will not affect previously submitted work.
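
A minimal sketch setting a persisting access policy window on a stream; the buffer, window size, and hit ratio are illustrative:

    #include <cuda_runtime.h>

    void setPersistingWindow(cudaStream_t stream, void* buf, size_t bytes)
    {
        cudaStreamAttrValue value = {};
        value.accessPolicyWindow.base_ptr  = buf;
        value.accessPolicyWindow.num_bytes = bytes;
        value.accessPolicyWindow.hitRatio  = 0.6f;  // fraction of accesses given hitProp
        value.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting;
        value.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;

        // Applies only to work submitted to the stream after this call.
        cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &value);
    }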

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaAccessPolicyWindow

__host__ ​cudaError_t cudaStreamSynchronize ( cudaStream_t stream )

Waits for stream tasks to complete.

stream
- Stream identifier
__host__ ​cudaError_t cudaStreamUpdateCaptureDependencies ( cudaStream_t stream, cudaGraphNode_t* dependencies, const cudaGraphEdgeData* dependencyData, size_t numDependencies, unsigned int  flags = 0 )

Update the set of dependencies in a capturing stream.

stream
- The stream to update
dependencies
- The set of dependencies to add
dependencyData
- Optional array of data associated with each dependency.
numDependencies
- The size of the dependencies array
flags
- See above
__host__ ​ __device__ ​cudaError_t cudaStreamWaitEvent ( cudaStream_t stream, cudaEvent_t event, unsigned int  flags = 0 )

Make a compute stream wait on an event.

stream
- Stream to wait
event
- Event to wait on
flags
- Parameters for the operation (see above)
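
A minimal sketch making one stream wait on work recorded in another; the event flag and helper name are illustrative:

    #include <cuda_runtime.h>

    void crossStreamDependency(cudaStream_t producer, cudaStream_t consumer)
    {
        cudaEvent_t done;
        cudaEventCreateWithFlags(&done, cudaEventDisableTiming);

        // ... enqueue producer work into `producer` ...
        cudaEventRecord(done, producer);

        // All work issued to `consumer` after this call waits for `done`.
        cudaStreamWaitEvent(consumer, done, 0);

        // ... enqueue consumer work into `consumer` ...
        cudaEventDestroy(done);
    }
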
__host__ cudaError_t cudaThreadExchangeStreamCaptureMode ( cudaStreamCaptureMode* mode )

Swaps the stream capture interaction mode for a thread.

mode
- Pointer to mode value to swap with the current mode

Sets the calling thread's stream capture interaction mode to the value contained in *mode, and overwrites *mode with the previous mode for the thread. To facilitate deterministic behavior across function or module boundaries, callers are encouraged to use this API in a push-pop fashion:

    cudaStreamCaptureMode mode = desiredMode;
    cudaThreadExchangeStreamCaptureMode(&mode);
    ...
    cudaThreadExchangeStreamCaptureMode(&mode); // restore previous mode

During stream capture (see cudaStreamBeginCapture), some actions, such as a call to cudaMalloc, may be unsafe. In the case of cudaMalloc, the operation is not enqueued asynchronously to a stream, and is not observed by stream capture. Therefore, if the sequence of operations captured via cudaStreamBeginCapture depended on the allocation being replayed whenever the graph is launched, the captured graph would be invalid.

Therefore, stream capture places restrictions on API calls that can be made within or concurrently to a cudaStreamBeginCapture-cudaStreamEndCapture sequence. This behavior can be controlled via this API and flags to cudaStreamBeginCapture.

A thread's mode is one of the following:
- cudaStreamCaptureModeGlobal: this is the default mode. If the local thread has an ongoing capture sequence that was not initiated with cudaStreamCaptureModeRelaxed, or if any other thread has a concurrent capture sequence initiated with cudaStreamCaptureModeGlobal, this thread is prohibited from potentially unsafe API calls.
- cudaStreamCaptureModeThreadLocal: if the local thread has an ongoing capture sequence not initiated with cudaStreamCaptureModeRelaxed, it is prohibited from potentially unsafe API calls. Concurrent capture sequences in other threads are ignored.
- cudaStreamCaptureModeRelaxed: the local thread is not prohibited from potentially unsafe API calls. Note that the thread is still prohibited from API calls which necessarily conflict with stream capture, for example, attempting cudaEventQuery on an event that was last recorded inside a capture sequence.

Note:

Note that this function may also return error codes from previous, asynchronous launches.

See also:

cudaStreamBeginCapture

