This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to (post-2014) native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendConstant().
This section is non-normative. It describes the risks associated with exposing this API on the Web.
2.1. Security Considerations
The security requirements for WebGPU are the same as ever for the web, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.
2.1.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with the valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
See § 22 Errors & Debugging for more information about error handling.
2.1.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided to createShaderModule(), the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.
2.1.3. Uninitialized data
Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.
The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and then continue executing the developer’s code.
NOTE: The initialization status of a resource used in a queue operation can only be known when the operation is enqueued (not when it is encoded into a command buffer, for example). Therefore, some implementations will require an unoptimized late-clear at enqueue time (e.g. clearing a texture, rather than changing GPULoadOp "load" to "clear").
As a result, all implementations should issue a developer console warning about this potential performance penalty, even if there is no penalty in that implementation.
2.1.4. Out-of-bounds access in shaders
Shaders can access physical resources either directly (for example, as a "uniform" GPUBufferBinding), or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation in the WebGPU API can only guarantee that all the inputs to the shader are provided and they have the correct usage and types. The WebGPU API cannot guarantee that the data is accessed within bounds if the texture units are not involved.
In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds.
Alternatively, an implementation may transform the shader code by inserting manual bounds checks. When this path is taken, the out-of-bounds checks only apply to array indexing. They aren’t needed for plain field access of shader structures due to the minBindingSize validation on the host side.
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
return a value at a different location within the resource bounds
return a value vector of "(0, 0, 0, X)" with any "X"
partially discard the draw or dispatch call
If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
write the value to a different location within the resource bounds
discard the write operation
partially discard the draw or dispatch call
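As a sketch of the first load option above (returning a value at a different location within the resource bounds), an implementation could clamp the index before the access. `boundedLoad` is a hypothetical illustration written in JavaScript, not spec-mandated behavior; real implementations do this in the translated shader code or rely on the driver’s robust buffer access.

```javascript
// Illustrative model only: clamp an out-of-bounds index into the
// resource bounds, so the load always returns data the page owns.
function boundedLoad(data, index) {
  const clamped = Math.min(Math.max(index, 0), data.length - 1);
  return data[clamped];
}
```

Any of the listed behaviors is permitted; the only guarantee is that no memory outside the resource is ever observed or modified.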
2.1.5. Invalid data
When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.
2.1.6. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and disable WebGPU on drivers with known bugs that can’t be worked around.
2.1.7. Timing attacks
2.1.7.1. Content-timeline timing
WebGPU does not expose new states to JavaScript (the content timeline) which are shared between agents in an agent cluster. Content timeline states such as [[mapping]] only change during explicit content timeline tasks, like in plain JavaScript.
2.1.7.2. Device/queue-timeline timing
Writable storage buffers and other cross-invocation communication may be usable to construct high-precision timers on the queue timeline.
The optional "timestamp-query" feature also provides high-precision timing of GPU operations. To mitigate security and privacy concerns, the timing query values are aligned to a lower precision: see current queue timestamp. Note in particular:
The device timeline typically runs in a process that is shared by multiple origins, so cross-origin isolation (provided by COOP/COEP) does not provide isolation of device/queue-timeline timers.
Queue timeline work is issued from the device timeline, and may execute on GPU hardware that does not provide the isolation expected of CPU processes (such as Meltdown mitigations).
GPU hardware is not typically susceptible to Spectre-style attacks, but WebGPU may be implemented in software, and software implementations may run in a shared process, preventing isolation-based mitigations.
2.1.8. Row hammer
Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on GPUs. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.
2.1.9. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.1.10. Workload identification
WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.
2.1.11. Memory resources
WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM. This allows for probing the size of the system’s remaining available memory (for a given heap type) by attempting to allocate and watching for allocation failures.
GPUs internally have one or more (typically only two) heaps of memory shared by all running applications. When a heap is depleted, WebGPU would fail to create a resource. This is observable, which may allow a malicious application to guess what heaps are used by other applications, and how much they allocate from them.
2.1.12. Computation resources
If one site uses WebGPU at the same time as another, it may observe the increase in time it takes to process some work. For example, if a site constantly submits compute workloads and tracks completion of work on the queue, it may observe that something else also started using the GPU.
A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.
2.1.13. Abuse of capabilities
Malicious sites could abuse the capabilities exposed by WebGPU to run computations that don’t benefit the user or their experience and instead only benefit the site. Examples would be hidden crypto-mining, password cracking, or rainbow table computations.
It is not possible to guard against these types of uses of the API because the browser is not able to distinguish between valid workloads and abusive workloads. This is a general problem with all general-purpose computation capabilities on the Web: JavaScript, WebAssembly or WebGL. WebGPU only makes some workloads easier to implement, or slightly more efficient to run than using WebGL.
To mitigate this form of abuse, browsers can throttle operations on background tabs, warn that a tab is using a lot of resources, and restrict which contexts are allowed to use WebGPU.
User agents can heuristically issue warnings to users about high power use, especially due to potentially malicious usage. If a user agent implements such a warning, it should include WebGPU usage in its heuristics, in addition to JavaScript, WebAssembly, WebGL, and so on.
2.2. Privacy Considerations
There is a tracking vector here. The privacy considerations for WebGPU are similar to those of WebGL. GPU APIs are complex and must expose various aspects of a device’s capabilities out of necessity in order to enable developers to take advantage of those capabilities effectively. The general mitigation approach involves normalizing or binning potentially identifying information and enforcing uniform behavior where possible.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
2.2.1. Machine-specific features and limits
WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry. This includes available physical adapters, many limits on the GPU and CPU resources that could be used (such as the maximum texture size), and any optional hardware-specific capabilities that are available.
User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine specifics are exposed. One strategy to reduce fingerprinting is binning all the target platforms into a small number of bins. In general, the privacy impact of exposing the hardware limits matches that of WebGL.
The default limits are also deliberately high enough to allow most applications to work without requesting higher limits. All the usage of the API is validated according to the requested limits, so the actual hardware capabilities are not exposed to the users by accident.
2.2.2. Machine-specific artifacts
There are some machine-specific rasterization/precision artifacts and performance differences that can be observed roughly in the same way as in WebGL. This applies to rasterization coverage and patterns, interpolation precision of the varyings between shader stages, compute unit scheduling, and more aspects of execution.
Generally, rasterization and precision fingerprints are identical across most or all of the devices of each vendor. Performance differences are relatively intractable, but also relatively low-signal (as with JS execution performance).
Privacy-critical applications and user agents should utilize software implementations to eliminate such artifacts.
2.2.3. Machine-specific performance
Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low precision timing, repeated execution of an operation can show if the user’s machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it’s also low-signal and relatively intractable to truly normalize.
WebGPU compute pipelines expose access to the GPU unobstructed by the fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.
2.2.4. User Agent State
This specification doesn’t define any additional user-agent state for an origin. However, it is expected that user agents will have compilation caches for the result of expensive compilation like GPUShaderModule, GPURenderPipeline and GPUComputePipeline. These caches are important to improve the loading time of WebGPU applications after the first visit.
For the specification, these caches are indistinguishable from incredibly fast compilation, but for applications it would be easy to measure how long createComputePipelineAsync() takes to resolve. This can leak information across origins (like "did the user access a site with this specific shader") so user agents should follow the best practices in storage partitioning.
The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agents may want to disable these when at all possible, or add per-partition data to shaders in ways that will make the GPU driver consider them different.
2.2.5. Driver bugs
In addition to the concerns outlined in Security Considerations, driver bugs may introduce differences in behavior that can be observed as a method of differentiating users. The mitigations mentioned in Security Considerations apply here as well, including coordinating with GPU vendors and implementing workarounds for known issues in the user agent.
2.2.6. Adapter Identifiers
Past experience with WebGL has demonstrated that developers have a legitimate need to be able to identify the GPU their code is running on in order to create and maintain robust GPU-based content. For example, to identify adapters with known driver bugs in order to work around them or to avoid features that perform more poorly than expected on a given class of hardware.
But exposing adapter identifiers also naturally expands the amount of fingerprinting information available, so there’s a desire to limit the precision with which we identify the adapter.
There are several mitigations that can be applied to strike a balance between enabling robust content and preserving privacy. First is that user agents can reduce the burden on developers by identifying and working around known driver issues, as they have since browsers began making use of GPUs.
When adapter identifiers are exposed by default they should be as broad as possible while still being useful. Possibly identifying, for example, the adapter’s vendor and general architecture without identifying the specific adapter in use. Similarly, in some cases identifiers for an adapter that is considered a reasonable proxy for the actual adapter may be reported.
In cases where full and detailed information about the adapter is useful (for example: when filing bug reports) the user can be asked for consent to reveal additional information about their hardware to the page.
Finally, the user agent will always have the discretion to not report adapter identifiers at all if it considers it appropriate, such as in enhanced privacy modes.
3. Fundamentals
3.1. Conventions
3.1.1. Syntactic Shorthands
In this specification, the following syntactic shorthands are used:
. ("dot") syntax, common in programming languages.
The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo." If Foo is an ordered map and Bar does not exist in Foo, returns undefined.
The phrasing "Foo.Bar is provided" means "the Bar member exists in the map value Foo".
?. ("optional chaining") syntax, adopted from JavaScript.
The phrasing "Foo?.Bar" means "if Foo is null or undefined or Bar does not exist in Foo, undefined; otherwise, Foo.Bar".
For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer".
?? ("nullish coalescing") syntax, adopted from JavaScript.
The phrasing "x ?? y" means "x, if x is not null or undefined, and y otherwise".
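These shorthands carry exactly the JavaScript semantics they are borrowed from; for example:

```javascript
// JavaScript behavior the spec's ?. and ?? shorthands are based on.
const foo = { bar: 42 };
const empty = null;

const a = foo?.bar;        // member read succeeds because foo exists
const b = empty?.bar;      // ?. short-circuits to undefined on null/undefined
const c = empty?.bar ?? 7; // ?? substitutes a fallback for null/undefined
const d = 0 ?? 7;          // falsy values other than null/undefined pass through
```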
A slot-backed attribute is a WebIDL attribute which is backed by an internal slot of the same name. It may or may not be mutable.
A WebGPU object consists of a WebGPU Interface and an internal object.
The WebGPU interface defines the public interface and state of the WebGPU object. It can be used on the content timeline where it was created, where it is a JavaScript-exposed WebIDL interface.
Any interface which includes GPUObjectBase is a WebGPU interface.
The internal object tracks the state of the WebGPU object on the device timeline. All reads/writes to the mutable state of an internal object occur from steps executing on a single well-ordered device timeline.
The following special property types can be defined on WebGPU objects:
A read-only slot set during initialization of the object. It can be accessed from any timeline.
Note: Since the slot is immutable, implementations may have a copy on multiple timelines, as needed. Immutable properties are defined in this way to avoid describing multiple copies in this spec.
If named [[with brackets]], it is an internal slot.
If named withoutBrackets, it is a readonly slot-backed attribute of the WebGPU interface.
A property which is only accessible from the content timeline where the object was created.
If named [[with brackets]], it is an internal slot.
If named withoutBrackets, it is a slot-backed attribute of the WebGPU interface.
A property which tracks state of the internal object and is only accessible from the device timeline where the object was created. Device timeline properties may be mutable.
Device timeline properties are named [[with brackets]], and are internal slots.
A property which tracks state of the internal object and is only accessible from the queue timeline where the object was created. Queue timeline properties may be mutable.
Queue timeline properties are named [[with brackets]], and are internal slots.
interface mixin GPUObjectBase {
    attribute USVString label;
};
GPUObjectBase has the following immutable properties:
[[device]], of type device, readonly
The device that owns the internal object.
Operations on the contents of this object assert they are running on the device timeline, and that the device is valid.
GPUObjectBase has the following content timeline properties:
label, of type USVString
A developer-provided label which is used in an implementation-defined way. It can be used by the browser, OS, or other tools to help identify the underlying internal object to the developer. Examples include displaying the label in GPUError messages, console warnings, browser developer tools, and platform debugging utilities.
NOTE: The label is a property of the GPUObjectBase. Two GPUObjectBase "wrapper" objects have completely separate label states, even if they refer to the same underlying object (for example, returned by getBindGroupLayout()). The label property will not change except by being set from JavaScript.
This means one underlying object could be associated with multiple labels. This specification does not define how the label is propagated to the device timeline. How labels are used is completely implementation-defined: error messages could show the most recently set label, all known labels, or no labels at all.
It is defined as a USVString because some user agents may supply it to the debug facilities of the underlying native APIs.
GPUObjectBase has the following device timeline properties:
[[valid]], of type boolean, initially true.
If true, indicates that the internal object is valid to use.
NOTE: Ideally, WebGPU interfaces should not prevent their parent objects, such as the [[device]] that owns them, from being garbage collected. This cannot be guaranteed, however, as holding a strong reference to a parent object may be required in some implementations.
As a result, developers should assume that a WebGPU interface may not be garbage collected until all child objects of that interface have also been garbage collected. This may cause some resources to remain allocated longer than anticipated.
Calling the destroy method on a WebGPU interface (such as GPUDevice.destroy() or GPUBuffer.destroy()) should be favored over relying on garbage collection if predictable release of allocated resources is needed.
An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.
dictionary GPUObjectDescriptorBase {
    USVString label = "";
};
GPUObjectDescriptorBase has the following members:
label, of type USVString, defaulting to ""
The initial value of GPUObjectBase.label.
Object creation operations in WebGPU don’t return promises, but nonetheless are internally asynchronous. Returned objects refer to internal objects which are manipulated on a device timeline. Rather than fail with exceptions or rejections, most errors that occur on a device timeline are communicated through GPUErrors generated on the associated device.
Internal objects are either valid or invalid. An invalid object will never become valid at a later time, but some valid objects may be invalidated.
Objects are invalid from creation if it wasn’t possible to create them. This can happen, for example, if the object descriptor doesn’t describe a valid object, or if there is not enough memory to allocate a resource. It can also happen if an object is created with or from another invalid object (for example, calling createView() on an invalid GPUTexture): this case is referred to as contagious invalidity.
Internal objects of most types cannot become invalid after they are created, but still may become unusable, e.g. if the owning device is lost or destroyed, or the object has a special internal state, like buffer state "destroyed".
Internal objects of some types can become invalid after they are created; specifically, devices, adapters, GPUCommandBuffers, and command/pass/bundle encoders.
A given GPUObjectBase object is valid to use with a targetObject if all of the requirements in the following device timeline steps are met:
3.2.2. Promise Ordering
Several operations in WebGPU return promises.
WebGPU does not make any guarantees about the order in which these promises settle (resolve or reject), except for the following:
Applications must not rely on any other promise settlement ordering.
3.3. Coordinate Systems
Rendering operations use the following coordinate systems:
Normalized device coordinates (or NDC) have three dimensions, where:
-1.0 ≤ x ≤ 1.0
-1.0 ≤ y ≤ 1.0
0.0 ≤ z ≤ 1.0
The bottom-left corner is at (-1.0, -1.0, z).
Note: Whether z = 0 or z = 1 is treated as the near plane is application specific. The above diagram presents z = 0 as the near plane, but the observed behavior is determined by a combination of the projection matrices used by shaders, the depthClearValue, and the depthCompare function.
Clip space coordinates have four dimensions: (x, y, z, w)
Clip space coordinates are used for the clip position of a vertex (i.e. the position output of a vertex shader), and for the clip volume.
Normalized device coordinates and clip space coordinates are related as follows: If point p = (p.x, p.y, p.z, p.w) is in the clip volume, then its normalized device coordinates are (p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w).
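This relation is just the perspective divide; as a sketch (`clipToNDC` is a hypothetical helper name):

```javascript
// Perspective divide relating clip space to normalized device
// coordinates, as defined above: NDC = (x/w, y/w, z/w).
function clipToNDC([x, y, z, w]) {
  return [x / w, y / w, z / w];
}
```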
Framebuffer coordinates address the pixels in the framebuffer. They have two dimensions.
Each pixel extends 1 unit in x and y dimensions.
The top-left corner is at (0.0, 0.0).
x increases to the right.
y increases down.
See § 17 Render Passes and § 23.2.5 Rasterization.
Viewport coordinates combine framebuffer coordinates in x and y dimensions, with depth in z.
Normally 0.0 ≤ z ≤ 1.0, but this can be modified by setting [[viewport]].minDepth and maxDepth via setViewport().
Fragment coordinates match viewport coordinates.
Texture coordinates, sometimes called "UV coordinates" in 2D, are used to sample textures and have a number of components matching the texture dimension.
0 ≤ u ≤ 1.0
0 ≤ v ≤ 1.0
0 ≤ w ≤ 1.0
(0.0, 0.0, 0.0) is in the first texel in texture memory address order.
(1.0, 1.0, 1.0) is in the last texel in texture memory address order.
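As a hypothetical illustration of how a normalized coordinate maps to a texel index in one dimension (nearest-neighbor with clamp-to-edge; this is a simplification, not the spec’s sampling algorithm, and `texelIndex` is not an API function):

```javascript
// Map a normalized coordinate u in [0, 1] to a texel index in
// [0, size - 1]: texel i covers the range [i/size, (i+1)/size).
function texelIndex(u, size) {
  const i = Math.floor(u * size);
  return Math.min(Math.max(i, 0), size - 1); // clamp-to-edge at u = 1.0
}
```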
Window coordinates, or present coordinates, match framebuffer coordinates, and are used when interacting with an external display or conceptually similar interface.
Note: WebGPU’s coordinate systems match DirectX’s coordinate systems in a graphics pipeline.
3.4. Programming Model
3.4.1. Timelines
WebGPU’s behavior is described in terms of "timelines". Each operation (defined as algorithms) occurs on a timeline. Timelines clearly define both the order of operations, and which state is available to which operations.
Note: This "timeline" model describes the constraints of the multi-process models of browser engines (typically with a "content process" and "GPU process"), as well as the GPU itself as a separate execution unit in many implementations. Implementing WebGPU does not require timelines to execute in parallel, so does not require multiple processes, or even multiple threads. (It does require concurrency for cases like get a copy of the image contents of a context which synchronously blocks on another timeline to complete.)
Content timeline: Associated with the execution of the Web script. It includes calling all methods described by this specification.
To issue steps to the content timeline from an operation on GPUDevice device, queue a global task for GPUDevice device with those steps.
Device timeline: Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
Queue timeline: Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
Any timeline: Associated with any of the above timelines.
Steps may be issued to any timeline if they only operate on immutable properties or arguments passed from the calling steps.
The following show the styling of steps and values associated with each timeline. This styling is non-normative; the specification text always describes the association.
Can be used on any timeline.
Can only be used on the content timeline.
Can only be used on the device timeline.
Can only be used on the queue timeline.
In this specification, asynchronous operations are used when the return value depends on work that happens on any timeline other than the Content timeline. They are represented by promises and events in the API.
GPUBuffer.mapAsync():
User requests to map a GPUBuffer on the Content timeline and gets a promise in return.
User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
After the GPU operating on Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
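The three steps above can be modeled with a simplified mock object. `MockBuffer` is illustrative only, not the real GPUBuffer; the real flow spans multiple timelines and possibly multiple processes, which this single-threaded sketch collapses into two method calls.

```javascript
// Toy model of the mapAsync() flow: the promise resolves only after
// the queue timeline reports that GPU use of the buffer has finished.
class MockBuffer {
  constructor() {
    this.inUseByGpu = false;
    this.mapped = false;
    this._pendingResolve = null;
  }
  // Content timeline: request a mapping, get a promise back.
  mapAsync() {
    return new Promise((resolve) => {
      if (!this.inUseByGpu) {
        this.mapped = true;
        resolve();
      } else {
        this._pendingResolve = resolve; // "check back when usage is over"
      }
    });
  }
  // Queue timeline: GPU is done with the buffer; complete pending maps.
  gpuUseFinished() {
    this.inUseByGpu = false;
    if (this._pendingResolve) {
      this.mapped = true;
      this._pendingResolve();
      this._pendingResolve = null;
    }
  }
}
```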
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:
User agent implementing the specification.
Operating system with low-level native API drivers for this device.
Actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
The script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
A user agent may have different processes responsible for running the content and communication to the GPU driver. In this case, it uses inter-process shared memory to transfer data.
Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.
A physical resource can be used with an internal usage by a GPU command:
input: buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.
constant: resource bindings that are constant from the shader point of view. Preserves the contents. Allowed by buffer UNIFORM or texture TEXTURE_BINDING.
storage: read/write storage resource binding. Allowed by buffer STORAGE or texture STORAGE_BINDING.
storage-read: read-only storage resource bindings. Preserves the contents. Allowed by buffer STORAGE or texture STORAGE_BINDING.
attachment: texture used as a read/write output attachment or write-only resolve target in a render pass. Allowed by texture RENDER_ATTACHMENT.
attachment-read: texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.
We define subresource to be either a whole buffer, or a texture subresource.
Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
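As a non-normative illustration, the compatibility rule can be modeled as a small predicate: read-only (contents-preserving) usages may combine, while a writable usage must stand alone. The usage names below are illustrative labels, and the normative rule also includes the storage and attachment exceptions described in the examples that follow.

```javascript
// Non-normative sketch: a list of internal usages is compatible when it is
// empty, has a single usage, or contains only contents-preserving
// (read-only) usages. Usage names here are illustrative labels.
const READ_ONLY_USAGES = new Set(['input', 'constant', 'storage-read', 'attachment-read']);

function isCompatibleUsageList(usages) {
  if (usages.length <= 1) return true;
  return usages.every((u) => READ_ONLY_USAGES.has(u));
}

console.log(isCompatibleUsageList(['constant', 'storage-read'])); // two read-only usages: true
console.log(isCompatibleUsageList(['storage', 'constant']));      // writable combined with another: false
```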
EXAMPLE:
These rules allow for read-only depth-stencil: a single depth/stencil texture can be used as two different read-only usages in a render pass simultaneously:
As a depth/stencil attachment with all aspects marked read-only (using depthReadOnly and/or stencilReadOnly as necessary).
As a texture binding to a draw call.
EXAMPLE:
The usage scope storage exception allows two cases that would not be allowed otherwise:
A buffer or texture may be bound as storage to two different draw calls in a render pass.
Disjoint ranges of a single buffer may be bound to two different binding points as storage.
Overlapping ranges may not be bound to a single dispatch/draw call; this is checked by "Encoder bind groups alias a writable resource".
EXAMPLE:
The usage scope attachment exception allows a texture subresource to be used as attachment more than once. This is necessary to allow disjoint slices of 3D textures to be bound as different attachments to a single render pass.
One slice may not be bound twice for two different attachments; this is checked by beginRenderPass().
A usage scope is a map from subresource to list<internal usage>. Each usage scope covers a range of operations which may execute in a concurrent fashion with each other, and therefore may only use subresources in consistent compatible usage lists within the scope.
Usage scopes are constructed and validated during encoding:
The usage scopes are as follows:
In a compute pass, each dispatch command (dispatchWorkgroups() or dispatchWorkgroupsIndirect()) is one usage scope.
A subresource is used in the usage scope if it is potentially accessible by the dispatched invocations, including:
All subresources referenced by bind groups in slots used by the current GPUComputePipeline’s [[layout]]
Buffers used directly by dispatch calls (such as indirect buffers)
Note: State-setting compute pass commands, like setBindGroup(), do not contribute their bound resources directly to a usage scope: they only change the state that is checked in dispatch commands.
One render pass is one usage scope.
A subresource is used in the usage scope if it’s referenced by any command, including state-setting commands (unlike in compute passes). This includes:
Buffers set by setVertexBuffer()
Buffers set by setIndexBuffer()
All subresources referenced by bind groups set by setBindGroup()
Buffers used directly by draw calls (such as indirect buffers)
Note: Copy commands are standalone operations and don’t use usage scopes for validation. They implement their own validation to prevent self-races.
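To illustrate the difference in granularity, here is a hypothetical model of how a compute pass's commands partition into usage scopes: one scope per dispatch, where state-setting commands only affect subsequent dispatches. The command shapes are simplified (a single bind slot) and purely illustrative.

```javascript
// Non-normative sketch: partition a compute pass's command stream into
// usage scopes, one per dispatch. Only the resources bound at the time of
// the dispatch enter that dispatch's scope.
function computePassScopes(commands) {
  const scopes = [];
  let bound = new Set(); // hypothetical: a single bind slot, for simplicity
  for (const cmd of commands) {
    if (cmd.kind === 'setBindGroup') {
      bound = new Set(cmd.uses); // state change only; not a scope by itself
    } else if (cmd.kind === 'dispatch') {
      scopes.push([...bound]);   // each dispatch is one usage scope
    }
  }
  return scopes;
}

const scopes = computePassScopes([
  { kind: 'setBindGroup', uses: ['bufA'] },
  { kind: 'dispatch' },
  { kind: 'setBindGroup', uses: ['bufB'] },
  { kind: 'dispatch' },
]);
console.log(scopes); // two scopes: [['bufA'], ['bufB']]
```

In a render pass, by contrast, every referenced subresource would land in one single scope.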
EXAMPLE:
The following example resource usages are included in usage scopes:
In a render pass, subresources used in any setBindGroup() call, regardless of whether the currently bound pipeline’s shader or layout actually depends on these bindings, or the bind group is shadowed by another 'set' call.
A buffer used in any setVertexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
A buffer used in any setIndexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
A texture subresource used as a color attachment, resolve attachment, or depth/stencil attachment in GPURenderPassDescriptor by beginRenderPass(), regardless of whether the shader actually depends on these attachments.
Resources used in bind group entries with visibility 0, or visible only to the compute stage but used in a render pass (or vice versa).
An adapter identifies an implementation of WebGPU on the system: both an instance of compute/rendering functionality on the platform underlying a browser, and an instance of a browser’s implementation of WebGPU on top of that functionality.
Adapters are exposed via GPUAdapter.
Adapters do not uniquely represent underlying implementations: calling requestAdapter() multiple times returns a different adapter object each time.
Each adapter object can only be used to create one device: upon a successful requestDevice() call, the adapter’s [[state]] changes to "consumed". Additionally, adapter objects may expire at any time.
Note: This ensures applications use the latest system state for adapter selection when creating a device. It also encourages robustness to more scenarios by making them look similar: first initialization, reinitialization due to an unplugged adapter, reinitialization due to a test GPUDevice.destroy() call, etc.
An adapter may be considered a fallback adapter if it has significant performance caveats in exchange for some combination of wider compatibility, more predictable behavior, or improved privacy. It is not required that a fallback adapter is available on every system.
adapter has the following immutable properties:
[[features]], of type ordered set<GPUFeatureName>, readonly
The features which can be used to create devices on this adapter.
[[limits]], of type supported limits, readonly
The best limits which can be used to create devices on this adapter.
Each adapter limit must be the same or better than its default value in supported limits.
[[fallback]], of type boolean, readonly
If set to true, indicates that the adapter is a fallback adapter.
[[xrCompatible]], of type boolean
If set to true, indicates that the adapter was requested with compatibility with WebXR sessions.
adapter has the following device timeline properties:
[[state]], initially "valid"
"valid": The adapter can be used to create a device.
"consumed": The adapter has already been used to create a device, and cannot be used again.
"expired": The adapter has expired for some other reason.
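The state transitions above can be sketched as a small model. This is non-normative; AdapterModel and its methods are hypothetical stand-ins for user-agent-internal behavior.

```javascript
// Non-normative model of the adapter [[state]] lifecycle: "valid" until a
// successful requestDevice() consumes it, or until it expires.
class AdapterModel {
  constructor() { this.state = 'valid'; } // [[state]] is initially "valid"

  requestDevice() {
    if (this.state !== 'valid') throw new Error('adapter is ' + this.state);
    this.state = 'consumed'; // a successful requestDevice() consumes the adapter
    return {}; // placeholder for the created device
  }

  expire() { // adapters may expire at any time for other reasons
    if (this.state === 'valid') this.state = 'expired';
  }
}

const adapter = new AdapterModel();
adapter.requestDevice();
console.log(adapter.state); // consumed
```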
A device is the logical instantiation of an adapter, through which internal objects are created.
Devices are exposed via GPUDevice.
A device is the exclusive owner of all internal objects created from it: when the device becomes invalid (is lost or destroyed), it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become implicitly unusable.
device has the following immutable properties:
[[adapter]], of type adapter, readonly
The adapter from which this device was created.
[[features]], of type ordered set<GPUFeatureName>, readonly
The features which can be used on this device, as computed at creation. No additional features can be used, even if the underlying adapter can support them.
[[limits]], of type supported limits, readonly
The limits which can be used on this device, as computed at creation. No better limits can be used, even if the underlying adapter can support them.
device has the following content timeline properties:
[[content device]], of type GPUDevice, readonly
The content timeline GPUDevice interface which this device is associated with.
Any time the user agent needs to revoke access to a device, it calls lose the device(device, "unknown") on the device’s device timeline, potentially ahead of other operations currently queued on that timeline.
If an operation fails with side effects that would observably change the state of objects on the device or potentially corrupt internal implementation/driver state, the device should be lost to prevent these changes from being observable.
Note: For all device losses not initiated by the application (via destroy()), user agents should consider issuing developer-visible warnings unconditionally, even if the lost promise is handled. These scenarios should be rare, and the signal is vital to developers because most of the WebGPU API tries to behave like nothing is wrong to avoid interrupting the runtime flow of the application: no validation errors are raised, most promises resolve normally, etc.
To listen for timeline event event on device device, handled by steps on timeline timeline:
If or when the device timeline has been informed of the completion of event, or
if device is lost already, or when it becomes lost:
then issue steps on timeline.
3.6. Optional Capabilities
WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
The capabilities of an adapter must conform to § 4.2.1 Adapter Capability Guarantees.
Only supported capabilities may be requested in requestDevice(); requesting unsupported capabilities results in failure.
The capabilities of a device are determined in "a new device" by starting with the adapter’s defaults (no features and the default supported limits) and adding capabilities as requested in requestDevice(). These capabilities are enforced regardless of the capabilities of the adapter.
There is a tracking vector here. For privacy considerations, see § 2.2.1 Machine-specific features and limits.
3.6.1. Features
A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.
All features are optional, but adapters make some guarantees about their availability (see § 4.2.1 Adapter Capability Guarantees).
A device supports the exact set of features determined at creation (see § 3.6 Optional Capabilities). API calls perform validation according to these features (not the adapter’s features):
Using existing API surfaces in a new way typically results in a validation error.
There are several types of optional API surface:
Using a new method or enum value always throws a TypeError.
Using a new dictionary member with a (correctly-typed) non-default value typically results in a validation error.
Using a new WGSL enable directive always results in a createShaderModule() validation error.
See the Feature Index for a description of the functionality each feature enables.
Note: Enabling features may not necessarily be desirable, as doing so may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally only request features that they may actually require.
3.6.2. Limits
Each limit is a numeric limit on the usage of WebGPU on a device.
Note: Setting "better" limits may not necessarily be desirable, as doing so may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally only request better limits if they may actually require them.
Each limit has a default value.
Adapters are always guaranteed to support the defaults or better (see § 4.2.1 Adapter Capability Guarantees).
A device supports the exact set of limits determined at creation (see § 3.6 Optional Capabilities). API calls perform validation according to these limits (not the adapter’s limits), no better or worse.
For any given limit, some values are better than others. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit class, "better" is defined.
Different limits have different limit classes:
maximum
The limit enforces a maximum on some value passed into the API. Higher values are better. May only be set to values ≥ the default. Lower values are clamped to the default.
alignment
The limit enforces a minimum alignment on some value passed into the API; that is, the value must be a multiple of the limit. Lower values are better. May only be set to powers of 2 which are ≤ the default. Values which are not powers of 2 are invalid. Higher powers of 2 are clamped to the default.
Note: Setting "better" limits may not necessarily be desirable, as they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content (ideally, the default values).
A supported limits object has a value for every limit defined by WebGPU:
Each limit below is listed as: limit name (type, limit class; default), followed by its description.
maxTextureDimension1D (GPUSize32, maximum; default 8192)
The maximum allowed value for the size.width of a texture created with dimension "1d".
maxTextureDimension2D (GPUSize32, maximum; default 8192)
The maximum allowed value for the size.width and size.height of a texture created with dimension "2d".
maxTextureDimension3D (GPUSize32, maximum; default 2048)
The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers of a texture created with dimension "3d".
maxTextureArrayLayers (GPUSize32, maximum; default 256)
The maximum allowed value for the size.depthOrArrayLayers of a texture created with dimension "2d".
maxBindGroups (GPUSize32, maximum; default 4)
The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout.
maxBindGroupsPlusVertexBuffers (GPUSize32, maximum; default 24)
The maximum number of bind group and vertex buffer slots used simultaneously, counting any empty slots below the highest index. Validated in createRenderPipeline() and in draw calls.
maxBindingsPerBindGroup (GPUSize32, maximum; default 1000)
The number of binding indices available when creating a GPUBindGroupLayout.
Note: This limit is normative, but arbitrary. With the default binding slot limits, it is impossible to use 1000 bindings in one bind group, but this allows GPUBindGroupLayoutEntry.binding values up to 999. This limit allows implementations to treat binding space as an array, within reasonable memory space, rather than a sparse map structure.
maxDynamicUniformBuffersPerPipelineLayout (GPUSize32, maximum; default 8)
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers with dynamic offsets. See Exceeds the binding slot limits.
maxDynamicStorageBuffersPerPipelineLayout (GPUSize32, maximum; default 4)
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers with dynamic offsets. See Exceeds the binding slot limits.
maxSampledTexturesPerShaderStage (GPUSize32, maximum; default 16)
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are sampled textures. See Exceeds the binding slot limits.
maxSamplersPerShaderStage (GPUSize32, maximum; default 16)
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are samplers. See Exceeds the binding slot limits.
maxStorageBuffersPerShaderStage (GPUSize32, maximum; default 8)
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers. See Exceeds the binding slot limits.
maxStorageTexturesPerShaderStage (GPUSize32, maximum; default 4)
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage textures. See Exceeds the binding slot limits.
maxUniformBuffersPerShaderStage (GPUSize32, maximum; default 12)
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers. See Exceeds the binding slot limits.
maxUniformBufferBindingSize (GPUSize64, maximum; default 65536 bytes)
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform".
maxStorageBufferBindingSize (GPUSize64, maximum; default 134217728 bytes (128 MiB))
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage".
minUniformBufferOffsetAlignment (GPUSize32, alignment; default 256 bytes)
The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform".
minStorageBufferOffsetAlignment (GPUSize32, alignment; default 256 bytes)
The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage".
maxVertexBuffers (GPUSize32, maximum; default 8)
The maximum number of buffers when creating a GPURenderPipeline.
maxBufferSize (GPUSize64, maximum; default 268435456 bytes (256 MiB))
The maximum value of size when creating a GPUBuffer.
maxVertexAttributes (GPUSize32, maximum; default 16)
The maximum number of attributes in total across buffers when creating a GPURenderPipeline.
maxVertexBufferArrayStride (GPUSize32, maximum; default 2048 bytes)
The maximum allowed arrayStride when creating a GPURenderPipeline.
maxInterStageShaderVariables (GPUSize32, maximum; default 16)
The maximum allowed number of input or output variables for inter-stage communication (like vertex outputs or fragment inputs).
maxColorAttachments (GPUSize32, maximum; default 8)
The maximum allowed number of color attachments in GPURenderPipelineDescriptor.fragment.targets, GPURenderPassDescriptor.colorAttachments, and GPURenderPassLayout.colorFormats.
maxColorAttachmentBytesPerSample (GPUSize32, maximum; default 32)
The maximum number of bytes necessary to hold one sample (pixel or subpixel) of render pipeline output data, across all color attachments.
maxComputeWorkgroupStorageSize (GPUSize32, maximum; default 16384 bytes)
The maximum number of bytes of workgroup storage used for a compute stage GPUShaderModule entry-point.
maxComputeInvocationsPerWorkgroup (GPUSize32, maximum; default 256)
The maximum value of the product of the workgroup_size dimensions for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeX (GPUSize32, maximum; default 256)
The maximum value of the workgroup_size X dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeY (GPUSize32, maximum; default 256)
The maximum value of the workgroup_size Y dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeZ (GPUSize32, maximum; default 64)
The maximum value of the workgroup_size Z dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupsPerDimension (GPUSize32, maximum; default 65535)
The maximum value for the arguments of dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ).
3.6.2.1. GPUSupportedLimits
GPUSupportedLimits exposes an adapter or device’s supported limits. See GPUAdapter.limits and GPUDevice.limits.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures
GPUSupportedFeatures is a setlike interface. Its set entries are the GPUFeatureName values of the features supported by an adapter or device. It must only contain strings from the GPUFeatureName enum.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
Note: The type of the GPUSupportedFeatures set entries is DOMString to allow user agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec but which the user agent has not been updated to recognize yet. If the set entries type was GPUFeatureName, the following code would throw a TypeError rather than reporting false:
Check for support of an unrecognized feature:
if (adapter.features.has('unknown-feature')) {
    // Use unknown-feature
} else {
    console.warn('unknown-feature is not supported by this adapter.');
}

3.6.2.3. WGSLLanguageFeatures
WGSLLanguageFeatures is the setlike interface of navigator.gpu.wgslLanguageFeatures. Its set entries are the string names of the WGSL language extensions supported by the implementation (regardless of the adapter or device).
[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};

3.6.2.4. GPUAdapterInfo
GPUAdapterInfo exposes various identifying information about an adapter.
None of the members in GPUAdapterInfo are guaranteed to be populated with any particular value; if no value is provided, the attribute will return the empty string "". It is at the user agent’s discretion which values to reveal, and it is likely that on some devices none of the values will be populated. As such, applications must be able to handle any possible GPUAdapterInfo values, including the absence of those values.
The GPUAdapterInfo for an adapter is exposed via GPUAdapter.info and GPUDevice.adapterInfo. This info is immutable: for a given adapter, each GPUAdapterInfo attribute will return the same value every time it’s accessed.
Note: Though the GPUAdapterInfo attributes are immutable once accessed, an implementation may delay the decision on what to expose for each attribute until the first time it is accessed.
Note: Other GPUAdapter instances, even if they represent the same physical adapter, may expose different values in GPUAdapterInfo. However, they should expose the same values unless a specific event has increased the amount of identifying information the page is allowed to access. (No such events are defined by this specification.)
There is a tracking vector here. For privacy considerations, see § 2.2.6 Adapter Identifiers.
[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};
GPUAdapterInfo has the following attributes:
vendor, of type DOMString, readonly
The name of the vendor of the adapter, if available. Empty string otherwise.
architecture, of type DOMString, readonly
The name of the family or class of GPUs the adapter belongs to, if available. Empty string otherwise.
device, of type DOMString, readonly
A vendor-specific identifier for the adapter, if available. Empty string otherwise.
Note: This is a value that represents the type of adapter. For example, it may be a PCI device ID. It does not uniquely identify a given piece of hardware like a serial number.
description, of type DOMString, readonly
A human-readable string describing the adapter as reported by the driver, if available. Empty string otherwise.
Note: Because no formatting is applied to description, attempting to parse this value is not recommended. Applications which change their behavior based on the GPUAdapterInfo, such as applying workarounds for known driver issues, should rely on the other fields when possible.
subgroupMinSize, of type unsigned long, readonly
If the "subgroups" feature is supported, the minimum supported subgroup size for the adapter.
subgroupMaxSize, of type unsigned long, readonly
If the "subgroups" feature is supported, the maximum supported subgroup size for the adapter.
isFallbackAdapter, of type boolean, readonly
Whether the adapter is a fallback adapter.
To create a new adapter info for a given adapter adapter, run the following content timeline steps:
Let adapterInfo be a new GPUAdapterInfo.
If the vendor is known, set adapterInfo.vendor to the name of adapter’s vendor as a normalized identifier string. To preserve privacy, the user agent may instead set adapterInfo.vendor to the empty string or a reasonable approximation of the vendor as a normalized identifier string.
If the architecture is known, set adapterInfo.architecture to a normalized identifier string representing the family or class of adapters to which adapter belongs. To preserve privacy, the user agent may instead set adapterInfo.architecture to the empty string or a reasonable approximation of the architecture as a normalized identifier string.
If the device is known, set adapterInfo.device to a normalized identifier string representing a vendor-specific identifier for adapter. To preserve privacy, the user agent may instead set adapterInfo.device to the empty string or a reasonable approximation of a vendor-specific identifier as a normalized identifier string.
If a description is known, set adapterInfo.description to a description of the adapter as reported by the driver. To preserve privacy, the user agent may instead set adapterInfo.description to the empty string or a reasonable approximation of a description.
If "subgroups" is supported, set subgroupMinSize to the smallest supported subgroup size. Otherwise, set this value to 4.
Note: To preserve privacy, the user agent may choose to not support some features or provide values for the property which do not distinguish different devices, but are still usable (e.g. use the default value of 4 for all devices).
If "subgroups" is supported, set subgroupMaxSize to the largest supported subgroup size. Otherwise, set this value to 128.
Note: To preserve privacy, the user agent may choose to not support some features or provide values for the property which do not distinguish different devices, but are still usable (e.g. use the default value of 128 for all devices).
Set adapterInfo.isFallbackAdapter to adapter.[[fallback]].
Return adapterInfo.
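The steps above can be sketched as a plain function. This is non-normative: the raw input shape is hypothetical, and the privacy-preserving fallbacks of 4 and 128 for subgroup sizes follow the notes above.

```javascript
// Non-normative sketch of "new adapter info". The `raw` input models
// hypothetical driver-reported data; unknown fields become empty strings.
function newAdapterInfo(raw) {
  return {
    vendor: raw.vendor ?? '',
    architecture: raw.architecture ?? '',
    device: raw.device ?? '',
    description: raw.description ?? '',
    // When "subgroups" is unsupported, fall back to 4 and 128.
    subgroupMinSize: raw.supportsSubgroups ? raw.subgroupMinSize : 4,
    subgroupMaxSize: raw.supportsSubgroups ? raw.subgroupMaxSize : 128,
    isFallbackAdapter: raw.fallback ?? false,
  };
}

const info = newAdapterInfo({ vendor: 'examplecorp', fallback: false });
console.log(info.subgroupMinSize, info.subgroupMaxSize); // 4 128
```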
A normalized identifier string is one that follows the pattern [a-z0-9]+(-[a-z0-9]+)*: one or more groups of lowercase letters (a-z) and digits (0-9), with groups separated by single hyphens (-).
Examples of valid normalized identifier strings include:
gpu
3d
0x3b2f
next-gen
series-x20-ultra
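The pattern can be checked directly with a regular expression; the valid examples below come from the list above.

```javascript
// The normalized identifier string pattern, anchored to the whole string.
const NORMALIZED_ID = /^[a-z0-9]+(-[a-z0-9]+)*$/;

// Valid examples from the text above:
for (const s of ['gpu', '3d', '0x3b2f', 'next-gen', 'series-x20-ultra']) {
  console.assert(NORMALIZED_ID.test(s), s);
}

console.log(NORMALIZED_ID.test('Next-Gen')); // uppercase not allowed: false
console.log(NORMALIZED_ID.test('-gpu'));     // leading hyphen: false
```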
3.7. Extension Documents
"Extension Documents" are additional documents which describe new functionality which is non-normative and not part of the WebGPU/WGSL specifications. They describe functionality that builds upon these specifications, often including one or more new API feature flags and/or WGSL enable directives, or interactions with other draft web specifications.
WebGPU implementations must not expose extension functionality; doing so is a spec violation. New functionality does not become part of the WebGPU standard until it is integrated into the WebGPU specification (this document) and/or WGSL specification.
3.8. Origin Restrictions
WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.
WebGPU disallows uploading an image source if it is not origin-clean.
This also implies that the origin-clean flag for a canvas rendered using WebGPU will never be set to false.
For more information on issuing CORS requests for image and video elements, consult:
3.9. Task Sources
WebGPU defines a new task source called the WebGPU task source. It is used for the uncapturederror event and GPUDevice.lost.
WebGPU defines a new task source called the automatic expiry task source. It is used for the automatic, timed expiry (destruction) of certain objects:
GPUTextures returned by getCurrentTexture()
GPUExternalTextures created from HTMLVideoElements
Tasks from the automatic expiry task source should be processed with high priority; in particular, once queued, they should run before user-defined (JavaScript) tasks.
Note: This behavior is more predictable, and the strictness helps developers write more portable applications by eagerly detecting incorrect assumptions about implicit lifetimes that may be hard to detect. Developers are still strongly encouraged to test in multiple implementations.
Implementation note: It is valid to implement a high-priority expiry "task" by instead inserting additional steps at a fixed point inside the event loop processing model rather than running an actual task.
3.10. Color Spaces and Encoding
WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.
WebGPU does interface with color-managed outputs (via GPUCanvasConfiguration) and inputs (via copyExternalImageToTexture() and importExternalTexture()). Thus, color conversion must be performed between the WebGPU numeric values and the external color values. Each such interface point locally defines an encoding (color space, transfer function, and alpha premultiplication) in which the WebGPU numeric values are to be interpreted.
WebGPU allows all of the color spaces in the PredefinedColorSpace enum. Note, each color space is defined over an extended range, as defined by the referenced CSS definitions, to represent color values outside of its space (in both chrominance and luminance).
An out-of-gamut premultiplied RGBA value is one where any of the R/G/B channel values exceeds the alpha channel value. For example, the premultiplied sRGB RGBA value [1.0, 0, 0, 0.5] represents the (unpremultiplied) color [2, 0, 0] with 50% alpha, written color(srgb 2 0 0 / 50%) in CSS. Just like any color value outside the sRGB color gamut, this is a well defined point in the extended color space (except when alpha is 0, in which case there is no color). However, when such values are output to a visible canvas, the result is undefined (see GPUCanvasAlphaMode "premultiplied").
A color is converted between spaces by translating its representation in one space to a representation in another according to the definitions above.
If the source value has fewer than 4 RGBA channels, the missing green/blue/alpha channels are set to 0, 0, 1, respectively, before converting for color space/encoding and alpha premultiplication. After conversion, if the destination needs fewer than 4 channels, the additional channels are ignored.
Note: Grayscale images generally represent RGB values (V, V, V), or RGBA values (V, V, V, A) in their color space.
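The channel-defaulting rule can be sketched as follows (non-normative; expandToRGBA is a hypothetical helper name):

```javascript
// Non-normative sketch of channel defaulting: missing green/blue/alpha
// channels default to 0, 0, and 1 respectively before conversion.
function expandToRGBA(channels) {
  const [r, g = 0, b = 0, a = 1] = channels;
  return [r, g, b, a];
}

console.log(expandToRGBA([0.5]));           // single channel: [0.5, 0, 0, 1]
console.log(expandToRGBA([0.2, 0.4, 0.6])); // RGB source: alpha defaults to 1
```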
Colors are not lossily clamped during conversion: converting from one color space to another will result in values outside the range [0, 1] if the source color values were outside the range of the destination color space’s gamut. For an sRGB destination, for example, this can occur if the source is rgba16float, in a wider color space like Display-P3, or is premultiplied and contains out-of-gamut values.
Similarly, if the source value has a high bit depth (e.g. PNG with 16 bits per component) or extended range (e.g. canvas with float16
storage), these colors are preserved through color space conversion, with intermediate computations having at least the precision of the source.
If the source and destination of a color space/encoding conversion are the same, then conversion is not necessary. In general, if any given step of the conversion is an identity function (no-op), implementations should elide it, for performance.
For optimal performance, applications should set their color space and encoding options so that the number of necessary conversions is minimized throughout the process. For various image sources of GPUCopyExternalImageSourceInfo
:
Premultiplication is controlled via premultiplyAlpha
.
Color space is controlled via colorSpaceConversion
.
2d canvas:
Color space is controlled via the colorSpace
context creation attribute.
WebGL canvas:
Premultiplication is controlled via the premultipliedAlpha
option in WebGLContextAttributes
.
Color space is controlled via the WebGLRenderingContextBase
’s drawingBufferColorSpace
state.
Note: Check browser implementation support for these features before relying on them.
3.11. Numeric conversions from JavaScript to WGSL
Several parts of the WebGPU API (pipeline-overridable constants
and render pass clear values) take numeric values from WebIDL (double
or float
) and convert them to WGSL values (bool
, i32
, u32
, f32
, f16
).
To convert an IDL value idlValue of type double or float to WGSL type T, possibly throwing a TypeError, run the following device timeline steps:
Note: This TypeError
is generated in the device timeline and never surfaced to JavaScript.
Assert idlValue is a finite value, since it is not unrestricted double
or unrestricted float
.
Let v be the ECMAScript Number resulting from ! converting idlValue to an ECMAScript value.
bool
Return the WGSL bool
value corresponding to the result of ! converting v to an IDL value of type boolean
.
Note: This algorithm is called after the conversion from an ECMAScript value to an IDL double
or float
value. If the original ECMAScript value was a non-numeric, non-boolean value like []
or {}
, then the WGSL bool
result may be different than if the ECMAScript value had been converted to IDL boolean
directly.
i32
Return the WGSL i32
value corresponding to the result of ? converting v to an IDL value of type [EnforceRange
] long
.
u32
Return the WGSL u32
value corresponding to the result of ? converting v to an IDL value of type [EnforceRange
] unsigned long
.
f32
Return the WGSL f32
value corresponding to the result of ? converting v to an IDL value of type float
.
f16
Let wgslF32 be the WGSL f32
value corresponding to the result of ? converting v to an IDL value of type float
.
Return f16(wgslF32)
, the result of ! converting the WGSL f32
value to f16
as defined in WGSL floating point conversion.
Note: As long as the value is in-range of f32
, no error is thrown, even if the value is out-of-range of f16
.
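A rough JavaScript model of the conversion steps above (illustrative only; the real conversion happens inside the implementation, and the f16 case is omitted for brevity):

```javascript
// Sketch of the JS -> WGSL numeric conversion. v is assumed finite, matching
// the assert in the algorithm above. i32/u32 mirror [EnforceRange] semantics:
// truncate toward zero, then throw TypeError if out of range.
function idlValueToWgsl(v, type) {
  switch (type) {
    case 'bool':
      // Matches IDL boolean conversion of the Number: 0 is false, else true.
      return Boolean(v);
    case 'i32': {
      const t = Math.trunc(v);
      if (t < -0x80000000 || t > 0x7fffffff) throw new TypeError('out of range for i32');
      return t;
    }
    case 'u32': {
      const t = Math.trunc(v);
      if (t < 0 || t > 0xffffffff) throw new TypeError('out of range for u32');
      return t;
    }
    case 'f32': {
      // Round to nearest single-precision value; overflow to infinity throws,
      // since IDL float is restricted.
      const f = Math.fround(v);
      if (!Number.isFinite(f)) throw new TypeError('out of range for f32');
      return f;
    }
  }
}
```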
To convert a GPUColor color to a texel value of texture format format, possibly throwing a TypeError, run the following device timeline steps:
Note: This TypeError
is generated in the device timeline and never surfaced to JavaScript.
If the components of format (assert they all have the same type) are:
floating-point types or normalized types
Let T be f32
.
signed integer types
Let T be i32
.
unsigned integer types
Let T be u32
.
Let wgslColor be a WGSL value of type vec4<T>
, where the 4 components are the RGBA channels of color, each ? converted to WGSL type T.
Convert wgslColor to format using the same conversion rules as the § 23.2.7 Output Merging step, and return the result.
Note: For non-integer types, the exact choice of value is implementation-defined. For normalized types, the value is clamped to the range of the type.
Note: In other words, the value written will be as if it was written by a WGSL shader that outputs the value represented as a vec4
of f32
, i32
, or u32
.
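As an illustration of the clamping behavior for normalized types, here is one plausible conversion of a single channel value to an 8-bit unorm texel (the helper is ours; per the note above, the exact rounding choice is implementation-defined):

```javascript
// Convert one GPUColor channel to an 8-bit unorm texel value:
// clamp to the normalized range [0, 1], then scale to [0, 255].
// Round-to-nearest is one reasonable implementation choice.
function channelToUnorm8(value) {
  const clamped = Math.min(Math.max(value, 0), 1);
  return Math.round(clamped * 255);
}
```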
A GPU
object is available in the Window
and WorkerGlobalScope
contexts through the Navigator
and WorkerNavigator
interfaces respectively and is exposed via navigator.gpu
:
interface mixin NavigatorGPU
{
[SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;
NavigatorGPU
has the following attributes:
gpu
, of type GPU, readonly
A global singleton providing top-level entry points like requestAdapter()
.
GPU
is the entry point to WebGPU.
[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};
GPU
has the following methods:
requestAdapter(options)
Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.
getPreferredCanvasFormat()
Returns an optimal GPUTextureFormat
for displaying 8-bit depth, standard dynamic range content on this system. Must only return "rgba8unorm"
or "bgra8unorm"
.
The returned value can be passed as the format
to configure()
calls on a GPUCanvasContext
to ensure the associated canvas is able to display its contents efficiently.
Note: Canvases which are not displayed to the screen may or may not benefit from using this format.
GPU
has the following attributes:
wgslLanguageFeatures
, of type WGSLLanguageFeatures, readonly
The names of supported WGSL language extensions. Supported language extensions are automatically enabled.
Adapters may expire at any time. Upon any change in the system’s state that could affect the result of any requestAdapter()
call, the user agent should expire all previously-returned adapters. For example:
A physical adapter is added/removed (via plug/unplug, driver update, hang recovery, etc.)
The system’s power configuration has changed (laptop unplugged, power settings changed, etc.)
Note: User agents may choose to expire adapters often, even when there has been no system state change (e.g. seconds or minutes after the adapter was created). This can help obfuscate real system state changes, and make developers more aware that calling requestAdapter()
again is always necessary before calling requestDevice()
. If an application does encounter this situation, standard device-loss recovery handling should allow it to recover.
Requesting a GPUAdapter with no hints:
const gpuAdapter = await navigator.gpu.requestAdapter();
4.2.1. Adapter Capability Guarantees
Any GPUAdapter
returned by requestAdapter()
must provide the following guarantees:
At least one of the following must be true:
"texture-compression-bc"
is supported.
Both "texture-compression-etc2"
and "texture-compression-astc"
are supported.
If "texture-compression-bc-sliced-3d"
is supported, then "texture-compression-bc"
must be supported.
If "texture-compression-astc-sliced-3d"
is supported, then "texture-compression-astc"
must be supported.
All supported limits must be either the default value or better.
All alignment-class limits must be powers of 2.
maxBindingsPerBindGroup
must be ≥ (max bindings per shader stage × max shader stages per pipeline), where:
max bindings per shader stage is (maxSampledTexturesPerShaderStage
+ maxSamplersPerShaderStage
+ maxStorageBuffersPerShaderStage
+ maxStorageTexturesPerShaderStage
+ maxUniformBuffersPerShaderStage
).
max shader stages per pipeline is 2
, because a GPURenderPipeline
supports both a vertex and fragment shader.
Note: maxBindingsPerBindGroup
does not reflect a fundamental limit; implementations should raise it to conform to this requirement, rather than lowering the other limits.
maxBindGroups
must be ≤ maxBindGroupsPlusVertexBuffers
.
maxVertexBuffers
must be ≤ maxBindGroupsPlusVertexBuffers
.
minUniformBufferOffsetAlignment
and minStorageBufferOffsetAlignment
must both be ≥ 32 bytes.
Note: 32 bytes would be the alignment of vec4<f64>
. See WebGPU Shading Language § 14.4.1 Alignment and Size.
maxUniformBufferBindingSize
must be ≤ maxBufferSize
.
maxStorageBufferBindingSize
must be ≤ maxBufferSize
.
maxStorageBufferBindingSize
must be a multiple of 4 bytes.
maxVertexBufferArrayStride
must be a multiple of 4 bytes.
maxComputeWorkgroupSizeX
must be ≤ maxComputeInvocationsPerWorkgroup
.
maxComputeWorkgroupSizeY
must be ≤ maxComputeInvocationsPerWorkgroup
.
maxComputeWorkgroupSizeZ
must be ≤ maxComputeInvocationsPerWorkgroup
.
maxComputeInvocationsPerWorkgroup
must be ≤ maxComputeWorkgroupSizeX
× maxComputeWorkgroupSizeY
× maxComputeWorkgroupSizeZ
.
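The numeric guarantees above can be expressed as a small validation function over a GPUSupportedLimits-like object. This is an illustrative sketch covering a subset of the guarantees, not part of the API; a conformant adapter's limits must make it return true.

```javascript
// Check a subset of the adapter capability guarantees against a
// GPUSupportedLimits-like plain object.
function checkAdapterGuarantees(limits) {
  // Max bindings per shader stage, summed across binding classes.
  const perStage =
    limits.maxSampledTexturesPerShaderStage +
    limits.maxSamplersPerShaderStage +
    limits.maxStorageBuffersPerShaderStage +
    limits.maxStorageTexturesPerShaderStage +
    limits.maxUniformBuffersPerShaderStage;
  return (
    // 2 = max shader stages per pipeline (vertex + fragment).
    limits.maxBindingsPerBindGroup >= perStage * 2 &&
    limits.maxBindGroups <= limits.maxBindGroupsPlusVertexBuffers &&
    limits.maxVertexBuffers <= limits.maxBindGroupsPlusVertexBuffers &&
    limits.minUniformBufferOffsetAlignment >= 32 &&
    limits.minStorageBufferOffsetAlignment >= 32 &&
    limits.maxUniformBufferBindingSize <= limits.maxBufferSize &&
    limits.maxStorageBufferBindingSize <= limits.maxBufferSize &&
    limits.maxStorageBufferBindingSize % 4 === 0 &&
    limits.maxComputeWorkgroupSizeX <= limits.maxComputeInvocationsPerWorkgroup &&
    limits.maxComputeWorkgroupSizeY <= limits.maxComputeInvocationsPerWorkgroup &&
    limits.maxComputeWorkgroupSizeZ <= limits.maxComputeInvocationsPerWorkgroup &&
    limits.maxComputeInvocationsPerWorkgroup <=
      limits.maxComputeWorkgroupSizeX *
      limits.maxComputeWorkgroupSizeY *
      limits.maxComputeWorkgroupSizeZ
  );
}
```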
GPURequestAdapterOptions
provides hints to the user agent indicating what configuration is suitable for the application.
dictionary GPURequestAdapterOptions {
    DOMString featureLevel = "core";
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
    boolean xrCompatible = false;
};
enum GPUPowerPreference
{
"low-power",
"high-performance",
};
GPURequestAdapterOptions
has the following members:
featureLevel
, of type DOMString, defaulting to "core"
"Feature level" for the adapter request.
The allowed feature level string values are:
"core"
No effect.
"compatibility"
No effect.
Note: This value is reserved for future use as a way to opt into additional validation restrictions. Applications should not use this value at this time.
powerPreference
, of type GPUPowerPreference
Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.
The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.
Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU. This hint may also affect the power configuration of the selected GPU to match the requested power preference.
Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference
, the user agent is likely to select the same adapter.
It must be one of the following values:
undefined
(or not present)
Provides no hint to the user agent.
"low-power"
Indicates a request to prioritize power savings over performance.
Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.
"high-performance"
Indicates a request to prioritize performance over power consumption.
Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
forceFallbackAdapter
, of type boolean, defaulting to false
When set to true,
indicates that only a fallback adapter may be returned. If the user agent does not support a fallback adapter, this causes requestAdapter()
to resolve to null
.
Note: requestAdapter()
may still return a fallback adapter if forceFallbackAdapter
is set to false
and either no other appropriate adapter is available or the user agent chooses to return a fallback adapter. Developers that wish to prevent their applications from running on fallback adapters should check the info
.isFallbackAdapter
attribute prior to requesting a GPUDevice
.
xrCompatible
, of type boolean, defaulting to false
When set to true,
indicates that the best adapter for rendering to a WebXR session must be returned. If the user agent or system does not support WebXR sessions, adapter selection may ignore this value.
Note: If xrCompatible
is not set to true
when the adapter is requested, GPUDevice
s created from the adapter cannot be used to render for WebXR sessions.
Requesting a "high-performance" GPUAdapter:
const gpuAdapter = await navigator.gpu.requestAdapter({
    powerPreference: 'high-performance'
});
4.3.
GPUAdapter
A GPUAdapter
encapsulates an adapter, and describes its capabilities (features and limits).
To get a GPUAdapter
, use requestAdapter()
.
[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo info;
    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};
GPUAdapter
has the following immutable properties:
features
, of type GPUSupportedFeatures, readonly
The set of values in this
.[[adapter]]
.[[features]]
.
limits
, of type GPUSupportedLimits, readonly
The limits in this
.[[adapter]]
.[[limits]]
.
info
, of type GPUAdapterInfo, readonly
Information about the physical adapter underlying this GPUAdapter
.
For a given GPUAdapter
, the GPUAdapterInfo
values exposed are constant over time.
The same object is returned each time. To create that object for the first time:
[[adapter]]
, of type adapter, readonly
The adapter to which this GPUAdapter
refers.
GPUAdapter
has the following methods:
requestDevice(descriptor)
Requests a device from the adapter.
This is a one-time action: if a device is returned successfully, the adapter becomes "consumed"
.
:
If any of the following requirements are unmet:
Then issue the following steps on contentTimeline and return:
Note: This is the same error that is produced if a feature name isn’t known by the browser at all (in its GPUFeatureName
definition). This converges the behavior when the browser doesn’t support a feature with the behavior when a particular adapter doesn’t support a feature.
All of the requirements in the following steps must be met.
adapter.[[state]]
must not be "consumed"
.
For each [key, value] in descriptor.requiredLimits
for which value is not undefined
:
key must be the name of a member of supported limits.
value must be no better than adapter.[[limits]]
[key].
If key’s class is alignment, value must be a power of 2 less than 2^32.
Note: User agents should consider issuing developer-visible warnings when key is not recognized, even when value is undefined
.
If any are unmet, issue the following steps on contentTimeline and return:
If adapter.[[state]]
is "expired"
or the user agent otherwise cannot fulfill the request:
Let device be a new device.
Lose the device(device, "unknown"
).
Assert adapter.[[state]]
is "expired"
.
Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with requestAdapter()
.
Otherwise:
Let device be a new device with the capabilities described by descriptor.
Expire adapter.
Issue the subsequent steps on contentTimeline.
Requesting a GPUDevice with default features and limits:
const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();
4.3.1.
GPUDeviceDescriptor
GPUDeviceDescriptor
describes a device request.
dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, (GPUSize64 or undefined)> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};
GPUDeviceDescriptor
has the following members:
requiredFeatures
, of type sequence<GPUFeatureName>, defaulting to []
Specifies the features that are required by the device request. The request will fail if the adapter cannot provide these features.
Exactly the specified set of features, and no more or less, will be allowed in validation of API calls on the resulting device.
requiredLimits
, of type record<DOMString, (GPUSize64 or undefined)>
, defaulting to {}
Specifies the limits that are required by the device request. The request will fail if the adapter cannot provide these limits.
Each key with a non-undefined
value must be the name of a member of supported limits.
API calls on the resulting device perform validation according to the exact limits of the device (not the adapter; see § 3.6.2 Limits).
defaultQueue
, of type GPUQueueDescriptor, defaulting to {}
The descriptor for the default GPUQueue
.
Requesting a GPUDevice with the "texture-compression-astc" feature if supported:
const gpuAdapter = await navigator.gpu.requestAdapter();
const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
    requiredFeatures.push('texture-compression-astc');
}
const gpuDevice = await gpuAdapter.requestDevice({ requiredFeatures });
Requesting a GPUDevice with a higher maxColorAttachmentBytesPerSample limit:
const gpuAdapter = await navigator.gpu.requestAdapter();
if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
    // When the desired limit isn’t supported, take action to either fall back to a code
    // path that does not require the higher limit or notify the user that their device
    // does not meet minimum requirements.
}
// Request higher limit of max color attachments bytes per sample.
const gpuDevice = await gpuAdapter.requestDevice({
    requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
4.3.1.1.
GPUFeatureName
Each GPUFeatureName
identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.
enum GPUFeatureName {
    "core-features-and-limits",
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-bc-sliced-3d",
    "texture-compression-etc2",
    "texture-compression-astc",
    "texture-compression-astc-sliced-3d",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
    "float32-blendable",
    "clip-distances",
    "dual-source-blending",
    "subgroups",
};
4.4.
GPUDevice
A GPUDevice
encapsulates a device and exposes the functionality of that device.
GPUDevice
is the top-level interface through which WebGPU interfaces are created.
To get a GPUDevice
, use requestDevice()
.
[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo adapterInfo;
    [SameObject] readonly attribute GPUQueue queue;
    undefined destroy();
    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);
    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);
    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);
    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);
    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;
GPUDevice
has the following immutable properties:
features
, of type GPUSupportedFeatures, readonly
A set containing the GPUFeatureName
values of the features supported by the device ([[device]]
.[[features]]
).
limits
, of type GPUSupportedLimits, readonly
The limits supported by the device ([[device]]
.[[limits]]
).
queue
, of type GPUQueue, readonly
The primary GPUQueue
for this device.
adapterInfo
, of type GPUAdapterInfo, readonly
Information about the physical adapter which created the device that this GPUDevice
refers to.
For a given GPUDevice
, the GPUAdapterInfo
values exposed are constant over time.
The same object is returned each time. To create that object for the first time:
The [[device]]
for a GPUDevice
is the device that the GPUDevice
refers to.
GPUDevice
has the following methods:
destroy()
Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.
Note: It is valid to destroy a device multiple times.
Note: Since no further operations can be enqueued on this device, implementations can abort outstanding asynchronous operations immediately and free resource allocations, including mapped memory that was just unmapped.
A more robust example of requesting a GPUAdapter and GPUDevice with error handling:
let gpuDevice = null;

async function initializeWebGPU() {
    // Check to ensure the user agent supports WebGPU.
    if (!('gpu' in navigator)) {
        console.error("User agent doesn’t support WebGPU.");
        return false;
    }

    // Request an adapter.
    const gpuAdapter = await navigator.gpu.requestAdapter();

    // requestAdapter may resolve with null if no suitable adapters are found.
    if (!gpuAdapter) {
        console.error('No WebGPU adapters found.');
        return false;
    }

    // Request a device.
    // Note that the promise will reject if invalid options are passed to the optional
    // dictionary. To avoid the promise rejecting always check any features and limits
    // against the adapters features and limits prior to calling requestDevice().
    gpuDevice = await gpuAdapter.requestDevice();

    // requestDevice will never return null, but if a valid device request can’t be
    // fulfilled for some reason it may resolve to a device which has already been lost.
    // Additionally, devices can be lost at any time after creation for a variety of reasons
    // (ie: browser resource management, driver updates), so it’s a good idea to always
    // handle lost devices gracefully.
    gpuDevice.lost.then((info) => {
        console.error(`WebGPU device was lost: ${info.message}`);

        gpuDevice = null;

        // Many causes for lost devices are transient, so applications should try getting a
        // new device once a previous one has been lost unless the loss was caused by the
        // application intentionally destroying the device. Note that any WebGPU resources
        // created with the previous device (buffers, textures, etc) will need to be
        // re-created with the new one.
        if (info.reason != 'destroyed') {
            initializeWebGPU();
        }
    });

    onWebGPUInitialized();

    return true;
}

function onWebGPUInitialized() {
    // Begin creating WebGPU resources here...
}

initializeWebGPU();
5. Buffers
5.1.
GPUBuffer
A GPUBuffer
represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer
, subject to alignment restrictions depending on the operation. Some GPUBuffers
can be mapped, which makes the block of memory accessible via an ArrayBuffer
called its mapping.
GPUBuffer
s are created via createBuffer()
. Buffers may be mappedAtCreation
.
[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;
    readonly attribute GPUBufferMapState mapState;
    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();
    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};
GPUBuffer
has the following immutable properties:
size
, of type GPUSize64Out, readonly
The length of the GPUBuffer
allocation in bytes.
usage
, of type GPUFlagsConstant, readonly
The allowed usages for this GPUBuffer
.
GPUBuffer
has the following content timeline properties:
mapState
, of type GPUBufferMapState, readonly
The current GPUBufferMapState
of the buffer:
"unmapped"
The buffer is not mapped for use by this
.getMappedRange()
.
"pending"
A mapping of the buffer has been requested, but is pending. It may succeed, or fail validation in mapAsync()
.
"mapped"
The buffer is mapped and this
.getMappedRange()
may be used.
The getter steps are:
[[pending_map]]
, of type Promise
<void> or null
, initially null
The Promise
returned by the currently-pending mapAsync()
call.
There is never more than one pending map, because mapAsync()
will refuse immediately if a request is already in flight.
[[mapping]]
, of type active buffer mapping or null
, initially null
Set if and only if the buffer is currently mapped for use by getMappedRange()
. Null otherwise (even if there is a [[pending_map]]
).
An active buffer mapping is a structure with the following fields:
The mapping for this GPUBuffer
. This data is accessed through ArrayBuffer
s which are views onto this data, returned by getMappedRange()
and stored in views.
GPUMapModeFlags
The GPUMapModeFlags
of the map, as specified in the corresponding call to mapAsync()
or createBuffer()
.
unsigned long long
, unsigned long long
]
The range of this GPUBuffer
that is mapped.
ArrayBuffer
>
The ArrayBuffer
s returned via getMappedRange()
to the application. They are tracked so they can be detached when unmap()
is called.
To initialize an active buffer mapping with mode mode and range range, run the following content timeline steps:
Let size be range[1] - range[0].
Let data be ? CreateByteDataBlock(size).
NOTE:
This may result in a
RangeError
being thrown. For consistency and predictability:
For any size at which new ArrayBuffer()
would succeed at a given moment, this allocation should succeed at that moment.
For any size at which new ArrayBuffer()
deterministically throws a RangeError
, this allocation should as well.
Return an active buffer mapping with:
GPUBuffer
has the following device timeline properties:
[[internal state]]
The current internal state of the buffer:
GPUBufferDescriptor
dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
GPUBufferDescriptor
has the following members:
size
, of type GPUSize64
The size of the buffer in bytes.
usage
, of type GPUBufferUsageFlags
The allowed usages for the buffer.
mappedAtCreation
, of type boolean, defaulting to false
If true,
creates the buffer in an already mapped state, allowing getMappedRange()
to be called immediately. It is valid to set mappedAtCreation
to true
even if usage
does not contain MAP_READ
or MAP_WRITE
. This can be used to set the buffer’s initial data.
Guarantees that even if the buffer creation eventually fails, it will still appear as if the mapped range can be written to and read from until it is unmapped.
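This mechanism is commonly wrapped in a small helper that uploads initial data at creation time. A minimal sketch (the helper name createBufferWithData is ours, not part of the API):

```javascript
// Hypothetical helper: create a buffer with mappedAtCreation: true and write
// initial data into it, with no need for the MAP_WRITE usage flag.
function createBufferWithData(device, typedArray, usage) {
  const buffer = device.createBuffer({
    // Mapped-at-creation buffers must have a size that is a multiple of 4 bytes.
    size: Math.ceil(typedArray.byteLength / 4) * 4,
    usage,
    mappedAtCreation: true,
  });
  // getMappedRange() is usable immediately after creation.
  new typedArray.constructor(buffer.getMappedRange()).set(typedArray);
  buffer.unmap();
  return buffer;
}
```

For example, a vertex buffer could be created with createBufferWithData(device, vertexData, GPUBufferUsage.VERTEX), avoiding a separate writeBuffer() call.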
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ = 0x0001;
    const GPUFlagsConstant MAP_WRITE = 0x0002;
    const GPUFlagsConstant COPY_SRC = 0x0004;
    const GPUFlagsConstant COPY_DST = 0x0008;
    const GPUFlagsConstant INDEX = 0x0010;
    const GPUFlagsConstant VERTEX = 0x0020;
    const GPUFlagsConstant UNIFORM = 0x0040;
    const GPUFlagsConstant STORAGE = 0x0080;
    const GPUFlagsConstant INDIRECT = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
The GPUBufferUsage
flags determine how a GPUBuffer
may be used after its creation:
MAP_READ
The buffer can be mapped for reading. (Example: calling mapAsync()
with GPUMapMode.READ
)
May only be combined with COPY_DST
.
MAP_WRITE
The buffer can be mapped for writing. (Example: calling mapAsync()
with GPUMapMode.WRITE
)
May only be combined with COPY_SRC
.
COPY_SRC
The buffer can be used as the source of a copy operation. (Examples: as the source
argument of a copyBufferToBuffer() or copyBufferToTexture()
call.)
COPY_DST
The buffer can be used as the destination of a copy or write operation. (Examples: as the destination
argument of a copyBufferToBuffer() or copyTextureToBuffer()
call, or as the target of a writeBuffer()
call.)
INDEX
The buffer can be used as an index buffer. (Example: passed to setIndexBuffer()
.)
VERTEX
The buffer can be used as a vertex buffer. (Example: passed to setVertexBuffer()
.)
UNIFORM
The buffer can be used as a uniform buffer. (Example: as a bind group entry for a GPUBufferBindingLayout
with a buffer
.type
of "uniform"
.)
STORAGE
The buffer can be used as a storage buffer. (Example: as a bind group entry for a GPUBufferBindingLayout
with a buffer
.type
of "storage"
or "read-only-storage"
.)
INDIRECT
The buffer can be used to store indirect command arguments. (Examples: as the indirectBuffer
argument of a drawIndirect()
or dispatchWorkgroupsIndirect()
call.)
QUERY_RESOLVE
The buffer can be used to capture query results. (Example: as the destination
argument of a resolveQuerySet()
call.)
createBuffer(descriptor)
Creates a GPUBuffer
.
Creating a 128 byte uniform buffer that can be written into:
const buffer = gpuDevice.createBuffer({
    size: 128,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});
5.1.4. Buffer Destruction
An application that no longer requires a GPUBuffer
can choose to lose access to it before garbage collection by calling destroy()
. Destroying a buffer also unmaps it, freeing any memory allocated for the mapping.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer
once all previously submitted operations using it are complete.
GPUBuffer
has the following methods:
destroy()
Destroys the GPUBuffer
.
Note: It is valid to destroy a buffer multiple times.
Note: Since no further operations can be enqueued using this buffer, implementations can free resource allocations, including mapped memory that was just unmapped.
An application can request to map a GPUBuffer
so that they can access its content via ArrayBuffer
s that represent part of the GPUBuffer
’s allocations. Mapping a GPUBuffer
is requested asynchronously with mapAsync()
so that the user agent can ensure the GPU finished using the GPUBuffer
before the application can access its content. A mapped GPUBuffer
cannot be used by the GPU and must be unmapped using unmap()
before work using it can be submitted to the Queue timeline.
Once the GPUBuffer
is mapped, the application can synchronously ask for access to ranges of its content with getMappedRange()
. The returned ArrayBuffer
can only be detached by unmap()
(directly, or via GPUBuffer
.destroy()
or GPUDevice
.destroy()
), and cannot be transferred. A TypeError
is thrown by any other operation that attempts to do so.
typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
The GPUMapMode
flags determine how a GPUBuffer
is mapped when calling mapAsync()
:
READ
Only valid with buffers created with the MAP_READ
usage.
Once the buffer is mapped, calls to getMappedRange()
will return an ArrayBuffer
containing the buffer’s current values. Changes to the returned ArrayBuffer
will be discarded after unmap()
is called.
WRITE
Only valid with buffers created with the MAP_WRITE
usage.
Once the buffer is mapped, calls to getMappedRange()
will return an ArrayBuffer
containing the buffer’s current values. Changes to the returned ArrayBuffer
will be stored in the GPUBuffer
after unmap()
is called.
Note: Since the MAP_WRITE
buffer usage may only be combined with the COPY_SRC
buffer usage, mapping for writing can never return values produced by the GPU, and the returned ArrayBuffer
will only ever contain the default initialized data (zeros) or data written by the webpage during a previous mapping.
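The relationship between map modes and buffer usages described above can be sketched in JavaScript. This is a non-normative sketch; `mapModeValidForUsage` is an illustrative helper, not part of the API, and the flag constants mirror the values defined by `GPUMapMode` and `GPUBufferUsage`:

```javascript
// Non-normative sketch: which GPUMapMode is valid for a buffer's usage flags.
// READ requires the MAP_READ usage; WRITE requires MAP_WRITE.
const GPUMapMode = { READ: 0x0001, WRITE: 0x0002 };
const GPUBufferUsage = { MAP_READ: 0x0001, MAP_WRITE: 0x0002 };

function mapModeValidForUsage(mode, usage) {
  if (mode === GPUMapMode.READ)  return (usage & GPUBufferUsage.MAP_READ)  !== 0;
  if (mode === GPUMapMode.WRITE) return (usage & GPUBufferUsage.MAP_WRITE) !== 0;
  return false; // mode must contain exactly one of READ or WRITE
}
```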
GPUBuffer
has the following methods:
mapAsync(mode, offset, size)
Maps the given range of the GPUBuffer
and resolves the returned Promise
when the GPUBuffer
’s content is ready to be accessed with getMappedRange()
.
The resolution of the returned Promise
only indicates that the buffer has been mapped. It does not guarantee the completion of any other operations visible to the content timeline, and in particular does not imply that any other Promise
returned from onSubmittedWorkDone()
or mapAsync()
on other GPUBuffer
s have resolved.
The resolution of the Promise
returned from onSubmittedWorkDone()
does imply the completion of mapAsync()
calls made prior to that call, on GPUBuffer
s last used exclusively on that queue.
Content timeline steps:
If size is undefined:
Let rangeSize be max(0, this.size − offset).
Otherwise:
Let rangeSize be size.
If any of the following conditions are unsatisfied:
Set deviceLost to true
.
Issue the map failure steps on contentTimeline.
Return.
If any of the following conditions are unsatisfied:
Then:
Set deviceLost to false
.
Issue the map failure steps on contentTimeline.
Return.
Set this.[[internal state]]
to "unavailable".
Note: Since the buffer is mapped, its contents cannot change between this step and unmap()
.
When either of the following events occur (whichever comes first), or if either has already occurred:
The device timeline becomes informed of the completion of an unspecified queue timeline point:
after the completion of currently-enqueued operations that use this
and no later than the completion of all currently-enqueued operations (regardless of whether they use this).
this.[[device]]
becomes lost.
Then issue the subsequent steps on the device timeline of this.[[device]]
.
Device timeline steps:
Set deviceLost to true
if this.[[device]]
is lost, and false
otherwise.
Note: The device could have been lost between the previous block of steps and this one.
If deviceLost:
Issue the map failure steps on contentTimeline.
Otherwise:
Let internalStateAtCompletion be this.[[internal state]]
.
Note: If, and only if, at this point the buffer has become "available" again due to an unmap()
call, then [[pending_map]]
!= p below, so mapping will not succeed in the steps below.
Let dataForMappedRegion be the contents of this starting at offset offset, for rangeSize bytes.
Issue the map success steps on the contentTimeline.
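The range defaulting in the steps above can be sketched in JavaScript. This is a non-normative sketch; `resolveMapRangeSize` is an illustrative helper name, and validation of the resulting range (alignment and bounds checks) is omitted:

```javascript
// Non-normative sketch of the rangeSize defaulting in mapAsync(mode, offset, size).
// bufferSize stands in for this.size.
function resolveMapRangeSize(bufferSize, offset = 0, size = undefined) {
  if (size === undefined) {
    // "Let rangeSize be max(0, this.size - offset)."
    return Math.max(0, bufferSize - offset);
  }
  // "Otherwise: Let rangeSize be size."
  return size;
}
```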
getMappedRange(offset, size)
Returns an ArrayBuffer
with the contents of the GPUBuffer
in the given mapped range.
unmap()
Unmaps the mapped range of the GPUBuffer
and makes its contents available for use by the GPU again.
GPUTexture
A texture is made up of 1d
, 2d
, or 3d
arrays of data which can contain multiple values per-element to represent things like colors. Textures can be read and written in many ways, depending on the GPUTextureUsage
they are created with. For example, textures can be sampled, read, and written from render and compute pipeline shaders, and they can be written by render pass outputs. Internally, textures are often stored in GPU memory with a layout optimized for multidimensional access rather than linear access.
One texture consists of one or more texture subresources, each uniquely identified by a mipmap level and, for 2d
textures only, array layer and aspect.
A texture subresource is a subresource: each can be used in different internal usages within a single usage scope.
Each subresource in a mipmap level is approximately half the size, in each spatial dimension, of the corresponding resource in the lesser level (see logical miplevel-specific texture extent). The subresource in level 0 has the dimensions of the texture itself. Smaller levels are typically used to store lower resolution versions of the same image. GPUSampler
and WGSL provide facilities for selecting and interpolating between levels of detail, explicitly or automatically.
A "2d"
texture may be an array of array layers. Each subresource in a layer is the same size as the corresponding resources in other layers. For non-2d textures, all subresources have an array layer index of 0.
Each subresource has an aspect. Color textures have just one aspect: color. Depth-or-stencil format textures may have multiple aspects: a depth aspect, a stencil aspect, or both, and may be used in special ways, such as in depthStencilAttachment
and in "depth"
bindings.
A "3d"
texture may have multiple slices, each being the two-dimensional image at a particular z
value in the texture. Slices are not separate subresources.
[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});
    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;
GPUTexture
has the following immutable properties:
width
, of type GPUIntegerCoordinateOut, readonly
The width of this GPUTexture
.
height
, of type GPUIntegerCoordinateOut, readonly
The height of this GPUTexture
.
depthOrArrayLayers
, of type GPUIntegerCoordinateOut, readonly
The depth or layer count of this GPUTexture
.
mipLevelCount
, of type GPUIntegerCoordinateOut, readonly
The number of mip levels of this GPUTexture
.
sampleCount
, of type GPUSize32Out, readonly
The sample count of this GPUTexture
.
dimension
, of type GPUTextureDimension, readonly
The dimension of the set of texels for each of this GPUTexture
’s subresources.
format
, of type GPUTextureFormat, readonly
The format of this GPUTexture
.
usage
, of type GPUFlagsConstant, readonly
The allowed usages for this GPUTexture
.
[[viewFormats]]
, of type sequence<GPUTextureFormat
>
The set of GPUTextureFormat
s that can be used as the GPUTextureViewDescriptor
.format
when creating views on this GPUTexture
.
GPUTexture
has the following device timeline properties:
[[destroyed]]
, of type boolean
, initially false
If the texture is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.
The logical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel. It is calculated by this procedure:
The physical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel that includes the possible extra padding to form complete texel blocks in the texture. It is calculated by this procedure:
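Both procedures can be sketched in JavaScript. This is a non-normative sketch; `logicalMipExtent` and `physicalMipExtent` are illustrative helper names, assuming the usual halving rule (each level is max(1, previous level halved)) and padding the physical extent up to complete texel blocks:

```javascript
// Non-normative sketch of the logical miplevel-specific texture extent.
// Array layers of a "2d" texture are not reduced per mip level.
function logicalMipExtent(texture, mipLevel) {
  const extent = {
    width: Math.max(1, texture.width >> mipLevel),
    height: 1,
    depthOrArrayLayers: 1,
  };
  if (texture.dimension === "2d") {
    extent.height = Math.max(1, texture.height >> mipLevel);
    extent.depthOrArrayLayers = texture.depthOrArrayLayers;
  } else if (texture.dimension === "3d") {
    extent.height = Math.max(1, texture.height >> mipLevel);
    extent.depthOrArrayLayers = Math.max(1, texture.depthOrArrayLayers >> mipLevel);
  }
  return extent;
}

// Physical extent: width/height rounded up to complete texel blocks.
// blockWidth/blockHeight are 1 for pixel-based formats.
function physicalMipExtent(texture, mipLevel, blockWidth = 1, blockHeight = 1) {
  const e = logicalMipExtent(texture, mipLevel);
  return {
    width: Math.ceil(e.width / blockWidth) * blockWidth,
    height: Math.ceil(e.height / blockHeight) * blockHeight,
    depthOrArrayLayers: e.depthOrArrayLayers,
  };
}
```

For example, a 10×10 block-compressed texture (4×4 texel blocks) has a logical extent of 5×5 at mip level 1, but a physical extent of 8×8.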
6.1.1. GPUTextureDescriptor
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};
GPUTextureDescriptor
has the following members:
size
, of type GPUExtent3D
The width, height, and depth or layer count of the texture.
mipLevelCount
, of type GPUIntegerCoordinate, defaulting to 1
The number of mip levels the texture will contain.
sampleCount
, of type GPUSize32, defaulting to 1
The sample count of the texture. A sampleCount
> 1
indicates a multisampled texture.
dimension
, of type GPUTextureDimension, defaulting to "2d"
Whether the texture is one-dimensional, an array of two-dimensional layers, or three-dimensional.
format
, of type GPUTextureFormat
The format of the texture.
usage
, of type GPUTextureUsageFlags
The allowed usages for the texture.
viewFormats
, of type sequence<GPUTextureFormat>, defaulting to []
Specifies what view format
values will be allowed when calling createView()
on this texture (in addition to the texture’s actual format
).
NOTE:
Adding a format to this list may have a significant performance impact, so it is best to avoid adding formats unnecessarily.
The actual performance impact is highly dependent on the target system; developers must test various systems to find out the impact on their particular application. For example, on some systems any texture with a format
or viewFormats
entry including "rgba8unorm-srgb"
will perform less optimally than a "rgba8unorm"
texture which does not. Similar caveats exist for other formats and pairs of formats on other systems.
Formats in this list must be texture view format compatible with the texture format.
Two GPUTextureFormats format and viewFormat are texture view format compatible if:
format equals viewFormat, or
format and viewFormat differ only in whether they are srgb formats (have the -srgb suffix).
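This compatibility rule can be sketched in JavaScript. This is a non-normative sketch; `textureViewFormatCompatible` is an illustrative helper name:

```javascript
// Non-normative sketch of "texture view format compatible": the formats are
// equal, or become equal after stripping an "-srgb" suffix from either one.
function textureViewFormatCompatible(format, viewFormat) {
  const strip = (f) => f.endsWith("-srgb") ? f.slice(0, -"-srgb".length) : f;
  return format === viewFormat || strip(format) === strip(viewFormat);
}
```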
enum GPUTextureDimension
{
"1d",
"2d",
"3d",
};
"1d"
Specifies a texture that has one dimension, width. "1d"
textures cannot have mipmaps, be multisampled, use compressed or depth/stencil formats, or be used as a render target.
"2d"
Specifies a texture that has a width and height, and may have layers.
"3d"
Specifies a texture that has a width, height, and depth. "3d"
textures cannot be multisampled, and their format must support 3d textures (all plain color formats and some packed/compressed formats).
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;

[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};
The GPUTextureUsage
flags determine how a GPUTexture
may be used after its creation:
COPY_SRC
The texture can be used as the source of a copy operation. (Examples: as the source
argument of a copyTextureToTexture()
or copyTextureToBuffer()
call.)
COPY_DST
The texture can be used as the destination of a copy or write operation. (Examples: as the destination
argument of a copyTextureToTexture()
or copyBufferToTexture()
call, or as the target of a writeTexture()
call.)
TEXTURE_BINDING
The texture can be bound for use as a sampled texture in a shader (Example: as a bind group entry for a GPUTextureBindingLayout
.)
STORAGE_BINDING
The texture can be bound for use as a storage texture in a shader (Example: as a bind group entry for a GPUStorageTextureBindingLayout
.)
RENDER_ATTACHMENT
The texture can be used as a color or depth/stencil attachment in a render pass. (Example: as a GPURenderPassColorAttachment
.view
or GPURenderPassDepthStencilAttachment
.view
.)
createTexture(descriptor)
Creates a GPUTexture
.
Creating a 16x16, RGBA, 2D texture with one array layer and one mip level:
const texture = gpuDevice.createTexture({
    size: { width: 16, height: 16 },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING,
});

6.1.4. Texture Destruction
An application that no longer requires a GPUTexture
can choose to lose access to it before garbage collection by calling destroy()
.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture
once all previously submitted operations using it are complete.
GPUTexture
has the following methods:
destroy()
Destroys the GPUTexture
.
GPUTextureView
A GPUTextureView
is a view onto some subset of the texture subresources defined by a particular GPUTexture
.
[Exposed=(Window, Worker), SecureContext] interface GPUTextureView { }; GPUTextureView includes GPUObjectBase;
GPUTextureView
has the following immutable properties:
[[texture]]
, readonly
The GPUTexture
into which this is a view.
[[descriptor]]
, readonly
The GPUTextureViewDescriptor
describing this texture view.
All optional fields of GPUTextureViewDescriptor
are defined.
[[renderExtent]]
, readonly
For renderable views, this is the effective GPUExtent3DDict
for rendering.
Note: this extent depends on the baseMipLevel
.
dictionary GPUTextureViewDescriptor
: GPUObjectDescriptorBase {
GPUTextureFormat format;
GPUTextureViewDimension dimension;
GPUTextureUsageFlags usage = 0;
GPUTextureAspect aspect = "all";
GPUIntegerCoordinate baseMipLevel = 0;
GPUIntegerCoordinate mipLevelCount;
GPUIntegerCoordinate baseArrayLayer = 0;
GPUIntegerCoordinate arrayLayerCount;
};
GPUTextureViewDescriptor
has the following members:
format
, of type GPUTextureFormat
The format of the texture view. Must be either the format
of the texture or one of the viewFormats
specified during its creation.
dimension
, of type GPUTextureViewDimension
The dimension to view the texture as.
usage
, of type GPUTextureUsageFlags, defaulting to 0
The allowed usage(s)
for the texture view. Must be a subset of the usage
flags of the texture. If 0, defaults to the full set of usage
flags of the texture.
Note: If the view’s format
doesn’t support all of the texture’s usage
s, the default will fail, and the view’s usage
must be specified explicitly.
aspect
, of type GPUTextureAspect, defaulting to "all"
Which aspect(s)
of the texture are accessible to the texture view.
baseMipLevel
, of type GPUIntegerCoordinate, defaulting to 0
The first (most detailed) mipmap level accessible to the texture view.
mipLevelCount
, of type GPUIntegerCoordinate
How many mipmap levels, starting with baseMipLevel
, are accessible to the texture view.
baseArrayLayer
, of type GPUIntegerCoordinate, defaulting to 0
The index of the first array layer accessible to the texture view.
arrayLayerCount
, of type GPUIntegerCoordinate
How many array layers, starting with baseArrayLayer
, are accessible to the texture view.
enum GPUTextureViewDimension
{
"1d",
"2d",
"2d-array",
"cube",
"cube-array",
"3d",
};
"1d"
The texture is viewed as a 1-dimensional image.
Corresponding WGSL types:
texture_1d
texture_storage_1d
"2d"
The texture is viewed as a single 2-dimensional image.
Corresponding WGSL types:
texture_2d
texture_storage_2d
texture_multisampled_2d
texture_depth_2d
texture_depth_multisampled_2d
"2d-array"
The texture view is viewed as an array of 2-dimensional images.
Corresponding WGSL types:
texture_2d_array
texture_storage_2d_array
texture_depth_2d_array
"cube"
The texture is viewed as a cubemap.
The view has 6 array layers, each corresponding to a face of the cube in the order [+X, -X, +Y, -Y, +Z, -Z]
and the following orientations:
Note: When viewed from the inside, this results in a left-handed coordinate system where +X is right, +Y is up, and +Z is forward.
Sampling is done seamlessly across the faces of the cubemap.
Corresponding WGSL types:
texture_cube
texture_depth_cube
"cube-array"
The texture is viewed as a packed array of n cubemaps, each with 6 array layers behaving like one "cube"
view, for 6n array layers in total.
Corresponding WGSL types:
texture_cube_array
texture_depth_cube_array
"3d"
The texture is viewed as a 3-dimensional image.
Corresponding WGSL types:
texture_3d
texture_storage_3d
Each GPUTextureAspect
value corresponds to a set of aspects. The set of aspects are defined for each value below.
enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
"all"
All available aspects of the texture format will be accessible to the texture view. For color formats the color aspect will be accessible. For combined depth-stencil formats both the depth and stencil aspects will be accessible. Depth-or-stencil formats with a single aspect will only make that aspect accessible.
The set of aspects is [color, depth, stencil].
"stencil-only"
Only the stencil aspect of a depth-or-stencil format format will be accessible to the texture view.
The set of aspects is [stencil].
"depth-only"
Only the depth aspect of a depth-or-stencil format format will be accessible to the texture view.
The set of aspects is [depth].
createView(descriptor)
Creates a GPUTextureView
.
NOTE:
By default
createView()
will create a view with a dimension that can represent the entire texture. For example, calling
createView()
without specifying a
dimension
on a
"2d"
texture with more than one layer will create a
"2d-array"
GPUTextureView
, even if an
arrayLayerCount
of 1 is specified.
For textures created from sources where the layer count is unknown at the time of development it is recommended that calls to createView()
are provided with an explicit dimension
to ensure shader compatibility.
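The default dimension selection described in the note above can be sketched in JavaScript. This is a non-normative sketch; `defaultViewDimension` is an illustrative helper name:

```javascript
// Non-normative sketch of the view dimension createView() defaults to when
// the descriptor omits one: a dimension able to represent the whole texture.
function defaultViewDimension(texture) {
  switch (texture.dimension) {
    case "1d": return "1d";
    case "3d": return "3d";
    case "2d": return texture.depthOrArrayLayers > 1 ? "2d-array" : "2d";
  }
}
```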
The name of the format specifies the order of components, bits per component, and data type for the component.
r
, g
, b
, a
= red, green, blue, alpha
unorm
= unsigned normalized
snorm
= signed normalized
uint
= unsigned int
sint
= signed int
float
= floating point
If the format has the -srgb
suffix, then sRGB conversions from gamma to linear and vice versa are applied during the reading and writing of color values in the shader. Compressed texture formats are provided by features. Their naming should follow the convention here, with the texture name as a prefix. e.g. etc2-rgba8unorm
.
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormat
s, and a single compressed block of the textures in block-based compressed GPUTextureFormat
s.
The texel block width and texel block height specify the dimensions of one texel block.
For pixel-based GPUTextureFormat
s, the texel block width and texel block height are always 1.
For block-based compressed GPUTextureFormat
s, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block. See § 26.1 Texture Format Capabilities for an exhaustive list of values for every texture format.
The texel block copy footprint of an aspect of a GPUTextureFormat
is the number of bytes one texel block occupies during a texel copy, if applicable.
Note: The texel block memory cost of a GPUTextureFormat
is the number of bytes needed to store one texel block. It is not fully defined for all formats. This value is informative and non-normative.
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",

    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};
The depth component of the "depth24plus"
and "depth24plus-stencil8"
formats may be implemented as either a 24-bit depth value or a "depth32float"
value.
The stencil8
format may be implemented as either a real "stencil8", or "depth24stencil8", where the depth aspect is hidden and inaccessible.
NOTE:
While the precision of depth32float channels is strictly higher than the precision of 24-bit depth channels for all values in the representable range (0.0 to 1.0), note that the set of representable values is not an exact superset.
For 24-bit depth, 1 ULP has a constant value of 1 / (2^24 − 1).
For depth32float, 1 ULP has a variable value no greater than 1 / (2^24).
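To see that the value sets differ, note that 24-bit depth values have the form k / (2^24 − 1), which are generally not exactly representable as 32-bit floats. A quick non-normative check in JavaScript (Math.fround rounds a double to the nearest float32 value):

```javascript
const ulp24bit = 1 / (2 ** 24 - 1); // constant ULP of 24-bit unorm depth
const ulpFloat = 1 / 2 ** 24;       // largest depth32float ULP in [0, 1]
const coarser = ulp24bit > ulpFloat;                   // float32 steps are finer
const exactInF32 = Math.fround(ulp24bit) === ulp24bit; // is the unorm ULP a float32 value?
```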
A format is renderable if it is either a color renderable format, or a depth-or-stencil format. If a format is listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT
capability, it is a color renderable format. Any other format is not a color renderable format. All depth-or-stencil formats are renderable.
A renderable format is also blendable if it can be used with render pipeline blending. See § 26.1 Texture Format Capabilities.
A format is filterable if it supports the GPUTextureSampleType
"float"
(not just "unfilterable-float"
); that is, it can be used with "filtering"
GPUSampler
s. See § 26.1 Texture Format Capabilities.
Use of some texture formats require a feature to be enabled on the GPUDevice
. Because new formats can be added to the specification, those enum values may not be known by the implementation. In order to normalize behavior across implementations, attempting to use a format that requires a feature will throw an exception if the associated feature is not enabled on the device. This makes the behavior the same as when the format is unknown to the implementation.
See § 26.1 Texture Format Capabilities for information about which GPUTextureFormat
s require features.
GPUExternalTexture
A GPUExternalTexture
is a sampleable 2D texture wrapping an external video frame. It is an immutable snapshot; its contents cannot change over time, either from inside WebGPU (it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).
GPUExternalTexture
s can be bound into bind groups via the externalTexture
bind group layout entry member. Note that member uses several binding slots, as defined there.
NOTE:
GPUExternalTexture can be implemented without creating a copy of the imported source, but this depends on implementation-defined factors. Ownership of the underlying representation may either be exclusive or shared with other owners (such as a video decoder), but this is not visible to the application.
The underlying representation of an external texture is unobservable (except for precise sampling behavior), but typically may include:
Up to three 2D planes of data (e.g. RGBA, Y+UV, Y+U+V).
Metadata for converting coordinates before reading from those planes (crop and rotation).
Metadata for converting values into the specified output color space (matrices, gammas, 3D LUT).
The configuration used internally by an implementation may not be consistent across time, systems, user agents, media sources, or even frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:
three sampled texture bindings (for up to 3 planes),
one sampled texture binding for a 3D LUT,
one sampler binding to sample the 3D LUT, and
one uniform buffer binding for metadata.
[Exposed=(Window, Worker), SecureContext] interface GPUExternalTexture { }; GPUExternalTexture includes GPUObjectBase;
GPUExternalTexture
has the following immutable properties:
[[descriptor]]
, of type GPUExternalTextureDescriptor
, readonly
The descriptor with which the texture was created.
GPUExternalTexture
has the following content timeline properties:
[[expired]]
, of type boolean
, initially false
Indicates whether the object has expired (can no longer be used).
Note: Unlike [[destroyed]]
slots, which are similar, this can change from true
back to false
.
An external texture is created from an external video object using importExternalTexture()
.
An external texture created from an HTMLVideoElement
expires (is destroyed) automatically in a task after it is imported, instead of manually or upon garbage collection like other resources. When an external texture expires, its [[expired]]
slot changes to true
.
An external texture created from a VideoFrame
expires (is destroyed) when, and only when, the source VideoFrame
is closed, either explicitly by close()
, or by other means.
Note: As noted in decode()
, authors should call close()
on output VideoFrame
s to avoid decoder stalls. If an imported VideoFrame
is dropped without being closed, the imported GPUExternalTexture
object will keep it alive until it is also dropped. The VideoFrame
cannot be garbage collected until both objects are dropped. Garbage collection is unpredictable, so this may still stall the video decoder.
Once the GPUExternalTexture
expires, importExternalTexture()
must be called again. However, the user agent may un-expire and return the same GPUExternalTexture
again, instead of creating a new one. This will commonly happen unless the execution of the application is scheduled to match the video’s frame rate (e.g. using requestVideoFrameCallback()
). If the same object is returned again, it will compare equal, and GPUBindGroup
s, GPURenderBundle
s, etc. referencing the previous object can still be used.
dictionary GPUExternalTextureDescriptor
: GPUObjectDescriptorBase {
required (HTMLVideoElement or VideoFrame) source;
PredefinedColorSpace colorSpace = "srgb";
};
GPUExternalTextureDescriptor
dictionaries have the following members:
source
, of type (HTMLVideoElement or VideoFrame)
The video source to import the external texture from. Source size is determined as described by the external source dimensions table.
colorSpace
, of type PredefinedColorSpace, defaulting to "srgb"
The color space the image contents of source
will be converted into when reading.
importExternalTexture(descriptor)
Creates a GPUExternalTexture
wrapping the provided image source.
Called on: GPUDevice this.
Arguments:
Returns: GPUExternalTexture
Content timeline steps:
Let source be descriptor.source
.
If the current image contents of source are the same as the most recent importExternalTexture()
call with the same descriptor (ignoring label
), and the user agent chooses to reuse it:
Let previousResult be the GPUExternalTexture
returned previously.
Set previousResult.[[expired]]
to false
, renewing ownership of the underlying resource.
Let result be previousResult.
Note: This allows the application to detect duplicate imports and avoid re-creating dependent objects (such as GPUBindGroup
s). Implementations still need to be able to handle a single frame being wrapped by multiple GPUExternalTextures, since import metadata like colorSpace
can change even for the same frame.
Otherwise:
If source is not origin-clean, throw a SecurityError
and return.
Let usability be ? check the usability of the image argument(source).
If usability is not good
:
Return an invalidated GPUExternalTexture
.
Let data be the result of converting the current image contents of source into the color space descriptor.colorSpace
with unpremultiplied alpha.
This may result in values outside of the range [0, 1]. If clamping is desired, it may be performed after sampling.
Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.
Let result be a new GPUExternalTexture
object wrapping data.
If source is an HTMLVideoElement
, queue an automatic expiry task with device this and the following steps:
Set result.[[expired]]
to true
, releasing ownership of the underlying resource.
Note: An HTMLVideoElement
should be imported in the same task that samples the texture (which should generally be scheduled using requestVideoFrameCallback
or requestAnimationFrame()
depending on the application). Otherwise, a texture could get destroyed by these steps before the application is finished using it.
If source is a VideoFrame
, then when source is closed, run the following steps:
Return result.
Rendering using a video element external texture at the page animation frame rate:
const videoElement = document.createElement('video');
// ... set up videoElement, wait for it to be ready...

function frame() {
    requestAnimationFrame(frame);

    // Always re-import the video on every animation frame, because the
    // import is likely to have expired.
    // The browser may cache and reuse a past frame, and if it does it
    // may return the same GPUExternalTexture object again.
    // In this case, old bind groups are still valid.
    const externalTexture = gpuDevice.importExternalTexture({source: videoElement});

    // ... render using externalTexture...
}
requestAnimationFrame(frame);
Rendering using a video element external texture at the video’s frame rate, if requestVideoFrameCallback is available:
const videoElement = document.createElement('video');
// ... set up videoElement...

function frame() {
    videoElement.requestVideoFrameCallback(frame);

    // Always re-import, because we know the video frame has advanced
    const externalTexture = gpuDevice.importExternalTexture({source: videoElement});

    // ... render using externalTexture...
}
videoElement.requestVideoFrameCallback(frame);

6.5. Sampling External Texture Bindings
The externalTexture
binding point allows binding GPUExternalTexture
objects (from dynamic image sources like videos). It also supports GPUTextureView
.
Note: When a GPUTextureView
is bound to an externalTexture
binding, it is like a GPUExternalTexture
with a single RGBA plane and no crop, rotation, or color conversion.
External textures are represented in WGSL with texture_external
and may be read using textureLoad
and textureSampleBaseClampToEdge
.
The sampler
provided to textureSampleBaseClampToEdge
is used to sample the underlying textures.
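For example, a fragment shader binding and sampling an external texture might look like this (a non-normative WGSL sketch; the binding indices and identifiers are arbitrary):

```wgsl
@group(0) @binding(0) var extTex : texture_external;
@group(0) @binding(1) var linearSampler : sampler;

@fragment
fn main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
    // Samples the external texture at the base level with
    // clamp-to-edge behavior, as required for texture_external.
    return textureSampleBaseClampToEdge(extTex, linearSampler, uv);
}
```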
When the binding resource type is a GPUExternalTexture
, the result is in the color space set by colorSpace
. It is implementation-dependent whether, for any given external texture, the sampler (and filtering) is applied before or after conversion from underlying values into the specified color space.
Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.
7. Samplers

7.1. GPUSampler
A GPUSampler
encodes transformations and filtering information that can be used in a shader to interpret texture resource data.
GPUSampler
s are created via createSampler()
.
[Exposed=(Window, Worker), SecureContext] interface GPUSampler { }; GPUSampler includes GPUObjectBase;
GPUSampler
has the following immutable properties:
[[descriptor]]
, of type GPUSamplerDescriptor
, readonly
The GPUSamplerDescriptor
with which the GPUSampler
was created.
[[isComparison]]
, of type boolean
, readonly
Whether the GPUSampler
is used as a comparison sampler.
[[isFiltering]]
, of type boolean
, readonly
Whether the GPUSampler
weights multiple samples of a texture.
GPUSamplerDescriptor
A GPUSamplerDescriptor
specifies the options to use to create a GPUSampler
.
dictionary GPUSamplerDescriptor
: GPUObjectDescriptorBase {
GPUAddressMode addressModeU = "clamp-to-edge";
GPUAddressMode addressModeV = "clamp-to-edge";
GPUAddressMode addressModeW = "clamp-to-edge";
GPUFilterMode magFilter = "nearest";
GPUFilterMode minFilter = "nearest";
GPUMipmapFilterMode mipmapFilter = "nearest";
float lodMinClamp = 0;
float lodMaxClamp = 32;
GPUCompareFunction compare;
[Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU
, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeV
, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeW
, of type GPUAddressMode, defaulting to "clamp-to-edge"
Specifies the address modes
for the texture width, height, and depth coordinates, respectively.
magFilter
, of type GPUFilterMode, defaulting to "nearest"
Specifies the sampling behavior when the sampled area is smaller than or equal to one texel.
minFilter
, of type GPUFilterMode, defaulting to "nearest"
Specifies the sampling behavior when the sampled area is larger than one texel.
mipmapFilter
, of type GPUMipmapFilterMode, defaulting to "nearest"
Specifies behavior for sampling between mipmap levels.
lodMinClamp
, of type float, defaulting to 0
lodMaxClamp
, of type float, defaulting to 32
Specifies the minimum and maximum levels of detail, respectively, used internally when sampling a texture.
compare
, of type GPUCompareFunction
When provided the sampler will be a comparison sampler with the specified GPUCompareFunction
.
Note: Comparison samplers may use filtering, but the sampling results will be implementation-dependent and may differ from the normal filtering rules.
maxAnisotropy
, of type unsigned short, defaulting to 1
Specifies the maximum anisotropy value clamp used by the sampler. Anisotropic filtering is enabled when maxAnisotropy
is > 1 and the implementation supports it.
Anisotropic filtering improves the image quality of textures sampled at oblique viewing angles. Higher maxAnisotropy
values indicate the maximum ratio of anisotropy supported when filtering.
NOTE:
Most implementations support
maxAnisotropy
values in the range 1 to 16, inclusive. The used value of
maxAnisotropy
will be clamped to the maximum value that the platform supports.
The precise filtering behavior is implementation-dependent.
Level of detail (LOD) describes which mip level(s) are selected when sampling a texture. It may be specified explicitly through shader methods like textureSampleLevel or implicitly determined from the texture coordinate derivatives.
Note: See Scale Factor Operation, LOD Operation and Image Level Selection in the Vulkan 1.3 spec for an example of how implicit LODs may be calculated.
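As a non-normative illustration of how the LOD clamps interact with an implicitly computed LOD, the following JavaScript sketch (the function name and the scale-factor input are hypothetical, not part of the WebGPU API) applies `lodMinClamp`/`lodMaxClamp` to an implied LOD value:

```javascript
// Hypothetical sketch (not part of the WebGPU API): how a sampler's
// lodMinClamp/lodMaxClamp might constrain an implicitly computed LOD.
// The implied LOD here stands in for a value derived from texture
// coordinate derivatives; real implementations compute it per the
// underlying platform API.
function clampLod(impliedLod, { lodMinClamp = 0, lodMaxClamp = 32 } = {}) {
  return Math.min(Math.max(impliedLod, lodMinClamp), lodMaxClamp);
}

// A scale factor of 8 texels per pixel implies LOD log2(8) = 3.
const lod = clampLod(Math.log2(8), { lodMinClamp: 0, lodMaxClamp: 2 });
// lod is clamped down to 2, so mip level 2 is the highest level selected.
```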
GPUAddressMode
describes the behavior of the sampler if the sampled texels extend beyond the bounds of the sampled texture.
enum GPUAddressMode
{
"clamp-to-edge",
"repeat",
"mirror-repeat",
};
"clamp-to-edge"
Texture coordinates are clamped between 0.0 and 1.0, inclusive.
"repeat"
Texture coordinates wrap to the other side of the texture.
"mirror-repeat"
Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.
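The three address modes can be illustrated with a small JavaScript sketch (the function is hypothetical, not API code) that maps a normalized texture coordinate outside [0, 1] back into range:

```javascript
// Illustrative sketch (not part of the WebGPU API): how each
// GPUAddressMode maps a normalized coordinate back into [0, 1].
function applyAddressMode(coord, mode) {
  switch (mode) {
    case 'clamp-to-edge':
      return Math.min(Math.max(coord, 0), 1);
    case 'repeat':
      // Keep only the fractional part, wrapping negatives upward.
      return coord - Math.floor(coord);
    case 'mirror-repeat': {
      // Flip direction whenever the integer part is odd.
      const floor = Math.floor(coord);
      const frac = coord - floor;
      return (Math.abs(floor) % 2 === 1) ? 1 - frac : frac;
    }
  }
}

applyAddressMode(1.25, 'repeat');        // 0.25: wraps to the other side
applyAddressMode(1.25, 'mirror-repeat'); // 0.75: flipped, integer part is odd
```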
GPUFilterMode
and GPUMipmapFilterMode
describe the behavior of the sampler if the sampled area does not cover exactly one texel.
Note: See Texel Filtering in the Vulkan 1.3 spec for an example of how samplers may determine which texels are sampled from for the various filtering modes.
enum GPUFilterMode {
"nearest",
"linear",
};

enum GPUMipmapFilterMode {
"nearest",
"linear",
};
"nearest"
Return the value of the texel nearest to the texture coordinates.
"linear"
Select two texels in each dimension and return a linear interpolation between their values.
GPUCompareFunction
specifies the behavior of a comparison sampler. If a comparison sampler is used in a shader, the depth_ref
is compared to the fetched texel value, and the result of this comparison test is generated (1.0f
for pass, or 0.0f
for fail).
After comparison, if texture filtering is enabled, the filtering step occurs, so that comparison results are mixed together resulting in values in the range [0, 1]
. Filtering should behave as usual, however it may be computed with lower precision or not mix results at all.
enum GPUCompareFunction
{
"never",
"less",
"equal",
"less-equal",
"greater",
"not-equal",
"greater-equal",
"always",
};
"never"
Comparison tests never pass.
"less"
A provided value passes the comparison test if it is less than the sampled value.
"equal"
A provided value passes the comparison test if it is equal to the sampled value.
"less-equal"
A provided value passes the comparison test if it is less than or equal to the sampled value.
"greater"
A provided value passes the comparison test if it is greater than the sampled value.
"not-equal"
A provided value passes the comparison test if it is not equal to the sampled value.
"greater-equal"
A provided value passes the comparison test if it is greater than or equal to the sampled value.
"always"
Comparison tests always pass.
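The comparison test, and the way filtering can mix its results into values between 0 and 1, can be sketched in JavaScript (the function and the averaging scheme are illustrative assumptions, not API code or the normative filtering rules):

```javascript
// Illustrative sketch (not part of the WebGPU API): the comparison test a
// comparison sampler applies, producing 1.0 for pass and 0.0 for fail.
// depthRef is the reference value provided by the shader; texel is the
// fetched texel value.
function compareSample(depthRef, texel, compare) {
  switch (compare) {
    case 'never':         return 0.0;
    case 'less':          return depthRef <  texel ? 1.0 : 0.0;
    case 'equal':         return depthRef === texel ? 1.0 : 0.0;
    case 'less-equal':    return depthRef <= texel ? 1.0 : 0.0;
    case 'greater':       return depthRef >  texel ? 1.0 : 0.0;
    case 'not-equal':     return depthRef !== texel ? 1.0 : 0.0;
    case 'greater-equal': return depthRef >= texel ? 1.0 : 0.0;
    case 'always':        return 1.0;
  }
}

// With filtering enabled, results from several texels may be mixed,
// yielding values in [0, 1] (a simple average here, as in
// percentage-closer filtering):
const mixed = [0.4, 0.6, 0.5, 0.7]
  .map(texel => compareSample(0.55, texel, 'less'))
  .reduce((a, b) => a + b) / 4;
// mixed is 0.5: the reference passes against two of the four texels.
```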
createSampler(descriptor)
Creates a GPUSampler
.
Creating a
GPUSampler
that does trilinear filtering and repeats texture coordinates:
const sampler = gpuDevice.createSampler({
  addressModeU: 'repeat',
  addressModeV: 'repeat',
  magFilter: 'linear',
  minFilter: 'linear',
  mipmapFilter: 'linear',
});
8. Resource Binding
8.1. GPUBindGroupLayout
A GPUBindGroupLayout
defines the interface between a set of resources bound in a GPUBindGroup
and their accessibility in shader stages.
[Exposed=(Window, Worker), SecureContext] interface GPUBindGroupLayout { }; GPUBindGroupLayout includes GPUObjectBase;
GPUBindGroupLayout
has the following immutable properties:
[[descriptor]]
, of type GPUBindGroupLayoutDescriptor
, readonly
A GPUBindGroupLayout
is created via GPUDevice.createBindGroupLayout()
.
dictionary GPUBindGroupLayoutDescriptor
: GPUObjectDescriptorBase {
required sequence<GPUBindGroupLayoutEntry> entries;
};
GPUBindGroupLayoutDescriptor
dictionaries have the following members:
entries
, of type sequence<GPUBindGroupLayoutEntry>
A list of entries describing the shader resource bindings for a bind group.
A GPUBindGroupLayoutEntry
describes a single shader resource binding to be included in a GPUBindGroupLayout
.
dictionary GPUBindGroupLayoutEntry
{
required GPUIndex32 binding;
required GPUShaderStageFlags visibility;
GPUBufferBindingLayout buffer;
GPUSamplerBindingLayout sampler;
GPUTextureBindingLayout texture;
GPUStorageTextureBindingLayout storageTexture;
GPUExternalTextureBindingLayout externalTexture;
};
GPUBindGroupLayoutEntry
dictionaries have the following members:
binding
, of type GPUIndex32
A unique identifier for a resource binding within the GPUBindGroupLayout
, corresponding to a GPUBindGroupEntry.binding
and a @binding attribute in the GPUShaderModule
.
visibility
, of type GPUShaderStageFlags
A bitset of the members of GPUShaderStage
. Each set bit indicates that a GPUBindGroupLayoutEntry
’s resource will be accessible from the associated shader stage.
buffer
, of type GPUBufferBindingLayout
When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry
is GPUBufferBinding
.
sampler
, of type GPUSamplerBindingLayout
When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry
is GPUSampler
.
texture
, of type GPUTextureBindingLayout
When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry
is GPUTextureView
.
storageTexture
, of type GPUStorageTextureBindingLayout
When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry
is GPUTextureView
.
externalTexture
, of type GPUExternalTextureBindingLayout
When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry
is either GPUExternalTexture
or GPUTextureView
.
External textures use several binding slots: see Exceeds the binding slot limits.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;

[Exposed=(Window, Worker), SecureContext] namespace GPUShaderStage {
const GPUFlagsConstant VERTEX = 0x1;
const GPUFlagsConstant FRAGMENT = 0x2;
const GPUFlagsConstant COMPUTE = 0x4;
};
GPUShaderStage
contains the following flags, which describe which shader stages a corresponding GPUBindGroupEntry
for this GPUBindGroupLayoutEntry
will be visible to:
VERTEX
The bind group entry will be accessible to vertex shaders.
FRAGMENT
The bind group entry will be accessible to fragment shaders.
COMPUTE
The bind group entry will be accessible to compute shaders.
The binding member of a GPUBindGroupLayoutEntry
is determined by which member of the GPUBindGroupLayoutEntry
is defined: buffer
, sampler
, texture
, storageTexture
, or externalTexture
. Only one may be defined for any given GPUBindGroupLayoutEntry
. Each member has an associated GPUBindingResource
type and each binding type has an associated internal usage, given by this table:
enum GPUBufferBindingType {
"uniform",
"storage",
"read-only-storage",
};

dictionary GPUBufferBindingLayout {
GPUBufferBindingType type = "uniform";
boolean hasDynamicOffset = false;
GPUSize64 minBindingSize = 0;
};
GPUBufferBindingLayout
dictionaries have the following members:
type
, of type GPUBufferBindingType, defaulting to "uniform"
Indicates the type required for buffers bound to this binding.
hasDynamicOffset
, of type boolean, defaulting to false
Indicates whether this binding requires a dynamic offset.
minBindingSize
, of type GPUSize64, defaulting to 0
Indicates the minimum size
of a buffer binding used with this bind point.
Bindings are always validated against this size in createBindGroup()
.
If this is not 0
, pipeline creation additionally validates that this value ≥ the minimum buffer binding size of the variable.
If this is 0
, it is ignored by pipeline creation, and instead draw/dispatch commands validate that each binding in the GPUBindGroup
satisfies the minimum buffer binding size of the variable.
Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like sampleType
and format
, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize
which is expected to have the most ergonomic impact.
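The three places minBindingSize is checked can be condensed into a JavaScript sketch (hypothetical helper, not the spec's normative algorithms; effectiveSize stands for the binding's resolved byte size and shaderMinSize for the shader variable's minimum buffer binding size):

```javascript
// Illustrative sketch (not part of the WebGPU API): where minBindingSize
// is validated. Bind group creation always checks it; the shader
// variable's own minimum size is checked either at pipeline creation
// (minBindingSize != 0) or at draw/dispatch time (minBindingSize == 0).
function validateBindingSize(effectiveSize, minBindingSize, shaderMinSize) {
  if (effectiveSize < minBindingSize) {
    throw new Error('createBindGroup: binding smaller than minBindingSize');
  }
  if (minBindingSize !== 0 && minBindingSize < shaderMinSize) {
    throw new Error('pipeline creation: minBindingSize below shader variable minimum');
  }
  if (minBindingSize === 0 && effectiveSize < shaderMinSize) {
    throw new Error('draw/dispatch: binding smaller than shader variable minimum');
  }
}

validateBindingSize(64, 32, 16); // passes: 64 >= 32 and 32 >= 16
```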
enum GPUSamplerBindingType {
"filtering",
"non-filtering",
"comparison",
};

dictionary GPUSamplerBindingLayout {
GPUSamplerBindingType type = "filtering";
};
GPUSamplerBindingLayout
dictionaries have the following members:
type
, of type GPUSamplerBindingType, defaulting to "filtering"
Indicates the required type of a sampler bound to this binding.
enum GPUTextureSampleType {
"float",
"unfilterable-float",
"depth",
"sint",
"uint",
};

dictionary GPUTextureBindingLayout {
GPUTextureSampleType sampleType = "float";
GPUTextureViewDimension viewDimension = "2d";
boolean multisampled = false;
};
GPUTextureBindingLayout
dictionaries have the following members:
sampleType
, of type GPUTextureSampleType, defaulting to "float"
Indicates the type required for texture views bound to this binding.
viewDimension
, of type GPUTextureViewDimension, defaulting to "2d"
Indicates the required dimension
for texture views bound to this binding.
multisampled
, of type boolean, defaulting to false
Indicates whether or not texture views bound to this binding must be multisampled.
enum GPUStorageTextureAccess {
"write-only",
"read-only",
"read-write",
};

dictionary GPUStorageTextureBindingLayout {
GPUStorageTextureAccess access = "write-only";
required GPUTextureFormat format;
GPUTextureViewDimension viewDimension = "2d";
};
GPUStorageTextureBindingLayout
dictionaries have the following members:
access
, of type GPUStorageTextureAccess, defaulting to "write-only"
The access mode for this binding, indicating readability and writability.
format
, of type GPUTextureFormat
The required format
of texture views bound to this binding.
viewDimension
, of type GPUTextureViewDimension, defaulting to "2d"
Indicates the required dimension
for texture views bound to this binding.
dictionary GPUExternalTextureBindingLayout
{
};
A GPUBindGroupLayout
object has the following device timeline properties:
[[entryMap]]
, of type ordered map<GPUSize32
, GPUBindGroupLayoutEntry
>, readonly
The map of binding indices pointing to the GPUBindGroupLayoutEntry
s, which this GPUBindGroupLayout
describes.
[[dynamicOffsetCount]]
, of type GPUSize32
, readonly
The number of buffer bindings with dynamic offsets in this GPUBindGroupLayout
.
[[exclusivePipeline]]
, of type GPUPipelineBase
?, readonly
The pipeline that created this GPUBindGroupLayout
, if it was created as part of a default pipeline layout. If not null
, GPUBindGroup
s created with this GPUBindGroupLayout
can only be used with the specified GPUPipelineBase
.
createBindGroupLayout(descriptor)
Creates a GPUBindGroupLayout
.
Two GPUBindGroupLayout objects a and b are considered group-equivalent if and only if all of the following conditions are satisfied:
If bind group layouts are group-equivalent they can be used interchangeably in all contexts.
8.2. GPUBindGroup
A GPUBindGroup
defines a set of resources to be bound together in a group and how the resources are used in shader stages.
[Exposed=(Window, Worker), SecureContext] interface GPUBindGroup { }; GPUBindGroup includes GPUObjectBase;
GPUBindGroup
has the following device timeline properties:
[[layout]]
, of type GPUBindGroupLayout
, readonly
The GPUBindGroupLayout
associated with this GPUBindGroup
.
[[entries]]
, of type sequence<GPUBindGroupEntry
>, readonly
The set of GPUBindGroupEntry
s this GPUBindGroup
describes.
[[usedResources]]
, of type usage scope, readonly
The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.
A GPUBindGroup
is created via GPUDevice.createBindGroup()
.
dictionary GPUBindGroupDescriptor
: GPUObjectDescriptorBase {
required GPUBindGroupLayout layout;
required sequence<GPUBindGroupEntry> entries;
};
GPUBindGroupDescriptor
dictionaries have the following members:
layout
, of type GPUBindGroupLayout
The GPUBindGroupLayout
the entries of this bind group will conform to.
entries
, of type sequence<GPUBindGroupEntry>
A list of entries describing the resources to expose to the shader for each binding described by the layout
.
typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
required GPUIndex32 binding;
required GPUBindingResource resource;
};
A GPUBindGroupEntry
describes a single resource to be bound in a GPUBindGroup
, and has the following members:
binding
, of type GPUIndex32
A unique identifier for a resource binding within the GPUBindGroup
, corresponding to a GPUBindGroupLayoutEntry.binding
and a @binding attribute in the GPUShaderModule
.
resource
, of type GPUBindingResource
The resource to bind, which may be a GPUSampler
, GPUTextureView
, GPUExternalTexture
, or GPUBufferBinding
.
GPUBindGroupEntry
has the following device timeline properties:
[[prevalidatedSize]]
, of type boolean
Whether or not this binding entry had its buffer size validated at time of creation.
dictionary GPUBufferBinding
{
required GPUBuffer buffer;
GPUSize64 offset = 0;
GPUSize64 size;
};
A GPUBufferBinding
describes a buffer and optional range to bind as a resource, and has the following members:
buffer
, of type GPUBuffer
The GPUBuffer
to bind.
offset
, of type GPUSize64, defaulting to 0
The offset, in bytes, from the beginning of buffer
to the beginning of the range exposed to the shader by the buffer binding.
size
, of type GPUSize64
The size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset
and ending at the end of buffer
.
createBindGroup(descriptor)
Creates a GPUBindGroup
.
Two GPUBufferBinding objects a and b are considered buffer-binding-aliasing if and only if all of the following are true:
a.buffer and b.buffer are the same GPUBuffer.
The range formed by a.offset
and a.size
intersects the range formed by b.offset
and b.size
, where if a size
is unspecified, the range goes to the end of the buffer.
Note: When doing this calculation, any dynamic offsets have already been applied to the ranges.
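The range-intersection part of the check can be sketched in JavaScript (hypothetical helper, not the spec's normative algorithm; dynamic offsets are assumed to have been applied already, as the note above states):

```javascript
// Illustrative sketch (not part of the WebGPU API): do two buffer binding
// ranges intersect? An unspecified size extends the range to the end of
// the buffer.
function rangesIntersect(a, b, bufferSize) {
  const end = ({ offset = 0, size }) =>
    size === undefined ? bufferSize : offset + size;
  return (a.offset ?? 0) < end(b) && (b.offset ?? 0) < end(a);
}

// Two bindings into a 256-byte buffer:
rangesIntersect({ offset: 0, size: 128 }, { offset: 64 }, 256); // true: [0,128) overlaps [64,256)
rangesIntersect({ offset: 0, size: 64 },  { offset: 64 }, 256); // false: [0,64) and [64,256) only touch
```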
8.3. GPUPipelineLayout
A GPUPipelineLayout
defines the mapping between resources of all GPUBindGroup
objects set up during command encoding in setBindGroup(), and the shaders of the pipeline set by GPURenderCommandsMixin.setPipeline
or GPUComputePassEncoder.setPipeline
.
The full binding address of a resource can be defined as a trio of:
shader stage mask, to which the resource is visible
bind group index
binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup
(with the corresponding GPUBindGroupLayout
) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Exposed=(Window, Worker), SecureContext] interface GPUPipelineLayout { }; GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout
has the following device timeline properties:
[[bindGroupLayouts]]
, of type list<GPUBindGroupLayout
>, readonly
The GPUBindGroupLayout
objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts
.
Note: using the same GPUPipelineLayout
for many GPURenderPipeline
or GPUComputePipeline
pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
GPUComputePipeline
object X was created with
GPUPipelineLayout.bindGroupLayouts
A, B, C.
GPUComputePipeline
object Y was created with
GPUPipelineLayout.bindGroupLayouts
A, D, C. Supposing the command encoding sequence has two dispatches:
setBindGroup(0, ...)
setBindGroup(1, ...)
setBindGroup(2, ...)
setPipeline
(X)
setBindGroup(1, ...)
setPipeline
(Y)
In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout
at index 2 of GPUPipelineLayout.bindGroupLayouts
, nor the GPUBindGroup
at slot 2, changed.
Note: the expected usage of the GPUPipelineLayout
is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
A GPUPipelineLayout
is created via GPUDevice.createPipelineLayout()
.
dictionary GPUPipelineLayoutDescriptor
: GPUObjectDescriptorBase {
required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};
GPUPipelineLayoutDescriptor
dictionaries define all the GPUBindGroupLayout
s used by a pipeline, and have the following members:
bindGroupLayouts
, of type sequence<GPUBindGroupLayout?>
A list of optional GPUBindGroupLayout
s the pipeline will use. Each element corresponds to a @group attribute in the GPUShaderModule
, with the N
th element corresponding with @group(N)
.
createPipelineLayout(descriptor)
Creates a GPUPipelineLayout
.
Note: two GPUPipelineLayout
objects are considered equivalent for any usage if their internal [[bindGroupLayouts]]
sequences contain GPUBindGroupLayout
objects that are group-equivalent.
Create a
GPUBindGroupLayout
that describes a binding with a uniform buffer, a texture, and a sampler. Then create a
GPUBindGroup
and a
GPUPipelineLayout
using the
GPUBindGroupLayout
.
const bindGroupLayout = gpuDevice.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
    buffer: {}
  }, {
    binding: 1,
    visibility: GPUShaderStage.FRAGMENT,
    texture: {}
  }, {
    binding: 2,
    visibility: GPUShaderStage.FRAGMENT,
    sampler: {}
  }]
});

const bindGroup = gpuDevice.createBindGroup({
  layout: bindGroupLayout,
  entries: [{
    binding: 0,
    resource: { buffer: buffer },
  }, {
    binding: 1,
    resource: texture
  }, {
    binding: 2,
    resource: sampler
  }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
  bindGroupLayouts: [bindGroupLayout]
});
9. Shader Modules
9.1. GPUShaderModule
[Exposed=(Window, Worker), SecureContext] interface GPUShaderModule { Promise<GPUCompilationInfo> getCompilationInfo(); }; GPUShaderModule includes GPUObjectBase;
GPUShaderModule
is a reference to an internal shader module object.
dictionary GPUShaderModuleDescriptor
: GPUObjectDescriptorBase {
required USVString code;
sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
code
, of type USVString
The WGSL source code for the shader module.
compilationHints
, of type sequence<GPUShaderModuleCompilationHint>, defaulting to []
A list of GPUShaderModuleCompilationHint
s.
Any hint provided by an application should contain information about one entry point of a pipeline that will eventually be created from the entry point.
Implementations should use any information present in the GPUShaderModuleCompilationHint
to perform as much compilation as is possible within createShaderModule()
.
Aside from type-checking, these hints are not validated in any way.
NOTE:
Supplying information in
compilationHints
does not have any observable effect, other than performance. It may be detrimental to performance to provide hints for pipelines that never end up being created.
Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in createShaderModule()
rather than multiple times in the multiple calls to createComputePipeline()
or createRenderPipeline()
.
Hints are only applied to the entry points they explicitly name. Unlike GPUProgrammableStage.entryPoint
, there is no default, even if only one entry point is present in the module.
Note: Hints are not validated in an observable way, but user agents may surface identifiable errors (like unknown entry point names or incompatible pipeline layouts) to developers, for example in the browser developer console.
createShaderModule(descriptor)
Creates a GPUShaderModule
.
Let error be any error that results from shader module creation with the WGSL source descriptor.code
, or null
if no errors occurred.
If any of the following requirements are unmet, generate a validation error, invalidate sm, and return.
Note: Uncategorized errors cannot arise from shader module creation. Implementations which detect such errors during shader module creation must behave as if the shader module is valid, and defer surfacing the error until pipeline creation.
NOTE:
User agents should not include detailed compiler error messages or shader text in the message
text of validation errors arising here: these details are accessible via getCompilationInfo()
. User agents should surface human-readable, formatted error details to developers for easier debugging (for example as a warning in the browser developer console, expandable to show full shader source).
As shader compilation errors should be rare in production applications, user agents could choose to surface them to developers regardless of error handling (GPU error scopes or uncapturederror
event handlers), e.g. as an expandable warning. If not, they should provide and document another way for developers to access human-readable error details, for example by adding a checkbox to show errors unconditionally, or by showing human-readable details when logging a GPUCompilationInfo
object to the console.
Create a
GPUShaderModule
from WGSL code:
// A simple vertex and fragment shader pair that will fill the viewport with red.
const shaderSource = `
  var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
    vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

  @vertex
  fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
    return vec4(pos[vertexIndex], 1.0, 1.0);
  }

  @fragment
  fn fragmentMain() -> @location(0) vec4<f32> {
    return vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

const shaderModule = gpuDevice.createShaderModule({
  code: shaderSource,
});
9.1.1.1. Shader Module Compilation Hints
Shader module compilation hints are optional, additional information indicating how a given GPUShaderModule
entry point is intended to be used in the future. For some implementations this information may aid in compiling the shader module earlier, potentially increasing performance.
dictionary GPUShaderModuleCompilationHint {
required USVString entryPoint;
(GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout
, of type (GPUPipelineLayout or GPUAutoLayoutMode)
A GPUPipelineLayout
that the GPUShaderModule
may be used with in a future createComputePipeline()
or createRenderPipeline()
call. If set to "auto"
, the default pipeline layout for the entry point associated with this hint will be used.
enum GPUCompilationMessageType {
"error",
"warning",
"info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
readonly attribute DOMString message;
readonly attribute GPUCompilationMessageType type;
readonly attribute unsigned long long lineNum;
readonly attribute unsigned long long linePos;
readonly attribute unsigned long long offset;
readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
A GPUCompilationMessage
is an informational, warning, or error message generated by the GPUShaderModule
compiler. The messages are intended to be human readable to help developers diagnose issues with their shader code
. Each message may correspond to either a single point in the shader code, a substring of the shader code, or may not correspond to any specific point in the code at all.
GPUCompilationMessage
has the following attributes:
message
, of type DOMString, readonly
The human-readable, localizable text for this compilation message.
Note: The message
should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.
Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.
type
, of type GPUCompilationMessageType, readonly
The severity level of the message.
If the type
is "error"
, it corresponds to a shader-creation error.
lineNum
, of type unsigned long long, readonly
The line number in the shader code
the message
corresponds to. Value is one-based, such that a lineNum of 1
indicates the first line of the shader code
. Lines are delimited by line breaks.
If the message
corresponds to a substring this points to the line on which the substring begins. Must be 0
if the message
does not correspond to any specific point in the shader code
.
linePos
, of type unsigned long long, readonly
The offset, in UTF-16 code units, from the beginning of line lineNum
of the shader code
to the point or beginning of the substring that the message
corresponds to. Value is one-based, such that a linePos
of 1
indicates the first code unit of the line.
If message
corresponds to a substring this points to the first UTF-16 code unit of the substring. Must be 0
if the message
does not correspond to any specific point in the shader code
.
offset
, of type unsigned long long, readonly
The offset from the beginning of the shader code
in UTF-16 code units to the point or beginning of the substring that message
corresponds to. Must reference the same position as lineNum
and linePos
. Must be 0
if the message
does not correspond to any specific point in the shader code
.
length
, of type unsigned long long, readonly
The number of UTF-16 code units in the substring that message
corresponds to. If the message does not correspond with a substring then length
must be 0.
Note: GPUCompilationMessage
.lineNum
and GPUCompilationMessage
.linePos
are one-based since the most common use for them is expected to be printing human readable messages that can be correlated with the line and column numbers shown in many text editors.
Note: GPUCompilationMessage
.offset
and GPUCompilationMessage
.length
are appropriate to pass to substr()
in order to retrieve the substring of the shader code
the message
corresponds to.
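The relationship between offset, length, lineNum, and linePos can be demonstrated with a JavaScript sketch (a hypothetical helper, not part of the API) that extracts the referenced substring and re-derives the one-based line and column from the offset:

```javascript
// Illustrative sketch (not part of the WebGPU API): given shader code and
// a GPUCompilationMessage-style offset/length (UTF-16 code units), extract
// the offending substring and compute the one-based lineNum/linePos.
function locate(code, offset, length) {
  const before = code.slice(0, offset);
  const lines = before.split('\n');
  return {
    lineNum: lines.length,                       // one-based line number
    linePos: lines[lines.length - 1].length + 1, // one-based column
    snippet: code.substr(offset, length),        // substring the message refers to
  };
}

const wgsl = 'fn main() {\n  retur 1;\n}';
locate(wgsl, 14, 5); // { lineNum: 2, linePos: 3, snippet: 'retur' }
```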
getCompilationInfo()
Returns any messages generated during the GPUShaderModule
’s compilation.
The locations, order, and contents of messages are implementation-defined. In particular, messages may not be ordered by lineNum
.
Content timeline steps:
Let info be a new GPUCompilationInfo
.
Let messages be a list of any errors, warnings, or informational messages generated during shader module creation for this, or the empty list []
if the device was lost.
For each message in messages:
Let m be a new GPUCompilationMessage
.
Set m.message
to be the text of message.
If message refers to a substring or position within the shader code:
Set m.lineNum
to the one-based number of the first line that the message refers to.
Set m.linePos
to the one-based number of the first UTF-16 code unit on m.lineNum
that the message refers to, or 1
if the message refers to the entire line.
Set m.offset
to the number of UTF-16 code units from the beginning of the shader to the beginning of the substring or position that message refers to.
Set m.length
to the length of the substring in UTF-16 code units that message refers to, or 0 if message refers to a position.
Append m to info.messages.
Resolve promise with info.
A pipeline, be it GPUComputePipeline
or GPURenderPipeline
, represents the complete function performed by a combination of the GPU hardware, the driver, and the user agent, which processes input data in the shape of bindings and vertex buffers, and produces some output, like the colors in the output render targets.
Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.
Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.
This combination state is created as a single object (a GPUComputePipeline
or GPURenderPipeline
) and switched using one command (GPUComputePassEncoder
.setPipeline()
or GPURenderCommandsMixin
.setPipeline()
respectively).
There are two ways to create pipelines:
createComputePipeline()
and createRenderPipeline()
return a pipeline object which can be used immediately in a pass encoder.
When this fails, the pipeline object will be invalid and the call will generate either a validation error or an internal error.
Note: A handle object is returned immediately, but actual pipeline creation is not synchronous. If pipeline creation takes a long time, this can incur a stall in the device timeline at some point between the creation call and execution of the submit()
in which it is first used. The point is unspecified, but most likely to be one of: at creation, at the first usage of the pipeline in setPipeline()
, at the corresponding finish()
of that GPUCommandEncoder
or GPURenderBundleEncoder
, or at submit()
of that GPUCommandBuffer
.
createComputePipelineAsync()
and createRenderPipelineAsync()
return a Promise
which resolves to a pipeline object when creation of the pipeline has completed.
When this fails, the Promise
rejects with a GPUPipelineError
.
GPUPipelineError
describes a pipeline creation failure.
[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
constructor(optional DOMString message = "", GPUPipelineErrorInit options);
readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
"validation",
"internal",
};
GPUPipelineError
constructor:
constructor()
GPUPipelineError
has the following attributes:
reason
, of type GPUPipelineErrorReason, readonly
A read-only slot-backed attribute exposing the type of error encountered in pipeline creation as a GPUPipelineErrorReason
:
"validation"
: A validation error.
"internal"
: An internal error.
GPUPipelineError
objects are serializable objects.
enum GPUAutoLayoutMode {
"auto",
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout
, of type (GPUPipelineLayout or GPUAutoLayoutMode)
The GPUPipelineLayout
for this pipeline, or "auto"
to generate the pipeline layout automatically.
Note: If "auto"
is used the pipeline cannot share GPUBindGroup
s with any other pipelines.
interface mixin GPUPipelineBase
{
[NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};
GPUPipelineBase
has the following device timeline properties:
[[layout]]
, of type GPUPipelineLayout
The definition of the layout of resources which can be used with this pipeline.
GPUPipelineBase
has the following methods:
getBindGroupLayout(index)
Gets a GPUBindGroupLayout
that is compatible with the GPUPipelineBase
’s GPUBindGroupLayout
at index
.
A GPUPipelineBase
object that was created with a layout
set to "auto"
has a default layout created and used instead.
Note: Default layouts are provided as a convenience for simple pipelines, but use of explicit layouts is recommended in most cases. Bind groups created from default layouts cannot be used with other pipelines, and the structure of the default layout may change when altering shaders, causing unexpected bind group creation errors.
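Using a default ("auto") layout with getBindGroupLayout() (non-normative sketch; computeShaderModule is a placeholder object standing in for a real GPUShaderModule):

```javascript
// Placeholder standing in for a real GPUShaderModule (illustration only).
const computeShaderModule = {};

const autoPipelineDescriptor = {
    layout: 'auto',  // The implementation derives a default layout from the shader.
    compute: { module: computeShaderModule, entryPoint: 'main' },
};

// In real code, bind groups for an "auto" pipeline must be created from the
// pipeline itself, and cannot be shared with other pipelines:
//   const pipeline = device.createComputePipeline(autoPipelineDescriptor);
//   const bindGroup = device.createBindGroup({
//       layout: pipeline.getBindGroupLayout(0),
//       entries: [ /* ... */ ],
//   });
```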
10.1.2. GPUProgrammableStage
A GPUProgrammableStage
describes the entry point in the user-provided GPUShaderModule
that controls one of the programmable stages of a pipeline. Entry point names follow the rules defined in WGSL identifier comparison.
dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants = {};
};

typedef double GPUPipelineConstantValue; // May represent WGSL's bool, f32, i32, u32, and f16 if enabled.
GPUProgrammableStage
has the following members:
module
, of type GPUShaderModule
The GPUShaderModule
containing the code that this programmable stage will execute.
entryPoint
, of type USVString
The name of the function in module
that this stage will use to perform its work.
NOTE: Since the entryPoint
dictionary member is not required, methods which consume a GPUProgrammableStage
must use the "get the entry point" algorithm to determine which entry point it refers to.
constants
, of type record<USVString, GPUPipelineConstantValue>, defaulting to {}
Specifies the values of pipeline-overridable constants in the shader module module
.
Each such pipeline-overridable constant is uniquely identified by a single pipeline-overridable constant identifier string, representing the pipeline constant ID of the constant if its declaration specifies one, and otherwise the constant’s identifier name.
The key of each key-value pair must equal the identifier string of one such constant, with the comparison performed according to the rules for WGSL identifier comparison. When the pipeline is executed, that constant will have the specified value.
Values are specified as GPUPipelineConstantValue
, which is a double
. They are converted to the WGSL type of the pipeline-overridable constant (bool
/i32
/u32
/f32
/f16
). If conversion fails, a validation error is generated.
Pipeline-overridable constants defined in WGSL:
@id(0)    override has_point_light: bool = true;  // Algorithmic control.
@id(1200) override specular_param: f32 = 2.3;     // Numeric control.
@id(1300) override gain: f32;                     // Must be overridden.
override width: f32 = 0.0;                        // Specified at the API level
                                                  //   using the name "width".
override depth: f32;                              // Specified at the API level
                                                  //   using the name "depth".
                                                  //   Must be overridden.
override height = 2 * depth;                      // The default value
                                                  //   (if not set at the API level)
                                                  //   depends on another
                                                  //   overridable constant.
Corresponding JavaScript code, providing only the overrides which are required (have no defaults):
{
    // ...
    constants: {
        1300: 2.0,  // "gain"
        depth: -1,  // "depth"
    }
}
Corresponding JavaScript code, overriding all constants:
{
    // ...
    constants: {
        0: false,   // "has_point_light"
        1200: 3.0,  // "specular_param"
        1300: 2.0,  // "gain"
        width: 20,  // "width"
        depth: -1,  // "depth"
        height: 15, // "height"
    }
}
To get the entry point(GPUShaderStage stage, GPUProgrammableStage descriptor), run the following device timeline steps:
If descriptor.entryPoint
is provided:
If descriptor.module
contains an entry point whose name equals descriptor.entryPoint
, and whose shader stage equals stage, return that entry point.
Otherwise, return null
.
Otherwise:
If there is exactly one entry point in descriptor.module
whose shader stage equals stage, return that entry point.
Otherwise, return null
.
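The steps above can be sketched in plain JavaScript over a hypothetical reflection of the module’s entry points (the { name, stage } array shape is illustrative, not part of the API):

```javascript
// Non-normative sketch of "get the entry point".
// moduleEntryPoints: array of { name, stage } reflected from the shader module,
// where stage is 'vertex', 'fragment', or 'compute'.
function getEntryPoint(stage, descriptor, moduleEntryPoints) {
    if (descriptor.entryPoint !== undefined) {
        // Explicit name: it must exist in the module with the requested stage.
        return moduleEntryPoints.find(
            ep => ep.name === descriptor.entryPoint && ep.stage === stage
        ) ?? null;
    }
    // No name given: there must be exactly one entry point for this stage.
    const candidates = moduleEntryPoints.filter(ep => ep.stage === stage);
    return candidates.length === 1 ? candidates[0] : null;
}
```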
validating GPUProgrammableStage(stage, descriptor, layout, device)
Arguments:
GPUShaderStage
stage
GPUProgrammableStage
descriptor
GPUPipelineLayout
layout
GPUDevice
device
All of the requirements in the following steps must be met. If any are unmet, return false
; otherwise, return true
.
descriptor.module
must be valid to use with device.
Let entryPoint be get the entry point(stage, descriptor).
entryPoint must not be null
.
For each binding that is statically used by entryPoint:
validating shader binding(binding, layout) must return true
.
For each texture builtin function call in any of the functions in the shader stage rooted at entryPoint, if it uses a textureBinding of sampled texture or depth texture type together with a samplerBinding of sampler
type (excluding sampler_comparison
):
Let texture be the GPUBindGroupLayoutEntry
corresponding to textureBinding.
Let sampler be the GPUBindGroupLayoutEntry
corresponding to samplerBinding.
If sampler.type
is "filtering"
, then texture.sampleType
must be "float"
.
Note: "comparison"
samplers can also only be used with "depth"
textures, because they are the only texture type that can be bound to WGSL texture_depth_*
bindings.
For each key → value in descriptor.constants
:
key must equal the pipeline-overridable constant identifier string of some pipeline-overridable constant defined in the shader module descriptor.module
by the rules defined in WGSL identifier comparison. The pipeline-overridable constant is not required to be statically used by entryPoint. Let the type of that constant be T.
Converting the IDL value value to WGSL type T must not throw a TypeError
.
For each pipeline-overridable constant identifier string key which is statically used by entryPoint:
If the pipeline-overridable constant identified by key does not have a default value, descriptor.constants
must contain key.
Pipeline-creation program errors must not result from the rules of the [WGSL] specification.
validating shader binding(variable, layout)
Arguments:
shader binding declaration variable, a module-scope variable declaration reflected from a shader module
GPUPipelineLayout
layout
Let bindGroup be the bind group index, and bindIndex be the binding index, of the shader binding declaration variable.
Return true
if all of the following conditions are satisfied:
layout.[[bindGroupLayouts]]
[bindGroup] contains a GPUBindGroupLayoutEntry
entry whose entry.binding
== bindIndex.
If the defined binding member for entry is:
buffer
"uniform"
variable is declared with address space uniform
.
"storage"
variable is declared with address space storage
and access mode read_write
.
"read-only-storage"
variable is declared with address space storage
and access mode read
.
If entry.buffer
.minBindingSize
is not 0
, then it must be at least the minimum buffer binding size for the associated buffer binding variable in the shader.
sampler
"filtering"
or "non-filtering"
variable has type sampler
.
"comparison"
variable has type sampler_comparison
.
texture
If, and only if, entry.texture
.multisampled
is true
, variable has type texture_multisampled_2d<T>
or texture_depth_multisampled_2d<T>
.
If entry.texture
.sampleType
is:
"float"
, "unfilterable-float"
, "sint"
or "uint"
variable has one of the types:
texture_1d<T>
texture_2d<T>
texture_2d_array<T>
texture_cube<T>
texture_cube_array<T>
texture_3d<T>
texture_multisampled_2d<T>
If entry.texture
.sampleType
is:
"float"
or "unfilterable-float"
The sampled type T
is f32
.
"sint"
The sampled type T
is i32
.
"uint"
The sampled type T
is u32
.
"depth"
variable has one of the types:
texture_2d<T>
texture_2d_array<T>
texture_cube<T>
texture_cube_array<T>
texture_multisampled_2d<T>
texture_depth_2d
texture_depth_2d_array
texture_depth_cube
texture_depth_cube_array
texture_depth_multisampled_2d
where the sampled type T
is f32
.
If entry.texture
.viewDimension
is:
"1d"
variable has type texture_1d<T>
.
"2d"
variable has type texture_2d<T>
or texture_multisampled_2d<T>
.
"2d-array"
variable has type texture_2d_array<T>
.
"cube"
variable has type texture_cube<T>
.
"cube-array"
variable has type texture_cube_array<T>
.
"3d"
variable has type texture_3d<T>
.
storageTexture
If entry.storageTexture
.viewDimension
is:
"1d"
variable has type texture_storage_1d<T, A>
.
"2d"
variable has type texture_storage_2d<T, A>
.
"2d-array"
variable has type texture_storage_2d_array<T, A>
.
"3d"
variable has type texture_storage_3d<T, A>
.
If entry.storageTexture
.access
is:
"write-only"
The access mode A
is write
.
"read-only"
The access mode A
is read
.
"read-write"
The access mode A
is read_write
or write
.
The texel format T
equals entry.storageTexture
.format
.
The minimum buffer binding size for a buffer binding variable var is computed as follows:
Let T be the store type of var.
If T is a runtime-sized array, or contains a runtime-sized array, replace that array<E>
with array<E, 1>
.
Note: This ensures there’s always enough memory for one element, which allows array indices to be clamped to the length of the array resulting in an in-memory access.
Return SizeOf(T).
Note: Enforcing this lower bound ensures reads and writes via the buffer variable only access memory locations within the bound region of the buffer.
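As a rough illustration, the lower bound for a flat struct of 4-byte scalars with an optional trailing runtime-sized array can be computed as below (non-normative; real WGSL sizing also accounts for alignment, padding, vectors, and f16):

```javascript
// Simplified "minimum buffer binding size" sketch.
// scalarCount: number of 4-byte scalar members (f32/i32/u32).
// runtimeArrayElemSize: element size of a trailing runtime-sized array, if any.
function minBindingSize(scalarCount, runtimeArrayElemSize = 0) {
    // A runtime-sized array array<E> is treated as array<E, 1>,
    // so space for one element is always counted.
    return scalarCount * 4 + runtimeArrayElemSize;
}
```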
10.2. GPUComputePipeline
A GPUComputePipeline
is a kind of pipeline that controls the compute shader stage, and can be used in GPUComputePassEncoder
.
Compute inputs and outputs are all contained in the bindings, according to the given GPUPipelineLayout
. The outputs correspond to buffer
bindings with a type of "storage"
and storageTexture
bindings with a type of "write-only"
or "read-write"
.
Stages of a compute pipeline:
Compute shader
[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

10.2.1. Compute Pipeline Creation
A GPUComputePipelineDescriptor
describes a compute pipeline. See § 23.1 Computing for additional details.
dictionary GPUComputePipelineDescriptor
: GPUPipelineDescriptorBase {
required GPUProgrammableStage compute;
};
GPUComputePipelineDescriptor
has the following members:
compute
, of type GPUProgrammableStage
Describes the compute shader entry point of the pipeline.
createComputePipeline(descriptor)
Creates a GPUComputePipeline
using immediate pipeline creation.
createComputePipelineAsync(descriptor)
Creates a GPUComputePipeline
using async pipeline creation. The returned Promise
resolves when the created pipeline is ready to be used without additional delay.
If pipeline creation fails, the returned Promise rejects with a GPUPipelineError. (A GPUError is not dispatched to the device.)
Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
Creating a simple
GPUComputePipeline
:
const computePipeline = gpuDevice.createComputePipeline({
    layout: pipelineLayout,
    compute: {
        module: computeShaderModule,
        entryPoint: 'computeMain',
    }
});

10.3. GPURenderPipeline
A GPURenderPipeline
is a kind of pipeline that controls the vertex and fragment shader stages, and can be used in GPURenderPassEncoder
as well as GPURenderBundleEncoder
.
Render pipeline inputs are:
bindings, according to the given GPUPipelineLayout
vertex and index buffers, described by GPUVertexState
the color attachments, described by GPUColorTargetState
optionally, the depth-stencil attachment, described by GPUDepthStencilState
Render pipeline outputs are:
storageTexture
bindings with an access
of "write-only"
or "read-write"
the color attachments, described by GPUColorTargetState
optionally, depth-stencil attachment, described by GPUDepthStencilState
A render pipeline comprises the following render stages:
Vertex fetch, controlled by GPUVertexState.buffers
Vertex shader, controlled by GPUVertexState
Primitive assembly, controlled by GPUPrimitiveState
Rasterization, controlled by GPUPrimitiveState
, GPUDepthStencilState
, and GPUMultisampleState
Fragment shader, controlled by GPUFragmentState
Stencil test and operation, controlled by GPUDepthStencilState
Depth test and write, controlled by GPUDepthStencilState
Output merging, controlled by GPUFragmentState.targets
[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;
GPURenderPipeline
has the following device timeline properties:
[[descriptor]]
, of type GPURenderPipelineDescriptor
, readonly
The GPURenderPipelineDescriptor
describing this pipeline.
All optional fields of GPURenderPipelineDescriptor
are defined.
[[writesDepth]]
, of type boolean
, readonly
True if the pipeline writes to the depth component of the depth/stencil attachment
[[writesStencil]]
, of type boolean
, readonly
True if the pipeline writes to the stencil component of the depth/stencil attachment
A GPURenderPipelineDescriptor
describes a render pipeline by configuring each of the render stages. See § 23.2 Rendering for additional details.
dictionary GPURenderPipelineDescriptor
: GPUPipelineDescriptorBase {
required GPUVertexState vertex;
GPUPrimitiveState primitive = {};
GPUDepthStencilState depthStencil;
GPUMultisampleState multisample = {};
GPUFragmentState fragment;
};
GPURenderPipelineDescriptor
has the following members:
vertex
, of type GPUVertexState
Describes the vertex shader entry point of the pipeline and its input buffer layouts.
primitive
, of type GPUPrimitiveState, defaulting to {}
Describes the primitive-related properties of the pipeline.
depthStencil
, of type GPUDepthStencilState
Describes the optional depth-stencil properties, including the testing, operations, and bias.
multisample
, of type GPUMultisampleState, defaulting to {}
Describes the multi-sampling properties of the pipeline.
fragment
, of type GPUFragmentState
Describes the fragment shader entry point of the pipeline and its output colors. If not provided, the § 23.2.8 No Color Output mode is enabled.
createRenderPipeline(descriptor)
Creates a GPURenderPipeline
using immediate pipeline creation.
Device timeline initialization steps:
Let layout be a new default pipeline layout for pipeline if descriptor.layout
is "auto"
, and descriptor.layout
otherwise.
All of the requirements in the following steps must be met. If any are unmet, generate a validation error, invalidate pipeline, and return.
If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, invalidate pipeline and return.
Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.
Set pipeline.[[descriptor]]
to descriptor.
Set pipeline.[[writesDepth]]
to false.
Set pipeline.[[writesStencil]]
to false.
Let depthStencil be descriptor.depthStencil
.
If depthStencil is not null:
If depthStencil.depthWriteEnabled
is provided:
Set pipeline.[[writesDepth]]
to depthStencil.depthWriteEnabled
.
If depthStencil.stencilWriteMask
is not 0:
Let stencilFront be depthStencil.stencilFront
.
Let stencilBack be depthStencil.stencilBack.
Let cullMode be descriptor.primitive.cullMode.
If cullMode is not "front"
, and any of stencilFront.passOp
, stencilFront.depthFailOp
, or stencilFront.failOp
is not "keep"
:
Set pipeline.[[writesStencil]]
to true.
If cullMode is not "back"
, and any of stencilBack.passOp
, stencilBack.depthFailOp
, or stencilBack.failOp
is not "keep"
:
Set pipeline.[[writesStencil]]
to true.
Set pipeline.[[layout]]
to layout.
createRenderPipelineAsync(descriptor)
Creates a GPURenderPipeline
using async pipeline creation. The returned Promise
resolves when the created pipeline is ready to be used without additional delay.
If pipeline creation fails, the returned Promise rejects with a GPUPipelineError. (A GPUError is not dispatched to the device.)
Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
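Handling async creation failure (non-normative sketch; buildPipeline is an illustrative helper, not part of the API):

```javascript
// Create a render pipeline asynchronously, inspecting GPUPipelineError.reason
// on failure. `device` is a GPUDevice; `descriptor` is a GPURenderPipelineDescriptor.
async function buildPipeline(device, descriptor) {
    try {
        return await device.createRenderPipelineAsync(descriptor);
    } catch (e) {
        // e is a GPUPipelineError; its reason is "validation" or "internal".
        if (e.reason === 'validation') {
            console.error('Invalid descriptor or shader:', e.message);
        } else {
            console.error('Internal error during pipeline creation:', e.message);
        }
        return null;
    }
}
```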
validating GPURenderPipelineDescriptor(descriptor, layout, device)
Arguments:
GPURenderPipelineDescriptor
descriptor
GPUPipelineLayout
layout
GPUDevice
device
Device timeline steps:
Return true
if all of the following conditions are satisfied:
validating GPUVertexState(device, descriptor.vertex
, layout) succeeds.
If descriptor.fragment
is provided:
validating GPUFragmentState(device, descriptor.fragment
, layout) succeeds.
If the sample_mask builtin is a shader stage output of descriptor.fragment
:
descriptor.multisample
.alphaToCoverageEnabled
is false
.
If the frag_depth builtin is a shader stage output of descriptor.fragment
:
descriptor.depthStencil
must be provided, and descriptor.depthStencil
.format
must have a depth aspect.
validating GPUPrimitiveState(descriptor.primitive
, device) succeeds.
If descriptor.depthStencil
is provided:
validating GPUDepthStencilState(descriptor.depthStencil
, descriptor.primitive
.topology
) succeeds.
validating GPUMultisampleState(descriptor.multisample
) succeeds.
If descriptor.multisample.alphaToCoverageEnabled is true:
descriptor.fragment must be provided.
There must exist at least one attachment, either:
a non-null value in descriptor.fragment.targets, or
a descriptor.depthStencil.
validating inter-stage interfaces(device, descriptor) returns true
.
validating inter-stage interfaces(device, descriptor)
Arguments:
GPUDevice
device
GPURenderPipelineDescriptor
descriptor
Returns: boolean
Device timeline steps:
Let maxVertexShaderOutputVariables be device.limits.maxInterStageShaderVariables
.
Let maxVertexShaderOutputLocation be device.limits.maxInterStageShaderVariables
- 1.
If descriptor.primitive
.topology
is "point-list"
:
Decrement maxVertexShaderOutputVariables by 1.
If clip_distances is declared in the output of descriptor.vertex
:
Let clipDistancesSize be the array size of clip_distances.
Decrement maxVertexShaderOutputVariables by ceil(clipDistancesSize / 4).
Decrement maxVertexShaderOutputLocation by ceil(clipDistancesSize / 4).
Return false
if any of the following requirements are unmet:
If descriptor.fragment
is provided:
Let maxFragmentShaderInputVariables be device.limits.maxInterStageShaderVariables
.
If any of the front_facing
, sample_index
, or sample_mask
builtins are an input of descriptor.fragment
:
Decrement maxFragmentShaderInputVariables by 1.
Return false
if any of the following requirements are unmet:
For each user-defined input of descriptor.fragment there must be a user-defined output of descriptor.vertex that matches the location, type, and interpolation of the input.
Note: Vertex-only pipelines can have user-defined outputs in the vertex stage; their values will be discarded.
There must be no more than maxFragmentShaderInputVariables user-defined inputs for descriptor.fragment
.
Assert that the location of each user-defined input of descriptor.fragment
is less than device.limits.maxInterStageShaderVariables
. (This follows from the above rules.)
Return true
.
Creating a simple
GPURenderPipeline
:
const renderPipeline = gpuDevice.createRenderPipeline({
    layout: pipelineLayout,
    vertex: {
        module: shaderModule,
        entryPoint: 'vertexMain'
    },
    fragment: {
        module: shaderModule,
        entryPoint: 'fragmentMain',
        targets: [{
            format: 'bgra8unorm',
        }],
    }
});

10.3.2. Primitive State
dictionary GPUPrimitiveState
{
GPUPrimitiveTopology topology = "triangle-list";
GPUIndexFormat stripIndexFormat;
GPUFrontFace frontFace = "ccw";
GPUCullMode cullMode = "none";
// Requires "depth-clip-control" feature.
boolean unclippedDepth = false;
};
GPUPrimitiveState
has the following members, which describe how a GPURenderPipeline
constructs and rasterizes primitives from its vertex inputs:
topology
, of type GPUPrimitiveTopology, defaulting to "triangle-list"
The type of primitive to be constructed from the vertex inputs.
stripIndexFormat
, of type GPUIndexFormat
For pipelines with strip topologies ("line-strip"
or "triangle-strip"
), this determines the index buffer format and primitive restart value ("uint16"
/0xFFFF
or "uint32"
/0xFFFFFFFF
). It is not allowed on pipelines with non-strip topologies.
Note: Some implementations require knowledge of the primitive restart value to compile pipeline state objects.
To use a strip-topology pipeline with an indexed draw call (drawIndexed()
or drawIndexedIndirect()
), this must be set, and it must match the index buffer format used with the draw call (set in setIndexBuffer()
).
See § 23.2.3 Primitive Assembly for additional details.
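For example (non-normative; the helper below only illustrates the fixed restart values):

```javascript
// Primitive restart value implied by a GPUIndexFormat.
function primitiveRestartValue(indexFormat) {
    return indexFormat === 'uint16' ? 0xFFFF : 0xFFFFFFFF;
}

// A strip-topology pipeline fixes the index format at creation time; indexed
// draws with this pipeline must pass the same format to setIndexBuffer().
const primitive = {
    topology: 'triangle-strip',
    stripIndexFormat: 'uint16',  // restart value 0xFFFF
};
```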
frontFace
, of type GPUFrontFace, defaulting to "ccw"
Defines which polygons are considered front-facing.
cullMode
, of type GPUCullMode, defaulting to "none"
Defines which polygon orientation will be culled, if any.
unclippedDepth
, of type boolean, defaulting to false
If true, indicates that depth clipping is disabled.
Requires the "depth-clip-control"
feature to be enabled.
enum GPUPrimitiveTopology
{
"point-list",
"line-list",
"line-strip",
"triangle-list",
"triangle-strip",
};
GPUPrimitiveTopology
defines the primitive type draw calls made with a GPURenderPipeline
will use. See § 23.2.5 Rasterization for additional details:
"point-list"
Each vertex defines a point primitive.
"line-list"
Each consecutive pair of two vertices defines a line primitive.
"line-strip"
Each vertex after the first defines a line primitive between it and the previous vertex.
"triangle-list"
Each consecutive triplet of three vertices defines a triangle primitive.
"triangle-strip"
Each vertex after the first two defines a triangle primitive between it and the previous two vertices.
enum GPUFrontFace
{
"ccw",
"cw",
};
GPUFrontFace
defines which polygons are considered front-facing by a GPURenderPipeline
. See § 23.2.5.4 Polygon Rasterization for additional details:
"ccw"
Polygons with vertices whose framebuffer coordinates are given in counter-clockwise order are considered front-facing.
"cw"
Polygons with vertices whose framebuffer coordinates are given in clockwise order are considered front-facing.
enum GPUCullMode
{
"none",
"front",
"back",
};
GPUCullMode
defines which polygons will be culled by draw calls made with a GPURenderPipeline
. See § 23.2.5.4 Polygon Rasterization for additional details:
"none"
No polygons are discarded.
"front"
Front-facing polygons are discarded.
"back"
Back-facing polygons are discarded.
Note: GPUFrontFace
and GPUCullMode
have no effect on "point-list"
, "line-list"
, or "line-strip"
topologies.
10.3.3. Multisample State

dictionary GPUMultisampleState
{
GPUSize32 count = 1;
GPUSampleMask mask = 0xFFFFFFFF;
boolean alphaToCoverageEnabled = false;
};
GPUMultisampleState
has the following members, which describe how a GPURenderPipeline
interacts with a render pass’s multisampled attachments.
count
, of type GPUSize32, defaulting to 1
Number of samples per pixel. This GPURenderPipeline
will be compatible only with attachment textures (colorAttachments
and depthStencilAttachment
) with matching sampleCount
s.
mask
, of type GPUSampleMask, defaulting to 0xFFFFFFFF
Mask determining which samples are written to.
alphaToCoverageEnabled
, of type boolean, defaulting to false
When true
indicates that a fragment’s alpha channel should be used to generate a sample coverage mask.
validating GPUMultisampleState(descriptor)
Arguments:
GPUMultisampleState descriptor
Device timeline steps:
Return true
if all of the following conditions are satisfied:
10.3.4. Fragment State

dictionary GPUFragmentState
: GPUProgrammableStage {
required sequence<GPUColorTargetState?> targets;
};
targets
, of type sequence<GPUColorTargetState?>
A list of GPUColorTargetState
defining the formats and behaviors of the color targets this pipeline writes to.
validating GPUFragmentState(device, descriptor, layout)
Arguments:
GPUDevice
device
GPUFragmentState
descriptor
GPUPipelineLayout
layout
Device timeline steps:
Return true
if all of the following requirements are met:
validating GPUProgrammableStage(FRAGMENT
, descriptor, layout, device) succeeds.
descriptor.targets
.size must be ≤ device.[[limits]]
.maxColorAttachments
.
Let entryPoint be get the entry point(FRAGMENT
, descriptor).
Let usesDualSourceBlending be false
.
For each index of the indices of descriptor.targets
containing a non-null
value colorState:
colorState.format
must be listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT
capability.
colorState.writeMask
must be < 16.
If colorState.blend
is provided:
colorState.blend
.color
must be a valid GPUBlendComponent.
colorState.blend
.alpha
must be a valid GPUBlendComponent.
If colorState.blend
.color
.srcFactor
or colorState.blend
.color
.dstFactor
or colorState.blend
.alpha
.srcFactor
or colorState.blend
.alpha
.dstFactor
uses the second input of the corresponding blending unit (is any of "src1"
, "one-minus-src1"
, "src1-alpha"
, "one-minus-src1-alpha"
), then:
Set usesDualSourceBlending to true
.
For each shader stage output value output with location attribute equal to index in entryPoint:
For each component in colorState.format
, there must be a corresponding component in output. (That is, RGBA requires vec4, RGB requires vec3 or vec4, RG requires vec2 or vec3 or vec4.)
If the GPUTextureSampleType
s for colorState.format
(defined in § 26.1 Texture Format Capabilities) are:
"float"
and/or "unfilterable-float"
output must have a floating-point scalar type.
"sint"
output must have a signed integer scalar type.
"uint"
output must have an unsigned integer scalar type.
If colorState.blend
is provided and colorState.blend
.color
.srcFactor
or .dstFactor
uses the source alpha (is any of "src-alpha"
, "one-minus-src-alpha"
, "src-alpha-saturated"
, "src1-alpha"
or "one-minus-src1-alpha"
), then:
output must have an alpha channel (that is, it must be a vec4).
If colorState.writeMask
is not 0:
entryPoint must have a shader stage output with location equal to index and blend_src omitted or equal to 0.
If usesDualSourceBlending is true
:
All the shader stage outputs with location in entryPoint must be in one struct and use dual source blending.
Validating GPUFragmentState’s color attachment bytes per sample(device, descriptor.targets
) succeeds.
Note: The fragment shader may output more values than what the pipeline uses. If that is the case the values are ignored.
A GPUBlendComponent component is a valid GPUBlendComponent with logical device device if it meets the following requirements:
10.3.5. Color Target State

dictionary GPUColorTargetState
{
required GPUTextureFormat format;
GPUBlendState blend;
GPUColorWriteFlags writeMask = 0xF; // GPUColorWrite.ALL
};
format
, of type GPUTextureFormat
The GPUTextureFormat
of this color target. The pipeline will only be compatible with GPURenderPassEncoder
s which use a GPUTextureView
of this format in the corresponding color attachment.
blend
, of type GPUBlendState
The blending behavior for this color target. If left undefined, disables blending for this color target.
writeMask
, of type GPUColorWriteFlags, defaulting to 0xF
Bitmask controlling which channels are written to when drawing to this color target.
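Write mask values combine GPUColorWrite constants as a bitmask. For example (non-normative):

```javascript
// Values mirror the GPUColorWrite namespace constants.
const RED = 0x1, GREEN = 0x2, BLUE = 0x4, ALPHA = 0x8, ALL = 0xF;

// Write only the color channels, leaving destination alpha untouched:
const writeMask = RED | GREEN | BLUE;  // 0x7
```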
dictionary GPUBlendState
{
required GPUBlendComponent color;
required GPUBlendComponent alpha;
};
color
, of type GPUBlendComponent
Defines the blending behavior of the corresponding render target for color channels.
alpha
, of type GPUBlendComponent
Defines the blending behavior of the corresponding render target for the alpha channel.
typedef [EnforceRange] unsigned long GPUColorWriteFlags;

[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};

10.3.5.1. Blend State
dictionary GPUBlendComponent
{
GPUBlendOperation operation = "add";
GPUBlendFactor srcFactor = "one";
GPUBlendFactor dstFactor = "zero";
};
GPUBlendComponent
has the following members, which describe how the color or alpha components of a fragment are blended:
operation
, of type GPUBlendOperation, defaulting to "add"
Defines the GPUBlendOperation
used to calculate the values written to the target attachment components.
srcFactor
, of type GPUBlendFactor, defaulting to "one"
Defines the GPUBlendFactor
operation to be performed on values from the fragment shader.
dstFactor
, of type GPUBlendFactor, defaulting to "zero"
Defines the GPUBlendFactor
operation to be performed on values from the target attachment.
The following tables use this notation to describe color components for a given fragment location:
RGBAsrc
Color output by the fragment shader for the color attachment. If the shader doesn’t return an alpha channel, src-alpha blend factors cannot be used.
RGBAsrc1
Color output by the fragment shader for the color attachment with "@blend_src" attribute equal to 1. If the shader doesn’t return an alpha channel, src1-alpha blend factors cannot be used.
RGBAdst
Color currently in the color attachment. Missing green/blue/alpha channels default to 0, 0, 1, respectively.
RGBAconst
The current [[blendConstant]].
RGBAsrcFactor
The source blend factor components, as defined by srcFactor.
RGBAdstFactor
The destination blend factor components, as defined by dstFactor.
enum GPUBlendFactor
{
"zero",
"one",
"src",
"one-minus-src",
"src-alpha",
"one-minus-src-alpha",
"dst",
"one-minus-dst",
"dst-alpha",
"one-minus-dst-alpha",
"src-alpha-saturated",
"constant",
"one-minus-constant",
"src1",
"one-minus-src1",
"src1-alpha",
"one-minus-src1-alpha",
};
GPUBlendFactor
defines how either a source or destination blend factors is calculated:
"zero"
(0, 0, 0, 0)
"one"
(1, 1, 1, 1)
"src"
(Rsrc, Gsrc, Bsrc, Asrc)
"one-minus-src"
(1 - Rsrc, 1 - Gsrc, 1 - Bsrc, 1 - Asrc)
"src-alpha"
(Asrc, Asrc, Asrc, Asrc)
"one-minus-src-alpha"
(1 - Asrc, 1 - Asrc, 1 - Asrc, 1 - Asrc)
"dst"
(Rdst, Gdst, Bdst, Adst)
"one-minus-dst"
(1 - Rdst, 1 - Gdst, 1 - Bdst, 1 - Adst)
"dst-alpha"
(Adst, Adst, Adst, Adst)
"one-minus-dst-alpha"
(1 - Adst, 1 - Adst, 1 - Adst, 1 - Adst)
"src-alpha-saturated"
(min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), 1)
"constant"
(Rconst, Gconst, Bconst, Aconst)
"one-minus-constant"
(1 - Rconst, 1 - Gconst, 1 - Bconst, 1 - Aconst)
"src1"
(Rsrc1, Gsrc1, Bsrc1, Asrc1)
dual-source-blending
"one-minus-src1"
(1 - Rsrc1, 1 - Gsrc1, 1 - Bsrc1, 1 - Asrc1)
"src1-alpha"
(Asrc1, Asrc1, Asrc1, Asrc1)
"one-minus-src1-alpha"
(1 - Asrc1, 1 - Asrc1, 1 - Asrc1, 1 - Asrc1)
enum GPUBlendOperation
{
"add",
"subtract",
"reverse-subtract",
"min",
"max",
};
GPUBlendOperation
defines the algorithm used to combine source and destination blend factors:
"add"
RGBAsrc × RGBAsrcFactor + RGBAdst × RGBAdstFactor
"subtract"
RGBAsrc × RGBAsrcFactor - RGBAdst × RGBAdstFactor
"reverse-subtract"
RGBAdst × RGBAdstFactor - RGBAsrc × RGBAsrcFactor
"min"
min(RGBAsrc, RGBAdst)
"max"
max(RGBAsrc, RGBAdst)
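The factor and operation tables above can be sketched per channel as follows (non-normative; constant and dual-source factors, and result clamping, are omitted for brevity):

```javascript
// Evaluate a subset of GPUBlendFactor for one channel.
// c: { src, dst, srcAlpha, dstAlpha } — channel and alpha values in [0, 1].
function blendFactor(factor, c) {
    switch (factor) {
        case 'zero':                return 0;
        case 'one':                 return 1;
        case 'src':                 return c.src;
        case 'one-minus-src':       return 1 - c.src;
        case 'src-alpha':           return c.srcAlpha;
        case 'one-minus-src-alpha': return 1 - c.srcAlpha;
        case 'dst':                 return c.dst;
        case 'one-minus-dst':       return 1 - c.dst;
        case 'dst-alpha':           return c.dstAlpha;
        case 'one-minus-dst-alpha': return 1 - c.dstAlpha;
        default: throw new Error('factor not modeled in this sketch: ' + factor);
    }
}

// Evaluate GPUBlendOperation for one channel.
function blend(op, srcFactor, dstFactor, c) {
    const s = c.src * blendFactor(srcFactor, c);
    const d = c.dst * blendFactor(dstFactor, c);
    switch (op) {
        case 'add':              return s + d;
        case 'subtract':         return s - d;
        case 'reverse-subtract': return d - s;
        case 'min':              return Math.min(c.src, c.dst);  // factors ignored
        case 'max':              return Math.max(c.src, c.dst);  // factors ignored
    }
}
```

For instance, classic "source-over" compositing uses operation "add" with srcFactor "src-alpha" and dstFactor "one-minus-src-alpha".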
10.3.6. Depth/Stencil State
dictionary GPUDepthStencilState
{
required GPUTextureFormat format;
boolean depthWriteEnabled;
GPUCompareFunction depthCompare;
GPUStencilFaceState stencilFront = {};
GPUStencilFaceState stencilBack = {};
GPUStencilValue stencilReadMask = 0xFFFFFFFF;
GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
GPUDepthBias depthBias = 0;
float depthBiasSlopeScale = 0;
float depthBiasClamp = 0;
};
GPUDepthStencilState
has the following members, which describe how a GPURenderPipeline
will affect a render pass’s depthStencilAttachment
:
format
, of type GPUTextureFormat
The format
of depthStencilAttachment
this GPURenderPipeline
will be compatible with.
depthWriteEnabled
, of type boolean
Indicates if this GPURenderPipeline
can modify depthStencilAttachment
depth values.
depthCompare
, of type GPUCompareFunction
The comparison operation used to test fragment depths against depthStencilAttachment
depth values.
stencilFront
, of type GPUStencilFaceState, defaulting to {}
Defines how stencil comparisons and operations are performed for front-facing primitives.
stencilBack
, of type GPUStencilFaceState, defaulting to {}
Defines how stencil comparisons and operations are performed for back-facing primitives.
stencilReadMask
, of type GPUStencilValue, defaulting to 0xFFFFFFFF
Bitmask controlling which depthStencilAttachment
stencil value bits are read when performing stencil comparison tests.
stencilWriteMask
, of type GPUStencilValue, defaulting to 0xFFFFFFFF
Bitmask controlling which depthStencilAttachment
stencil value bits are written to when performing stencil operations.
depthBias
, of type GPUDepthBias, defaulting to 0
Constant depth bias added to each triangle fragment. See biased fragment depth for details.
depthBiasSlopeScale
, of type float, defaulting to 0
Depth bias that scales with the triangle fragment’s slope. See biased fragment depth for details.
depthBiasClamp
, of type float, defaulting to 0
The maximum depth bias of a triangle fragment. See biased fragment depth for details.
Note: depthBias
, depthBiasSlopeScale
, and depthBiasClamp
have no effect on "point-list"
, "line-list"
, and "line-strip"
primitives, and must be 0.
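As an illustrative sketch (the attachment format and comparison function here are assumptions, not requirements), a typical depth-test-and-write configuration for this dictionary might look like:

```javascript
// A common depth-only configuration; the stencil members keep their
// defaults ("always"/"keep"), so the stencil test is effectively a no-op.
const depthStencil = {
  format: "depth24plus",
  depthWriteEnabled: true,
  depthCompare: "less",   // pass fragments closer than the stored depth
  // depthBias and depthBiasSlopeScale left at 0: no biased fragment depth
};
```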
dictionary GPUStencilFaceState
{
GPUCompareFunction compare = "always";
GPUStencilOperation failOp = "keep";
GPUStencilOperation depthFailOp = "keep";
GPUStencilOperation passOp = "keep";
};
GPUStencilFaceState
has the following members, which describe how stencil comparisons and operations are performed:
compare
, of type GPUCompareFunction, defaulting to "always"
The GPUCompareFunction
used when testing the [[stencilReference]]
value against the fragment’s depthStencilAttachment
stencil values.
failOp
, of type GPUStencilOperation, defaulting to "keep"
The GPUStencilOperation
performed if the fragment stencil comparison test described by compare
fails.
depthFailOp
, of type GPUStencilOperation, defaulting to "keep"
The GPUStencilOperation
performed if the fragment depth comparison described by depthCompare
fails.
passOp
, of type GPUStencilOperation, defaulting to "keep"
The GPUStencilOperation
performed if the fragment stencil comparison test described by compare
passes.
enum GPUStencilOperation
{
"keep",
"zero",
"replace",
"invert",
"increment-clamp",
"decrement-clamp",
"increment-wrap",
"decrement-wrap",
};
GPUStencilOperation
defines the following operations:
"keep"
Keep the current stencil value.
"zero"
Set the stencil value to 0
.
"replace"
Set the stencil value to [[stencilReference]]
.
"invert"
Bitwise-invert the current stencil value.
"increment-clamp"
Increments the current stencil value, clamping to the maximum representable value of the depthStencilAttachment
’s stencil aspect.
"decrement-clamp"
Decrement the current stencil value, clamping to 0
.
"increment-wrap"
Increments the current stencil value, wrapping to zero if the value exceeds the maximum representable value of the depthStencilAttachment
’s stencil aspect.
"decrement-wrap"
Decrement the current stencil value, wrapping to the maximum representable value of the depthStencilAttachment
’s stencil aspect if the value goes below 0
.
enum GPUIndexFormat
{
"uint16",
"uint32",
};
The index format determines both the data type of index values in a buffer and, when used with strip primitive topologies ("line-strip"
or "triangle-strip"
), the primitive restart value. The primitive restart value is the index value indicating that a new primitive should be started rather than continuing to construct the strip with the prior indexed vertices.
GPUPrimitiveState
s that specify a strip primitive topology must specify a stripIndexFormat
if they are used for indexed draws so that the primitive restart value that will be used is known at pipeline creation time. GPUPrimitiveState
s that specify a list primitive topology will use the index format passed to setIndexBuffer()
when doing indexed rendering.
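The mapping from index format to primitive restart value can be sketched as a hypothetical helper (the restart value is the all-ones bit pattern of the index type):

```javascript
// Primitive restart value for each GPUIndexFormat: the maximum
// representable index value of the type.
function primitiveRestartValue(indexFormat) {
  switch (indexFormat) {
    case "uint16": return 0xFFFF;
    case "uint32": return 0xFFFFFFFF;
    default: throw new Error(`unknown GPUIndexFormat: ${indexFormat}`);
  }
}
```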
"uint16"
2 0xFFFF "uint32"
4 0xFFFFFFFF 10.3.7.1. Vertex Formats
The GPUVertexFormat
of a vertex attribute indicates how data from a vertex buffer will be interpreted and exposed to the shader. The name of the format specifies the order of components, bits per component, and vertex data type for the component.
Each vertex data type can map to any WGSL scalar type of the same base type, regardless of the bits per component:
Vertex format prefix Vertex data type Compatible WGSL types
uint unsigned int u32
sint signed int i32
unorm unsigned normalized f16, f32
snorm signed normalized f16, f32
float floating point f16, f32
The multi-component formats specify the number of components after "x". Mismatches in the number of components between the vertex format and shader type are allowed, with components being either dropped or filled with default values to compensate.
A vertex attribute with a format of
"unorm8x2"
and byte values
[0x7F, 0xFF]
can be accessed in the shader with the following types:
Shader type Shader value
f16 0.5h
f32 0.5f
vec2<f16> vec2(0.5h, 1.0h)
vec2<f32> vec2(0.5f, 1.0f)
vec3<f16> vec3(0.5h, 1.0h, 0.0h)
vec3<f32> vec3(0.5f, 1.0f, 0.0f)
vec4<f16> vec4(0.5h, 1.0h, 0.0h, 1.0h)
vec4<f32> vec4(0.5f, 1.0f, 0.0f, 1.0f)
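The unorm values above can be reproduced with a small illustrative helper: an 8-bit unorm byte b maps to b / 255, so 0xFF is exactly 1.0 and 0x7F is ≈0.498, which the table shows rounded as 0.5.

```javascript
// Normalize an 8-bit unorm component to [0.0, 1.0].
function unorm8ToFloat(b) {
  return b / 0xFF;
}
// 0x7F -> 127/255 ≈ 0.498 (displayed rounded as 0.5 above);
// 0xFF -> exactly 1.0.
```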
See § 23.2.2 Vertex Processing for additional information about how vertex formats are exposed in the shader.
enum GPUVertexFormat
{
"uint8",
"uint8x2",
"uint8x4",
"sint8",
"sint8x2",
"sint8x4",
"unorm8",
"unorm8x2",
"unorm8x4",
"snorm8",
"snorm8x2",
"snorm8x4",
"uint16",
"uint16x2",
"uint16x4",
"sint16",
"sint16x2",
"sint16x4",
"unorm16",
"unorm16x2",
"unorm16x4",
"snorm16",
"snorm16x2",
"snorm16x4",
"float16",
"float16x2",
"float16x4",
"float32",
"float32x2",
"float32x3",
"float32x4",
"uint32",
"uint32x2",
"uint32x3",
"uint32x4",
"sint32",
"sint32x2",
"sint32x3",
"sint32x4",
"unorm10-10-10-2",
"unorm8x4-bgra",
};
Vertex format Data type Components byteSize Example WGSL type "uint8"
unsigned int 1 1 u32
"uint8x2"
unsigned int 2 2 vec2<u32>
"uint8x4"
unsigned int 4 4 vec4<u32>
"sint8"
signed int 1 1 i32
"sint8x2"
signed int 2 2 vec2<i32>
"sint8x4"
signed int 4 4 vec4<i32>
"unorm8"
unsigned normalized 1 1 f32
"unorm8x2"
unsigned normalized 2 2 vec2<f32>
"unorm8x4"
unsigned normalized 4 4 vec4<f32>
"snorm8"
signed normalized 1 1 f32
"snorm8x2"
signed normalized 2 2 vec2<f32>
"snorm8x4"
signed normalized 4 4 vec4<f32>
"uint16"
unsigned int 1 2 u32
"uint16x2"
unsigned int 2 4 vec2<u32>
"uint16x4"
unsigned int 4 8 vec4<u32>
"sint16"
signed int 1 2 i32
"sint16x2"
signed int 2 4 vec2<i32>
"sint16x4"
signed int 4 8 vec4<i32>
"unorm16"
unsigned normalized 1 2 f32
"unorm16x2"
unsigned normalized 2 4 vec2<f32>
"unorm16x4"
unsigned normalized 4 8 vec4<f32>
"snorm16"
signed normalized 1 2 f32
"snorm16x2"
signed normalized 2 4 vec2<f32>
"snorm16x4"
signed normalized 4 8 vec4<f32>
"float16"
float 1 2 f16
"float16x2"
float 2 4 vec2<f16>
"float16x4"
float 4 8 vec4<f16>
"float32"
float 1 4 f32
"float32x2"
float 2 8 vec2<f32>
"float32x3"
float 3 12 vec3<f32>
"float32x4"
float 4 16 vec4<f32>
"uint32"
unsigned int 1 4 u32
"uint32x2"
unsigned int 2 8 vec2<u32>
"uint32x3"
unsigned int 3 12 vec3<u32>
"uint32x4"
unsigned int 4 16 vec4<u32>
"sint32"
signed int 1 4 i32
"sint32x2"
signed int 2 8 vec2<i32>
"sint32x3"
signed int 3 12 vec3<i32>
"sint32x4"
signed int 4 16 vec4<i32>
"unorm10-10-10-2"
unsigned normalized 4 4 vec4<f32>
"unorm8x4-bgra"
unsigned normalized 4 4 vec4<f32>
enum GPUVertexStepMode
{
"vertex",
"instance",
};
The step mode configures how an address for vertex buffer data is computed, based on the current vertex or instance index:
"vertex"
The address is advanced by arrayStride
for each vertex, and reset between instances.
"instance"
The address is advanced by arrayStride
for each instance.
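The address computation described above can be sketched as a hypothetical helper (a model only; real fetching happens in hardware):

```javascript
// Byte offset of an attribute within a vertex buffer, given the buffer
// layout's step mode and stride. The element index advances per vertex
// or per instance depending on stepMode.
function attributeByteOffset(layout, attribOffset, vertexIndex, instanceIndex) {
  const element = layout.stepMode === "instance" ? instanceIndex : vertexIndex;
  return element * layout.arrayStride + attribOffset;
}
```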
dictionary GPUVertexState
: GPUProgrammableStage {
sequence<GPUVertexBufferLayout?> buffers = [];
};
buffers
, of type sequence<GPUVertexBufferLayout?>
, defaulting to []
A list of GPUVertexBufferLayout
s, each defining the layout of vertex attribute data in a vertex buffer used by this pipeline.
A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride
is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes
, which describe the members of the structure.
Each GPUVertexAttribute
describes its format
and its offset
, in bytes, within the structure.
Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation
. Every location must be unique within the GPUVertexState
.
dictionary GPUVertexBufferLayout
{
required GPUSize64 arrayStride;
GPUVertexStepMode stepMode = "vertex";
required sequence<GPUVertexAttribute> attributes;
};
arrayStride
, of type GPUSize64
The stride, in bytes, between elements of this array.
stepMode
, of type GPUVertexStepMode, defaulting to "vertex"
Whether each element of this array represents per-vertex data or per-instance data
attributes
, of type sequence<GPUVertexAttribute>
An array defining the layout of the vertex attributes within each element.
dictionary GPUVertexAttribute
{
required GPUVertexFormat format;
required GPUSize64 offset;
required GPUIndex32 shaderLocation;
};
format
, of type GPUVertexFormat
The GPUVertexFormat
of the attribute.
offset
, of type GPUSize64
The offset, in bytes, from the beginning of the element to the data for the attribute.
shaderLocation
, of type GPUIndex32
The numeric location associated with this attribute, which will correspond with a "@location" attribute declared in the vertex
.module
.
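As a sketch, an interleaved position-plus-uv vertex buffer (float32x3 followed by float32x2; the shader locations here are illustrative) could be described as:

```javascript
// One 20-byte element per vertex: 12 bytes of position, 8 bytes of uv.
const vertexBufferLayout = {
  arrayStride: 20,            // 12 (float32x3) + 8 (float32x2)
  stepMode: "vertex",
  attributes: [
    { format: "float32x3", offset: 0,  shaderLocation: 0 }, // position
    { format: "float32x2", offset: 12, shaderLocation: 1 }, // uv
  ],
};
```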
validating GPUVertexState(device, descriptor, layout)
Arguments:
GPUDevice
device
GPUVertexState
descriptor
GPUPipelineLayout
layout
Device timeline steps:
Let entryPoint be get the entry point(VERTEX
, descriptor).
Assert entryPoint is not null
.
All of the requirements in the following steps must be met.
validating GPUProgrammableStage(VERTEX
, descriptor, layout, device) must succeed.
descriptor.buffers
.size must be ≤ device.[[device]]
.[[limits]]
.maxVertexBuffers
.
Each vertexBuffer layout descriptor in the list descriptor.buffers
must pass validating GPUVertexBufferLayout(device, vertexBuffer).
The sum of vertexBuffer.attributes
.size, over every vertexBuffer in descriptor.buffers
, must be ≤ device.[[device]]
.[[limits]]
.maxVertexAttributes
.
For every vertex attribute declaration (at location location with type T) that is statically used by entryPoint, there must be exactly one pair (i, j) for which descriptor.buffers
[i]?.attributes
[j].shaderLocation
== location.
Let attrib be that GPUVertexAttribute
.
T must be compatible with attrib.format
’s vertex data type:
If the vertex data type is float, unorm, or snorm:
T must be f32
or vecN<f32>
.
If the vertex data type is unsigned int:
T must be u32
or vecN<u32>
.
If the vertex data type is signed int:
T must be i32
or vecN<i32>
.
Buffer copy operations operate on raw bytes.
WebGPU provides "buffered" GPUCommandEncoder
commands:
copyBufferToBuffer()
and "immediate" GPUQueue
operations:
writeBuffer()
, for ArrayBuffer
-to-GPUBuffer
writes
Texel copy operations operate on texture/"image" data, rather than bytes.
WebGPU provides "buffered" GPUCommandEncoder
commands:
copyBufferToTexture()
copyTextureToBuffer()
copyTextureToTexture()
and "immediate" GPUQueue
operations:
writeTexture()
, for ArrayBuffer
-to-GPUTexture
writes
copyExternalImageToTexture()
, for copies from Web Platform image sources to textures
During a texel copy, texels are copied over with an equivalent texel representation. Texel copies only guarantee that valid, normal numeric values in the source have the same numeric value in the destination, and may not preserve the bit-representations of the following values:
snorm values may represent -1.0 as either -127 or -128.
The signs of zero values may not be preserved.
Subnormal floating-point values may be replaced by either -0.0 or +0.0.
Any NaN or Infinity is an invalid value and may be replaced by an indeterminate value.
Note: Copies may be performed with WGSL shaders, which means that any of the documented WGSL floating point behaviors may be observed.
The following definitions are used by these methods:
11.2.1. GPUTexelCopyBufferLayout
"GPUTexelCopyBufferLayout
" describes the "layout" of texels in a "buffer" of bytes (GPUBuffer
or AllowSharedBufferSource
) in a "texel copy" operation.
dictionary GPUTexelCopyBufferLayout
{
GPUSize64 offset = 0;
GPUSize32 bytesPerRow;
GPUSize32 rowsPerImage;
};
A texel image consists of one or more rows of texel blocks, referred to here as texel block rows. Each texel block row of a texel image must contain the same number of texel blocks, and all texel blocks in a texel image are of the same GPUTextureFormat
.
A GPUTexelCopyBufferLayout
is a layout of texel images within some linear memory. It’s used when copying data between a texture and a GPUBuffer
, or when scheduling a write into a texture from the GPUQueue
.
For 2d
textures, data is copied between one or multiple contiguous texel images and array layers.
For 3d
textures, data is copied between one or multiple contiguous texel images and depth slices.
Operations that copy between byte arrays and textures always operate on whole texel blocks. It’s not possible to update only a part of a texel block.
Texel blocks are tightly packed within each texel block row in the linear memory layout of a texel copy, with each subsequent texel block immediately following the previous texel block, with no padding. This includes copies to/from specific aspects of depth-or-stencil format textures: stencil values are tightly packed in an array of bytes; depth values are tightly packed in an array of the appropriate type ("depth16unorm" or "depth32float").
offset
, of type GPUSize64, defaulting to 0
The offset, in bytes, from the beginning of the texel data source (such as a GPUTexelCopyBufferInfo.buffer
) to the start of the texel data within that source.
bytesPerRow
, of type GPUSize32
The stride, in bytes, between the beginning of each texel block row and the subsequent texel block row.
Required if there are multiple texel block rows (i.e. the copy height or depth is more than one block).
rowsPerImage
, of type GPUSize32
Number of texel block rows per single texel image of the texture. rowsPerImage
× bytesPerRow
is the stride, in bytes, between the beginning of each texel image of data and the subsequent texel image.
Required if there are multiple texel images (i.e. the copy depth is more than one).
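The addressing implied by these members can be sketched as follows (a hypothetical helper; it assumes bytesPerRow and rowsPerImage are present, as in a multi-row, multi-image copy):

```javascript
// Byte offset of texel block (x, y) in texel image z of a linear layout,
// where blockByteSize is the texel block copy footprint of the format:
// images are rowsPerImage * bytesPerRow apart, rows are bytesPerRow apart,
// and blocks within a row are tightly packed.
function texelBlockOffset(layout, blockByteSize, x, y, z) {
  return layout.offset +
         z * layout.rowsPerImage * layout.bytesPerRow +
         y * layout.bytesPerRow +
         x * blockByteSize;
}
```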
11.2.2. GPUTexelCopyBufferInfo
"GPUTexelCopyBufferInfo
" describes the "info" (GPUBuffer
and GPUTexelCopyBufferLayout
) about a "buffer" source or destination of a "texel copy" operation. Together with the copySize
, it describes the footprint of a region of texels in a GPUBuffer
.
dictionary GPUTexelCopyBufferInfo
: GPUTexelCopyBufferLayout {
required GPUBuffer buffer;
};
buffer
, of type GPUBuffer
A buffer which either contains texel data to be copied or will store the texel data being copied, depending on the method it is being passed to.
11.2.3. GPUTexelCopyTextureInfo
"GPUTexelCopyTextureInfo
" describes the "info" (GPUTexture
, etc.) about a "texture" source or destination of a "texel copy" operation. Together with the copySize
, it describes a sub-region of a texture (spanning one or more contiguous texture subresources at the same mip-map level).
dictionary GPUTexelCopyTextureInfo
{
required GPUTexture texture;
GPUIntegerCoordinate mipLevel = 0;
GPUOrigin3D origin = {};
GPUTextureAspect aspect = "all";
};
texture
, of type GPUTexture
Texture to copy to/from.
mipLevel
, of type GPUIntegerCoordinate, defaulting to 0
Mip-map level of the texture
to copy to/from.
origin
, of type GPUOrigin3D, defaulting to {}
Defines the origin of the copy - the minimum corner of the texture sub-region to copy to/from. Together with copySize
, defines the full copy sub-region.
aspect
, of type GPUTextureAspect, defaulting to "all"
Defines which aspects of the texture
to copy to/from.
The texture copy sub-region for depth slice or array layer index depthSliceOrLayer of GPUTexelCopyTextureInfo
copyTexture
is determined by running the following steps:
Let texture be copyTexture.texture
.
If texture.dimension
is:
"1d" or "2d"
Let depthSliceOrLayer be array layer depthSliceOrLayer of texture.
"3d"
Let depthSliceOrLayer be depth slice depthSliceOrLayer of texture.
Let textureMip be mip level copyTexture.mipLevel
of depthSliceOrLayer.
Return aspect copyTexture.aspect
of textureMip.
11.2.4. GPUCopyExternalImageDestInfo
WebGPU textures hold raw numeric data, and are not tagged with semantic metadata describing colors. However, copyExternalImageToTexture()
copies from sources that describe colors.
"GPUCopyExternalImageDestInfo
" describes the "info" about the "destination" of a "copyExternalImageToTexture()
" operation. It is a GPUTexelCopyTextureInfo
which is additionally tagged with color space/encoding and alpha-premultiplication metadata, so that semantic color data may be preserved during copies. This metadata affects only the semantics of the copy operation, not the state or semantics of the destination texture object.
dictionary GPUCopyExternalImageDestInfo
: GPUTexelCopyTextureInfo {
PredefinedColorSpace colorSpace = "srgb";
boolean premultipliedAlpha = false;
};
colorSpace
, of type PredefinedColorSpace, defaulting to "srgb"
Describes the color space and encoding used to encode data into the destination texture.
This may result in values outside of the range [0, 1] being written to the target texture, if its format can represent them. Otherwise, the results are clamped to the target texture format’s range.
Note: If colorSpace
matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision.
premultipliedAlpha
, of type boolean, defaulting to false
Describes whether the data written into the texture should have its RGB channels premultiplied by the alpha channel, or not.
If this option is set to true
and the source
is also premultiplied, the source RGB values must be preserved even if they exceed their corresponding alpha values.
Note: If premultipliedAlpha
matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision.
11.2.5. GPUCopyExternalImageSourceInfo
"GPUCopyExternalImageSourceInfo
" describes the "info" about the "source" of a "copyExternalImageToTexture()
" operation.
typedef (ImageBitmap or
ImageData or
HTMLImageElement or
HTMLVideoElement or
VideoFrame or
HTMLCanvasElement or
OffscreenCanvas) GPUCopyExternalImageSource
;
dictionary GPUCopyExternalImageSourceInfo {
required GPUCopyExternalImageSource source;
GPUOrigin2D origin = {};
boolean flipY = false;
};
GPUCopyExternalImageSourceInfo
has the following members:
source
, of type GPUCopyExternalImageSource
The source of the texel copy. The copy source data is captured at the moment that copyExternalImageToTexture()
is issued. Source size is determined as described by the external source dimensions table.
origin
, of type GPUOrigin2D, defaulting to {}
Defines the origin of the copy - the minimum (top-left) corner of the source sub-region to copy from. Together with copySize
, defines the full copy sub-region.
flipY
, of type boolean, defaulting to false
Describes whether the source image is vertically flipped, or not.
If this option is set to true
, the copy is flipped vertically: the bottom row of the source region is copied into the first row of the destination region, and so on. The origin
option is still relative to the top-left corner of the source image, increasing downward.
When external sources are used when creating or copying to textures, the external source dimensions are defined by the source type, given by this table:
11.2.6. Subroutines
validating linear texture data(layout, byteSize, format, copyExtent)
Arguments:
GPUTexelCopyBufferLayout
layout
Layout of the linear texture data.
GPUSize64
byteSize
Total size of the linear data, in bytes.
GPUTextureFormat
format
Format of the texture.
GPUExtent3D
copyExtent
Extent of the texture to copy.
Device timeline steps:
Let:
widthInBlocks be copyExtent.width ÷ the texel block width of format. Assert this is an integer.
heightInBlocks be copyExtent.height ÷ the texel block height of format. Assert this is an integer.
bytesInLastRow be widthInBlocks × the texel block copy footprint of format.
Fail if the following input validation requirements are not met:
Let:
bytesPerRow be layout.bytesPerRow
?? 0.
rowsPerImage be layout.rowsPerImage
?? 0.
Note: These default values have no effect, as they’re always multiplied by 0.
Let requiredBytesInCopy be 0.
If copyExtent.depthOrArrayLayers > 0:
Increment requiredBytesInCopy by bytesPerRow × rowsPerImage × (copyExtent.depthOrArrayLayers − 1).
If heightInBlocks > 0:
Increment requiredBytesInCopy by bytesPerRow × (heightInBlocks − 1) + bytesInLastRow.
Fail if the following condition is not satisfied:
The layout fits inside the linear data: layout.offset
+ requiredBytesInCopy ≤ byteSize.
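The requiredBytesInCopy computation above can be sketched directly (parameter names are illustrative):

```javascript
// Minimum byte length a linear layout must provide for a copy, following
// the steps above: full images for all but the last image, full rows for
// all but the last row, and only bytesInLastRow for the final row.
function requiredBytesInCopy(layout, bytesInLastRow, heightInBlocks, depthOrArrayLayers) {
  const bytesPerRow = layout.bytesPerRow ?? 0;
  const rowsPerImage = layout.rowsPerImage ?? 0;
  let required = 0;
  if (depthOrArrayLayers > 0) {
    required += bytesPerRow * rowsPerImage * (depthOrArrayLayers - 1);
    if (heightInBlocks > 0) {
      required += bytesPerRow * (heightInBlocks - 1) + bytesInLastRow;
    }
  }
  return required;
}
```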
Two GPUTextureFormat
s format1 and format2 are copy-compatible if:
format1 equals format2, or
format1 and format2 differ only in whether they are srgb
formats (have the -srgb
suffix).
Command buffers are pre-recorded lists of GPU commands (blocks of queue timeline steps) that can be submitted to a GPUQueue
for execution. Each GPU command represents a task to be performed on the queue timeline, such as setting state, drawing, copying resources, etc.
A GPUCommandBuffer
can only be submitted once, at which point it becomes invalidated. To reuse rendering commands across multiple submissions, use GPURenderBundle
.
GPUCommandBuffer
[Exposed=(Window, Worker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;
GPUCommandBuffer
has the following device timeline properties:
[[command_list]]
, of type list<GPU command>, readonly
A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.
[[renderState]]
, of type RenderState, initially null
The current state used by any render pass commands being executed.
dictionary GPUCommandBufferDescriptor
: GPUObjectDescriptorBase {
};
13. Command Encoding
13.1. GPUCommandsMixin
GPUCommandsMixin
defines state common to all interfaces which encode commands. It has no methods.
interface mixin GPUCommandsMixin { };
GPUCommandsMixin
has the following device timeline properties:
[[state]]
, of type encoder state, initially "open"
The current state of the encoder.
[[commands]]
, of type list<GPU command>, initially []
A list of GPU commands to be executed on the Queue timeline when a GPUCommandBuffer
containing these commands is submitted.
The encoder state may be one of the following:
"open"
The encoder is available to encode new commands.
"locked"
The encoder cannot be used, because it is locked by a child encoder: it is a GPUCommandEncoder
, and a GPURenderPassEncoder
or GPUComputePassEncoder
is active. The encoder becomes "open" again when the pass is ended.
Any command issued in this state invalidates the encoder.
"ended"
The encoder has been ended and new commands can no longer be encoded.
Any command issued in this state will generate a validation error.
GPUCommandEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUBuffer destination,
        optional GPUSize64 size);
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        optional GPUSize64 size);
    undefined copyBufferToTexture(
        GPUTexelCopyBufferInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);
    undefined copyTextureToBuffer(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyBufferInfo destination,
        GPUExtent3D copySize);
    undefined copyTextureToTexture(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);
    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);
    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);
    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;
13.2.1. Command Encoder Creation
dictionary GPUCommandEncoderDescriptor
: GPUObjectDescriptorBase {
};
createCommandEncoder(descriptor)
Creates a GPUCommandEncoder
.
Creating a
GPUCommandEncoder
, encoding a command to clear a buffer, finishing the encoder to get a
GPUCommandBuffer
, then submitting it to the
GPUQueue
.
const commandEncoder = gpuDevice.createCommandEncoder();
commandEncoder.clearBuffer(buffer);
const commandBuffer = commandEncoder.finish();
gpuDevice.queue.submit([commandBuffer]);
13.3. Pass Encoding
beginRenderPass(descriptor)
Begins encoding a render pass described by descriptor.
Device timeline initialization steps:
Validate the encoder state of this. If it returns false, invalidate pass and return.
Let attachmentRegions be a list of [texture subresource, depthSlice
?] pairs, initially empty. Each pair describes the region of the texture to be rendered to, which includes a single depth slice for "3d"
textures only.
For each non-null
colorAttachment in descriptor.colorAttachments
:
Add [colorAttachment.view
, colorAttachment.depthSlice
?? null
] to attachmentRegions.
If colorAttachment.resolveTarget
is not null
:
Add [colorAttachment.resolveTarget
, undefined
] to attachmentRegions.
If any of the following requirements are unmet, invalidate pass and return.
descriptor must meet the Valid Usage rules given device this.[[device]]
.
The set of texture regions in attachmentRegions must be pairwise disjoint. That is, no two texture regions may overlap.
Add each texture subresource in attachmentRegions to pass.[[usage scope]]
with usage attachment.
Let depthStencilAttachment be descriptor.depthStencilAttachment
.
If depthStencilAttachment is not null
:
Let depthStencilView be depthStencilAttachment.view
.
Add the depth subresource of depthStencilView, if any, to pass.[[usage scope]]
with usage attachment-read if depthStencilAttachment.depthReadOnly
is true, or attachment otherwise.
Add the stencil subresource of depthStencilView, if any, to pass.[[usage scope]]
with usage attachment-read if depthStencilAttachment.stencilReadOnly
is true, or attachment otherwise.
Set pass.[[depthReadOnly]]
to depthStencilAttachment.depthReadOnly
.
Set pass.[[stencilReadOnly]]
to depthStencilAttachment.stencilReadOnly
.
Set pass.[[layout]]
to derive render targets layout from pass(descriptor).
If descriptor.timestampWrites
is provided:
Let timestampWrites be descriptor.timestampWrites
.
If timestampWrites.beginningOfPassWriteIndex
is provided, append a GPU command to this.[[commands]]
with the following steps:
If timestampWrites.endOfPassWriteIndex
is provided, set pass.[[endTimestampWrite]]
to a GPU command with the following steps:
Set pass.[[drawCount]]
to 0.
Set pass.[[maxDrawCount]]
to descriptor.maxDrawCount
.
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
beginComputePass(descriptor)
Begins encoding a compute pass described by descriptor.
copyBufferToBuffer() has two overloads:
copyBufferToBuffer(source, destination, size)
Shorthand, equivalent to copyBufferToBuffer(source, 0, destination, 0, size)
.
copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of a GPUBuffer
to a sub-region of another GPUBuffer
.
steps:
Copy size bytes of source, beginning at sourceOffset, into destination, beginning at destinationOffset.
clearBuffer(buffer, offset, size)
Encode a command into the GPUCommandEncoder
that fills a sub-region of a GPUBuffer
with zeros.
steps:
Set size bytes of buffer to 0
starting at offset.
copyBufferToTexture(source, destination, copySize)
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of a GPUBuffer
to a sub-region of one or multiple contiguous texture subresources.
copyTextureToBuffer(source, destination, copySize)
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer
.
copyTextureToTexture(source, destination, copySize)
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources.
steps:
Let blockWidth be the texel block width of source.texture
.
Let blockHeight be the texel block height of source.texture
.
Let srcOrigin be source.origin
.
Let srcBlockOriginX be (srcOrigin.x ÷ blockWidth).
Let srcBlockOriginY be (srcOrigin.y ÷ blockHeight).
Let dstOrigin be destination.origin
.
Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).
Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).
Let blockColumns be (copySize.width ÷ blockWidth).
Let blockRows be (copySize.height ÷ blockHeight).
Assert that srcBlockOriginX, srcBlockOriginY, dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.
For each z in the range [0, copySize.depthOrArrayLayers − 1]:
Let srcSubregion be texture copy sub-region (z + srcOrigin.z) of source.
Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.
For each y in the range [0, blockRows − 1]:
For each x in the range [0, blockColumns − 1]:
Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to texel block (srcBlockOriginX + x, srcBlockOriginY + y) of srcSubregion.
resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)
Resolves query results from a GPUQuerySet
out into a range of a GPUBuffer
.
steps:
Let queryIndex be firstQuery.
Let offset be destinationOffset.
While queryIndex < firstQuery + queryCount:
Set 8 bytes of destination, beginning at offset, to be the value of querySet at queryIndex.
Set queryIndex to be queryIndex + 1.
Set offset to be offset + 8.
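The destination layout implied by these steps can be sketched as a hypothetical helper: each query result occupies 8 bytes, written contiguously starting at destinationOffset.

```javascript
// Byte offset within the destination buffer where the result of
// queryIndex is written by resolveQuerySet(), given the firstQuery
// resolved and the starting destinationOffset.
function queryResultOffset(destinationOffset, firstQuery, queryIndex) {
  return destinationOffset + (queryIndex - firstQuery) * 8;
}
```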
A GPUCommandBuffer
containing the commands recorded by the GPUCommandEncoder
can be created by calling finish()
. Once finish()
has been called the command encoder can no longer be used.
finish(descriptor)
Completes recording of the commands sequence and returns a corresponding GPUCommandBuffer
.
interface mixin GPUBindingCommandsMixin
{
undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);
undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
Uint32Array dynamicOffsetsData,
GPUSize64 dynamicOffsetsDataStart,
GPUSize32 dynamicOffsetsDataLength);
};
GPUBindingCommandsMixin
assumes the presence of GPUObjectBase
and GPUCommandsMixin
members on the same object. It must only be included by interfaces which also include those mixins.
GPUBindingCommandsMixin
has the following device timeline properties:
[[bind_groups]]
, of type ordered map<GPUIndex32
, GPUBindGroup
>, initially empty
The current GPUBindGroup
for each index.
[[dynamic_offsets]]
, of type ordered map<GPUIndex32
, list<GPUBufferDynamicOffset
>>, initially empty
The current dynamic offsets for each [[bind_groups]]
entry.
setBindGroup() has two overloads:
setBindGroup(index, bindGroup, dynamicOffsets)
Sets the current GPUBindGroup
for the given index.
GPUBindingCommandsMixin
this.
Arguments:
index
, of type GPUIndex32
, non-nullable, required
The index to set the bind group at.
bindGroup
, of type GPUBindGroup
, nullable, required
Bind group to use for subsequent render or compute commands.
dynamicOffsets
, of type sequence<GPUBufferDynamicOffset
>, non-nullable, defaulting to []
Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer
.hasDynamicOffset
, ordered by GPUBindGroupLayoutEntry
.binding
. See note for additional details.
Returns: undefined
Content timeline steps:
Issue the subsequent steps on the Device timeline of this.[[device]]
.
setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)
Sets the current GPUBindGroup
for the given index, specifying dynamic offsets as a subset of a Uint32Array
.
GPUBindingCommandsMixin
this
.
Arguments:
Returns: undefined
Content timeline steps:
If any of the following requirements are unmet, throw a RangeError
and return.
dynamicOffsetsDataStart must be ≥ 0.
dynamicOffsetsDataStart + dynamicOffsetsDataLength must be ≤ dynamicOffsetsData.length
.
Let dynamicOffsets be a list containing the range, starting at index dynamicOffsetsDataStart, of dynamicOffsetsDataLength elements of a copy of dynamicOffsetsData.
Call this.setBindGroup
(index, bindGroup, dynamicOffsets).
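The content timeline steps above copy a subrange out of dynamicOffsetsData before delegating to the first overload. A minimal sketch of that extraction and its RangeError checks (pure JS, no GPU needed; the function name is illustrative, not from the spec):

```javascript
// Sketch of the content-timeline extraction performed by the
// Uint32Array overload of setBindGroup() (illustrative, not normative).
function extractDynamicOffsets(dynamicOffsetsData, start, length) {
  // Validation from the spec: start ≥ 0 and start + length ≤ data length.
  if (start < 0 || start + length > dynamicOffsetsData.length) {
    throw new RangeError("dynamic offsets subrange out of bounds");
  }
  // A list containing a copy of `length` elements starting at `start`.
  return Array.from(dynamicOffsetsData.subarray(start, start + length));
}

const data = new Uint32Array([0, 256, 512, 768]);
const offsets = extractDynamicOffsets(data, 1, 2); // [256, 512]
```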
NOTE:
Dynamic offsets are applied in
GPUBindGroupLayoutEntry
.
binding
order.
This means that if dynamic bindings
is the list of each GPUBindGroupLayoutEntry
in the GPUBindGroupLayout
with buffer
?.hasDynamicOffset
set to true
, sorted by GPUBindGroupLayoutEntry
.binding
, then dynamic offsets[i]
, as supplied to setBindGroup(), will correspond to dynamic bindings[i]
.
For a
GPUBindGroupLayout
created with the following call:
// Note the bindings are listed out-of-order in this array, but it
// doesn’t matter because they will be sorted by binding index.
let layout = gpuDevice.createBindGroupLayout({
  entries: [{
    binding: 1,
    buffer: {},
  }, {
    binding: 2,
    buffer: { hasDynamicOffset: true },
  }, {
    binding: 0,
    buffer: { hasDynamicOffset: true },
  }]
});
Used by a GPUBindGroup
created with the following call:
// Like above, the array order doesn’t matter here.
// It doesn’t even need to match the order used in the layout.
let bindGroup = gpuDevice.createBindGroup({
  layout: layout,
  entries: [{
    binding: 1,
    resource: { buffer: bufferA, offset: 256 },
  }, {
    binding: 2,
    resource: { buffer: bufferB, offset: 512 },
  }, {
    binding: 0,
    resource: { buffer: bufferC },
  }]
});
And bound with the following call:
pass.setBindGroup(0, bindGroup, [1024, 2048]);
The following buffer offsets will be applied:
Binding | Buffer  | Offset
0       | bufferC | 1024 (Dynamic)
1       | bufferA | 256 (Static)
2       | bufferB | 2560 (Static + Dynamic)
Validate encoder bind groups(encoder, pipeline)
Arguments:
GPUBindingCommandsMixin
encoder
Encoder whose bind groups are being validated.
GPUPipelineBase
pipeline
Pipeline to validate the encoder’s bind groups are compatible with.
Device timeline steps:
If any of the following conditions are unsatisfied, return false
:
Otherwise return true
.
Encoder bind groups alias a writable resource(encoder, pipeline) if any writable buffer binding range overlaps with any other binding range of the same buffer, or any writable texture binding overlaps in texture subresources with any other texture binding (which may use the same or a different GPUTextureView
object).
Note: This algorithm limits the use of the usage scope storage exception.
Arguments:
GPUBindingCommandsMixin
encoder
Encoder whose bind groups are being validated.
GPUPipelineBase
pipeline
Pipeline to validate the encoder’s bind groups are compatible with.
Device timeline steps:
For each stage in [VERTEX
, FRAGMENT
, COMPUTE
]:
Let bufferBindings be a list of (GPUBufferBinding
, boolean
) pairs, where the latter indicates whether the resource was used as writable.
Let textureViews be a list of (GPUTextureView
, boolean
) pairs, where the latter indicates whether the resource was used as writable.
For each pair of (GPUIndex32
bindGroupIndex, GPUBindGroupLayout
bindGroupLayout) in pipeline.[[layout]]
.[[bindGroupLayouts]]
:
Let bindGroup be encoder.[[bind_groups]]
[bindGroupIndex].
Let bindGroupLayoutEntries be bindGroupLayout.[[descriptor]]
.entries
.
Let bufferRanges be the bound buffer ranges of bindGroup, given dynamic offsets encoder.[[dynamic_offsets]]
[bindGroupIndex]
For each (GPUBindGroupLayoutEntry
bindGroupLayoutEntry, GPUBufferBinding
resource) in bufferRanges, in which bindGroupLayoutEntry.visibility
contains stage:
Let resourceWritable be (bindGroupLayoutEntry.buffer
.type
== "storage"
).
For each pair (GPUBufferBinding
pastResource, boolean
pastResourceWritable) in bufferBindings:
If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are buffer-binding-aliasing, return true
.
Append (resource, resourceWritable) to bufferBindings.
For each GPUBindGroupLayoutEntry
bindGroupLayoutEntry in bindGroupLayoutEntries, and corresponding GPUTextureView
resource in bindGroup, in which bindGroupLayoutEntry.visibility
contains stage:
If bindGroupLayoutEntry.storageTexture
is not provided, continue.
Let resourceWritable be whether bindGroupLayoutEntry.storageTexture
.access
is a writable access mode.
For each pair (GPUTextureView
pastResource, boolean
pastResourceWritable) in textureViews,
If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are texture-view-aliasing, return true
.
Append (resource, resourceWritable) to textureViews.
Return false
.
Note: Implementations are strongly encouraged to optimize this algorithm.
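For buffers, the walk above reduces to an interval-overlap test between bound ranges of the same buffer whenever at least one side is writable. A minimal sketch of that core test (object shapes and names are illustrative, not from the spec):

```javascript
// Illustrative check: do two bound ranges of the same buffer alias,
// in a way the algorithm above would flag (at least one side writable)?
function bufferBindingsAlias(a, b) {
  if (a.buffer !== b.buffer) return false;      // different buffers never alias
  if (!a.writable && !b.writable) return false; // read-read overlap is allowed
  // Half-open interval overlap: [offset, offset + size)
  return a.offset < b.offset + b.size && b.offset < a.offset + a.size;
}

const buf = { label: "storage" };
const readRange  = { buffer: buf, offset: 0,   size: 256, writable: false };
const writeRange = { buffer: buf, offset: 128, size: 256, writable: true };
console.log(bufferBindingsAlias(readRange, writeRange)); // true: overlapping, one writable
```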
15. Debug Markers
GPUDebugCommandsMixin
provides methods to apply debug labels to groups of commands or insert a single label into the command sequence.
Debug groups can be nested to create a hierarchy of labeled commands, and must be well-balanced.
Like object labels
, these labels have no required behavior, but may be shown in error messages and browser developer tools, and may be passed to native API backends.
interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};
GPUDebugCommandsMixin
assumes the presence of GPUObjectBase
and GPUCommandsMixin
members on the same object. It must only be included by interfaces which also include those mixins.
GPUDebugCommandsMixin
has the following device timeline properties:
[[debug_group_stack]]
, of type stack<USVString
>
A stack of active debug group labels.
GPUDebugCommandsMixin
has the following methods:
pushDebugGroup(groupLabel)
Begins a labeled debug group containing subsequent commands.
popDebugGroup()
Ends the labeled debug group most recently started by pushDebugGroup()
.
insertDebugMarker(markerLabel)
Marks a point in a stream of commands with a label.
GPUComputePassEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX,
        optional GPUSize32 workgroupCountY = 1,
        optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;
GPUComputePassEncoder
has the following device timeline properties:
[[command_encoder]]
, of type GPUCommandEncoder
, readonly
The GPUCommandEncoder
that created this compute pass encoder.
[[endTimestampWrite]]
, of type GPU command?, readonly, defaulting to null
GPU command, if any, writing a timestamp when the pass ends.
[[pipeline]]
, of type GPUComputePipeline
, initially null
The current GPUComputePipeline
.
dictionary GPUComputePassTimestampWrites
{
required GPUQuerySet querySet;
GPUSize32 beginningOfPassWriteIndex;
GPUSize32 endOfPassWriteIndex;
};
querySet
, of type GPUQuerySet
The GPUQuerySet
, of type "timestamp"
, that the query results will be written to.
beginningOfPassWriteIndex
, of type GPUSize32
If defined, indicates the query index in querySet
into which the timestamp at the beginning of the compute pass will be written.
endOfPassWriteIndex
, of type GPUSize32
If defined, indicates the query index in querySet
into which the timestamp at the end of the compute pass will be written.
Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.
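Resolved timestamp queries are 64-bit nanosecond values, so elapsed GPU time for a pass is a simple BigInt difference between the end-of-pass and beginning-of-pass entries. A sketch (the resolved values here are made up for illustration):

```javascript
// Resolved timestamp queries arrive as 64-bit values (e.g. via a
// BigUint64Array view of the resolved buffer). Values are nanoseconds.
function elapsedMilliseconds(timestamps, beginIndex, endIndex) {
  const deltaNs = timestamps[endIndex] - timestamps[beginIndex];
  return Number(deltaNs) / 1e6; // nanoseconds → milliseconds
}

// Hypothetical resolved values: pass began at 5_000_000 ns, ended at 7_500_000 ns.
const resolved = new BigUint64Array([5_000_000n, 7_500_000n]);
console.log(elapsedMilliseconds(resolved, 0, 1)); // 2.5
```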
dictionary GPUComputePassDescriptor
: GPUObjectDescriptorBase {
GPUComputePassTimestampWrites timestampWrites;
};
timestampWrites
, of type GPUComputePassTimestampWrites
Defines which timestamp values will be written for this pass, and where to write them to.
setPipeline(pipeline)
Sets the current GPUComputePipeline
.
dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ)
Dispatch work to be performed with the current GPUComputePipeline
. See § 23.1 Computing for the detailed specification.
GPUComputePassEncoder
this.
Arguments:
NOTE:
The x, y, and z values passed to dispatchWorkgroups() and dispatchWorkgroupsIndirect() are the number of workgroups to dispatch for each dimension, not the number of shader invocations to perform across each dimension. This matches the behavior of modern native GPU APIs, but differs from the behavior of OpenCL.
This means that if a GPUShaderModule
defines an entry point with @workgroup_size(4, 4)
, and work is dispatched to it with the call computePass.dispatchWorkgroups(8, 8);
the entry point will be invoked 1024 times total: Dispatching a 4x4 workgroup 8 times along both the X and Y axes. (4*4*8*8=1024
)
Returns: undefined
Content timeline steps:
Issue the subsequent steps on the Device timeline of this.[[device]]
.
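The invocation count in the note above is just the product of the workgroup size and the dispatched grid dimensions; as a quick check:

```javascript
// Total shader invocations = workgroup size × dispatched workgroup grid.
// @workgroup_size(4, 4) with computePass.dispatchWorkgroups(8, 8):
const workgroupSize = [4, 4, 1];
const workgroupCount = [8, 8, 1];
const invocations =
  workgroupSize[0] * workgroupSize[1] * workgroupSize[2] *
  workgroupCount[0] * workgroupCount[1] * workgroupCount[2];
console.log(invocations); // 1024
```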
dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset)
Dispatch work to be performed with the current GPUComputePipeline
using parameters read from a GPUBuffer
. See § 23.1 Computing for the detailed specification.
The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for dispatchWorkgroups()
. For example:
let dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = workgroupCountX;
dispatchIndirectParameters[1] = workgroupCountY;
dispatchIndirectParameters[2] = workgroupCountZ;
Queue timeline steps:
Let workgroupCountX be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
Let workgroupCountY be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.
Let workgroupCountZ be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.
If workgroupCountX, workgroupCountY, or workgroupCountZ is greater than this.device.limits.maxComputeWorkgroupsPerDimension
, return.
Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with bindingState.[[pipeline]]
using bindingState.[[bind_groups]]
.
The compute pass encoder can be ended by calling end()
once the user has finished recording commands for the pass. Once end()
has been called the compute pass encoder can no longer be used.
end()
Completes recording of the compute pass commands sequence.
GPURenderPassEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);
    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
        GPUIntegerCoordinate width, GPUIntegerCoordinate height);
    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);
    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();
    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;
GPURenderPassEncoder
has the following device timeline properties:
[[command_encoder]]
, of type GPUCommandEncoder
, readonly
The GPUCommandEncoder
that created this render pass encoder.
[[attachment_size]]
, readonly
Set to the following extents:
width, height
= the dimensions of the pass’s render attachments
[[occlusion_query_set]]
, of type GPUQuerySet
, readonly
The GPUQuerySet
to store occlusion query results for the pass, which is initialized with GPURenderPassDescriptor
.occlusionQuerySet
at pass creation time.
[[endTimestampWrite]]
, of type GPU command?, readonly, defaulting to null
GPU command, if any, writing a timestamp when the pass ends.
[[maxDrawCount]]
of type GPUSize64
, readonly
The maximum number of draws allowed in this pass.
[[occlusion_query_active]]
, of type boolean
Whether the pass’s [[occlusion_query_set]]
is being written.
When executing encoded render pass commands as part of a GPUCommandBuffer
, an internal RenderState object is used to track the current state required for rendering.
RenderState has the following queue timeline properties:
[[occlusionQueryIndex]]
, of type GPUSize32
The index into [[occlusion_query_set]]
at which to store the occlusion query results.
[[viewport]]
Current viewport rectangle and depth range. Initially set to the following values:
x, y
= 0.0, 0.0
width, height
= the dimensions of the pass’s render targets
minDepth, maxDepth
= 0.0, 1.0
[[scissorRect]]
Current scissor rectangle. Initially set to the following values:
x, y
= 0, 0
width, height
= the dimensions of the pass’s render targets
[[blendConstant]]
, of type GPUColor
Current blend constant value, initially [0, 0, 0, 0]
.
[[stencilReference]]
, of type GPUStencilValue
Current stencil reference value, initially 0
.
[[colorAttachments]]
, of type sequence<GPURenderPassColorAttachment
?>
The color attachments and state for this render pass.
[[depthStencilAttachment]]
, of type GPURenderPassDepthStencilAttachment
?
The depth/stencil attachment and state for this render pass.
Render passes also have framebuffer memory, which contains the texel data associated with each attachment that is written into by draw commands and read from for blending and depth/stencil testing.
Note: Depending on the GPU hardware, framebuffer memory may be the memory allocated by the attachment textures or may be a separate area of memory that the texture data is copied to and from, such as with tile-based architectures.
17.1.1. Render Pass Encoder Creation
dictionary GPURenderPassTimestampWrites
{
required GPUQuerySet querySet;
GPUSize32 beginningOfPassWriteIndex;
GPUSize32 endOfPassWriteIndex;
};
querySet
, of type GPUQuerySet
The GPUQuerySet
, of type "timestamp"
, that the query results will be written to.
beginningOfPassWriteIndex
, of type GPUSize32
If defined, indicates the query index in querySet
into which the timestamp at the beginning of the render pass will be written.
endOfPassWriteIndex
, of type GPUSize32
If defined, indicates the query index in querySet
into which the timestamp at the end of the render pass will be written.
Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.
dictionary GPURenderPassDescriptor
: GPUObjectDescriptorBase {
required sequence<GPURenderPassColorAttachment?> colorAttachments;
GPURenderPassDepthStencilAttachment depthStencilAttachment;
GPUQuerySet occlusionQuerySet;
GPURenderPassTimestampWrites timestampWrites;
GPUSize64 maxDrawCount = 50000000;
};
colorAttachments
, of type sequence<GPURenderPassColorAttachment?>
The set of GPURenderPassColorAttachment
values in this sequence defines which color attachments will be output to when executing this render pass.
Due to usage compatibility, no color attachment may alias another attachment or any resource used inside the render pass.
depthStencilAttachment
, of type GPURenderPassDepthStencilAttachment
The GPURenderPassDepthStencilAttachment
value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass.
Due to usage compatibility, no writable depth/stencil attachment may alias another attachment or any resource used inside the render pass.
occlusionQuerySet
, of type GPUQuerySet
The GPUQuerySet
value defines where the occlusion query results will be stored for this pass.
timestampWrites
, of type GPURenderPassTimestampWrites
Defines which timestamp values will be written for this pass, and where to write them to.
maxDrawCount
, of type GPUSize64, defaulting to 50000000
The maximum number of draw calls that will be done in the render pass. Used by some implementations to size work injected before the render pass. Keeping the default value is recommended, unless it is known that more draw calls will be done.
dictionary GPURenderPassColorAttachment
{
required GPUTextureView view;
GPUIntegerCoordinate depthSlice;
GPUTextureView resolveTarget;
GPUColor clearValue;
required GPULoadOp loadOp;
required GPUStoreOp storeOp;
};
view
, of type GPUTextureView
A GPUTextureView
describing the texture subresource that will be output to for this color attachment.
depthSlice
, of type GPUIntegerCoordinate
Indicates the depth slice index of "3d"
view
that will be output to for this color attachment.
resolveTarget
, of type GPUTextureView
A GPUTextureView
describing the texture subresource that will receive the resolved output for this color attachment if view
is multisampled.
clearValue
, of type GPUColor
Indicates the value to clear view
to prior to executing the render pass. If not provided, defaults to {r: 0, g: 0, b: 0, a: 0}
. Ignored if loadOp
is not "clear"
.
The components of clearValue
are all double values. They are converted to a texel value of texture format matching the render attachment. If conversion fails, a validation error is generated.
loadOp
, of type GPULoadOp
Indicates the load operation to perform on view
prior to executing the render pass.
Note: It is recommended to prefer clearing; see "clear"
for details.
storeOp
, of type GPUStoreOp
The store operation to perform on view
after executing the render pass.
dictionary GPURenderPassDepthStencilAttachment
{
required GPUTextureView view;
float depthClearValue;
GPULoadOp depthLoadOp;
GPUStoreOp depthStoreOp;
boolean depthReadOnly = false;
GPUStencilValue stencilClearValue = 0;
GPULoadOp stencilLoadOp;
GPUStoreOp stencilStoreOp;
boolean stencilReadOnly = false;
};
view
, of type GPUTextureView
A GPUTextureView
describing the texture subresource that will be output to and read from for this depth/stencil attachment.
depthClearValue
, of type float
Indicates the value to clear view
’s depth component to prior to executing the render pass. Ignored if depthLoadOp
is not "clear"
. Must be between 0.0 and 1.0, inclusive.
depthLoadOp
, of type GPULoadOp
Indicates the load operation to perform on view
’s depth component prior to executing the render pass.
Note: It is recommended to prefer clearing; see "clear"
for details.
depthStoreOp
, of type GPUStoreOp
The store operation to perform on view
’s depth component after executing the render pass.
depthReadOnly
, of type boolean, defaulting to false
Indicates that the depth component of view
is read only.
stencilClearValue
, of type GPUStencilValue, defaulting to 0
Indicates the value to clear view
’s stencil component to prior to executing the render pass. Ignored if stencilLoadOp
is not "clear"
.
The value will be converted to the type of the stencil aspect of view by taking the same number of LSBs as the number of bits in the stencil aspect of one texel of view.
stencilLoadOp
, of type GPULoadOp
Indicates the load operation to perform on view
’s stencil component prior to executing the render pass.
Note: It is recommended to prefer clearing; see "clear"
for details.
stencilStoreOp
, of type GPUStoreOp
The store operation to perform on view
’s stencil component after executing the render pass.
stencilReadOnly
, of type boolean, defaulting to false
Indicates that the stencil component of view
is read only.
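For a typical 8-bit stencil aspect, the LSB truncation described above for stencilClearValue amounts to masking with 0xFF. A sketch (the function name and bit-width parameter are illustrative):

```javascript
// stencilClearValue is converted by keeping as many low bits as the
// stencil aspect has per texel. For an 8-bit stencil aspect:
function convertStencilClearValue(value, stencilBits = 8) {
  // Mask off everything above the low `stencilBits` bits.
  return value & ((1 << stencilBits) - 1);
}

console.log(convertStencilClearValue(0x1FF)); // 255 (0xFF): upper bits dropped
console.log(convertStencilClearValue(42));    // 42: already in range
```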
enum GPULoadOp
{
"load",
"clear",
};
"load"
Loads the existing value for this attachment into the render pass.
"clear"
Loads a clear value for this attachment into the render pass.
Note: On some GPU hardware (primarily mobile), "clear"
is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use "clear"
rather than "load"
in cases where the initial value doesn’t matter (e.g. the render target will be cleared using a skybox).
enum GPUStoreOp
{
"store",
"discard",
};
"store"
Stores the resulting value of the render pass for this attachment.
"discard"
Discards the resulting value of the render pass for this attachment.
Note: Discarded attachments behave as if they are cleared to zero, but implementations are not required to perform a clear at the end of the render pass. Implementations which do not explicitly clear discarded attachments at the end of a pass must lazily clear them prior to reading the attachment contents, which occurs via sampling, copies, attaching to a later render pass with "load"
, displaying or reading back the canvas (get a copy of the image contents of a context), etc.
GPURenderPassLayout
declares the layout of the render targets of a GPURenderBundle
. It is also used internally to describe GPURenderPassEncoder
layouts and GPURenderPipeline
layouts. It determines compatibility between render passes, render bundles, and render pipelines.
dictionary GPURenderPassLayout
: GPUObjectDescriptorBase {
required sequence<GPUTextureFormat?> colorFormats;
GPUTextureFormat depthStencilFormat;
GPUSize32 sampleCount = 1;
};
colorFormats
, of type sequence<GPUTextureFormat?>
A list of the GPUTextureFormat
s of the color attachments for this pass or bundle.
depthStencilFormat
, of type GPUTextureFormat
The GPUTextureFormat
of the depth/stencil attachment for this pass or bundle.
sampleCount
, of type GPUSize32, defaulting to 1
Number of samples per pixel in the attachments for this pass or bundle.
The render pass encoder can be ended by calling end()
once the user has finished recording commands for the pass. Once end()
has been called the render pass encoder can no longer be used.
end()
Completes recording of the render pass commands sequence.
GPURenderCommandsMixin
GPURenderCommandsMixin
defines rendering commands common to GPURenderPassEncoder
and GPURenderBundleEncoder
.
interface mixin GPURenderCommandsMixin {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat,
        optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer,
        optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstIndex = 0,
        optional GPUSignedOffset32 baseVertex = 0,
        optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};
GPURenderCommandsMixin
assumes the presence of GPUObjectBase
, GPUCommandsMixin
, and GPUBindingCommandsMixin
members on the same object. It must only be included by interfaces which also include those mixins.
GPURenderCommandsMixin
has the following device timeline properties:
[[layout]]
, of type GPURenderPassLayout
, readonly
The layout of the render pass.
[[depthReadOnly]]
, of type boolean
, readonly
If true
, indicates that the depth component is not modified.
[[stencilReadOnly]]
, of type boolean
, readonly
If true
, indicates that the stencil component is not modified.
[[usage scope]]
, of type usage scope, initially empty
The usage scope for this render pass or bundle.
[[pipeline]]
, of type GPURenderPipeline
, initially null
The current GPURenderPipeline
.
[[index_buffer]]
, of type GPUBuffer
, initially null
The current buffer to read index data from.
[[index_format]]
, of type GPUIndexFormat
The format of the index data in [[index_buffer]]
.
[[index_buffer_offset]]
, of type GPUSize64
The offset in bytes of the section of [[index_buffer]]
currently set.
[[index_buffer_size]]
, of type GPUSize64
The size in bytes of the section of [[index_buffer]]
currently set, initially 0
.
[[vertex_buffers]]
, of type ordered map<slot, GPUBuffer
>, initially empty
The current GPUBuffer
s to read vertex data from for each slot.
[[vertex_buffer_sizes]]
, of type ordered map<slot, GPUSize64
>, initially empty
The size in bytes of the section of GPUBuffer
currently set for each slot.
[[drawCount]]
, of type GPUSize64
The number of draw commands recorded in this encoder.
setPipeline(pipeline)
Sets the current GPURenderPipeline
.
setIndexBuffer(buffer, indexFormat, offset, size)
Sets the current index buffer.
setVertexBuffer(slot, buffer, offset, size)
Sets the current vertex buffer for the given slot.
draw(vertexCount, instanceCount, firstVertex, firstInstance)
Draws primitives. See § 23.2 Rendering for the detailed specification.
Queue timeline steps:
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.
drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)
Draws indexed primitives. See § 23.2 Rendering for the detailed specification.
Queue timeline steps:
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.
Note: WebGPU applications should never use index data with indices out of bounds of any bound vertex buffer that has GPUVertexStepMode
"vertex"
. WebGPU implementations have different ways of handling this, and therefore a range of behaviors is allowed. Either the whole draw call is discarded, or the access to those attributes out of bounds is described by WGSL’s invalid memory reference.
drawIndirect(indirectBuffer, indirectOffset)
Draws primitives using parameters read from a GPUBuffer
. See § 23.2 Rendering for the detailed specification.
The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for draw()
. For example:
let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;
The value corresponding to firstInstance
must be 0, unless the "indirect-first-instance"
feature is enabled. If the "indirect-first-instance"
feature is not enabled and firstInstance
is not zero the drawIndirect()
call will be treated as a no-op.
Queue timeline steps:
Let vertexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.
Let firstVertex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.
Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.
drawIndexedIndirect(indirectBuffer, indirectOffset)
Draws indexed primitives using parameters read from a GPUBuffer
. See § 23.2 Rendering for the detailed specification.
The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit values (20 bytes total), given in the same order as the arguments for drawIndexed()
. The value corresponding to baseVertex
is a signed 32-bit integer, and all others are unsigned 32-bit integers. For example:
let drawIndexedIndirectParameters = new Uint32Array(5);
let drawIndexedIndirectParametersSigned = new Int32Array(drawIndexedIndirectParameters.buffer);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
// baseVertex is a signed value.
drawIndexedIndirectParametersSigned[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;
The value corresponding to firstInstance
must be 0, unless the "indirect-first-instance"
feature is enabled. If the "indirect-first-instance"
feature is not enabled and firstInstance
is not zero the drawIndexedIndirect()
call will be treated as a no-op.
Queue timeline steps:
Let indexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.
Let firstIndex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.
Let baseVertex be a signed 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.
Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 16) bytes.
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.
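The queue-timeline reads above can be mirrored on the CPU with a DataView over the same 20-byte layout; GPU buffer data is little-endian, and only baseVertex is read as signed. A sketch with illustrative values:

```javascript
// Encode drawIndexedIndirect parameters, then read them back the way
// the queue timeline steps above describe (values here are illustrative).
const params = new ArrayBuffer(20);
const view = new DataView(params);
view.setUint32(0,  6, true);  // indexCount
view.setUint32(4,  1, true);  // instanceCount
view.setUint32(8,  0, true);  // firstIndex
view.setInt32(12, -4, true);  // baseVertex (signed!)
view.setUint32(16, 0, true);  // firstInstance

const indexCount = view.getUint32(0, true);
const baseVertex = view.getInt32(12, true);
console.log(indexCount, baseVertex); // 6 -4
```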
To determine if it’s valid to draw with GPURenderCommandsMixin encoder, run the following device timeline steps:
If any of the following conditions are unsatisfied, return false
:
Otherwise return true
.
To determine if it’s valid to draw indexed with GPURenderCommandsMixin encoder, run the following device timeline steps:
If any of the following conditions are unsatisfied, return false
:
Otherwise return true
.
The GPURenderPassEncoder
has several methods which affect how draw commands are rasterized to attachments used by this encoder.
setViewport(x, y, width, height, minDepth, maxDepth)
Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.
Queue timeline steps:
Round x, y, width, and height to some uniform precision, no less precise than integer rounding.
Set renderState.[[viewport]]
to the extents x, y, width, height, minDepth, and maxDepth.
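The linear map that the viewport establishes can be sketched as follows; this is the standard viewport transform (shown for illustration, not copied from the spec), with a y-flip because NDC y points up while framebuffer y points down:

```javascript
// Map NDC (x, y in [-1, 1], z in [0, 1]) into a viewport rectangle.
function ndcToViewport(ndc, vp) {
  return {
    x: vp.x + (ndc.x * 0.5 + 0.5) * vp.width,
    y: vp.y + (-ndc.y * 0.5 + 0.5) * vp.height, // y-flip: NDC up, framebuffer down
    depth: vp.minDepth + ndc.z * (vp.maxDepth - vp.minDepth),
  };
}

const vp = { x: 0, y: 0, width: 640, height: 480, minDepth: 0.0, maxDepth: 1.0 };
console.log(ndcToViewport({ x: 0, y: 0, z: 0.5 }, vp)); // center of the viewport
```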
setScissorRect(x, y, width, height)
Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates any fragments which fall outside the scissor rectangle will be discarded.
setBlendConstant(color)
Sets the constant blend color and alpha values used with "constant"
and "one-minus-constant"
GPUBlendFactor
s.
setStencilReference(reference)
Sets the [[stencilReference]]
value used during stencil tests with the "replace"
GPUStencilOperation
.
beginOcclusionQuery(queryIndex)
endOcclusionQuery()
executeBundles(bundles)
Executes the commands previously recorded into the given GPURenderBundle
s as part of this render pass.
When a GPURenderBundle
is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After a GPURenderBundle
has executed, the render pass’s pipeline, bind group, and vertex/index buffer state is cleared (to the initial, empty values).
Note: The state is cleared, not restored to the previous state. This occurs even if zero GPURenderBundles
are executed.
A bundle is a partial, limited pass that is encoded once and can then be executed multiple times as part of future pass encoders without expiring after use like typical command buffers. This can reduce the overhead of encoding and submission of commands which are issued repeatedly without changing.
18.1. GPURenderBundle
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
[[command_list]]
, of type list<GPU command>
A list of GPU commands to be submitted to the GPURenderPassEncoder
when the GPURenderBundle
is executed.
[[usage scope]]
, of type usage scope, initially empty
The usage scope for this render bundle, stored for later merging into the GPURenderPassEncoder
’s [[usage scope]]
in executeBundles()
.
[[layout]]
, of type GPURenderPassLayout
The layout of the render bundle.
[[depthReadOnly]]
, of type boolean
If true
, indicates that the depth component is not modified by executing this render bundle.
[[stencilReadOnly]]
, of type boolean
If true
, indicates that the stencil component is not modified by executing this render bundle.
[[drawCount]]
, of type GPUSize64
The number of draw commands in this GPURenderBundle
.
dictionary GPURenderBundleDescriptor
: GPUObjectDescriptorBase {
};
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder
{
GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;
createRenderBundleEncoder(descriptor)
Creates a GPURenderBundleEncoder
.
dictionary GPURenderBundleEncoderDescriptor
: GPURenderPassLayout {
boolean depthReadOnly = false;
boolean stencilReadOnly = false;
};
depthReadOnly
, of type boolean, defaulting to false
If true
, indicates that the render bundle does not modify the depth component of the GPURenderPassDepthStencilAttachment
of any render pass the render bundle is executed in.
stencilReadOnly
, of type boolean, defaulting to false
If true
, indicates that the render bundle does not modify the stencil component of the GPURenderPassDepthStencilAttachment
of any render pass the render bundle is executed in.
finish(descriptor)
Completes recording of the render bundle commands sequence.
GPUQueueDescriptor
GPUQueueDescriptor
describes a queue request.
dictionary GPUQueueDescriptor : GPUObjectDescriptorBase { };
19.2. GPUQueue
[Exposed=(Window, Worker), SecureContext] interface GPUQueue { undefined submit(sequence<GPUCommandBuffer> commandBuffers); Promise<undefined> onSubmittedWorkDone(); undefined writeBuffer( GPUBuffer buffer, GPUSize64 bufferOffset, AllowSharedBufferSource data, optional GPUSize64 dataOffset = 0, optional GPUSize64 size); undefined writeTexture( GPUTexelCopyTextureInfo destination, AllowSharedBufferSource data, GPUTexelCopyBufferLayout dataLayout, GPUExtent3D size); undefined copyExternalImageToTexture( GPUCopyExternalImageSourceInfo source, GPUCopyExternalImageDestInfo destination, GPUExtent3D copySize); }; GPUQueue includes GPUObjectBase;
GPUQueue
has the following methods:
writeBuffer(buffer, bufferOffset, data, dataOffset, size)
Issues a write operation of the provided data into a GPUBuffer
.
GPUQueue
this
.
Arguments:
Returns: undefined
Content timeline steps:
If data is an ArrayBuffer
or DataView
, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray.
Let dataSize be the size of data, in elements.
If size is missing, let contentsSize be dataSize − dataOffset. Otherwise, let contentsSize be size.
If any of the following conditions are unsatisfied, throw an OperationError
and return.
contentsSize ≥ 0.
dataOffset + contentsSize ≤ dataSize.
contentsSize, converted to bytes, is a multiple of 4 bytes.
Let dataContents be a copy of the bytes held by the buffer source data.
Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
Write contents into buffer starting at bufferOffset.
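The content timeline checks above can be sketched as a standalone function. This is a simplified, non-normative model: the real API throws an OperationError (a RangeError is used here so the sketch is self-contained), and implementations also validate the destination buffer and device state.

```javascript
// Sketch of writeBuffer()'s content timeline validation.
// elementSize is 1 for ArrayBuffer/DataView ("byte" elements),
// or TypedArray.BYTES_PER_ELEMENT for a TypedArray.
function validateWriteBufferArgs(dataSize, dataOffset, size, elementSize) {
  // If size is missing, contentsSize is the remainder of data.
  const contentsSize = size === undefined ? dataSize - dataOffset : size;
  if (contentsSize < 0) {
    throw new RangeError("contentsSize must be >= 0");
  }
  if (dataOffset + contentsSize > dataSize) {
    throw new RangeError("write range exceeds data size");
  }
  if ((contentsSize * elementSize) % 4 !== 0) {
    throw new RangeError("write size must be a multiple of 4 bytes");
  }
  return contentsSize;
}
```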
writeTexture(destination, data, dataLayout, size)
Issues a write operation of the provided data into a GPUTexture
.
copyExternalImageToTexture(source, destination, copySize)
Issues a copy operation of the contents of a platform image/canvas into the destination texture.
This operation performs color encoding into the destination encoding according to the parameters of GPUCopyExternalImageDestInfo
.
Copying into a -srgb
texture results in the same texture bytes, not the same decoded values, as copying into the corresponding non--srgb
format. Thus, after a copy operation, sampling the destination texture has different results depending on whether its format is -srgb
, all else unchanged.
NOTE:
When copying from a "webgl"/"webgl2" context canvas, the WebGL Drawing Buffer may not exist during certain points in the frame presentation cycle (after the image has been moved to the compositor for display). To avoid this, either:
Issue copyExternalImageToTexture()
in the same task as the WebGL rendering operations, to ensure the copy occurs before the WebGL canvas is presented.
If not possible, set the preserveDrawingBuffer
option in WebGLContextAttributes
to true
, so that the drawing buffer will still contain a copy of the frame contents after they’ve been presented. Note, this extra copy may have a performance cost.
submit(commandBuffers)
Schedules the execution of the command buffers by the GPU on this queue.
Submitted command buffers cannot be used again.
onSubmittedWorkDone()
Returns a Promise
that resolves once this queue finishes processing all the work submitted up to this moment.
Resolution of this Promise
implies the completion of mapAsync()
calls made prior to that call, on GPUBuffer
s last used exclusively on that queue.
GPUQuerySet
[Exposed=(Window, Worker), SecureContext] interface GPUQuerySet { undefined destroy(); readonly attribute GPUQueryType type; readonly attribute GPUSize32Out count; }; GPUQuerySet includes GPUObjectBase;
GPUQuerySet
has the following immutable properties:
type
, of type GPUQueryType, readonly
The type of the queries managed by this GPUQuerySet
.
count
, of type GPUSize32Out, readonly
The number of queries managed by this GPUQuerySet
.
GPUQuerySet
has the following device timeline properties:
[[destroyed]]
, of type boolean
, initially false
If the query set is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.
A GPUQuerySetDescriptor
specifies the options to use in creating a GPUQuerySet
.
dictionary GPUQuerySetDescriptor
: GPUObjectDescriptorBase {
required GPUQueryType type;
required GPUSize32 count;
};
type
, of type GPUQueryType
The type of queries managed by GPUQuerySet
.
count
, of type GPUSize32
The number of queries managed by GPUQuerySet
.
createQuerySet(descriptor)
Creates a GPUQuerySet
.
Creating a
GPUQuerySet
which holds 32 occlusion query results.
const querySet = gpuDevice.createQuerySet({ type: 'occlusion', count: 32 });
20.1.2. Query Set Destruction
An application that no longer requires a GPUQuerySet
can choose to lose access to it before garbage collection by calling destroy()
.
GPUQuerySet
has the following methods:
destroy()
Destroys the GPUQuerySet
.
enum GPUQueryType {
"occlusion",
"timestamp",
};
20.3. Occlusion Query
Occlusion queries are only available within render passes. They query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha-to-coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline, while 0 indicates that no samples passed the tests.
When beginning a render pass, GPURenderPassDescriptor
.occlusionQuerySet
must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery()
and endOcclusionQuery()
in pairs that cannot be nested, and resolved into a GPUBuffer
as a 64-bit unsigned integer by GPUCommandEncoder
.resolveQuerySet()
.
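The pairing rule (begin/end calls cannot be nested) can be sketched as a small, non-normative checker over a recorded command sequence:

```javascript
// Checks that beginOcclusionQuery/endOcclusionQuery appear in
// non-nested begin/end pairs and that no query is left open
// when the pass ends. "begin" and "end" stand in for the calls.
function occlusionPairsValid(commands) {
  let open = false;
  for (const cmd of commands) {
    if (cmd === "begin") {
      if (open) return false; // nesting is not allowed
      open = true;
    } else if (cmd === "end") {
      if (!open) return false; // end without a matching begin
      open = false;
    }
  }
  return !open; // every begin must be ended before the pass ends
}
```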
Timestamp queries allow applications to write timestamps to a GPUQuerySet
, using:
and then resolve timestamp values (in nanoseconds as a 64-bit unsigned integer) into a GPUBuffer
, using GPUCommandEncoder
.resolveQuerySet()
.
Timestamp values are implementation-defined and may not increase monotonically. The physical device may reset the timestamp counter occasionally, which can result in unexpected values such as negative deltas between timestamps that logically should be monotonically increasing. These instances should be rare and can safely be ignored. Applications should not be written in such a way that unexpected timestamps cause an application failure.
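Since resolved timestamps are 64-bit unsigned values (read from the result buffer as a BigUint64Array) and the counter may occasionally reset, a robust reading loop discards nonsensical samples rather than failing. A non-normative sketch:

```javascript
// Returns the elapsed nanoseconds between two resolved timestamps,
// or null when the counter appears to have reset between them
// (end < begin), in which case the sample should be ignored.
function timestampDeltaNs(beginNs, endNs) {
  if (endNs < beginNs) return null; // counter reset; skip this sample
  return endNs - beginNs;
}
```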
There is a tracking vector here. Timestamp queries are implemented using high-resolution timers (see § 2.1.7.2 Device/queue-timeline timing). To mitigate security and privacy concerns, their precision must be reduced:
To get the current queue timestamp, run the following queue timeline steps:
Let fineTimestamp be the current timestamp value of the current queue timeline, in nanoseconds, relative to an implementation-defined point in the past.
Return the result of calling coarsen time on fineTimestamp with crossOriginIsolatedCapability
set to false
.
Note: Since cross-origin isolation may not apply to the device timeline or queue timeline, crossOriginIsolatedCapability
is never set to true
.
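The coarsening step can be sketched as rounding a fine-grained value down to a coarser granularity. The granularity used here is purely an assumed value for illustration; the actual precision reduction is defined by the coarsen time algorithm and is implementation-dependent.

```javascript
// Illustrative timestamp coarsening: round a fine-grained nanosecond
// value down to an assumed granularity. The 100-microsecond value is
// hypothetical and not mandated by the spec.
const ASSUMED_GRANULARITY_NS = 100_000n;

function coarsenTimestampNs(fineNs) {
  // BigInt division truncates, so this floors to the granularity.
  return (fineNs / ASSUMED_GRANULARITY_NS) * ASSUMED_GRANULARITY_NS;
}
```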
HTMLCanvasElement.getContext()
A GPUCanvasContext
object is created via the getContext()
method of an HTMLCanvasElement
instance by passing the string literal 'webgpu'
as its contextType
argument.
Unlike WebGL or 2D context creation, the second argument of HTMLCanvasElement.getContext()
or OffscreenCanvas.getContext()
, the context creation attribute dictionary options
, is ignored. Instead, use GPUCanvasContext.configure()
, which allows changing the canvas configuration without replacing the canvas.
[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext
{
readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;
undefined configure(GPUCanvasConfiguration configuration);
undefined unconfigure();
GPUCanvasConfiguration? getConfiguration();
GPUTexture getCurrentTexture();
};
GPUCanvasContext
has the following content timeline properties:
canvas
, of type (HTMLCanvasElement or OffscreenCanvas)
, readonly
The canvas this context was created from.
[[configuration]]
, of type GPUCanvasConfiguration
?, initially null
The options this context is currently configured with.
null
if the context has not been configured or has been unconfigured
.
[[textureDescriptor]]
, of type GPUTextureDescriptor
?, initially null
The currently configured texture descriptor, derived from the [[configuration]]
and canvas.
null
if the context has not been configured or has been unconfigured
.
[[drawingBuffer]]
, an image, initially a transparent black image with the same size as the canvas
The drawing buffer is the working-copy image data of the canvas. It is exposed as writable by [[currentTexture]]
(returned by getCurrentTexture()
).
The drawing buffer is used to get a copy of the image contents of a context, which occurs when the canvas is displayed or otherwise read. It may be transparent, even if [[configuration]]
.alphaMode
is "opaque"
. The alphaMode
only affects the result of the "get a copy of the image contents of a context" algorithm.
The drawing buffer outlives the [[currentTexture]]
and contains the previously-rendered contents even after the canvas has been presented. It is only cleared in Replace the drawing buffer.
Any time the drawing buffer is read, implementations must ensure that all previously submitted work (e.g. queue submissions) have completed writing to it via [[currentTexture]]
.
[[currentTexture]]
, of type GPUTexture
?, initially null
The GPUTexture
to draw into for the current frame. It exposes a writable view onto the underlying [[drawingBuffer]]
. getCurrentTexture()
populates this slot if null
, then returns it.
In the steady-state of a visible canvas, any changes to the drawing buffer made through the currentTexture get presented when updating the rendering of a WebGPU canvas. At or before that point, the texture is also destroyed and [[currentTexture]]
is set to null
, signalling that a new one is to be created by the next call to getCurrentTexture()
.
Destroying
the currentTexture has no effect on the drawing buffer contents; it only terminates write-access to the drawing buffer early. During the same frame, getCurrentTexture()
continues returning the same destroyed texture.
Expire the current texture sets the currentTexture to null
. It is called by configure()
, resizing the canvas, presentation, transferToImageBitmap()
, and others.
[[lastPresentedImage]]
, of type (readonly image)?
, initially null
The image most recently presented for this canvas in "updating the rendering of a WebGPU canvas". If the device is lost or destroyed, this image may be used as a fallback in "get a copy of the image contents of a context" in order to prevent the canvas from going blank.
Note: This property only needs to exist in implementations which implement the fallback, which is optional.
GPUCanvasContext
has the following methods:
configure(configuration)
Configures the context for this canvas. This clears the drawing buffer to transparent black (in Replace the drawing buffer).
unconfigure()
Removes the context configuration. Destroys any textures produced while configured.
getConfiguration()
Returns the context configuration.
NOTE:
In scenarios where getConfiguration() shows that toneMapping is implemented and the dynamic-range media query indicates HDR support, a WebGPU canvas should render content using the full HDR range instead of clamping values to the SDR range of the HDR display.
getCurrentTexture()
Get the GPUTexture
that will be composited to the document by the GPUCanvasContext
next.
NOTE:
An application should call getCurrentTexture() in the same task that renders to the canvas texture. Otherwise, the texture could get destroyed by these steps before the application is finished rendering to it.
The expiry task (defined below) is optional to implement. Even if implemented, task source priority is not normatively defined, so may happen as early as the next task, or as late as after all other task sources are empty (see automatic expiry task source). Expiry is only guaranteed when a visible canvas is displayed (updating the rendering of a WebGPU canvas) and in other callers of "Expire the current texture".
Note: The same GPUTexture
object will be returned by every call to getCurrentTexture()
until "Expire the current texture" runs, even if that GPUTexture
is destroyed, failed validation, or failed to allocate.
To get a copy of the image contents of a context:
Arguments:
context: the GPUCanvasContext
Returns: image contents
Content timeline steps:
Let snapshot be a transparent black image of the same size as context.canvas
.
Let configuration be context.[[configuration]]
.
If configuration is null
:
Return snapshot.
Note: The configuration will be null
if the context has not been configured or has been unconfigured
. This is identical to the behavior when the canvas has no context.
Ensure that all submitted work items (e.g. queue submissions) have completed writing to the image (via context.[[currentTexture]]
).
If configuration.device
is found to be valid:
Set snapshot to a copy of the context.[[drawingBuffer]]
.
Else, if context.[[lastPresentedImage]]
is not null
:
Optionally, set snapshot to a copy of context.[[lastPresentedImage]]
.
Note: This is optional because the [[lastPresentedImage]]
may no longer exist, depending on what caused device loss. Implementations may choose to skip it even if they still have access to that image.
Let alphaMode be configuration.alphaMode
.
"opaque"
:
Clear the alpha channel of snapshot to 1.0.
Tag snapshot as being opaque.
Note: If the [[currentTexture]]
, if any, has been destroyed (for example in "Expire the current texture"), the alpha channel is unobservable, and implementations may clear the alpha channel in-place.
Tag snapshot with alphaMode.
Tag snapshot with the colorSpace
and toneMapping
of configuration.
Return snapshot.
To Replace the drawing buffer of a GPUCanvasContext context, run the following content timeline steps:
Expire the current texture of context.
Let configuration be context.[[configuration]]
.
Set context.[[drawingBuffer]]
to a transparent black image of the same size as context.canvas
.
If configuration is null, the drawing buffer is tagged with the color space "srgb"
. In this case, the drawing buffer will remain blank until the context is configured.
If not, the drawing buffer has the specified configuration.format
and is tagged with the specified configuration.colorSpace
and configuration.toneMapping
.
Note: configuration.alphaMode
is ignored until "get a copy of the image contents of a context".
NOTE:
A newly replaced drawing buffer image behaves as if it is cleared to transparent black, but, like after
"discard"
, an implementation can clear it lazily only if it becomes necessary.
Note: This will often be a no-op, if the drawing buffer is already cleared and has the correct configuration.
The following algorithms "hook" into algorithms in the HTML specification, and must run at the specified points.
The update the canvas size algorithm.
The supported context formats are the set of GPUTextureFormat
s: «"bgra8unorm"
, "rgba8unorm"
, "rgba16float"
». These formats must be supported when specified as a GPUCanvasConfiguration
.format
regardless of the given GPUCanvasConfiguration
.device
.
Note: Canvas configuration cannot use srgb
formats like "bgra8unorm-srgb"
. Instead, use the non-srgb
equivalent ("bgra8unorm"
), specify the srgb
format in the viewFormats
, and use createView()
to create a view with an srgb
format.
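For example, the configuration dictionary for this pattern can be built as a plain object (the browser-only calls are shown only in comments):

```javascript
// Configure with a non-srgb format, listing the srgb equivalent in
// viewFormats so views can reinterpret the texture contents as srgb.
function makeCanvasConfiguration(device) {
  return {
    device,
    format: "bgra8unorm",
    viewFormats: ["bgra8unorm-srgb"],
  };
}

// In a browser:
//   context.configure(makeCanvasConfiguration(gpuDevice));
//   const view = context.getCurrentTexture()
//     .createView({ format: "bgra8unorm-srgb" });
```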
enum GPUCanvasAlphaMode { "opaque", "premultiplied", };
enum GPUCanvasToneMappingMode { "standard", "extended", };
dictionary GPUCanvasToneMapping { GPUCanvasToneMappingMode mode = "standard"; };
dictionary GPUCanvasConfiguration { required GPUDevice device; required GPUTextureFormat format; GPUTextureUsageFlags usage = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT sequence<GPUTextureFormat> viewFormats = []; PredefinedColorSpace colorSpace = "srgb"; GPUCanvasToneMapping toneMapping = {}; GPUCanvasAlphaMode alphaMode = "opaque"; };
GPUCanvasConfiguration
has the following members:
device
, of type GPUDevice
The GPUDevice
that textures returned by getCurrentTexture()
will be compatible with.
format
, of type GPUTextureFormat
The format that textures returned by getCurrentTexture()
will have. Must be one of the Supported context formats.
usage
, of type GPUTextureUsageFlags, defaulting to 0x10
The usage that textures returned by getCurrentTexture()
will have. RENDER_ATTACHMENT
is the default, but is not automatically included if the usage is explicitly set. Be sure to include RENDER_ATTACHMENT
when setting a custom usage if you wish to use textures returned by getCurrentTexture()
as color targets for a render pass.
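Because an explicit usage replaces (rather than extends) the default, RENDER_ATTACHMENT must be OR'd back in by hand. A sketch using the numeric flag values: 0x10 matches the RENDER_ATTACHMENT default above, and the COPY_SRC value 0x01 is taken from the GPUTextureUsage definition elsewhere in this specification.

```javascript
// Flag values as defined by GPUTextureUsage.
const RENDER_ATTACHMENT = 0x10; // default usage for canvas textures
const COPY_SRC = 0x01;

// Wrong: setting a custom usage silently drops RENDER_ATTACHMENT,
// so the texture can no longer be used as a color target.
const usageWithoutRender = COPY_SRC;

// Right: OR the flags together to keep render-attachment capability.
const usageWithRender = COPY_SRC | RENDER_ATTACHMENT;
```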
viewFormats
, of type sequence<GPUTextureFormat>, defaulting to []
The formats that views created from textures returned by getCurrentTexture()
may use.
colorSpace
, of type PredefinedColorSpace, defaulting to "srgb"
The color space that values written into textures returned by getCurrentTexture()
should be displayed with.
toneMapping
, of type GPUCanvasToneMapping, defaulting to {}
The tone mapping determines how the content of textures returned by getCurrentTexture()
are to be displayed.
Note: If an implementation doesn’t support HDR WebGPU canvases, it should also not expose this member, to allow for feature detection. See getConfiguration()
.
alphaMode
, of type GPUCanvasAlphaMode, defaulting to "opaque"
Determines the effect that alpha values will have on the content of textures returned by getCurrentTexture()
when read, displayed, or used as an image source.
Configure a
GPUCanvasContext
to be used with a specific
GPUDevice
, using the preferred format for this context:
const canvas = document.createElement('canvas'); const context = canvas.getContext('webgpu'); context.configure({ device: gpuDevice, format: navigator.gpu.getPreferredCanvasFormat(), });
21.4.1. Canvas Color Space
During presentation, the color values in the canvas are converted to the color space of the screen.
The toneMapping
determines the handling of values outside of the [0, 1]
interval in the color space of the screen.
All canvas configuration is set in configure()
except for the resolution of the canvas, which is set by the canvas’s width
and height
.
Note: Like WebGL and 2d canvas, resizing a WebGPU canvas loses the current contents of the drawing buffer. In WebGPU, it does so by replacing the drawing buffer.
21.5. GPUCanvasToneMappingMode
This enum specifies how color values are displayed to the screen.
"standard"
Color values within the standard dynamic range of the screen are unchanged, and all other color values are projected to the standard dynamic range of the screen.
Note: This projection is often accomplished by clamping color values in the color space of the screen to the [0, 1]
interval.
For example, suppose that the value (1.035, -0.175, -0.140) is written to an 'srgb' canvas.
If this is presented to an sRGB screen, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. Using component-wise clamping, this results in the sRGB value (1.0, 0.0, 0.0)
.
If this is presented to a Display P3 screen, then this will be converted to the value (0.948, 0.106, 0.01)
in the Display P3 color space, and no clamping will be needed.
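The component-wise clamping described in the note above can be sketched as a small helper (non-normative; real implementations may use other projection methods):

```javascript
// Project a color into the [0, 1] standard range by clamping
// each component independently.
function clampToStandardRange(color) {
  return color.map((c) => Math.min(1, Math.max(0, c)));
}
```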
"extended"
Color values in the extended dynamic range of the screen are unchanged, and all other color values are projected to the extended dynamic range of the screen.
Note: This projection is often accomplished by clamping color values in the color space of the screen to the interval of values that the screen is capable of displaying, which may include values greater than 1
.
For example, suppose that the value (2.5, -0.15, -0.15) is written to an 'srgb' canvas.
If this is presented to an sRGB screen that is capable of displaying values in the [0, 4]
interval in sRGB space, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. If using component-wise clamping, this results in the sRGB value (2.5, 0.0, 0.0)
.
If this is presented to a Display P3 screen that is capable of displaying values in the [0, 2]
interval in Display P3 space, then this will be converted to the value (2.3, 0.545, 0.386)
in the Display P3 color space, then projected into the display’s space. If using component-wise clamping, this results in the Display P3 value (2.0, 0.545, 0.386)
.
GPUCanvasAlphaMode
This enum selects how the contents of the canvas will be interpreted when read: when displayed to the screen, or when used as an image source (in drawImage, toDataURL, etc.).
Below, src
is a value in the canvas texture, and dst
is an image that the canvas is being composited into (e.g. an HTML page rendering, or a 2D canvas).
"opaque"
Read RGB as opaque and ignore alpha values. If the content is not already opaque, the alpha channel is cleared to 1.0 in "get a copy of the image contents of a context".
"premultiplied"
Read RGBA as premultiplied: color values are premultiplied by their alpha value. 100% red at 50% alpha is [0.5, 0, 0, 0.5]
.
If the canvas texture contains out-of-gamut premultiplied RGBA values at the time the canvas contents are read, the behavior depends on whether the canvas is:
used as an image source:
Values are preserved, as described in color space conversion.
displayed to the screen:
Compositing results are undefined.
Note: This is true even if color space conversion would produce in-gamut values before compositing, because the intermediate format for compositing is not specified.
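Premultiplication itself is simple arithmetic; a sketch converting a straight-alpha color to the premultiplied form described above:

```javascript
// Convert straight-alpha RGBA to premultiplied RGBA:
// each color component is multiplied by the alpha value.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}
```

For example, 100% red at 50% alpha premultiplies to [0.5, 0, 0, 0.5], matching the example above.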
During the normal course of operation of WebGPU, errors are raised via dispatch error.
After a device is lost, errors are no longer surfaced, where possible. After this point, implementations do not need to run validation or error tracking:
The validity of objects on the device becomes unobservable.
popErrorScope()
and uncapturederror
stop reporting errors. (No errors are generated by the device loss itself. Instead, the GPUDevice
.lost
promise resolves to indicate the device is lost.)
All operations which send a message back to the content timeline will skip their usual steps. Most will appear to succeed, except for mapAsync()
, which produces an error because it is impossible to provide the correct mapped data after the device has been lost.
This makes it unobservable whether other types of operations (that don’t send messages back) actually execute or not.
enum GPUDeviceLostReason {
"unknown",
"destroyed",
};
[Exposed=(Window, Worker), SecureContext] interface GPUDeviceLostInfo { readonly attribute GPUDeviceLostReason reason; readonly attribute DOMString message; }; partial interface GPUDevice { readonly attribute Promise<GPUDeviceLostInfo> lost; };
GPUDevice
has the following additional attributes:
lost
, of type Promise<GPUDeviceLostInfo>, readonly
A slot-backed attribute holding a promise which is created with the device, remains pending for the lifetime of the device, then resolves when the device is lost.
Upon initialization, it is set to a new promise.
GPUError
[Exposed=(Window, Worker), SecureContext] interface GPUError { readonly attribute DOMString message; };
GPUError
is the base interface for all errors surfaced from popErrorScope()
and the uncapturederror
event.
Errors must only be generated for operations that explicitly state the conditions one may be generated under in their respective algorithms, and the subtype of error that is generated.
No errors are generated from a device which is lost. See § 22 Errors & Debugging.
Note: GPUError
may gain new subtypes in future versions of this spec. Applications should handle this possibility, using only the error’s message
when possible, and specializing using instanceof
. Use error.constructor.name
when it’s necessary to serialize an error (e.g. into JSON, for a debug report).
GPUError
has the following immutable properties:
message
, of type DOMString, readonly
A human-readable, localizable text message providing information about the error that occurred.
Note: This message is generally intended for application developers to debug their applications and capture information for debug reports, not to be surfaced to end-users.
Note: User agents should not include potentially machine-parsable details in this message, such as free system memory on "out-of-memory"
or other details about the conditions under which memory was exhausted.
Note: The message
should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.
Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.
[Exposed=(Window, Worker), SecureContext] interface GPUValidationError : GPUError { constructor(DOMString message); };
GPUValidationError
is a subtype of GPUError
which indicates that an operation did not satisfy all validation requirements. Validation errors are always indicative of an application error, and the operation is expected to fail the same way across all devices, assuming the same [[features]]
and [[limits]]
are in use.
[Exposed=(Window, Worker), SecureContext] interface GPUOutOfMemoryError : GPUError { constructor(DOMString message); };
GPUOutOfMemoryError
is a subtype of GPUError
which indicates that there was not enough free memory to complete the requested operation. The operation may succeed if attempted again with a lower memory requirement (like using smaller texture dimensions), or if memory used by other resources is released first.
[Exposed=(Window, Worker), SecureContext] interface GPUInternalError : GPUError { constructor(DOMString message); };
GPUInternalError
is a subtype of GPUError
which indicates that an operation failed for a system or implementation-specific reason, even when all validation requirements were satisfied. For example, the operation may exceed the capabilities of the implementation in a way not easily captured by the supported limits. The same operation may succeed on other devices or under different circumstances.
A GPU error scope captures GPUError
s that were generated while the GPU error scope was current. Error scopes are used to isolate errors that occur within a set of WebGPU calls, typically for debugging purposes or to make an operation more fault tolerant.
GPU error scope has the following device timeline properties:
[[errors]]
, of type list<GPUError
>, initially []
The GPUError
s, if any, observed while the GPU error scope was current.
[[filter]]
, of type GPUErrorFilter
Determines what type of GPUError
this GPU error scope observes.
enum GPUErrorFilter
{
"validation",
"out-of-memory",
"internal",
};
partial interface GPUDevice {
undefined pushErrorScope(GPUErrorFilter filter);
Promise<GPUError?> popErrorScope();
};
GPUErrorFilter
defines the type of errors that should be caught when calling pushErrorScope()
:
"validation"
Indicates that the error scope will catch a GPUValidationError
.
"out-of-memory"
Indicates that the error scope will catch a GPUOutOfMemoryError
.
"internal"
Indicates that the error scope will catch a GPUInternalError
.
GPUDevice
has the following device timeline properties:
[[errorScopeStack]]
, of type stack<GPU error scope>
A stack of GPU error scopes that have been pushed to the GPUDevice
.
pushErrorScope(filter)
Pushes a new GPU error scope onto the [[errorScopeStack]]
for this.
popErrorScope()
Pops a GPU error scope off the [[errorScopeStack]]
for this and resolves to any GPUError
observed by the error scope, or null
if none.
There is no guarantee of the ordering of promise resolution.
Device timeline check steps:
If this is lost:
Issue the following steps on contentTimeline:
Return.
Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.
If any of the following requirements are unmet:
Then issue the following steps on contentTimeline and return:
Let scope be the result of popping an item off of this.[[errorScopeStack]]
.
Let error be any one of the items in scope.[[errors]]
, or null
if there are none.
For any two errors E1 and E2 in the list, if E2 was caused by E1, E2 should not be the one selected.
Note: For example, if E1 comes from t
= createTexture()
, and E2 comes from t
.createView()
because t
was invalid, E1 should be preferred since it will be easier for a developer to understand what went wrong. Since both of these are GPUValidationError
s, the only difference will be in the message
field, which is meant only to be read by humans anyway.
At an unspecified point now or in the future, issue the subsequent steps on contentTimeline.
Note: By allowing popErrorScope()
calls to resolve in any order, with any of the errors observed by the scope, this spec allows validation to complete out of order, as long as any state observations are made at the appropriate point in adherence to this spec. For example, this allows implementations to perform shader compilation, which depends only on non-stateful inputs, to be completed on a background thread in parallel with other device-timeline work, and report any resulting errors later.
Using error scopes to capture validation errors from a
GPUDevice
operation that may fail:
gpuDevice.pushErrorScope('validation'); let sampler = gpuDevice.createSampler({ maxAnisotropy: 0, // Invalid, maxAnisotropy must be at least 1. }); gpuDevice.popErrorScope().then((error) => { if (error) { // There was an error creating the sampler, so discard it. sampler = null; console.error(`An error occurred while creating sampler: ${error.message}`); } });
NOTE:
Error scopes can encompass as many commands as needed. The number of commands an error scope covers will generally correlate with what sort of action the application intends to take in response to an error occurring.
For example: an error scope that contains only the creation of a single resource, such as a texture or buffer, can be used to detect failures such as out-of-memory conditions, in which case the application may free some resources and try the allocation again.
Error scopes do not identify which command failed, however. So, for instance, wrapping all the commands executed while loading a model in a single error scope will not offer enough granularity to determine whether the issue was due to memory constraints. As a result, freeing resources would usually not be a productive response to a failure of that scope. A more appropriate response would be to allow the application to fall back to a different model, or to produce a warning that the model could not be loaded. If responding to memory constraints is desired, the operations allocating memory can always be wrapped in a smaller nested error scope.
22.4. Telemetry
When a GPUError is generated that is not observed by any GPU error scope, the user agent may fire an event named uncapturederror at a GPUDevice using GPUUncapturedErrorEvent.
Note: uncapturederror
events are intended to be used for telemetry and reporting unexpected errors. They may not be dispatched for all uncaptured errors (for example, there may be a limit on the number of errors surfaced), and should not be used for handling known error cases that may occur during normal operation of an application. Prefer using pushErrorScope()
and popErrorScope()
in those cases.
[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
GPUUncapturedErrorEvent
has the following attributes:
error
, of type GPUError, readonly
A slot-backed attribute holding an object representing the error that was uncaptured. This has the same type as errors returned by popErrorScope()
.
partial interface GPUDevice {
    attribute EventHandler onuncapturederror;
};
GPUDevice
has the following content timeline properties:
onuncapturederror
, of type EventHandler
An event handler IDL attribute for the uncapturederror
event type.
Listening for uncaptured errors from a
GPUDevice
:
gpuDevice.addEventListener('uncapturederror', (event) => {
    // Re-surface the error, because adding an event listener may silence console logs.
    console.error('A WebGPU error was not captured:', event.error);

    myEngineDebugReport.uncapturedErrors.push({
        type: event.error.constructor.name,
        message: event.error.message,
    });
});

23. Detailed Operations
This section describes the details of various GPU operations.
23.1. Computing
Computing operations provide direct access to the GPU’s programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings bound either as GPUBufferBindingLayout
with GPUBufferBindingType
"storage"
or as GPUStorageTextureBindingLayout
. These operations are encoded within GPUComputePassEncoder
as:
The main compute algorithm:
compute(descriptor, dispatchCall, state)
Arguments:
descriptor: Description of the current GPUComputePipeline
.
dispatchCall: The dispatch call parameters. May come from function arguments or an INDIRECT
buffer.
Let computeStage be descriptor.compute
.
Let workgroupSize be the computed workgroup size for computeStage.entryPoint
after applying computeStage.constants
to computeStage.module
.
Let computeInvocations be an empty list.
For workgroupX in range [0, dispatchCall.workgroupCountX]:
For workgroupY in range [0, dispatchCall.workgroupCountY]:
For workgroupZ in range [0, dispatchCall.workgroupCountZ]:
For localX in range [0, workgroupSize.x]:
For localY in range [0, workgroupSize.y]:
For localZ in range [0, workgroupSize.z]:
Let invocation be { computeStage, workgroupX, workgroupY, workgroupZ, localX, localY, localZ }
Append invocation to computeInvocations.
For every invocation in computeInvocations, in any order the device chooses, including in parallel:
Set the shader builtins:
Set the num_workgroups builtin, if any, to (
dispatchCall.workgroupCountX
,
dispatchCall.workgroupCountY
,
dispatchCall.workgroupCountZ
)
Set the workgroup_id builtin, if any, to (
invocation.workgroupX,
invocation.workgroupY,
invocation.workgroupZ
)
Set the local_invocation_id builtin, if any, to (
invocation.localX,
invocation.localY,
invocation.localZ
)
Set the global_invocation_id builtin, if any, to (
invocation.workgroupX * workgroupSize.x + invocation.localX,
invocation.workgroupY * workgroupSize.y + invocation.localY,
invocation.workgroupZ * workgroupSize.z + invocation.localZ
)
Set the local_invocation_index builtin, if any, to invocation.localX + (invocation.localY * workgroupSize.x) + (invocation.localZ * workgroupSize.x * workgroupSize.y)
Invoke the compute shader entry point described by invocation.computeStage.
Note: Shader invocations have no guaranteed order, and will generally run in parallel according to device capabilities. Developers should not assume that any given invocation or workgroup will complete before any other one is started. Some devices may appear to execute in a consistent order, but this behavior should not be relied on as it will not perform identically across all devices. Shaders that require synchronization across invocations must use Synchronization Built-in Functions to coordinate execution.
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
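The builtin values computed by the steps above can be sketched in plain JavaScript (a non-normative illustration; `invocation` and `workgroupSize` are plain objects standing in for the spec's values):

```javascript
// Compute the per-invocation builtin values for one compute shader
// invocation, following the formulas in the algorithm above.
function computeBuiltins(invocation, workgroupSize) {
  const { workgroupX, workgroupY, workgroupZ, localX, localY, localZ } = invocation;
  return {
    workgroup_id: [workgroupX, workgroupY, workgroupZ],
    local_invocation_id: [localX, localY, localZ],
    global_invocation_id: [
      workgroupX * workgroupSize.x + localX,
      workgroupY * workgroupSize.y + localY,
      workgroupZ * workgroupSize.z + localZ,
    ],
    local_invocation_index:
      localX +
      localY * workgroupSize.x +
      localZ * workgroupSize.x * workgroupSize.y,
  };
}
```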
23.2. Rendering
Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder
, and result in modifications of the texture data, viewed by the render pass attachments. These operations are encoded with:
Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.
The main rendering algorithm:
render(pipeline, drawCall, state)
Arguments:
pipeline: The current GPURenderPipeline
.
drawCall: The draw call parameters. May come from function arguments or an INDIRECT
buffer.
state: RenderState of the GPURenderCommandsMixin
where the draw call is issued.
Let descriptor be pipeline.[[descriptor]]
.
Resolve indices. See § 23.2.1 Index Resolution.
Let vertexList be the result of resolve indices(drawCall, state).
Process vertices. See § 23.2.2 Vertex Processing.
Execute process vertices(vertexList, drawCall, descriptor.vertex
, state).
Assemble primitives. See § 23.2.3 Primitive Assembly.
Execute assemble primitives(vertexList, drawCall, descriptor.primitive
).
Clip primitives. See § 23.2.4 Primitive Clipping.
Let primitiveList be the result of this stage.
Rasterize. See § 23.2.5 Rasterization.
Let rasterizationList be the result of rasterize(primitiveList, state).
Process fragments. See § 23.2.6 Fragment Processing.
Gather a list of fragments, resulting from executing process fragment(rasterPoint, descriptor, state) for each rasterPoint in rasterizationList.
Write pixels. See § 23.2.7 Output Merging.
For each non-null fragment of fragments:
Execute process depth stencil(fragment, pipeline, state).
Execute process color attachments(fragment, pipeline, state).
At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.
resolve indices(drawCall, state)
Arguments:
drawCall: The draw call parameters. May come from function arguments or an INDIRECT
buffer.
state: The snapshot of the GPURenderCommandsMixin
state at the time of the draw call.
Returns: list of integer indices.
Let vertexIndexList be an empty list of indices.
If drawCall is an indexed draw call:
Initialize the vertexIndexList with drawCall.indexCount integers.
For i in range 0 .. drawCall.indexCount (non-inclusive):
Let relativeVertexIndex be fetch index(i + drawCall.firstIndex
, state.[[index_buffer]]
).
If relativeVertexIndex has the special value "out of bounds"
, return the empty list.
Note: Implementations may choose to display a warning when this occurs, especially when it is easy to detect (like in non-indirect indexed draw calls).
Append drawCall.baseVertex
+ relativeVertexIndex to the vertexIndexList.
Otherwise:
Initialize the vertexIndexList with drawCall.vertexCount integers.
Set each vertexIndexList item i to the value drawCall.firstVertex + i.
Return vertexIndexList.
Note: in the case of indirect draw calls, the indexCount
, vertexCount
, and other properties of drawCall are read from the indirect buffer instead of the draw command itself.
Vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data, and produces clip space positions for § 23.2.4 Primitive Clipping, as well as other data for the § 23.2.6 Fragment Processing.
process vertices(vertexIndexList, drawCall, desc, state)
Arguments:
vertexIndexList: List of vertex indices to process (mutable, passed by reference).
drawCall: The draw call parameters. May come from function arguments or an INDIRECT
buffer.
desc: The descriptor of type GPUVertexState
.
state: The snapshot of the GPURenderCommandsMixin
state at the time of the draw call.
Each vertex vertexIndex in the vertexIndexList, in each instance of index rawInstanceIndex, is processed independently. The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive. This processing happens in parallel, and any side effects, such as writes into GPUBufferBindingType
"storage"
bindings, may happen in any order.
Let instanceIndex be rawInstanceIndex + drawCall.firstInstance.
For each non-null
vertexBufferLayout in the list of desc.buffers
:
Let i be the index of the buffer layout in this list.
Let vertexBuffer, vertexBufferOffset, and vertexBufferBindingSize be the buffer, offset, and size at slot i of state.[[vertex_buffers]]
.
Let vertexElementIndex be dependent on vertexBufferLayout.stepMode
:
"vertex"
vertexIndex
"instance"
instanceIndex
Let drawCallOutOfBounds be false
.
For each attributeDesc in vertexBufferLayout.attributes
:
Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride
+ attributeDesc.offset
.
If attributeOffset + byteSize(attributeDesc.format
) > vertexBufferOffset + vertexBufferBindingSize:
Set drawCallOutOfBounds to true
.
Optionally (implementation-defined), empty vertexIndexList and return, cancelling the draw call.
Note: This allows implementations to detect out-of-bounds values in the index buffer before issuing a draw call, instead of using invalid memory reference behavior.
For each attributeDesc in vertexBufferLayout.attributes
:
If drawCallOutOfBounds is true
:
Load the attribute data according to WGSL’s invalid memory reference behavior, from vertexBuffer.
Note: Invalid memory reference allows several behaviors, including actually loading the "correct" result for an attribute that is in-bounds, even when the draw-call-wide drawCallOutOfBounds is true
.
Otherwise:
Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride
+ attributeDesc.offset
.
Load the attribute data of format attributeDesc.format
from vertexBuffer starting at offset attributeOffset. The components are loaded in the order x
, y
, z
, w
from buffer memory.
Convert the data into a shader-visible format, according to channel formats rules.
An attribute of type
"snorm8x2"
and byte values of
[0x70, 0xD0]
will be converted to
vec2<f32>(0.88, -0.38)
in WGSL.
Adjust the data size to the shader type:
if both are scalar, or both are vectors of the same dimensionality, no adjustment is needed.
if data is vector but the shader type is scalar, then only the first component is extracted.
if both are vectors, and data has a higher dimension, the extra components are dropped.
An attribute of type
"float32x3"
and value
vec3<f32>(1.0, 2.0, 3.0)
will be exposed to the shader as
vec2<f32>(1.0, 2.0)
if a 2-component vector is expected.
if the shader type is a vector of higher dimensionality, or the data is a scalar, then the missing components are filled from vec4<*>(0, 0, 0, 1)
value.
An attribute of type
"sint32"
and value
5
will be exposed to the shader as
vec4<i32>(5, 0, 0, 1)
if a 4-component vector is expected.
Bind the data to vertex shader input location attributeDesc.shaderLocation
.
For each GPUBindGroup
group at index in state.[[bind_groups]]
:
For each resource GPUBindingResource
in the bind group:
Let entry be the corresponding GPUBindGroupLayoutEntry
for this resource.
If entry.visibility
includes VERTEX
:
Bind the resource to the shader under group index and binding GPUBindGroupLayoutEntry.binding
.
Set the shader builtins:
Set the vertex_index
builtin, if any, to vertexIndex.
Set the instance_index
builtin, if any, to instanceIndex.
Invoke the vertex shader entry point described by desc.
Note: The target platform caches the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
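The attribute size adjustment rules described above can be sketched in plain JavaScript (non-normative; attributes and shader types are modeled as plain arrays and component counts of this sketch):

```javascript
// Non-normative sketch: fit a loaded attribute (an array of components)
// to the shader type's dimensionality. Extra components are dropped;
// missing components are filled from (0, 0, 0, 1).
function adjustToShaderType(data, shaderComponents) {
  const fill = [0, 0, 0, 1];
  const out = [];
  for (let i = 0; i < shaderComponents; i++) {
    out.push(i < data.length ? data[i] : fill[i]);
  }
  // A scalar shader type receives just the first component.
  return shaderComponents === 1 ? out[0] : out;
}
```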
Primitives are assembled by a fixed-function stage of GPUs.
assemble primitives(vertexIndexList, drawCall, desc)
Arguments:
vertexIndexList: List of vertex indices to process.
drawCall: The draw call parameters. May come from function arguments or an INDIRECT
buffer.
desc: The descriptor of type GPUPrimitiveState
.
For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.
First, if the primitive topology is a strip, (which means that desc.stripIndexFormat
is not undefined) and the drawCall is indexed, the vertexIndexList is split into sub-lists using the maximum value of desc.stripIndexFormat
as a separator.
Example: a vertexIndexList with values [1, 2, 65535, 4, 5, 6]
of type "uint16"
will be split in sub-lists [1, 2]
and [4, 5, 6]
.
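This splitting step might be sketched as follows (non-normative JavaScript; the restart values are the maximum values of the two index formats):

```javascript
// Non-normative sketch: split an index list into sub-lists at the
// primitive restart value of the strip index format.
function splitOnRestart(vertexIndexList, indexFormat) {
  const restart = indexFormat === "uint16" ? 0xFFFF : 0xFFFFFFFF;
  const subLists = [[]];
  for (const index of vertexIndexList) {
    if (index === restart) subLists.push([]); // separator starts a new sub-list
    else subLists[subLists.length - 1].push(index);
  }
  return subLists;
}
```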
For each of the sub-lists vl, primitive generation is done according to the desc.topology
:
"line-list"
Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.
"line-strip"
Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.
"triangle-list"
Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.
"triangle-strip"
Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.
Any incomplete primitives are dropped.
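The assembly rules above can be sketched in plain JavaScript (non-normative; `vl` is a plain array of vertex indices):

```javascript
// Non-normative sketch of primitive generation for each topology,
// including the alternating winding of "triangle-strip" shown above.
// Incomplete primitives are dropped by the loop bounds.
function assemblePrimitives(vl, topology) {
  const prims = [];
  switch (topology) {
    case "line-list":
      for (let i = 0; i + 1 < vl.length; i += 2) prims.push([vl[i], vl[i + 1]]);
      break;
    case "line-strip":
      for (let i = 0; i + 1 < vl.length; i += 1) prims.push([vl[i], vl[i + 1]]);
      break;
    case "triangle-list":
      for (let i = 0; i + 2 < vl.length; i += 3) prims.push([vl[i], vl[i + 1], vl[i + 2]]);
      break;
    case "triangle-strip":
      for (let i = 0; i + 2 < vl.length; i += 1) {
        prims.push(i % 2 === 0
          ? [vl[i], vl[i + 1], vl[i + 2]]
          : [vl[i + 1], vl[i], vl[i + 2]]);
      }
      break;
  }
  return prims;
}
```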
Vertex shaders have to produce a built-in position (of type vec4<f32>
), which denotes the clip position of a vertex in clip space coordinates.
Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:
−p.w ≤ p.x ≤ p.w
−p.w ≤ p.y ≤ p.w
0 ≤ p.z ≤ p.w (depth clipping)
When the "clip-distances"
feature is enabled, this clip volume can be further restricted by user-defined half-spaces by declaring clip_distances in the output of vertex stage. Each value in the clip_distances array will be linearly interpolated across the primitive, and the portion of the primitive with interpolated distances less than 0 will be clipped.
If descriptor.primitive
.unclippedDepth
is true
, depth clipping is not applied: the clip volume is not bounded in the z dimension.
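The clip-volume inequalities, including the unclippedDepth case, can be checked with a small non-normative sketch:

```javascript
// Non-normative check of the clip-volume inequalities for a clip
// position p = {x, y, z, w}. With unclippedDepth the z bounds are skipped.
function insideClipVolume(p, unclippedDepth = false) {
  return -p.w <= p.x && p.x <= p.w &&
         -p.w <= p.y && p.y <= p.w &&
         (unclippedDepth || (0 <= p.z && p.z <= p.w));
}
```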
A primitive passes through this stage unchanged if every one of its edges lie entirely inside the clip volume. If the edges of a primitives intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume. For triangular primitives (descriptor.primitive
.topology
is "triangle-list"
or "triangle-strip"
), this reconnection may result in introduction of new vertices into the polygon, internally.
If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.
If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.
Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.
For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:
"flat"
Flat interpolation is unaffected, and is based on the provoking vertex, which is determined by the interpolation sampling mode declared in the shader. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex.
"linear"
The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.
"perspective"
The value is linearly interpolated in clip space, producing perspective-correct values.
The result of primitive clipping is a new set of primitives, which are contained within the clip volume.
23.2.5. Rasterization
Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer - the set of render attachments in the current GPURenderPassEncoder
. This rendering area is split into an even grid of pixels.
The framebuffer coordinates start from the top-left corner of the render targets. Each unit corresponds exactly to one pixel. See § 3.3 Coordinate Systems for more information.
Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample
.count
samples. The standard sample patterns are as follows, with positions in framebuffer coordinates relative to the top-left corner of the pixel, such that the pixel ranges from (0, 0) to (1, 1):
multisample.count | Sample positions
1 | Sample 0: (0.5, 0.5)
4 | Sample 0: (0.375, 0.125), Sample 1: (0.875, 0.375), Sample 2: (0.125, 0.625), Sample 3: (0.625, 0.875)
Implementations must use the standard sample pattern for the given multisample
.count
when performing rasterization.
Let’s define a FragmentDestination to contain:
the 2D pixel position using framebuffer coordinates
an integer in case § 23.2.10 Per-Sample Shading is active, or null
otherwise
We’ll also use a notion of normalized device coordinates, or NDC. In this coordinate system, the viewport bounds range in X and Y from -1 to 1, and in Z from 0 to 1.
Rasterization produces a list of RasterizationPoints, each containing the following data:
refers to FragmentDestination
refers to multisample coverage mask (see § 23.2.11 Sample Masking)
is true if it’s a point on the front face of a primitive
refers to interpolated 1.0 ÷ W across the primitive
refers to the depth in viewport coordinates, i.e. between the [[viewport]]
minDepth
and maxDepth
.
refers to the list of vertex outputs forming the primitive
refers to § 23.2.5.3 Barycentric coordinates
rasterize(primitiveList, state)
Arguments:
primitiveList: List of primitives to rasterize.
state: The active RenderState.
Returns: list of RasterizationPoint.
Each primitive in primitiveList is processed independently. However, the order of primitives affects later stages, such as depth/stencil operations and pixel writes.
First, the clipped vertices are transformed into NDC - normalized device coordinates. Given the output position p, the NDC position and perspective divisor are:
ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)
divisor(p) = 1.0 ÷ p.w
Let vp be state.[[viewport]]
. Map the NDC position n into viewport coordinates:
Compute framebuffer coordinates from the render target offset and size:
framebufferCoords(n) = vector(vp.x
+ 0.5 × (n.x + 1) × vp.width
, vp.y
+ 0.5 × (−n.y + 1) × vp.height
)
Compute depth by linearly mapping [0,1] to the viewport depth range:
depth(n) = vp.minDepth
+ n.z
× ( vp.maxDepth
- vp.minDepth
)
Let rasterizationPoints be the list of points, each having its attributes (divisor(p)
, framebufferCoords(n)
, depth(n)
, etc.) interpolated according to its position on the primitive, using the same interpolation as § 23.2.4 Primitive Clipping. If the attribute is user-defined (not a built-in output value) then the interpolation type specified by the @interpolate WGSL attribute is used.
Proceed with a specific rasterization algorithm, depending on primitive
.topology
:
"point-list"
The point, if not filtered by § 23.2.4 Primitive Clipping, goes into § 23.2.5.1 Point Rasterization.
"line-list"
or "line-strip"
The line cut by § 23.2.4 Primitive Clipping goes into § 23.2.5.2 Line Rasterization.
"triangle-list"
or "triangle-strip"
The polygon produced in § 23.2.4 Primitive Clipping goes into § 23.2.5.4 Polygon Rasterization.
Remove all the points rp from rasterizationPoints that have rp.destination.position outside of state.[[scissorRect]]
.
Return rasterizationPoints.
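The NDC and viewport mapping formulas above can be sketched in plain JavaScript (non-normative; `p` is a clip position and `vp` is a plain object standing in for [[viewport]]):

```javascript
// Non-normative sketch: map a clip position p = {x, y, z, w} through the
// NDC transform, viewport mapping, and depth range mapping given above.
function toFramebuffer(p, vp) {
  const n = { x: p.x / p.w, y: p.y / p.w, z: p.z / p.w }; // ndc(p)
  return {
    divisor: 1.0 / p.w, // divisor(p)
    coords: [
      vp.x + 0.5 * (n.x + 1) * vp.width,
      vp.y + 0.5 * (-n.y + 1) * vp.height,
    ],
    depth: vp.minDepth + n.z * (vp.maxDepth - vp.minDepth),
  };
}
```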
A single FragmentDestination is selected within the pixel containing the framebuffer coordinates of the point.
The coverage mask depends on multi-sampling mode:
coverageMask = 1 ≪ sampleIndex
coverageMask = 1 ≪ descriptor.multisample
.count
− 1
coverageMask = 1
The exact algorithm used for line rasterization is not defined, and may differ between implementations. For example, the line may be drawn using § 23.2.5.4 Polygon Rasterization of a 1px-width rectangle around the line segment, or using Bresenham’s line algorithm to select the FragmentDestinations.
Note: See Basic Line Segment Rasterization and Bresenham Line Segment Rasterization in the Vulkan 1.3 spec for more details of how these line rasterization algorithms may be implemented.
23.2.5.3. Barycentric coordinates
Barycentric coordinates are a list of n numbers bi, defined for a point p inside a convex polygon with n vertices vi in framebuffer space. Each bi is in range 0 to 1, inclusive, and represents the proximity to vertex vi. Their sum is always constant:
∑ (bi) = 1
These coordinates uniquely specify any point p within the polygon (or on its boundary) as:
p = ∑ (bi × pi)
For a polygon with 3 vertices - a triangle, barycentric coordinates of any point p can be computed as follows:
Apolygon = A(v1, v2, v3)
b1 = A(p, v2, v3) ÷ Apolygon
b2 = A(v1, p, v3) ÷ Apolygon
b3 = A(v1, v2, p) ÷ Apolygon
Where A(list of points) is the area of the polygon with the given set of vertices.
For polygons with more than 3 vertices, the exact algorithm is implementation-dependent. One of the possible implementations is to triangulate the polygon and compute the barycentrics of a point based on the triangle it falls into.
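For the triangle case, the area-ratio formulas can be sketched in plain JavaScript (non-normative; `signedArea` is a helper of this sketch computing the signed triangle area via the shoelace formula, so orientation cancels in the ratios):

```javascript
// Signed area of triangle (a, b, c); sign depends on winding, but it
// cancels out in the barycentric ratios below.
function signedArea(a, b, c) {
  return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

// Non-normative: barycentric coordinates of point p inside triangle
// (v1, v2, v3), following the area-ratio formulas above.
function barycentric(p, v1, v2, v3) {
  const total = signedArea(v1, v2, v3);
  return [
    signedArea(p, v2, v3) / total,
    signedArea(v1, p, v3) / total,
    signedArea(v1, v2, p) / total,
  ];
}
```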
23.2.5.4. Polygon Rasterization
A polygon is front-facing if it’s oriented towards the projection. Otherwise, the polygon is back-facing.
rasterize polygon()
Arguments:
Returns: list of RasterizationPoint.
Let rasterizationPoints be an empty list.
Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.
Note: this section uses the term "polygon" instead of a "triangle", since § 23.2.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.
Determine if the polygon is front-facing, which depends on the sign of the area occupied by the polygon in framebuffer coordinates:
area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))
The sign of area is interpreted based on the primitive
.frontFace
:
"ccw"
area > 0 is considered front-facing, otherwise back-facing
"cw"
area < 0 is considered front-facing, otherwise back-facing
Cull based on primitive
.cullMode
:
"none"
All polygons pass this test.
"front"
The front-facing polygons are discarded, and do not process in later stages of the render pipeline.
"back"
The back-facing polygons are discarded.
Determine a set of fragments inside the polygon in framebuffer space - these are locations scheduled for the per-fragment operations. This operation is known as "point sampling". The logic is based on descriptor.multisample
:
Fragments are associated with pixel centers. That is, all the points with coordinates C, where fract(C) = vector2(0.5, 0.5) in the framebuffer space, enclosed into the polygon, are included. If a pixel center is on the edge of the polygon, whether or not it’s included is not defined.
Note: this becomes a subject of precision for the rasterizer.
Each pixel is associated with descriptor.multisample
.count
locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer.
The rasterizer builds a mask of locations being hit inside each pixel and provides it as the "sample_mask" built-in to the fragment shader.
For each produced fragment of type FragmentDestination:
Let rp be a new RasterizationPoint object
Compute the list b as § 23.2.5.3 Barycentric coordinates of that fragment. Set rp.barycentricCoordinates to b.
Let di be the depth value of vi.
Set rp.depth to ∑ (bi × di)
Append rp to rasterizationPoints.
Return rasterizationPoints.
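The facing and culling steps of this algorithm can be sketched in plain JavaScript (non-normative; `polygonArea` follows the signed-area formula given above, with v1 = vertices[0] and vn the last vertex):

```javascript
// Signed polygon area in framebuffer coordinates, per the formula:
// area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))
function polygonArea(v) {
  const n = v.length;
  let sum = v[0].x * v[n - 1].y - v[n - 1].x * v[0].y;
  for (let i = 0; i + 1 < n; i++) {
    sum += v[i + 1].x * v[i].y - v[i].x * v[i + 1].y;
  }
  return 0.5 * sum;
}

// Non-normative: determine facing from the area sign and frontFace,
// then apply cullMode. Returns true if the polygon survives culling.
function survivesCulling(vertices, frontFace, cullMode) {
  const area = polygonArea(vertices);
  const frontFacing = frontFace === "ccw" ? area > 0 : area < 0;
  if (cullMode === "front") return !frontFacing;
  if (cullMode === "back") return frontFacing;
  return true; // "none"
}
```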
The fragment processing stage is a programmable stage of the render pipeline that computes the fragment data (often a color) to be written into render targets.
This stage produces a Fragment for each RasterizationPoint:
process fragment(rp, descriptor, state)
Arguments:
rp: The RasterizationPoint, produced by § 23.2.5 Rasterization.
descriptor: The descriptor of type GPURenderPipelineDescriptor
.
state: The active RenderState.
Returns: Fragment or null
.
Let fragmentDesc be descriptor.fragment
.
Let depthStencilDesc be descriptor.depthStencil
.
Let fragment be a new Fragment object.
Set fragment.destination to rp.destination.
Set fragment.frontFacing to rp.frontFacing.
Set fragment.coverageMask to rp.coverageMask.
If frag_depth
builtin is not produced by the shader:
Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]]
, depthStencilDesc?.depthCompare
).
Set stencilState to depthStencilDesc?.stencilFront
if rp.frontFacing is true
and depthStencilDesc?.stencilBack
otherwise.
Set fragment.stencilPassed to the result of compare fragment(fragment.destination, state.[[stencilReference]]
, "stencil", state.[[depthStencilAttachment]]
, stencilState?.compare
).
If fragmentDesc is not null
:
If fragment.depthPassed is false
, the frag_depth
builtin is not produced by the shader entry point, and the shader entry point does not write to any storage bindings, the following steps may be skipped.
Set the shader input builtins. For each non-composite argument of the entry point, annotated as a builtin, set its value based on the annotation:
position
vec4<f32>
(rp.destination.position, rp.depth, rp.perspectiveDivisor)
front_facing
rp.frontFacing
sample_index
sample_mask
rp.coverageMask
For each user-specified shader stage input of the fragment stage:
Let value be the interpolated fragment input, based on rp.barycentricCoordinates, rp.primitiveVertices, and the interpolation qualifier on the input.
Set the corresponding fragment shader location input to value.
Invoke the fragment shader entry point described by fragmentDesc.
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
If the fragment issued discard
, return null
.
Set fragment.colors to the user-specified shader stage output values from the shader.
Take the shader output builtins:
If frag_depth
builtin is produced by the shader as value:
Let vp be state.[[viewport]]
.
Set fragment.depth to clamp(value, vp.minDepth
, vp.maxDepth
).
Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]]
, depthStencilDesc?.depthCompare
).
If sample_mask
builtin is produced by the shader as value:
Set fragment.coverageMask to fragment.coverageMask ∧ value.
Otherwise we are in § 23.2.8 No Color Output mode, and fragment.colors is empty.
Return fragment.
compare fragment(destination, value, aspect, attachment, compareFunc)
Arguments:
destination: The FragmentDestination.
value: The value to be compared.
aspect: The aspect of attachment to sample values from.
attachment: The attachment to be compared against.
compareFunc: The GPUCompareFunction
to use, or undefined
.
Returns: true
if the comparison passes, or false
otherwise
If attachment is undefined
or does not have aspect, return true
.
If compareFunc is undefined
or "always"
, return true
.
Let attachmentValue be the value of aspect of attachment at destination.
Return true
if comparing value with attachmentValue using compareFunc succeeds, and false
otherwise.
Processing of fragments happens in parallel, while any side effects, such as writes into GPUBufferBindingType
"storage"
bindings, may happen in any order.
Output merging is a fixed-function stage of the render pipeline that outputs the fragment color, depth and stencil data to be written into the render pass attachments.
The depth input to this stage, if any, is clamped to the current [[viewport]]
depth range (regardless of whether the fragment shader stage writes the frag_depth
builtin).
In no-color-output mode, the pipeline does not produce any color attachment outputs.
The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.
23.2.9. Alpha to Coverage
In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated based on the alpha component of the fragment shader output value at @location(0)
.
The algorithm of producing the extra mask is platform-dependent and can vary for different pixels. It guarantees that:
if alpha ≤ 0.0, the result is 0x0
if alpha ≥ 1.0, the result is 0xFFFFFFFF
intermediate alpha values should result in a proportionate number of bits set to 1 in the mask. Some platforms may not guarantee that the number of bits set to 1 in the mask monotonically increases as alpha increases for a given pixel.
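One possible (non-normative) mask construction satisfying these guarantees is sketched below; real implementations are platform-dependent and may vary per pixel:

```javascript
// Non-normative sketch of one possible alpha-to-coverage mask: set a
// number of low bits proportionate to alpha. Not a normative algorithm.
function alphaToCoverageMask(alpha, count) {
  if (alpha <= 0.0) return 0x0;
  if (alpha >= 1.0) return 0xFFFFFFFF;
  const bits = Math.round(alpha * count);
  return (1 << bits) - 1;
}
```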
When rendering into multisampled render attachments, fragment shaders can be run once per-pixel or once per-sample. Fragment shaders must run once per-sample if either the sample_index
builtin or sample
interpolation sampling is used and contributes to the shader output. Otherwise fragment shaders may run once per-pixel with the result broadcast out to each of the samples included in the final sample mask.
When using per-sample shading, the color output for sample N is produced by the fragment shader execution with sample_index
== N for the current pixel.
The final sample mask for a pixel is computed as: rasterization mask & mask
& shader-output mask.
Only the lower count
bits of the mask are considered.
If bit N of the final sample mask has a value of "0", the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.
The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape get the relevant bits set to 1 in the mask.
The shader-output mask takes the output value of "sample_mask" builtin in the fragment shader. If the builtin is not output from the fragment shader, and alphaToCoverageEnabled
is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
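The mask combination can be sketched in plain JavaScript (non-normative; `count` stands in for multisample.count):

```javascript
// Non-normative sketch: combine the rasterization mask, the
// multisample.mask state, and the shader-output mask, keeping only
// the lower `count` bits as described above.
function finalSampleMask(rasterizationMask, multisampleMask, shaderOutputMask, count) {
  const lowBits = (1 << count) - 1;
  return rasterizationMask & multisampleMask & shaderOutputMask & lowBits;
}
```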
typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;

typedef unsigned long GPUFlagsConstant;

24.1. Colors & Vectors
dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;
Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.
r, of type double
The red channel value.
g, of type double
The green channel value.
b, of type double
The blue channel value.
a, of type double
The alpha channel value.
For a given GPUColor value color, depending on its type, the syntax:
color.r refers to either GPUColorDict.r or the first item of the sequence (asserting there is such an item).
color.g refers to either GPUColorDict.g or the second item of the sequence (asserting there is such an item).
color.b refers to either GPUColorDict.b or the third item of the sequence (asserting there is such an item).
color.a refers to either GPUColorDict.a or the fourth item of the sequence (asserting there is such an item).
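The channel-resolution rule above can be expressed as a small helper. This is an illustrative sketch, not a WebGPU API function; it mirrors how an implementation resolves both forms of GPUColor.

```javascript
// Resolve color.r/g/b/a for either form of GPUColor: a sequence of four
// doubles, or a GPUColorDict with r/g/b/a members.
function colorChannel(color, channel) {
  const index = { r: 0, g: 1, b: 2, a: 3 }[channel];
  if (Array.isArray(color)) {
    // The spec asserts the sequence item exists.
    if (color.length <= index) throw new Error("sequence too short");
    return color[index];
  }
  return color[channel];
}
```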
dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
For a given GPUOrigin2D value origin, depending on its type, the syntax:
origin.x refers to either GPUOrigin2DDict.x or the first item of the sequence (0 if not present).
origin.y refers to either GPUOrigin2DDict.y or the second item of the sequence (0 if not present).
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
For a given GPUOrigin3D value origin, depending on its type, the syntax:
origin.x refers to either GPUOrigin3DDict.x or the first item of the sequence (0 if not present).
origin.y refers to either GPUOrigin3DDict.y or the second item of the sequence (0 if not present).
origin.z refers to either GPUOrigin3DDict.z or the third item of the sequence (0 if not present).
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
width, of type GPUIntegerCoordinate
The width of the extent.
height, of type GPUIntegerCoordinate, defaulting to 1
The height of the extent.
depthOrArrayLayers, of type GPUIntegerCoordinate, defaulting to 1
The depth of the extent or the number of array layers it contains. If used with a GPUTexture with a GPUTextureDimension of "3d", it defines the depth of the texture. If used with a GPUTexture with a GPUTextureDimension of "2d", it defines the number of array layers in the texture.
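The defaulting rules for the sequence and dictionary forms of GPUOrigin3D and GPUExtent3D can be sketched with normalization helpers. These are illustrative, not part of the WebGPU API.

```javascript
// Normalize a GPUOrigin3D to dictionary form; missing members default to 0.
function normalizeOrigin3D(origin) {
  if (Array.isArray(origin)) {
    return { x: origin[0] ?? 0, y: origin[1] ?? 0, z: origin[2] ?? 0 };
  }
  return { x: origin.x ?? 0, y: origin.y ?? 0, z: origin.z ?? 0 };
}

// Normalize a GPUExtent3D; width is required, the rest default to 1.
function normalizeExtent3D(extent) {
  if (Array.isArray(extent)) {
    if (extent.length < 1) throw new TypeError("width is required");
    return {
      width: extent[0],
      height: extent[1] ?? 1,
      depthOrArrayLayers: extent[2] ?? 1,
    };
  }
  if (extent.width === undefined) throw new TypeError("width is required");
  return {
    width: extent.width,
    height: extent.height ?? 1,
    depthOrArrayLayers: extent.depthOrArrayLayers ?? 1,
  };
}
```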
"core-features-and-limits"
Allows all Core WebGPU features and limits to be used.
Note: This is currently available on all adapters and enabled automatically on all devices even if not requested.
25.2."depth-clip-control"
Allows depth clipping to be disabled.
This feature adds the following optional API surfaces:
New GPUPrimitiveState dictionary members:
"depth32float-stencil8"
Allows for explicit creation of textures of format "depth32float-stencil8"
.
This feature adds the following optional API surfaces:
New GPUTextureFormat enum values:
"texture-compression-bc"
Allows for explicit creation of textures of BC compressed formats. Only supports 2D textures.
Note: Adapters which support "texture-compression-bc"
do not always support "texture-compression-bc-sliced-3d"
. To use "texture-compression-bc-sliced-3d"
, "texture-compression-bc"
must be enabled explicitly as this feature does not enable the BC formats.
This feature adds the following optional API surfaces:
New GPUTextureFormat enum values:
"texture-compression-bc-sliced-3d"
Allows the 3d
dimension for textures with BC compressed formats.
Note: Adapters which support "texture-compression-bc"
do not always support "texture-compression-bc-sliced-3d"
. To use "texture-compression-bc-sliced-3d"
, "texture-compression-bc"
must be enabled explicitly as this feature does not enable the BC formats.
This feature adds no optional API surfaces.
25.6."texture-compression-etc2"
Allows for explicit creation of textures of ETC2 compressed formats. Only supports 2D textures.
This feature adds the following optional API surfaces:
New GPUTextureFormat enum values:
"texture-compression-astc"
Allows for explicit creation of textures of ASTC compressed formats. Only supports 2D textures.
This feature adds the following optional API surfaces:
New GPUTextureFormat enum values:
"texture-compression-astc-sliced-3d"
Allows the 3d
dimension for textures with ASTC compressed formats.
Note: Adapters which support "texture-compression-astc"
do not always support "texture-compression-astc-sliced-3d"
. To use "texture-compression-astc-sliced-3d"
, "texture-compression-astc"
must be enabled explicitly as this feature does not enable the ASTC formats.
This feature adds no optional API surfaces.
25.9."timestamp-query"
Adds the ability to query timestamps from GPU command buffers. See § 20.4 Timestamp Query.
This feature adds the following optional API surfaces:
New GPUQueryType values:
New GPUComputePassDescriptor members:
New GPURenderPassDescriptor members:
"indirect-first-instance"
Allows the use of non-zero firstInstance
values in indirect draw parameters and indirect drawIndexed parameters.
This feature adds no optional API surfaces.
25.11."shader-f16"
Allows the use of the half-precision floating-point type f16 in WGSL.
This feature adds the following optional API surfaces:
New WGSL extensions:
"rg11b10ufloat-renderable"
Allows the RENDER_ATTACHMENT
usage on textures with format "rg11b10ufloat"
, and also allows textures of that format to be blended, multisampled, and resolved.
This feature adds no optional API surfaces.
25.13."bgra8unorm-storage"
Allows the STORAGE_BINDING
usage on textures with format "bgra8unorm"
.
This feature adds no optional API surfaces.
25.14."float32-filterable"
Makes textures with formats "r32float"
, "rg32float"
, and "rgba32float"
filterable.
"float32-blendable"
Makes textures with formats "r32float"
, "rg32float"
, and "rgba32float"
blendable.
"clip-distances"
Allows the use of clip_distances in WGSL.
This feature adds the following optional API surfaces:
New WGSL extensions:
"dual-source-blending"
Allows the use of blend_src in WGSL and simultaneously using both pixel shader outputs (@blend_src(0)
and @blend_src(1)
) as inputs to a blending operation with the single color attachment at location 0
.
This feature adds the following optional API surfaces:
Allows the use of the below GPUBlendFactor values:
New WGSL extensions:
"subgroups"
Allows the use of the subgroup and quad operations in WGSL.
This feature adds no optional API surfaces, but the following entries of GPUAdapterInfo expose real values whenever the feature is available on the adapter:
New WGSL extensions:
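Since every feature above is optional, applications typically check adapter support before requesting a device. The partitioning logic can be sketched as a pure helper (the feature strings are real WebGPU feature names; the helper itself and its names are illustrative, not part of the API):

```javascript
// Split a list of desired optional features by adapter support.
// `supportedFeatures` stands in for GPUAdapter.features (a set-like object).
function partitionFeatures(supportedFeatures, desired) {
  const available = [];
  const missing = [];
  for (const f of desired) {
    (supportedFeatures.has(f) ? available : missing).push(f);
  }
  return { available, missing };
}

// With a real adapter this would be used roughly as:
//   const { available } = partitionFeatures(adapter.features,
//     ["shader-f16", "timestamp-query"]);
//   const device = await adapter.requestDevice({ requiredFeatures: available });
```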
26.1.1. Plain color formats
All plain color formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usage. Additionally, all plain color formats support textures with "3d" dimension.
The RENDER_ATTACHMENT and STORAGE_BINDING columns specify support for GPUTextureUsage.RENDER_ATTACHMENT and GPUTextureUsage.STORAGE_BINDING usage respectively.
The render target pixel byte cost and render target component alignment are used to validate the maxColorAttachmentBytesPerSample limit.
Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.
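The byte-cost and alignment values feed the maxColorAttachmentBytesPerSample validation: each color attachment's byte cost is accumulated after aligning the running total to that format's component alignment. A sketch of that accumulation (the function name is illustrative; cost and alignment values are supplied by the caller from the table below):

```javascript
// Accumulate render target pixel byte costs across color attachments,
// aligning the running sum to each attachment's component alignment first.
function colorAttachmentBytesPerSample(attachments) {
  let sum = 0;
  for (const { byteCost, alignment } of attachments) {
    sum = Math.ceil(sum / alignment) * alignment; // round up to alignment
    sum += byteCost;
  }
  return sum;
}
```

A pipeline is invalid if this sum exceeds the device's maxColorAttachmentBytesPerSample limit.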
8 bits per component (1-byte render target component alignment)
r8unorm          | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 1
r8snorm          | "float", "unfilterable-float" |             | 1 –
r8uint           | "uint"                        | ✓ ✓         | 1
r8sint           | "sint"                        | ✓ ✓         | 1
rg8unorm         | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 2
rg8snorm         | "float", "unfilterable-float" |             | 2 –
rg8uint          | "uint"                        | ✓ ✓         | 2
rg8sint          | "sint"                        | ✓ ✓         | 2
rgba8unorm       | "float", "unfilterable-float" | ✓ ✓ ✓ ✓ ✓ ✓ | 4 8
rgba8unorm-srgb  | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 4 8
rgba8snorm       | "float", "unfilterable-float" | ✓ ✓         | 4 –
rgba8uint        | "uint"                        | ✓ ✓ ✓ ✓     | 4
rgba8sint        | "sint"                        | ✓ ✓ ✓ ✓     | 4
bgra8unorm       | "float", "unfilterable-float" | ✓ ✓ ✓ ✓, storage if "bgra8unorm-storage" is enabled | 4 8
bgra8unorm-srgb  | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 4 8
16 bits per component (2-byte render target component alignment)
r16uint          | "uint"                        | ✓ ✓         | 2
r16sint          | "sint"                        | ✓ ✓         | 2
r16float         | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 2
rg16uint         | "uint"                        | ✓ ✓         | 4
rg16sint         | "sint"                        | ✓ ✓         | 4
rg16float        | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 4
rgba16uint       | "uint"                        | ✓ ✓ ✓ ✓     | 8
rgba16sint       | "sint"                        | ✓ ✓ ✓ ✓     | 8
rgba16float      | "float", "unfilterable-float" | ✓ ✓ ✓ ✓ ✓ ✓ | 8
32 bits per component (4-byte render target component alignment)
r32uint          | "uint"                        | ✓ ✓ ✓ ✓     | 4
r32sint          | "sint"                        | ✓ ✓ ✓ ✓     | 4
r32float         | "unfilterable-float" ("float" if "float32-filterable" is enabled; blendable if "float32-blendable" is enabled) | ✓ ✓ ✓ ✓ | 4
rg32uint         | "uint"                        | ✓ ✓ ✓       | 8
rg32sint         | "sint"                        | ✓ ✓ ✓       | 8
rg32float        | "unfilterable-float" ("float" if "float32-filterable" is enabled; blendable if "float32-blendable" is enabled) | ✓ ✓ | 8
rgba32uint       | "uint"                        | ✓ ✓ ✓       | 16
rgba32sint       | "sint"                        | ✓ ✓ ✓       | 16
rgba32float      | "unfilterable-float" ("float" if "float32-filterable" is enabled; blendable if "float32-blendable" is enabled) | ✓ ✓ | 16
mixed component width, 32 bits per texel (4-byte render target component alignment)
rgb10a2uint      | "uint"                        | ✓ ✓         | 4 8
rgb10a2unorm     | "float", "unfilterable-float" | ✓ ✓ ✓ ✓     | 4 8
rg11b10ufloat    | "float", "unfilterable-float" | renderable if "rg11b10ufloat-renderable" is enabled | 4 8
26.1.2. Depth-stencil formats
A depth-or-stencil format is any format with depth and/or stencil aspects. A combined depth-stencil format is a depth-or-stencil format that has both depth and stencil aspects.
All depth-or-stencil formats support the COPY_SRC, COPY_DST, TEXTURE_BINDING, and RENDER_ATTACHMENT usages. All of these formats support multisampling. However, certain copy operations also restrict the source and destination formats, and none of these formats support textures with "3d" dimension.
Depth textures cannot be used with "filtering" samplers, but can always be used with "comparison" samplers even if they use filtering.
24-bit depth refers to a 24-bit unsigned normalized depth format with a range from 0.0 to 1.0, which would be spelled "depth24unorm" if exposed.
26.1.2.1. Reading and Sampling Depth/Stencil Textures
It is possible to bind a depth-aspect GPUTextureView to either a texture_depth_* binding or a binding with other non-depth 2d/cube texture types.
A stencil-aspect GPUTextureView must be bound to a normal texture binding type. The sampleType in the GPUBindGroupLayout must be "uint".
Reading or sampling the depth or stencil aspect of a texture behaves as if the texture contains the values (V, X, X, X)
, where V is the actual depth or stencil value, and each X is an implementation-defined unspecified value.
For depth-aspect bindings, the unspecified values are not visible through bindings with texture_depth_*
types.
If a depth texture is bound to tex with type texture_2d<f32>:
textureSample(tex, ...) will return vec4<f32>(D, X, X, X).
textureGather(0, tex, ...) will return vec4<f32>(D1, D2, D3, D4).
textureGather(2, tex, ...) will return vec4<f32>(X1, X2, X3, X4) (a completely unspecified value).
Note: Short of adding a new, more constrained stencil sampler type (like depth), it's infeasible for implementations to efficiently paper over the driver differences for depth/stencil reads. As this was not a portability pain point for WebGL, it's not expected to be problematic in WebGPU. In practice, expect either (V, V, V, V) or (V, 0, 0, 1) (where V is the depth or stencil value), depending on hardware.
The depth aspects of depth32float formats ("depth32float" and "depth32float-stencil8") have a limited range. As a result, copies into such textures are only valid from other textures of the same format.
The depth aspects of depth24plus formats ("depth24plus" and "depth24plus-stencil8") have opaque representations (implemented as either 24-bit depth or "depth32float"). As a result, depth-aspect texel copies are not allowed with these formats.
Note: It is possible to imitate these disallowed copies:
All of these formats can be written in a render pass using a fragment shader that outputs depth values via the frag_depth output.
Textures with "depth24plus" formats can be read as shader textures, and the values written out to a texture (as a render pass attachment) or to a buffer (via a storage buffer binding in a compute shader).
All packed texture formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usages. All of these formats are filterable. None of these formats are renderable or support multisampling.
A compressed format is any format with a block size greater than 1×1.
Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.