The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.

An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single AudioContext for several different audio sources and pipelines concurrently.
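A minimal sketch of that recommendation, assuming nothing beyond the Web Audio API itself: one AudioContext shared by two independent source nodes, both routed to the same destination.

const audioCtx = new AudioContext();

// Two independent sources sharing the same context and output.
const oscA = audioCtx.createOscillator();
const oscB = audioCtx.createOscillator();
oscB.frequency.value = 660; // give the second source a different pitch

oscA.connect(audioCtx.destination);
oscB.connect(audioCtx.destination);

oscA.start();
oscB.start();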
AudioContext()
Creates and returns a new AudioContext object.
Also inherits properties from its parent interface, BaseAudioContext. A short sketch reading these properties follows the list.
AudioContext.baseLatency Read only
Returns the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem.
AudioContext.outputLatency Read only
Returns an estimation of the output latency of the current audio context.
AudioContext.sinkId Read only Experimental Secure context
Returns the sink ID of the current output audio device.
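A short sketch reading the three properties above; note that sinkId is experimental and may be undefined in browsers that don't implement it.

const audioCtx = new AudioContext();

// Processing latency between the destination node and the audio subsystem, in seconds.
console.log(`Base latency: ${audioCtx.baseLatency}`);

// Estimated output latency of the context, in seconds.
console.log(`Output latency: ${audioCtx.outputLatency}`);

// Experimental: an empty string indicates the default output device.
console.log(`Sink ID: ${audioCtx.sinkId}`);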
Also inherits methods from its parent interface, BaseAudioContext. Usage sketches for several of these methods follow the list.
AudioContext.close()
Closes the audio context, releasing any system audio resources that it uses.
AudioContext.createMediaElementSource()
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.
AudioContext.createMediaStreamSource()
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
AudioContext.createMediaStreamDestination()
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
AudioContext.createMediaStreamTrackSource()
Creates a MediaStreamTrackAudioSourceNode associated with a MediaStream representing a media stream track.
AudioContext.getOutputTimestamp()
Returns a new AudioTimestamp object containing two audio timestamp values relating to the current audio context.
AudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended/paused.
AudioContext.setSinkId() Experimental Secure context
Sets the output audio device for the AudioContext.
AudioContext.suspend()
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
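A sketch of createMediaElementSource(), assuming the page contains an <audio> element; the source is routed through a GainNode for volume control before reaching the destination.

const audioCtx = new AudioContext();
const audioElement = document.querySelector("audio"); // assumption: an <audio> element exists on the page
const source = audioCtx.createMediaElementSource(audioElement);

const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.5; // halve the volume

// connect() returns the node it connects to, so calls can be chained.
source.connect(gainNode).connect(audioCtx.destination);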
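A sketch of createMediaStreamSource() using a microphone stream obtained from getUserMedia(). The createMediaStreamTrackSource() variant at the end has limited browser support, so it is guarded and left unconnected.

const audioCtx = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // Whole-stream source: all audio tracks in the MediaStream.
  const streamSource = audioCtx.createMediaStreamSource(stream);
  streamSource.connect(audioCtx.destination);

  // Limited-support variant: a source for a single audio track.
  if (typeof audioCtx.createMediaStreamTrackSource === "function") {
    const [track] = stream.getAudioTracks();
    const trackSource = audioCtx.createMediaStreamTrackSource(track);
    // trackSource can be connected into the graph just like any other source node.
  }
});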
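A sketch of createMediaStreamDestination(): the node's stream property is a MediaStream that can be handed to a MediaRecorder (or to a WebRTC connection), here recording an oscillator.

const audioCtx = new AudioContext();
const dest = audioCtx.createMediaStreamDestination();

const osc = audioCtx.createOscillator();
osc.connect(dest);
osc.start();

// Record the stream produced by the destination node.
const chunks = [];
const recorder = new MediaRecorder(dest.stream);
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.start();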
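A sketch of getOutputTimestamp(): the returned AudioTimestamp holds a contextTime in the context's own clock and a performanceTime in the same clock as performance.now().

const audioCtx = new AudioContext();
const timestamp = audioCtx.getOutputTimestamp();

console.log(`contextTime: ${timestamp.contextTime}`); // seconds, same clock as audioCtx.currentTime
console.log(`performanceTime: ${timestamp.performanceTime}`); // milliseconds, same clock as performance.now()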
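Because browsers commonly start an AudioContext in the "suspended" state until a user gesture occurs, resume() is typically called from an input handler; suspend() and close() likewise return promises. A sketch assuming hypothetical #play and #stop buttons on the page:

const audioCtx = new AudioContext();
const playButton = document.querySelector("#play"); // hypothetical elements
const stopButton = document.querySelector("#stop");

playButton.addEventListener("click", async () => {
  // Autoplay policies usually require a user gesture before audio can start.
  if (audioCtx.state === "suspended") {
    await audioCtx.resume();
  }
});

stopButton.addEventListener("click", async () => {
  await audioCtx.suspend(); // temporarily halt time progression and hardware access
  // ...or tear the context down for good:
  // await audioCtx.close();
});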
sinkchange Experimental
Fired when the output audio device (and therefore, the AudioContext.sinkId) has changed.
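A sketch combining setSinkId() and the sinkchange event. It assumes a secure context, that the browser implements these experimental features, and that permission to enumerate media devices has been granted so that output devices are listed.

const audioCtx = new AudioContext();

// Fires whenever the output device, and therefore audioCtx.sinkId, changes.
audioCtx.addEventListener("sinkchange", () => {
  console.log(`Output device changed, sinkId is now: ${audioCtx.sinkId}`);
});

async function switchToFirstOutputDevice() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const output = devices.find((device) => device.kind === "audiooutput");
  if (output) {
    await audioCtx.setSinkId(output.deviceId);
  }
}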
Basic audio context declaration:
const audioCtx = new AudioContext();
const oscillatorNode = audioCtx.createOscillator(); // a source node
const gainNode = audioCtx.createGain(); // a processing node for volume control
const finish = audioCtx.destination; // the context's output, usually the speakers
// etc.