The AudioBufferSourceNode interface is an AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data, stored in an AudioBuffer.

This interface is especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. To play sounds which require accurate timing but must be streamed from the network or played from disk, use an AudioWorkletNode to implement their playback.
An AudioBufferSourceNode has no inputs and exactly one output, which has the same number of channels as the AudioBuffer indicated by its buffer property. If there's no buffer set (that is, if buffer is null), the output contains a single channel of silence (every sample is 0).
An AudioBufferSourceNode can only be played once; after each call to start(), you have to create a new node if you want to play the same sound again. Fortunately, these nodes are very inexpensive to create, and the actual AudioBuffers can be reused for multiple plays of the sound. Indeed, you can use these nodes in a "fire and forget" manner: create the node, call start() to begin playing the sound, and don't even bother to hold a reference to it. It will automatically be garbage-collected at an appropriate time, which won't be until sometime after the sound has finished playing.
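The "fire and forget" pattern described above can be sketched as a small helper. This is an illustrative sketch, not part of the Web Audio API itself; playOnce is a hypothetical name, and the function assumes you already have a running AudioContext and a decoded AudioBuffer in hand.

```javascript
// Illustrative "fire and forget" helper (playOnce is not a Web Audio API
// function). A fresh AudioBufferSourceNode is created for each play, since
// a source node can only be started once; no reference to the node is kept,
// so it can be garbage-collected after playback finishes.
function playOnce(audioCtx, audioBuffer, when = 0) {
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start(when);
}
```

Note that only the source node is disposable; the AudioBuffer assigned to it can be reused across any number of calls.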
Multiple calls to stop() are allowed. The most recent call replaces the previous one, if the AudioBufferSourceNode has not already reached the end of the buffer.
Number of inputs: 0
Number of outputs: 1
Channel count: defined by the associated AudioBuffer
Constructor
AudioBufferSourceNode()
Creates and returns a new AudioBufferSourceNode object. As an alternative, you can use the BaseAudioContext.createBufferSource() factory method; see Creating an AudioNode.
Inherits properties from its parent, AudioScheduledSourceNode.
AudioBufferSourceNode.buffer
An AudioBuffer that defines the audio asset to be played, or when set to the value null, defines a single channel of silence (in which every sample is 0.0).
AudioBufferSourceNode.detune
A k-rate AudioParam representing detuning of playback in cents. This value is compounded with playbackRate to determine the speed at which the sound is played. Its default value is 0 (meaning no detuning), and its nominal range is -∞ to ∞.
AudioBufferSourceNode.loop
A Boolean attribute indicating if the audio asset must be replayed when the end of the AudioBuffer is reached. Its default value is false.
AudioBufferSourceNode.loopStart
Optional
A floating-point value indicating the time, in seconds, at which playback of the AudioBuffer must begin when loop is true. Its default value is 0 (meaning that at the beginning of each loop, playback begins at the start of the audio buffer).
AudioBufferSourceNode.loopEnd
Optional
A floating-point number indicating the time, in seconds, at which playback of the AudioBuffer stops and loops back to the time indicated by loopStart, if loop is true. The default value is 0.
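How loopStart and loopEnd interact can be modeled with a small helper. This is a rough sketch of the clamping behavior the Web Audio specification describes, not an API call; effectiveLoopBounds is a hypothetical name introduced here for illustration.

```javascript
// Sketch of how the loop region is resolved (per the Web Audio spec's
// clamping rules, approximately): a valid region needs
// 0 <= loopStart < loopEnd with loopEnd > 0; anything else, including
// the defaults loopStart = 0 and loopEnd = 0, loops the whole buffer.
function effectiveLoopBounds(loopStart, loopEnd, bufferDuration) {
  if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
    // loopEnd past the end of the buffer is clamped to the buffer duration.
    return { start: loopStart, end: Math.min(loopEnd, bufferDuration) };
  }
  return { start: 0, end: bufferDuration };
}
```

So with the default values of 0 for both properties, a looping node simply repeats the entire buffer.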
AudioBufferSourceNode.playbackRate
A k-rate AudioParam
that defines the speed factor at which the audio asset will be played, where a value of 1.0 is the sound's natural sampling rate. Since no pitch correction is applied on the output, this can be used to change the pitch of the sample. This value is compounded with detune
to determine the final playback rate.
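The way playbackRate and detune compound is defined by the Web Audio specification: the computed rate is playbackRate scaled by 2^(detune / 1200), since detune is expressed in cents (1200 cents per octave). A minimal sketch of that formula (computedPlaybackRate here is an illustrative helper, not an API function):

```javascript
// computedPlaybackRate = playbackRate * 2^(detune / 1200)
// detune is in cents: 100 cents per semitone, 1200 per octave.
function computedPlaybackRate(playbackRate, detuneCents) {
  return playbackRate * Math.pow(2, detuneCents / 1200);
}
```

For example, a playbackRate of 1.0 with a detune of +1200 cents plays the buffer at twice its natural rate, one octave higher.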
Inherits methods from its parent, AudioScheduledSourceNode, and overrides the following method:
start()
Schedules playback of the audio data contained in the buffer, or begins playback immediately. Additionally allows the start offset and play duration to be set.
In this example, we create a three-second buffer, fill it with white noise, and then play it using an AudioBufferSourceNode. The comments should clearly explain what is going on.
const audioCtx = new AudioContext();
// Create an empty three-second stereo buffer at the sample rate of the AudioContext
const myArrayBuffer = audioCtx.createBuffer(
2,
audioCtx.sampleRate * 3,
audioCtx.sampleRate,
);
// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (let channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
// This gives us the actual Float32Array that contains the data
const nowBuffering = myArrayBuffer.getChannelData(channel);
for (let i = 0; i < myArrayBuffer.length; i++) {
// Math.random() is in [0; 1.0]
// audio needs to be in [-1.0; 1.0]
nowBuffering[i] = Math.random() * 2 - 1;
}
}
// Get an AudioBufferSourceNode.
// This is the AudioNode to use when we want to play an AudioBuffer
const source = audioCtx.createBufferSource();
// set the buffer in the AudioBufferSourceNode
source.buffer = myArrayBuffer;
// connect the AudioBufferSourceNode to the
// destination so we can hear the sound
source.connect(audioCtx.destination);
// start the source playing
source.start();
Note: For a decodeAudioData() example, see the AudioContext.decodeAudioData() page.