The SpeechRecognition interface of the Web Speech API is the controller interface for the recognition service; it also handles the SpeechRecognitionEvent objects sent from the recognition service.
Note: On some browsers, like Chrome, using Speech Recognition on a web page involves a server-based recognition engine. Your audio is sent to a web service for recognition processing, so it won't work offline.
Constructor
SpeechRecognition()
Creates a new SpeechRecognition object.
Instance properties
SpeechRecognition also inherits properties from its parent interface, EventTarget.
SpeechRecognition.grammars
Returns and sets a collection of SpeechGrammar objects that represent the grammars that will be understood by the current SpeechRecognition.
SpeechRecognition.lang
Returns and sets the language of the current SpeechRecognition. If not specified, this defaults to the HTML lang attribute value, or the user agent's language setting if that isn't set either.
SpeechRecognition.continuous
Controls whether continuous results are returned for each recognition, or only a single result. Defaults to single (false).
SpeechRecognition.interimResults
Controls whether interim results should be returned (true) or not (false). Interim results are results that are not yet final (e.g., the SpeechRecognitionResult.isFinal property is false).
SpeechRecognition.maxAlternatives
Sets the maximum number of SpeechRecognitionAlternative objects provided per result. The default value is 1.
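Taken together, these properties are usually set once, before recognition starts. A minimal sketch of doing so follows; configureRecognition and its options object are illustrative helpers, not part of the Web Speech API, and because the helper only assigns plain properties, a plain object can stand in for a real SpeechRecognition instance when experimenting:

```javascript
// Hypothetical helper: applies the configuration properties described above.
// Defaults mirror the defaults documented for the interface.
function configureRecognition(
  recognition,
  {
    lang = "en-US",
    continuous = false,
    interimResults = false,
    maxAlternatives = 1,
  } = {},
) {
  recognition.lang = lang; // recognition language (BCP 47 tag)
  recognition.continuous = continuous; // single result vs. a stream of results
  recognition.interimResults = interimResults; // report not-yet-final results
  recognition.maxAlternatives = maxAlternatives; // alternatives per result
  return recognition;
}
```

In a real page you would pass a SpeechRecognition instance rather than a plain object.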
Instance methods
SpeechRecognition also inherits methods from its parent interface, EventTarget.
SpeechRecognition.abort()
Stops the speech recognition service from listening to incoming audio, and doesn't attempt to return a SpeechRecognitionResult.
SpeechRecognition.start()
Starts the speech recognition service listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition.
SpeechRecognition.stop()
Stops the speech recognition service from listening to incoming audio, and attempts to return a SpeechRecognitionResult using the audio captured so far.
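Note that calling start() on a recognizer that is already listening throws an InvalidStateError, so apps often track the listening state themselves. A sketch of one way to do that; makeController is a hypothetical wrapper, not part of the API, and the stub recognizer in the test below stands in for a real SpeechRecognition instance:

```javascript
// Hypothetical wrapper that only forwards start()/stop()/abort() when the
// call makes sense, avoiding the InvalidStateError a second start() would
// raise on a real SpeechRecognition instance.
function makeController(recognition) {
  let listening = false;
  return {
    start() {
      if (!listening) {
        recognition.start();
        listening = true;
      }
    },
    stop() {
      // stop() still attempts a result from the audio captured so far.
      if (listening) {
        recognition.stop();
        listening = false;
      }
    },
    abort() {
      // abort() discards the audio without attempting to return a result.
      if (listening) {
        recognition.abort();
        listening = false;
      }
    },
    get listening() {
      return listening;
    },
  };
}
```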
Events
Listen to these events using addEventListener() or by assigning an event listener to the oneventname property of this interface.
audiostart
Fired when the user agent has started to capture audio. Also available via the onaudiostart property.
audioend
Fired when the user agent has finished capturing audio. Also available via the onaudioend property.
end
Fired when the speech recognition service has disconnected. Also available via the onend property.
error
Fired when a speech recognition error occurs. Also available via the onerror property.
nomatch
Fired when the speech recognition service returns a final result with no significant recognition. This may involve some degree of recognition that doesn't meet or exceed the confidence threshold. Also available via the onnomatch property.
result
Fired when the speech recognition service returns a result (a word or phrase has been positively recognized and communicated back to the app). Also available via the onresult property.
soundstart
Fired when any sound (recognizable speech or not) has been detected. Also available via the onsoundstart property.
soundend
Fired when any sound (recognizable speech or not) has stopped being detected. Also available via the onsoundend property.
speechstart
Fired when sound recognized by the speech recognition service as speech has been detected. Also available via the onspeechstart property.
speechend
Fired when speech recognized by the speech recognition service has stopped being detected. Also available via the onspeechend property.
start
Fired when the speech recognition service has begun listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition. Also available via the onstart property.
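To see the order these events fire in during a session, one approach is to attach the same logger to all of them with addEventListener(). A sketch follows; traceEvents and LIFECYCLE_EVENTS are illustrative names, but the event names themselves are the ones listed above, and because the helper only needs addEventListener(), any EventTarget works while experimenting:

```javascript
// The life-cycle events listed above.
const LIFECYCLE_EVENTS = [
  "audiostart", "audioend", "end", "error", "nomatch", "result",
  "soundstart", "soundend", "speechstart", "speechend", "start",
];

// Hypothetical helper: logs each event's name as it fires.
function traceEvents(target, log = console.log) {
  for (const name of LIFECYCLE_EVENTS) {
    target.addEventListener(name, () => log(name));
  }
  return target;
}
```

In a real page, pass a SpeechRecognition instance as `target`.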
Examples
In our simple Speech color changer example, we create a new SpeechRecognition object instance using the SpeechRecognition() constructor, create a new SpeechGrammarList, and set it to be the grammar that will be recognized by the SpeechRecognition instance using the SpeechRecognition.grammars property.
After some other values have been defined, we set the recognition service to start when a click event occurs (see SpeechRecognition.start()). When a result has been successfully recognized, the result event fires, we extract the color that was spoken from the event object, and then set the background color of the <html> element to that color.
// In Chrome, the interface is prefixed, so fall back to the webkit names.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const SpeechGrammarList =
  window.SpeechGrammarList || window.webkitSpeechGrammarList;

// JSGF grammar listing the color keywords we want to recognize.
const grammar =
  "#JSGF V1.0; grammar colors; public <color> = aqua | azure | beige | bisque | black | blue | brown | chocolate | coral | crimson | cyan | fuchsia | ghostwhite | gold | goldenrod | gray | green | indigo | ivory | khaki | lavender | lime | linen | magenta | maroon | moccasin | navy | olive | orange | orchid | peru | pink | plum | purple | red | salmon | sienna | silver | snow | tan | teal | thistle | tomato | turquoise | violet | white | yellow ;";

const recognition = new SpeechRecognition();
const speechRecognitionList = new SpeechGrammarList();
speechRecognitionList.addFromString(grammar, 1); // weight 1 (highest)
recognition.grammars = speechRecognitionList;
recognition.continuous = false; // return a single result per recognition
recognition.lang = "en-US";
recognition.interimResults = false; // only report final results
recognition.maxAlternatives = 1;

const diagnostic = document.querySelector(".output");
const bg = document.querySelector("html");

// Start listening when the user clicks anywhere on the page.
document.body.onclick = () => {
  recognition.start();
  console.log("Ready to receive a color command.");
};

// The first alternative of the first result holds the recognized transcript.
recognition.onresult = (event) => {
  const color = event.results[0][0].transcript;
  diagnostic.textContent = `Result received: ${color}`;
  bg.style.backgroundColor = color;
};
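The example reads only event.results[0][0] because maxAlternatives is 1. When maxAlternatives is raised, each result carries several alternatives, each with a transcript and a confidence score. A sketch of choosing the most confident one; pickBest is a hypothetical helper, and plain nested arrays stand in for the real SpeechRecognitionResultList here, since both support index access and a length property:

```javascript
// Hypothetical helper: returns the transcript of the most confident
// alternative in the first result. results[i] is a result; result[j] is
// an alternative with transcript and confidence, as in the real API.
function pickBest(results) {
  const result = results[0];
  let best = result[0];
  for (let i = 1; i < result.length; i++) {
    if (result[i].confidence > best.confidence) {
      best = result[i];
    }
  }
  return best.transcript;
}
```

In a result handler, this would be called as pickBest(event.results).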