RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
JSON representation{ "encoding": enum (FieldsAudioEncoding
), "sampleRateHertz": integer, "audioChannelCount": integer, "enableSeparateRecognitionPerChannel": boolean, "languageCode": string, "alternativeLanguageCodes": [ string ], "maxAlternatives": integer, "profanityFilter": boolean, "adaptation": { object (SpeechAdaptation
) }, "speechContexts": [ { object (SpeechContext
) } ], "enableWordTimeOffsets": boolean, "enableWordConfidence": boolean, "enableAutomaticPunctuation": boolean, "enableSpokenPunctuation": boolean, "enableSpokenEmojis": boolean, "diarizationConfig": { object (SpeakerDiarizationConfig
) }, "metadata": { object (RecognitionMetadata
) }, "model": string, "useEnhanced": boolean }
encoding
enum (AudioEncoding)
Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.
sampleRateHertz
integer
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are 8000-48000; 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see AudioEncoding.
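For illustration, a minimal request body pairing this config with a RecognitionAudio source (the bucket path and field values below are illustrative assumptions, not defaults):

{
  "config": {
    "encoding": "LINEAR16",
    "sampleRateHertz": 16000,
    "languageCode": "en-US"
  },
  "audio": {
    "uri": "gs://example-bucket/audio.raw"
  }
}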
audioChannelCount
integer
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are 1-8. The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel, set enableSeparateRecognitionPerChannel to true.
enableSeparateRecognitionPerChannel
boolean
This must be set to true explicitly, with audioChannelCount > 1, to get each channel recognized separately. The recognition result will contain a channelTag field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: audioChannelCount multiplied by the length of the audio.
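As a sketch, a config for two-channel audio recognized per channel (the channel count and encoding are assumptions for illustration):

{
  "encoding": "LINEAR16",
  "sampleRateHertz": 16000,
  "languageCode": "en-US",
  "audioChannelCount": 2,
  "enableSeparateRecognitionPerChannel": true
}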
languageCode
string
Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
alternativeLanguageCodes[]
string
A list of up to 3 additional BCP-47 language tags, listing possible alternative languages of the supplied audio. See Language Support for a list of the currently supported language codes. If alternative languages are listed, the recognition result will be in the most likely language detected, which can include the main languageCode, and will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases, and performance may vary for other use cases (e.g., phone call transcription).
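For illustration, a voice-command style config listing alternative languages (the particular tags chosen here are arbitrary examples):

{
  "languageCode": "en-US",
  "alternativeLanguageCodes": ["es-ES", "fr-FR", "de-DE"],
  "model": "command_and_search"
}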
maxAlternatives
integer
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than maxAlternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.
profanityFilter
boolean
If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.
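A sketch combining these two fields (the value 5 is an arbitrary assumption within the valid 0-30 range):

{
  "languageCode": "en-US",
  "maxAlternatives": 5,
  "profanityFilter": true
}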
adaptation
object (SpeechAdaptation)
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the speech adaptation documentation. When speech adaptation is set, it supersedes the speechContexts field.
speechContexts[]
object (SpeechContext)
Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see speech adaptation.
enableWordTimeOffsets
boolean
If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.
enableWordConfidence
boolean
If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.
enableAutomaticPunctuation
boolean
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
enableSpokenPunctuation
boolean
The spoken punctuation behavior for the call. If not set, uses default behavior based on the model chosen; e.g., command_and_search enables spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
enableSpokenEmojis
boolean
The spoken emoji behavior for the call. If not set, uses default behavior based on the model chosen. If 'true', adds spoken emoji formatting for the request, replacing spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
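As a sketch, a config enabling all three punctuation- and emoji-related flags together (combining them like this is an assumption for illustration, not a documented recipe):

{
  "languageCode": "en-US",
  "enableAutomaticPunctuation": true,
  "enableSpokenPunctuation": true,
  "enableSpokenEmojis": true
}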
diarizationConfig
object (SpeakerDiarizationConfig)
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags, as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
metadata
object (RecognitionMetadata)
Metadata regarding this request.
model
string
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.
Model
Description
latest_long
Use this model for any kind of long-form content such as media or spontaneous speech and conversations.
latest_short
Use this model for short utterances that are a few seconds in length, such as commands or other single-shot directed speech.
command_and_search
Best for short queries such as voice commands or voice search.
phone_call
Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate).
video
Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate.
default
Best for audio that is not one of the specific audio models, for example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate.
medical_conversation
Best for audio that originated from a conversation between a medical provider and patient.
medical_dictation
Best for audio that originated from dictation notes by a medical provider.
useEnhanced
boolean
Set to true to use an enhanced model for speech recognition. If useEnhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio.
If useEnhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
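For illustration, a config requesting the enhanced variant of an explicitly chosen model (pairing phone_call with useEnhanced here is an assumption for the sketch):

{
  "languageCode": "en-US",
  "model": "phone_call",
  "useEnhanced": true
}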
AudioEncoding
The encoding of the audio data sent in the request.
All encodings support only 1 channel (mono) audio, unless the audioChannelCount and enableSeparateRecognitionPerChannel fields are set.
For best results, the audio source should be captured and transmitted using a lossless encoding (FLAC or LINEAR16). The accuracy of the speech recognition can be reduced if lossy codecs are used to capture or transmit audio, particularly if background noise is present. Lossy codecs include MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE, MP3, and WEBM_OPUS.
The FLAC and WAV audio file formats include a header that describes the included audio content. You can request recognition for WAV files that contain either LINEAR16 or MULAW encoded audio. If you send FLAC or WAV audio file format in your request, you do not need to specify an AudioEncoding; the audio encoding format is determined from the file header. If you specify an AudioEncoding when you send FLAC or WAV audio, the encoding configuration must match the encoding described in the audio header; otherwise the request returns a google.rpc.Code.INVALID_ARGUMENT error code.
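As a sketch, a request for a WAV file that omits encoding and sampleRateHertz and relies on the file header instead (the bucket path is a hypothetical placeholder):

{
  "config": {
    "languageCode": "en-US"
  },
  "audio": {
    "uri": "gs://example-bucket/recording.wav"
  }
}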
Enums
ENCODING_UNSPECIFIED
Not specified.
LINEAR16
Uncompressed 16-bit signed little-endian samples (Linear PCM).
FLAC
FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported.
MULAW
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
AMR
Adaptive Multi-Rate Narrowband codec. sampleRateHertz must be 8000.
AMR_WB
Adaptive Multi-Rate Wideband codec. sampleRateHertz must be 16000.
OGG_OPUS
Opus encoded audio frames in an Ogg container (OggOpus). sampleRateHertz must be one of 8000, 12000, 16000, 24000, or 48000.
WEBM_OPUS
Opus encoded audio frames in a WebM container. sampleRateHertz must be one of 8000, 12000, 16000, 24000, or 48000.
SpeechAdaptation
Speech adaptation configuration.
JSON representation{ "phraseSets": [ { object (FieldsPhraseSet
) } ], "phraseSetReferences": [ string ], "customClasses": [ { object (CustomClass
) } ], "abnfGrammar": { object (ABNFGrammar
) } }
phraseSets[]
object (PhraseSet)
A collection of phrase sets. To specify the hints inline, leave the phrase set's name blank and fill in the rest of its fields. Any phrase set can use any custom class.
phraseSetReferences[]
string
A collection of phrase set resource names to use.
customClasses[]
object (CustomClass)
A collection of custom classes. To specify the classes inline, leave the class' name blank and fill in the rest of its fields, giving it a unique customClassId. Refer to the inline defined class in phrase hints by its customClassId.
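For illustration, a sketch of an inline phrase set that references an inline custom class (the class id, item values, phrase wording, and the ${...} reference syntax are assumptions; see the speech adaptation documentation for the exact reference format):

{
  "phraseSets": [{
    "name": "",
    "phrases": [{ "value": "take the ${transit-line} line", "boost": 10 }]
  }],
  "customClasses": [{
    "name": "",
    "customClassId": "transit-line",
    "items": [{ "value": "red" }, { "value": "blue" }, { "value": "green" }]
  }]
}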
abnfGrammar
object (ABNFGrammar)
Augmented Backus-Naur form (ABNF) is a standardized grammar notation comprising a set of derivation rules. See the specification: https://www.w3.org/TR/speech-grammar
ABNFGrammar

JSON representation

{
  "abnfStrings": [string]
}

Fields
abnfStrings[]
string
All declarations and rules of an ABNF grammar broken up into multiple strings that will end up concatenated.
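As a sketch, a grammar split across several strings to be concatenated (the declarations follow the W3C SRGS ABNF form; the rule itself is a made-up example):

{
  "abnfStrings": [
    "#ABNF 1.0 UTF-8;",
    "language en-US;",
    "mode voice;",
    "root $command;",
    "$command = (turn on | turn off) the (lights | fan);"
  ]
}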
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

JSON representation

{
  "phrases": [string],
  "boost": number
}

Fields
phrases[]
string
A list of strings containing words and phrases "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.
List items can also be set to classes for groups of words that represent common concepts that occur in natural language. For example, rather than providing phrase hints for every month of the year, using the $MONTH class improves the likelihood of correctly transcribing audio that includes months.
boost
number
Hint Boost. Positive value will increase the probability that a specific phrase will be recognized over other similar sounding phrases. The higher the boost, the higher the chance of false positive recognition as well. Negative boost values would correspond to anti-biasing. Anti-biasing is not enabled, so negative boost will simply be ignored. Though boost can accept a wide range of positive values, most use cases are best served with values between 0 and 20. We recommend using a binary search approach to finding the optimal value for your use case.
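For illustration, a speech context mixing literal hints with the $MONTH class (the phrases and the boost value of 10 are assumptions; tune boost per the guidance above):

{
  "speechContexts": [{
    "phrases": ["weather forecast", "$MONTH"],
    "boost": 10
  }]
}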
SpeakerDiarizationConfig
Config to enable speaker diarization.

JSON representation

{
  "enableSpeakerDiarization": boolean,
  "minSpeakerCount": integer,
  "maxSpeakerCount": integer,
  "speakerTag": integer
}

Fields
enableSpeakerDiarization
boolean
If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speakerTag provided in the WordInfo.
minSpeakerCount
integer
Minimum number of speakers in the conversation. This range gives you more flexibility by allowing the system to automatically determine the correct number of speakers. If not set, the default value is 2.
maxSpeakerCount
integer
Maximum number of speakers in the conversation. This range gives you more flexibility by allowing the system to automatically determine the correct number of speakers. If not set, the default value is 6.
speakerTag (deprecated)
integer
This item is deprecated!
Output only. Unused.
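A sketch of a diarization config using the fields above; the counts shown are the documented defaults, written out explicitly for illustration:

{
  "diarizationConfig": {
    "enableSpeakerDiarization": true,
    "minSpeakerCount": 2,
    "maxSpeakerCount": 6
  }
}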
RecognitionMetadata
Description of audio data to be recognized.
InteractionType
Use case categories that the audio recognition request can be described by.

Enums
INTERACTION_TYPE_UNSPECIFIED
Use case is either unknown or is something other than one of the other values below.
DISCUSSION
Multiple people in a conversation or discussion. For example, in a meeting with two or more people actively participating. Typically all the primary people speaking would be in the same room (if not, see PHONE_CALL).
PRESENTATION
One or more persons lecturing or presenting to others, mostly uninterrupted.
PHONE_CALL
A phone call or video conference in which two or more people, who are not in the same room, are actively participating.
VOICEMAIL
A recorded message intended for another person to listen to.
PROFESSIONALLY_PRODUCED
Professionally produced audio (e.g. TV show, podcast).
VOICE_SEARCH
Transcribe spoken questions and queries into text.
VOICE_COMMAND
Transcribe voice commands, such as for controlling a device.
DICTATION
Transcribe speech to text to create a written document, such as a text message, email or report.
MicrophoneDistance
Enumerates the types of capture settings describing an audio file.
Enums
MICROPHONE_DISTANCE_UNSPECIFIED
Audio type is not known.
NEARFIELD
The audio was captured from a closely placed microphone, e.g. phone, dictaphone, or handheld microphone. Generally, the speaker is within 1 meter of the microphone.
MIDFIELD
The speaker is within 3 meters of the microphone.
FARFIELD
The speaker is more than 3 meters away from the microphone.
OriginalMediaType
The original media the speech was recorded on.
RecordingDeviceType
The type of device the speech was recorded with.

Enums
RECORDING_DEVICE_TYPE_UNSPECIFIED
The recording device is unknown.
SMARTPHONE
Speech was recorded on a smartphone.
PC
Speech was recorded using a personal computer or tablet.
PHONE_LINE
Speech was recorded over a phone line.
VEHICLE
Speech was recorded in a vehicle.
OTHER_OUTDOOR_DEVICE
Speech was recorded outdoors.
OTHER_INDOOR_DEVICE
Speech was recorded indoors.
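For illustration, request metadata combining the enums above (this particular combination is an arbitrary assumption):

{
  "metadata": {
    "interactionType": "PHONE_CALL",
    "microphoneDistance": "NEARFIELD",
    "recordingDeviceType": "SMARTPHONE"
  }
}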