This guide shows you how to use the Text Embeddings API to convert text into numerical vectors called embeddings. These vector representations capture the semantic meaning and context of the text.
Supported models

You can get text embeddings by using the following models:

| Model name | Description | Output dimensions | Max sequence length | Supported text languages |
| --- | --- | --- | --- | --- |
| gemini-embedding-001 | State-of-the-art performance across English, multilingual, and code tasks. It unifies the previously specialized models text-embedding-005 and text-multilingual-embedding-002 and achieves better performance in their respective domains. Read our Tech Report for more detail. | Up to 3072 | 2048 tokens | Supported text languages |
| text-embedding-005 | Specialized in English and code tasks. | Up to 768 | 2048 tokens | English |
| text-multilingual-embedding-002 | Specialized in multilingual tasks. | Up to 768 | 2048 tokens | Supported text languages |

For superior embedding quality, gemini-embedding-001 is our large model, designed to provide the highest performance. Note that gemini-embedding-001 supports one instance per request.
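Because gemini-embedding-001 accepts only one instance per request, embedding several texts with that model means issuing one call per text. The following is a minimal sketch using the Vertex AI SDK for Python (shown in the samples below); the project, region, and input texts are hypothetical placeholders, not values from this page.

import vertexai
from vertexai.language_models import TextEmbeddingModel

# Hypothetical project and region; replace with your own values.
vertexai.init(project="PROJECT_ID", location="us-central1")
model = TextEmbeddingModel.from_pretrained("gemini-embedding-001")

texts = ["First text to embed.", "Second text to embed."]

# gemini-embedding-001 supports one instance per request,
# so send each text in a separate call.
vectors = []
for text in texts:
    [embedding] = model.get_embeddings([text])
    vectors.append(embedding.values)

print(f"Embedded {len(vectors)} texts; each vector has {len(vectors[0])} dimensions.")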
REST

PROJECT_ID = PROJECT_ID
REGION = us-central1
MODEL_ID = MODEL_ID

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${REGION}/publishers/google/models/${MODEL_ID}:predict -d \
  '{
    "instances": [
      ...
    ],
    "parameters": {
      ...
    }
  }'

Python

PROJECT_ID = PROJECT_ID
REGION = us-central1
MODEL_ID = MODEL_ID

import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project=PROJECT_ID, location=REGION)
model = TextEmbeddingModel.from_pretrained(MODEL_ID)
embeddings = model.get_embeddings(...)

Request and response

Request body
{
"instances": [
{
"task_type": "RETRIEVAL_DOCUMENT",
"title": "document title",
"content": "I would like embeddings for this text!"
    }
]
}
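For reference, a request body like the one above can be expressed with the Vertex AI SDK for Python roughly as follows. This is a hedged sketch that assumes the SDK's TextEmbeddingInput helper; the project, region, and model name are hypothetical, and the text and title are the placeholder values from the JSON body.

import vertexai
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

vertexai.init(project="PROJECT_ID", location="us-central1")
model = TextEmbeddingModel.from_pretrained("gemini-embedding-001")

# Mirrors the JSON instance above: content, task_type, and title.
text_input = TextEmbeddingInput(
    text="I would like embeddings for this text!",
    task_type="RETRIEVAL_DOCUMENT",
    title="document title",
)

embeddings = model.get_embeddings([text_input])
print(len(embeddings[0].values))  # dimensionality of the returned vector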
Request parameters

instances: Required. A list of objects that contain the text to embed. The following fields are supported:

- content (string): The text to generate embeddings for.
- task_type (string): Optional. Specifies the intended downstream application to help the model produce better quality embeddings. If you don't specify a value, the default is RETRIEVAL_QUERY. For more information about task types, see Choose an embeddings task type.
- title (string): Optional. A title for the text content. This field applies only when task_type is RETRIEVAL_DOCUMENT.

parameters: Optional. An object that contains the following fields:

- autoTruncate (bool): If true, the input text is truncated if it's longer than the model's maximum length. If false, an error is returned for oversized input. The default is true.
- outputDimensionality (int): The desired embedding size. If set, the output embeddings are truncated to this dimension.

The following table describes the task_type parameter values and their use cases:
| task_type | Description | Use case |
| --- | --- | --- |
| RETRIEVAL_QUERY | The input text is a query in a search or retrieval setting. | Use for the query text when searching a collection of documents. Pair with RETRIEVAL_DOCUMENT for the documents. |
| RETRIEVAL_DOCUMENT | The input text is a document in a search or retrieval setting. | Use for the documents in a collection that will be searched. Pair with RETRIEVAL_QUERY for the search query. |
| SEMANTIC_SIMILARITY | The input text is used for Semantic Textual Similarity (STS). | Comparing two pieces of text to determine their similarity in meaning. |
| CLASSIFICATION | The embedding will be used for classification tasks. | Training a model to categorize text into predefined classes. |
| CLUSTERING | The embedding will be used for clustering tasks. | Grouping similar texts together without predefined labels. |
| QUESTION_ANSWERING | The input text is a query for a question-answering system. | Finding answers to questions within a set of documents. Use RETRIEVAL_DOCUMENT for the documents. |
| FACT_VERIFICATION | The input text is a claim to be verified against a set of documents. | Verifying the factual accuracy of a statement. Use RETRIEVAL_DOCUMENT for the documents. |
| CODE_RETRIEVAL_QUERY | The input text is a query for retrieving relevant code snippets (Java and Python). | Searching a codebase for relevant functions or snippets. Use RETRIEVAL_DOCUMENT for the code documents. |
Use the following settings for common workflows:

- task_type=RETRIEVAL_QUERY for the input text that is a search query.
- task_type=RETRIEVAL_DOCUMENT for the input text that is part of the document collection being searched.
- task_type=SEMANTIC_SIMILARITY for both input texts to assess their overall similarity in meaning.

SEMANTIC_SIMILARITY is not intended for retrieval use cases like document search and information retrieval. For these use cases, use RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY, QUESTION_ANSWERING, and FACT_VERIFICATION.
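To illustrate the pairing described above, the following sketch embeds a query with RETRIEVAL_QUERY and documents with RETRIEVAL_DOCUMENT, then ranks the documents by cosine similarity. It assumes the Vertex AI SDK for Python and the text-embedding-005 model (which accepts multiple instances per request); the query and document strings are hypothetical.

import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

vertexai.init(project="PROJECT_ID", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-005")

query = "How do I rotate service account keys?"
documents = [
    "Service account keys can be rotated from the IAM console.",
    "Cloud Storage buckets support uniform bucket-level access.",
]

# Embed the query and the documents with paired task types.
[query_embedding] = model.get_embeddings(
    [TextEmbeddingInput(text=query, task_type="RETRIEVAL_QUERY")]
)
doc_embeddings = model.get_embeddings(
    [TextEmbeddingInput(text=d, task_type="RETRIEVAL_DOCUMENT") for d in documents]
)

# Rank documents by cosine similarity to the query.
q = np.array(query_embedding.values)
for doc, emb in zip(documents, doc_embeddings):
    d = np.array(emb.values)
    score = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
    print(f"{score:.3f}  {doc}")

Documents with higher scores are closer in meaning to the query.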
Response body

{
"predictions": [
{
"embeddings": {
"statistics": {
"truncated": boolean,
"token_count": integer
},
"values": [ number ]
}
}
]
}
Response parameters

predictions: A list of objects, where each object corresponds to an input instance from the request. Each object contains the following field:

- embeddings: The embedding generated from the input text. It contains the following fields:
  - values: A list of floats that represents the embedding vector of the input text.
  - statistics: The statistics computed from the input text. It contains the following fields:
    - truncated (bool): true if the input text was truncated because it was longer than the maximum number of tokens allowed by the model.
    - token_count (int): The number of tokens in the input text.

Sample response
{
"predictions": [
{
"embeddings": {
"values": [
0.0058424929156899452,
0.011848051100969315,
0.032247550785541534,
-0.031829461455345154,
-0.055369812995195389,
...
],
"statistics": {
"token_count": 4,
"truncated": false
}
}
}
]
}
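As a quick illustration of the response schema, the following sketch walks a decoded response dictionary and reads the fields documented above; the numeric values are shortened stand-ins for a real embedding.

# Hypothetical response decoded from JSON, mirroring the sample above.
response = {
    "predictions": [
        {
            "embeddings": {
                "values": [0.0058, 0.0118, 0.0322, -0.0318, -0.0554],
                "statistics": {"token_count": 4, "truncated": False},
            }
        }
    ]
}

for prediction in response["predictions"]:
    embedding = prediction["embeddings"]
    stats = embedding["statistics"]
    if stats["truncated"]:
        print("Warning: input was truncated to the model's token limit.")
    print(f"{stats['token_count']} tokens -> {len(embedding['values'])}-dimensional vector")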
Examples

Embed a text string
The following example shows you how to get the embedding for a text string.
REST

After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.
Before using any of the request data, make the following replacements:

- PROJECT_ID: Your project ID.
- TEXT: The text that you want to generate embeddings for. The max input token length for textembedding-gecko@001 is 3072. For gemini-embedding-001, each request can only include a single input text. For more information, see Text embedding limits.
- AUTO_TRUNCATE: If set to false, text that exceeds the token limit causes the request to fail. The default value is true.

HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-embedding-001:predict
Request JSON body:
{ "instances": [ { "content": "TEXT"} ], "parameters": { "autoTruncate": AUTO_TRUNCATE } }
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-embedding-001:predict"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-embedding-001:predict" | Select-Object -Expand Content
You should receive a JSON response similar to the following. Note that values has been truncated to save space.

{ "predictions": [ { "embeddings": { "statistics": { "truncated": false, "token_count": 6 }, "values": [ ... ] } } ] }

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
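The following is a minimal sketch of getting the embedding for a text string with the Vertex AI SDK for Python, assuming the gemini-embedding-001 model; the project, region, input string, and output dimensionality are hypothetical values.

import vertexai
from vertexai.language_models import TextEmbeddingModel

# Hypothetical project and region; replace with your own values.
vertexai.init(project="PROJECT_ID", location="us-central1")

model = TextEmbeddingModel.from_pretrained("gemini-embedding-001")

# gemini-embedding-001 accepts a single input text per request.
embeddings = model.get_embeddings(
    ["What is life?"],
    output_dimensionality=3072,  # optional; assumes an SDK version that supports this argument
)

vector = embeddings[0].values
print(f"Got a {len(vector)}-dimensional embedding.")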
Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Supported text languages

All text embedding models support English-language text and have been evaluated on it.

The text-multilingual-embedding-002 model also supports the following languages. It has been evaluated on the languages in the evaluated languages list.

Evaluated languages: Arabic (ar), Bengali (bn), English (en), Spanish (es), German (de), Persian (fa), Finnish (fi), French (fr), Hindi (hi), Indonesian (id), Japanese (ja), Korean (ko), Russian (ru), Swahili (sw), Telugu (te), Thai (th), Yoruba (yo), Chinese (zh).
Supported languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.

The gemini-embedding-001 model supports the following languages:
Arabic, Bengali, Bulgarian, Chinese (Simplified and Traditional), Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Afrikaans, Amharic, Assamese, Azerbaijani, Belarusian, Bosnian, Catalan, Cebuano, Corsican, Welsh, Dhivehi, Esperanto, Basque, Persian, Filipino (Tagalog), Frisian, Irish, Scots Gaelic, Galician, Gujarati, Hausa, Hawaiian, Hmong, Haitian Creole, Armenian, Igbo, Icelandic, Javanese, Georgian, Kazakh, Khmer, Kannada, Krio, Kurdish, Kyrgyz, Latin, Luxembourgish, Lao, Malagasy, Maori, Macedonian, Malayalam, Mongolian, Meiteilon (Manipuri), Marathi, Malay, Maltese, Myanmar (Burmese), Nepali, Nyanja (Chichewa), Odia (Oriya), Punjabi, Pashto, Sindhi, Sinhala (Sinhalese), Samoan, Shona, Somali, Albanian, Sesotho, Sundanese, Tamil, Telugu, Tajik, Uyghur, Urdu, Uzbek, Xhosa, Yiddish, Yoruba, Zulu.
To use a current stable model, specify the model version number, for example gemini-embedding-001.
Specifying a model without a version number isn't recommended because it's a legacy pointer to another model and isn't stable.
For more information, see Model versions and lifecycle.
What's next

Learn more about text embeddings.