The ML.GENERATE_EMBEDDING function

This document describes the ML.GENERATE_EMBEDDING function, which lets you create embeddings that describe an entity—for example, a piece of text or an image.

You can create embeddings for the following types of data:

- Text
- Visual content, such as images or videos, provided as ObjectRefRuntime values. You can generate ObjectRefRuntime values by using the OBJ.GET_ACCESS_URL function, and you can use ObjectRef values from standard tables as input to the OBJ.GET_ACCESS_URL function. (Preview)

Embeddings are high-dimensional numerical vectors that represent a given entity. Machine learning (ML) models use embeddings to encode semantics about entities to make it easier to reason about and compare them. If two entities are semantically similar, then their respective embeddings are located near each other in the embedding vector space.
Embeddings help you perform tasks such as semantic search, recommendation, classification, clustering, and outlier detection.

Depending on the task, the ML.GENERATE_EMBEDDING function works in one of the following ways:
- To generate embeddings from text or visual content, ML.GENERATE_EMBEDDING sends the request to a BigQuery ML remote model that represents one of the Vertex AI embedding models, and then returns the model's response. The ML.GENERATE_EMBEDDING function works with the Vertex AI model to perform the embedding tasks supported by that model. For more information about the types of tasks these models can perform, see the Vertex AI embeddings documentation. Typically, you want to use text embedding models for text-only use cases, and use multimodal models for cross-modal search use cases, where embeddings for text and visual content are generated in the same semantic space.
- For PCA and autoencoding, ML.GENERATE_EMBEDDING processes the request by using a BigQuery ML PCA or autoencoder model and the ML.PREDICT function. ML.GENERATE_EMBEDDING gathers the ML.PREDICT output for the model into an array and outputs it as the ml_generate_embedding_result column. Having all of the embeddings in a single column lets you directly use the VECTOR_SEARCH function on the ML.GENERATE_EMBEDDING output, as shown in the sketch after this list.
- For matrix factorization, ML.GENERATE_EMBEDDING processes the request by using a BigQuery ML matrix factorization model and the ML.WEIGHTS function. ML.GENERATE_EMBEDDING gathers the factor_weights.weight and intercept values from the ML.WEIGHTS output for the model into an array and outputs it as the ml_generate_embedding_result column. Having all of the embeddings in a single column lets you directly use the VECTOR_SEARCH function on the ML.GENERATE_EMBEDDING output.
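The following is a minimal sketch of that pattern: store ML.GENERATE_EMBEDDING output, then query it with VECTOR_SEARCH. The model `mydataset.embedding_model`, the tables `mydataset.documents` and `mydataset.doc_embeddings`, and the search phrase are hypothetical placeholders:

-- Generate and store embeddings for a hypothetical documents table.
CREATE OR REPLACE TABLE `mydataset.doc_embeddings` AS
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.embedding_model`,
  TABLE `mydataset.documents`,
  STRUCT(TRUE AS flatten_json_output)
);

-- Find the five stored documents closest to an ad hoc query string.
SELECT query.content AS query_text, base.content AS matched_text, distance
FROM VECTOR_SEARCH(
  TABLE `mydataset.doc_embeddings`,
  'ml_generate_embedding_result',
  (
    SELECT ml_generate_embedding_result, content
    FROM ML.GENERATE_EMBEDDING(
      MODEL `mydataset.embedding_model`,
      (SELECT 'example search phrase' AS content),
      STRUCT(TRUE AS flatten_json_output)
    )
  ),
  top_k => 5
);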
Syntax

ML.GENERATE_EMBEDDING syntax differs depending on the BigQuery ML model you choose. If you use a remote model, it also differs depending on the Vertex AI model that your remote model targets. Choose the option appropriate for your use case.
multimodalembedding
# Syntax for standard tables
ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) },
  STRUCT(
    [FLATTEN_JSON_OUTPUT AS flatten_json_output]
    [, OUTPUT_DIMENSIONALITY AS output_dimensionality])
)
# Syntax for object tables
ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) },
  STRUCT(
    [FLATTEN_JSON_OUTPUT AS flatten_json_output]
    [, START_SECOND AS start_second]
    [, END_SECOND AS end_second]
    [, INTERVAL_SECONDS AS interval_seconds]
    [, OUTPUT_DIMENSIONALITY AS output_dimensionality])
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of a remote model over a Vertex AI multimodalembedding@001 model. You can confirm what LLM is used by the remote model by opening the Google Cloud console and looking at the Remote endpoint field in the model details page.
- TABLE_NAME: one of the following:
  - If you are creating embeddings for text in a standard table, the name of the BigQuery table that contains the content. The content must be in a STRING column named content. If your table does not have a content column, use the QUERY_STATEMENT argument instead and provide a SELECT statement that includes an alias for an existing table column. An error occurs if no content column is available.
  - If you are creating embeddings for visual content using data from an object table, the name of a BigQuery object table that contains the visual content.
- QUERY_STATEMENT: the GoogleSQL query that generates the input data for the function.
  - If you are creating embeddings from a standard table, the query must produce a column named content, which you can generate as follows:
    - For text embeddings, you can pull the value from a STRING column, or you can specify a string literal in the query.
    - For visual content embeddings, you can provide an ObjectRefRuntime value for the content column. You can generate ObjectRefRuntime values by using the OBJ.GET_ACCESS_URL function. The OBJ.GET_ACCESS_URL function takes an ObjectRef value as input, which you can provide by either specifying the name of a column that contains ObjectRef values, or by constructing an ObjectRef value. ObjectRefRuntime values must have the access_url.read_url and details.gcs_metadata.content_type elements of the JSON value populated.
  - If you are creating embeddings from an object table, the query doesn't have to return a content column. You can only specify WHERE, ORDER BY, and LIMIT clauses in the query.
- FLATTEN_JSON_OUTPUT: a BOOL value that determines whether the JSON content returned by the function is parsed into separate columns. The default is TRUE.
- START_SECOND: a FLOAT64 value that specifies the second in the video at which to start the embedding. The default value is 0. If you specify this argument, you must also specify the END_SECOND argument. This value must be positive and less than the END_SECOND value. This argument only applies to video content.
- END_SECOND: a FLOAT64 value that specifies the second in the video at which to end the embedding. The END_SECOND value can't be higher than 120. The default value is 120. If you specify this argument, you must also specify the START_SECOND argument. This value must be positive and greater than the START_SECOND value. This argument only applies to video content.
- INTERVAL_SECONDS: a FLOAT64 value that specifies the interval to use when creating embeddings. For example, if you set START_SECOND = 0, END_SECOND = 120, and INTERVAL_SECONDS = 10, then the video is split into twelve 10-second segments ([0, 10), [10, 20), [20, 30)...) and embeddings are generated for each segment. This value must be greater than or equal to 4 and less than 120. The default value is 16. This argument only applies to video content; the sketch after this argument list shows the video arguments together.
- OUTPUT_DIMENSIONALITY: an INT64 value that specifies the number of dimensions to use when generating embeddings. Valid values are 128, 256, 512, and 1408. The default value is 1408. For example, if you specify 256 AS output_dimensionality, then the ml_generate_embedding_result output column contains a 256-dimensional embedding for each input value. You can only use this argument when creating text or image embeddings. If you use this argument when creating video embeddings, the function returns an error.
The model and input table must be in the same region.
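For instance, the following minimal sketch generates embeddings for the first minute of each video, in 10-second segments. The model name and the object table `mydataset.videos` are hypothetical placeholders:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.multimodalembedding`,  -- hypothetical remote model
  TABLE `mydataset.videos`,               -- hypothetical object table of videos
  STRUCT(
    TRUE AS flatten_json_output,
    0.0 AS start_second,
    60.0 AS end_second,
    10.0 AS interval_seconds
  )
);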
text-embedding
ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) },
  STRUCT(
    [FLATTEN_JSON_OUTPUT AS flatten_json_output]
    [, TASK_TYPE AS task_type]
    [, OUTPUT_DIMENSIONALITY AS output_dimensionality])
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of a remote model over a Vertex AI embedding LLM. You can confirm what LLM is used by the remote model by opening the Google Cloud console and looking at the Remote endpoint field in the model details page.
- TABLE_NAME: the name of the BigQuery table that contains a STRING column to embed. The text in the column that's named content is sent to the model. If your table doesn't have a content column, use a SELECT statement for this argument to provide an alias for an existing table column. An error occurs if no content column exists.
- QUERY_STATEMENT: a query whose result contains a STRING column that's named content. For information about the supported SQL syntax of the QUERY_STATEMENT clause, see GoogleSQL query syntax.
- FLATTEN_JSON_OUTPUT: a BOOL value that determines whether the JSON content returned by the function is parsed into separate columns. The default is TRUE.
- TASK_TYPE: a STRING literal that specifies the intended downstream application, to help the model produce better quality embeddings. The TASK_TYPE argument accepts the following values:
  - RETRIEVAL_QUERY: specifies that the given text is a query in a search or retrieval setting.
  - RETRIEVAL_DOCUMENT: specifies that the given text is a document in a search or retrieval setting. When using this task type, it is helpful to include the document title in the query statement in order to improve embedding quality. The document title must be in a column either named title or aliased as title, for example:

    SELECT *
    FROM ML.GENERATE_EMBEDDING(
      MODEL `mydataset.embedding_model`,
      (SELECT abstract AS content, header AS title, publication_number FROM `mydataset.publications`),
      STRUCT(TRUE AS flatten_json_output, 'RETRIEVAL_DOCUMENT' AS task_type)
    );

    Specifying the title column in the input query populates the title field of the request body sent to the model. If you specify a title value when using any other task type, that input is ignored and has no effect on the embedding results.
  - SEMANTIC_SIMILARITY: specifies that the given text will be used for Semantic Textual Similarity (STS).
  - CLASSIFICATION: specifies that the embeddings will be used for classification.
  - CLUSTERING: specifies that the embeddings will be used for clustering.
  - QUESTION_ANSWERING: specifies that the embeddings will be used for question answering.
  - FACT_VERIFICATION: specifies that the embeddings will be used for fact verification.
  - CODE_RETRIEVAL_QUERY: specifies that the embeddings will be used for code retrieval.
- OUTPUT_DIMENSIONALITY: an INT64 value that specifies the number of dimensions to use when generating embeddings. For example, if you specify 256 AS output_dimensionality, then the ml_generate_embedding_result output column contains a 256-dimensional embedding for each input value. You can only use this argument if the remote model that you specify in the model argument uses one of the following models as an endpoint:
  - text-embedding-004 or later
  - text-multilingual-embedding-002 or later

The model and input table must be in the same region.
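As an illustration, the following minimal sketch combines a task type with a reduced output dimensionality. The model `mydataset.text_embedding`, and the table `mydataset.reviews` with its `review_text` column, are hypothetical placeholders:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.text_embedding`,                          -- hypothetical remote model
  (SELECT review_text AS content FROM `mydataset.reviews`),  -- alias an existing column as content
  STRUCT(
    TRUE AS flatten_json_output,
    'CLASSIFICATION' AS task_type,
    256 AS output_dimensionality
  )
);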
text-multilingual-embedding
ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) },
  STRUCT(
    [FLATTEN_JSON_OUTPUT AS flatten_json_output]
    [, TASK_TYPE AS task_type]
    [, OUTPUT_DIMENSIONALITY AS output_dimensionality])
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of a remote model over a Vertex AI embedding LLM. You can confirm what LLM is used by the remote model by opening the Google Cloud console and looking at the Remote endpoint field in the model details page.
- TABLE_NAME: the name of the BigQuery table that contains a STRING column to embed. The text in the column that's named content is sent to the model. If your table doesn't have a content column, use a SELECT statement for this argument to provide an alias for an existing table column. An error occurs if no content column exists.
- QUERY_STATEMENT: a query whose result contains a STRING column that's named content. For information about the supported SQL syntax of the QUERY_STATEMENT clause, see GoogleSQL query syntax.
- FLATTEN_JSON_OUTPUT: a BOOL value that determines whether the JSON content returned by the function is parsed into separate columns. The default is TRUE.
- TASK_TYPE: a STRING literal that specifies the intended downstream application, to help the model produce better quality embeddings. The TASK_TYPE argument accepts the following values:
  - RETRIEVAL_QUERY: specifies that the given text is a query in a search or retrieval setting.
  - RETRIEVAL_DOCUMENT: specifies that the given text is a document in a search or retrieval setting. When using this task type, it is helpful to include the document title in the query statement in order to improve embedding quality. The document title must be in a column either named title or aliased as title, for example:

    SELECT *
    FROM ML.GENERATE_EMBEDDING(
      MODEL `mydataset.embedding_model`,
      (SELECT abstract AS content, header AS title, publication_number FROM `mydataset.publications`),
      STRUCT(TRUE AS flatten_json_output, 'RETRIEVAL_DOCUMENT' AS task_type)
    );

    Specifying the title column in the input query populates the title field of the request body sent to the model. If you specify a title value when using any other task type, that input is ignored and has no effect on the embedding results.
  - SEMANTIC_SIMILARITY: specifies that the given text will be used for Semantic Textual Similarity (STS).
  - CLASSIFICATION: specifies that the embeddings will be used for classification.
  - CLUSTERING: specifies that the embeddings will be used for clustering.
  - QUESTION_ANSWERING: specifies that the embeddings will be used for question answering.
  - FACT_VERIFICATION: specifies that the embeddings will be used for fact verification.
  - CODE_RETRIEVAL_QUERY: specifies that the embeddings will be used for code retrieval.
- OUTPUT_DIMENSIONALITY: an INT64 value that specifies the number of dimensions to use when generating embeddings. For example, if you specify 256 AS output_dimensionality, then the ml_generate_embedding_result output column contains a 256-dimensional embedding for each input value. You can only use this argument if the remote model that you specify in the model argument uses one of the following models as an endpoint:
  - text-embedding-004 or later
  - text-multilingual-embedding-002 or later

The model and input table must be in the same region.
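For example, the following minimal sketch embeds a non-English user query for retrieval against documents that were embedded with the RETRIEVAL_DOCUMENT task type. The model name `mydataset.text_multi` is a hypothetical placeholder:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.text_multi`,  -- hypothetical remote multilingual model
  (SELECT 'où se trouve la gare ?' AS content),
  STRUCT(TRUE AS flatten_json_output, 'RETRIEVAL_QUERY' AS task_type)
);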
PCA

ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) }
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of a PCA model. You can confirm the type of model by opening the Google Cloud console and looking at the Model type field in the model details page.
- TABLE_NAME: the name of the BigQuery table that contains the input data for the PCA model.
- QUERY_STATEMENT: a query whose result contains the input data for the PCA model.

The model and input table must be in the same region.
Autoencoder

ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE_NAME` | (QUERY_STATEMENT) },
  STRUCT([TRIAL_ID AS trial_id])
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of an autoencoder model. You can confirm the type of model by opening the Google Cloud console and looking at the Model type field in the model details page.
- TABLE_NAME: the name of the BigQuery table that contains the input data for the autoencoder model.
- QUERY_STATEMENT: a query whose result contains the input data for the autoencoder model.
- TRIAL_ID: an INT64 value that identifies the hyperparameter tuning trial that you want the function to evaluate. The function uses the optimal trial by default. Only specify this argument if you ran hyperparameter tuning when creating the model.

The model and input table must be in the same region.
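For example, the following minimal sketch requests embeddings from a specific tuning trial. The model name, input table, and trial number are hypothetical placeholders:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.my_autoencoder_model`,  -- hypothetical tuned autoencoder model
  TABLE `mydataset.input_data`,            -- hypothetical input table
  STRUCT(3 AS trial_id)                    -- hypothetical trial number
);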
Matrix factorization

ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  STRUCT([TRIAL_ID AS trial_id])
)

Arguments
ML.GENERATE_EMBEDDING takes the following arguments:

- PROJECT_ID: the project that contains the resource.
- DATASET: the BigQuery dataset that contains the resource.
- MODEL_NAME: the name of a matrix factorization model. You can confirm the type of model by opening the Google Cloud console and looking at the Model type field in the model details page.
- TRIAL_ID: an INT64 value that identifies the hyperparameter tuning trial that you want the function to evaluate. The function uses the optimal trial by default. Only specify this argument if you ran hyperparameter tuning when creating the model.
Output

multimodalembedding
ML.GENERATE_EMBEDDING returns the input table and the following columns:

- ml_generate_embedding_result:
  - If flatten_json_output is FALSE, this is the JSON response from the projects.locations.endpoints.predict call to the model. The generated embeddings are in the textEmbedding, imageEmbedding, or videoEmbeddings element, depending on the type of input data you used.
  - If flatten_json_output is TRUE, this is an ARRAY<FLOAT64> value that contains the generated embeddings.
- ml_generate_embedding_status: a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
- ml_generate_embedding_start_sec: for video content, an INT64 value that contains the starting second of the portion of the video that the embedding represents. For image content, the value is NULL. This column isn't returned for text content.
- ml_generate_embedding_end_sec: for video content, an INT64 value that contains the ending second of the portion of the video that the embedding represents. For image content, the value is NULL. This column isn't returned for text content.
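For example, the following minimal sketch keeps only successfully embedded rows, assuming the function output was materialized into a hypothetical table `mydataset.image_embeddings`:

SELECT *
FROM `mydataset.image_embeddings`         -- hypothetical materialized output
WHERE ml_generate_embedding_status = '';  -- an empty status means success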
text-embedding
ML.GENERATE_EMBEDDING returns the input table and the following columns:

- ml_generate_embedding_result:
  - If flatten_json_output is FALSE, this is the JSON response from the projects.locations.endpoints.predict call to the model. The generated embeddings are in the values element.
  - If flatten_json_output is TRUE, this is an ARRAY<FLOAT64> value that contains the generated embeddings.
- ml_generate_embedding_statistics: a JSON value that contains a token_count field with the number of tokens in the content, and a truncated field that indicates whether the content was truncated. This column is returned when flatten_json_output is TRUE.
- ml_generate_embedding_status: a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
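As an illustration, the following minimal sketch flags inputs that the model truncated, assuming the output was materialized with flatten_json_output set to TRUE into a hypothetical table `mydataset.text_embeddings`:

SELECT
  content,
  LAX_INT64(ml_generate_embedding_statistics.token_count) AS token_count
FROM `mydataset.text_embeddings`  -- hypothetical materialized output
WHERE LAX_BOOL(ml_generate_embedding_statistics.truncated);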
text-multilingual-embedding
ML.GENERATE_EMBEDDING returns the input table and the following columns:

- ml_generate_embedding_result:
  - If flatten_json_output is FALSE, this is the JSON response from the projects.locations.endpoints.predict call to the model. The generated embeddings are in the values element.
  - If flatten_json_output is TRUE, this is an ARRAY<FLOAT64> value that contains the generated embeddings.
- ml_generate_embedding_statistics: a JSON value that contains a token_count field with the number of tokens in the content, and a truncated field that indicates whether the content was truncated. This column is returned when flatten_json_output is TRUE.
- ml_generate_embedding_status: a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
PCA

ML.GENERATE_EMBEDDING returns the input table and the following column:

- ml_generate_embedding_result: an ARRAY<FLOAT> value that contains the principal components for the input data. The number of array dimensions is equal to the PCA model's NUM_PRINCIPAL_COMPONENTS option value if that option is used when the model is created. If the PCA_EXPLAINED_VARIANCE_RATIO option is used instead, the array dimensions vary depending on the input table and the option ratio determined by BigQuery ML.

Autoencoder

ML.GENERATE_EMBEDDING returns the input table and the following columns:

- trial_id: an INT64 value that identifies the hyperparameter tuning trial used by the function. This column is only returned if you ran hyperparameter tuning when creating the model.
- ml_generate_embedding_result: an ARRAY<FLOAT> value that contains the latent space dimensions for the input data. The number of array dimensions is equal to the number in the middle of the autoencoder model's HIDDEN_UNITS option array value.

Matrix factorization

ML.GENERATE_EMBEDDING returns the following columns:

- trial_id: an INT64 value that identifies the hyperparameter tuning trial used by the function. This column is only returned if you ran hyperparameter tuning when creating the model.
- ml_generate_embedding_result: an ARRAY<FLOAT> value that contains the weights of the feature, and also the intercept or bias term for the feature. The intercept value is the last value in the array. The number of array dimensions is equal to the matrix factorization model's NUM_FACTORS option value.
- processed_input: a STRING value that contains the name of the user or item column. The value of this column matches the name of the user or item column provided in the query_statement clause that was used when the matrix factorization model was trained.
- feature: a STRING value that contains the names of the specific users or items used during training.
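For example, the following minimal sketch inspects the returned arrays, assuming the output was materialized into a hypothetical table `mydataset.mf_embeddings`:

SELECT
  processed_input,
  feature,
  ARRAY_LENGTH(ml_generate_embedding_result) AS embedding_length,
  -- The intercept or bias term is the last element of the array.
  ml_generate_embedding_result[OFFSET(ARRAY_LENGTH(ml_generate_embedding_result) - 1)] AS intercept
FROM `mydataset.mf_embeddings`  -- hypothetical materialized output
LIMIT 10;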
You can use the ML.GENERATE_EMBEDDING function to generate embeddings for videos and images that meet the requirements described in API limits.
There is no limit on the length of the video files you can use with this function. However, the function only processes the first two minutes of a video. If a video is longer than two minutes, ML.GENERATE_EMBEDDING only returns embeddings for the first two minutes.
Known issues

Sometimes after a query job that uses this function finishes successfully, some returned rows contain the following error message:

A retryable error occurred: RESOURCE EXHAUSTED error from <remote endpoint>

This issue occurs because BigQuery query jobs finish successfully even if the function fails for some of the rows. The function fails when the volume of API calls to the remote endpoint exceeds the quota limits for that service. This issue occurs most often when you are running multiple parallel batch queries. BigQuery retries these calls, but if the retries fail, the resource exhausted error message is returned.
To iterate through inference calls until all rows are successfully processed, you can use the BigQuery remote inference SQL scripts or the BigQuery remote inference pipeline Dataform package.
Examples

multimodalembedding
This example shows how to generate embeddings from visual content by using a remote model that references a multimodalembedding model.

Create the remote model:

CREATE OR REPLACE MODEL `mydataset.multimodalembedding`
  REMOTE WITH CONNECTION `us.test_connection`
  OPTIONS(ENDPOINT = 'multimodalembedding@001')

Use an ObjectRefRuntime value

Generate embeddings from visual content in an ObjectRef column in a standard table:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.multimodalembedding`,
  (
    SELECT OBJ.GET_ACCESS_URL(art_image, 'r') AS content
    FROM `mydataset.art`
  )
);

Use an object table

Generate embeddings from visual content in an object table:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.multimodalembedding`,
  TABLE `mydataset.my_object_table`);
text-embedding

This example shows how to generate an embedding of a single piece of sample text by using a remote model that references a text-embedding model.

Create the remote model:

CREATE OR REPLACE MODEL `mydataset.text_embedding`
  REMOTE WITH CONNECTION `us.test_connection`
  OPTIONS(ENDPOINT = 'text-embedding-005')

Generate the embedding:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.text_embedding`,
  (SELECT "Example text to embed" AS content),
  STRUCT(TRUE AS flatten_json_output)
);
text-multilingual-embedding

This example shows how to generate embeddings from a table and specify a task type by using a remote model that references a text-multilingual-embedding model.

Create the remote model:

CREATE OR REPLACE MODEL `mydataset.text_multi`
  REMOTE WITH CONNECTION `us.test_connection`
  OPTIONS(ENDPOINT = 'text-multilingual-embedding-002')

Generate the embeddings:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.text_multi`,
  TABLE `mydataset.customer_feedback`,
  STRUCT(TRUE AS flatten_json_output, 'SEMANTIC_SIMILARITY' AS task_type)
);

PCA
This example shows how to generate embeddings that represent the principal components of a PCA model.

Create the PCA model:

CREATE OR REPLACE MODEL `mydataset.pca_nyc_trees`
  OPTIONS (
    MODEL_TYPE = 'PCA',
    PCA_EXPLAINED_VARIANCE_RATIO = 0.9)
AS (
  SELECT
    tree_id, block_id, tree_dbh, stump_diam, curb_loc, status, health, spc_latin
  FROM `bigquery-public-data.new_york_trees.tree_census_2015`
);

Generate embeddings that represent principal components:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.pca_nyc_trees`,
  (
    SELECT
      tree_id, block_id, tree_dbh, stump_diam, curb_loc, status, health, spc_latin
    FROM `bigquery-public-data.new_york_trees.tree_census_2015`
  ));

Autoencoder
This example shows how to generate embeddings that represent the latent space dimensions of an autoencoder model.

Create the autoencoder model:

CREATE OR REPLACE MODEL `mydataset.my_autoencoder_model`
  OPTIONS (
    model_type = 'autoencoder',
    activation_fn = 'relu',
    batch_size = 8,
    dropout = 0.2,
    hidden_units = [32, 16, 4, 16, 32],
    learn_rate = 0.001,
    l1_reg_activation = 0.0001,
    max_iterations = 10,
    optimizer = 'adam')
AS
SELECT * EXCEPT (Time, Class)
FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection`;

Generate embeddings that represent latent space dimensions:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.my_autoencoder_model`,
  TABLE `bigquery-public-data.ml_datasets.ulb_fraud_detection`);

Matrix factorization
This example shows how to generate embeddings that represent the underlying weights that the matrix factorization model uses during prediction.

Create the matrix factorization model:

CREATE OR REPLACE MODEL `mydataset.my_mf_model`
  OPTIONS (
    model_type = 'matrix_factorization',
    user_col = 'user_id',
    item_col = 'item_id',
    l2_reg = 9.83,
    num_factors = 34)
AS
SELECT user_id, item_id, AVG(rating) AS rating
FROM movielens.movielens_1m
GROUP BY user_id, item_id;

Generate embeddings that represent model weights and intercepts:

SELECT * FROM ML.GENERATE_EMBEDDING(MODEL `mydataset.my_mf_model`)

Locations
The ML.GENERATE_EMBEDDING function must run in the same region or multi-region as the model that the function references. For more information on supported regions for embedding models, see Google model endpoint locations. Embedding models are also available in the US multi-region.
Quotas

Quotas apply when you use the ML.GENERATE_EMBEDDING function with remote models. For more information, see Vertex AI and Cloud AI service functions quotas and limits.

For the multimodalembedding model, the default requests per minute (RPM) for non-EU regions is 600. The default RPM for EU regions is 120. However, you can request a quota increase to improve throughput.

To increase quota, first request more quota for the Vertex AI multimodalembedding model by using the process described in Manage your quota using the console. When the model quota has been increased, send an email to bqml-feedback@google.com and request a quota increase for the ML.GENERATE_EMBEDDING function. Include information about the adjusted multimodalembedding quota.
What's next

- Try the Perform semantic search and retrieval-augmented generation tutorial to learn how to generate text embeddings and use them for semantic search and retrieval-augmented generation.
- Try the Parse PDFs in a retrieval-augmented generation pipeline tutorial to learn how to create a RAG pipeline based on parsed PDF content.
- For more information about using Cloud AI APIs to perform AI tasks, see AI application overview.