Perform semantic search and retrieval-augmented generation

This tutorial guides you through the end-to-end process of creating and using text embeddings for semantic search and retrieval-augmented generation (RAG).
This tutorial covers the following tasks:

- Using the ML.GENERATE_EMBEDDING function to generate embeddings from text in a BigQuery table.
- Using the VECTOR_SEARCH function with the embeddings to search for similar text.
- Generating text with the ML.GENERATE_TEXT function, and using vector search results to augment the prompt input and improve results.

This tutorial uses the BigQuery public table patents-public-data.google_patents_research.publications.
To run this tutorial, you need the following Identity and Access Management (IAM) roles:
- BigQuery Admin (roles/bigquery.admin)
- Project IAM Admin (roles/resourcemanager.projectIamAdmin)

These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:
Required permissions

- bigquery.datasets.create
- bigquery.connections.*
- bigquery.config.*
- resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
- bigquery.jobs.create
- bigquery.models.create
- bigquery.models.getData
- bigquery.models.updateData
- bigquery.models.updateMetadata
You might also be able to get these permissions with custom roles or other predefined roles.
Costs

In this document, you use the following billable components of Google Cloud: BigQuery and Vertex AI.
To generate a cost estimate based on your projected usage, use the pricing calculator.
New Google Cloud users might be eligible for a free trial.
For more information about BigQuery pricing, see BigQuery pricing in the BigQuery documentation.
For more information about Vertex AI pricing, see the Vertex AI pricing page.
Before you begin

In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Verify that billing is enabled for your Google Cloud project.
Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
Create a BigQuery dataset to store your ML model.
Console

In the Google Cloud console, go to the BigQuery page.
In the Explorer pane, click your project name.
Click View actions > Create dataset.
On the Create dataset page, do the following:
For Dataset ID, enter bqml_tutorial.
For Location type, select Multi-region, and then select US (multiple regions in United States).
Leave the remaining default settings as they are, and click Create dataset.
bq

To create a new dataset, use the bq mk command with the --location flag. For a full list of possible parameters, see the bq mk --dataset command reference.
Create a dataset named bqml_tutorial with the data location set to US and a description of BigQuery ML tutorial dataset:
bq --location=US mk -d \
    --description "BigQuery ML tutorial dataset." \
    bqml_tutorial
Instead of using the --dataset flag, the command uses the -d shortcut. If you omit -d and --dataset, the command defaults to creating a dataset.
Confirm that the dataset was created:
bq ls
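Alternatively, you can confirm the dataset from the query editor with an INFORMATION_SCHEMA query. This is a minimal sketch that isn't part of the original tutorial; it assumes the dataset was created in the US multi-region:

-- List the dataset through INFORMATION_SCHEMA (assumes the US multi-region location).
SELECT schema_name
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA
WHERE schema_name = 'bqml_tutorial';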
API

Call the datasets.insert method with a defined dataset resource.
{ "datasetReference": { "datasetId": "bqml_tutorial" } }BigQuery DataFrames
Before trying this sample, follow the BigQuery DataFrames setup instructions in the BigQuery quickstart using BigQuery DataFrames. For more information, see the BigQuery DataFrames reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.
Create the remote model for text embedding generation

Create a remote model that represents a hosted Vertex AI text embedding generation model:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
CREATE OR REPLACE MODEL `bqml_tutorial.embedding_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (ENDPOINT = 'text-embedding-005');
The query takes several seconds to complete, after which the model embedding_model appears in the bqml_tutorial dataset in the Explorer pane. Because the query uses a CREATE MODEL statement to create a model, there are no query results.
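Before processing the full patents table, you can optionally confirm that the remote model and its connection work by embedding a single string. This is a sketch that isn't part of the original tutorial; the 'hello world' input is arbitrary:

-- One-row smoke test: embed a single arbitrary string with the remote model.
SELECT ml_generate_embedding_result
FROM ML.GENERATE_EMBEDDING(
  MODEL `bqml_tutorial.embedding_model`,
  (SELECT 'hello world' AS content));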
Generate text embeddings from patent abstracts using the ML.GENERATE_EMBEDDING function, and then write them to a BigQuery table so that they can be searched.
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
CREATE OR REPLACE TABLE `bqml_tutorial.embeddings` AS
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `bqml_tutorial.embedding_model`,
  (
    SELECT *, abstract AS content
    FROM `patents-public-data.google_patents_research.publications`
    WHERE LENGTH(abstract) > 0 AND LENGTH(title) > 0 AND country = 'Singapore'
  )
)
WHERE LENGTH(ml_generate_embedding_status) = 0;
This query takes approximately 5 minutes to complete.
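To spot-check the output, you can inspect a few rows of the new table. The following sketch isn't part of the original tutorial; it assumes the default behavior of the text-embedding-005 endpoint, which returns 768-dimensional vectors, stored in the ml_generate_embedding_result ARRAY<FLOAT64> column:

-- Spot-check a few rows; ml_generate_embedding_result is an ARRAY<FLOAT64>.
SELECT
  title,
  ARRAY_LENGTH(ml_generate_embedding_result) AS embedding_dimensions
FROM `bqml_tutorial.embeddings`
LIMIT 3;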
Embedding generation using the ML.GENERATE_EMBEDDING function might fail due to Vertex AI LLM quotas or service unavailability. Error details are returned in the ml_generate_embedding_status column. An empty ml_generate_embedding_status column indicates successful embedding generation.
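Because the table creation query filters out rows with a non-empty status, you can see how many rows failed by comparing row counts. A minimal sketch, assuming the same source filter used in the query above:

-- Compare the number of eligible source rows with the number of embedded rows;
-- the difference is the number of rows whose embedding generation failed.
SELECT
  (SELECT COUNT(*)
   FROM `patents-public-data.google_patents_research.publications`
   WHERE LENGTH(abstract) > 0 AND LENGTH(title) > 0 AND country = 'Singapore') AS source_rows,
  (SELECT COUNT(*) FROM `bqml_tutorial.embeddings`) AS embedded_rows;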
For alternative text embedding generation methods in BigQuery, see the Embed text with pretrained TensorFlow models tutorial.
Create a vector index

If you create a vector index on an embedding column, a vector search performed on that column uses the Approximate Nearest Neighbor search technique. This technique improves vector search performance, with the trade-off of reduced recall, so it returns more approximate results.
To create a vector index, use the CREATE VECTOR INDEX data definition language (DDL) statement:
Go to the BigQuery page.
In the query editor, run the following SQL statement:
CREATE OR REPLACE VECTOR INDEX my_index
ON `bqml_tutorial.embeddings`(ml_generate_embedding_result)
OPTIONS (
  index_type = 'IVF',
  distance_type = 'COSINE',
  ivf_options = '{"num_lists":500}')
Creating a vector index typically takes only a few seconds. It takes another 2 or 3 minutes for the vector index to be populated and ready to use.
Verify vector index readiness

The vector index is populated asynchronously. You can check whether the index is ready to be used by querying the INFORMATION_SCHEMA.VECTOR_INDEXES view and verifying that the coverage_percentage column value is greater than 0 and the last_refresh_time column value isn't NULL.
Go to the BigQuery page.
In the query editor, run the following SQL statement:
SELECT
  table_name,
  index_name,
  index_status,
  coverage_percentage,
  last_refresh_time,
  disable_reason
FROM `PROJECT_ID.bqml_tutorial.INFORMATION_SCHEMA.VECTOR_INDEXES`
Replace PROJECT_ID with your project ID.
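If you prefer a query that returns a row only once the index meets the readiness criteria described above, you can filter on those columns directly. A minimal sketch (again, replace PROJECT_ID with your project ID):

-- Returns a row only when the index meets the readiness criteria described above.
SELECT index_name, coverage_percentage, last_refresh_time
FROM `PROJECT_ID.bqml_tutorial.INFORMATION_SCHEMA.VECTOR_INDEXES`
WHERE coverage_percentage > 0
  AND last_refresh_time IS NOT NULL;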
Use the VECTOR_SEARCH function to search for relevant patents that match embeddings generated from a text query.

The top_k argument determines the number of matches to return, in this case five. The fraction_lists_to_search option determines the percentage of vector index lists to search. The vector index you created has 500 lists, so the fraction_lists_to_search value of 0.01 indicates that this vector search scans five of those lists. A lower fraction_lists_to_search value, as shown here, provides lower recall and faster performance. For more information about vector index lists, see the num_lists vector index option.
The model that you use to generate the embeddings in this query must be the same model that you used to generate the embeddings in the table you are comparing against; otherwise, the search results won't be accurate.
Go to the BigQuery page.
In the query editor, run the following SQL statement:
SELECT
  query.query,
  base.publication_number,
  base.title,
  base.abstract
FROM VECTOR_SEARCH(
  TABLE `bqml_tutorial.embeddings`,
  'ml_generate_embedding_result',
  (
    SELECT ml_generate_embedding_result, content AS query
    FROM ML.GENERATE_EMBEDDING(
      MODEL `bqml_tutorial.embedding_model`,
      (SELECT 'improving password security' AS content))
  ),
  top_k => 5,
  options => '{"fraction_lists_to_search": 0.01}')
The output is similar to the following:
+-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
| query                       | publication_number | title                                           | abstract                                        |
+-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
| improving password security | SG-120868-A1       | Data storage device security method and a...    | Methods for improving security in data stora... |
| improving password security | SG-10201610585W-A  | Passsword management system and process...      | PASSSWORD MANAGEMENT SYSTEM AND PROCESS ...     |
| improving password security | SG-148888-A1       | Improved system and method for...               | IMPROVED SYSTEM AND METHOD FOR RANDOM...        |
| improving password security | SG-194267-A1       | Method and system for protecting a password...  | A system for providing security for a...        |
| improving password security | SG-120868-A1       | Data storage device security...                 | Methods for improving security in data...       |
+-----------------------------+--------------------+-------------------------------------------------+-------------------------------------------------+
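A lower fraction_lists_to_search value favors speed over recall. To check whether scanning more lists changes your results, you can rerun the same search over a larger fraction of the 500 lists. The following sketch isn't part of the original tutorial; it changes only the options argument so that 50 lists are scanned instead of five:

-- Same search as above, but scan 10% of the index lists (50 of 500) for higher recall.
SELECT
  query.query,
  base.publication_number,
  base.title,
  base.abstract
FROM VECTOR_SEARCH(
  TABLE `bqml_tutorial.embeddings`,
  'ml_generate_embedding_result',
  (
    SELECT ml_generate_embedding_result, content AS query
    FROM ML.GENERATE_EMBEDDING(
      MODEL `bqml_tutorial.embedding_model`,
      (SELECT 'improving password security' AS content))
  ),
  top_k => 5,
  options => '{"fraction_lists_to_search": 0.1}')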
Create a remote model that represents a hosted Vertex AI text generation model:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
CREATE OR REPLACE MODEL `bqml_tutorial.text_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (ENDPOINT = 'gemini-2.0-flash-001');
The query takes several seconds to complete, after which the model text_model appears in the bqml_tutorial dataset in the Explorer pane. Because the query uses a CREATE MODEL statement to create a model, there are no query results.
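As with the embedding model, you can optionally confirm that the text generation model responds before running the full RAG query. This is a sketch with an arbitrary test prompt, not part of the original tutorial:

-- One-row smoke test with an arbitrary prompt; flatten_json_output returns the
-- generated text in the ml_generate_text_llm_result column.
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `bqml_tutorial.text_model`,
  (SELECT 'Say hello in one sentence.' AS prompt),
  STRUCT(TRUE AS flatten_json_output));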
Feed the search results as prompts to generate text with the ML.GENERATE_TEXT function.
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following statement:
SELECT
  ml_generate_text_llm_result AS generated,
  prompt
FROM ML.GENERATE_TEXT(
  MODEL `bqml_tutorial.text_model`,
  (
    SELECT CONCAT(
      'Propose some project ideas to improve user password security using the context below: ',
      STRING_AGG(
        FORMAT("patent title: %s, patent abstract: %s", base.title, base.abstract),
        ',\n')
    ) AS prompt,
    FROM VECTOR_SEARCH(
      TABLE `bqml_tutorial.embeddings`,
      'ml_generate_embedding_result',
      (
        SELECT ml_generate_embedding_result, content AS query
        FROM ML.GENERATE_EMBEDDING(
          MODEL `bqml_tutorial.embedding_model`,
          (SELECT 'improving password security' AS content)
        )
      ),
      top_k => 5,
      options => '{"fraction_lists_to_search": 0.01}')
  ),
  STRUCT(600 AS max_output_tokens, TRUE AS flatten_json_output));
The output is similar to the following:
+------------------------------------------------+------------------------------------------------------------+
| generated                                      | prompt                                                     |
+------------------------------------------------+------------------------------------------------------------+
| These patents suggest several project ideas to | Propose some project ideas to improve user password        |
| improve user password security. Here are       | security using the context below: patent title: Active     |
| some, categorized by the patent they build     | new password entry dialog with compact visual indication   |
| upon:                                          | of adherence to password policy, patent abstract:          |
|                                                | An active new password entry dialog provides a compact     |
| **I. Projects based on "Active new password    | visual indication of adherence to password policies. A     |
| entry dialog with compact visual indication of | visual indication of progress towards meeting all          |
| adherence to password policy":**               | applicable password policies is included in the display    |
|                                                | and updated as new password characters are being...        |
+------------------------------------------------+------------------------------------------------------------+
Clean up

When you're done, you can delete the project that you created for this tutorial, which removes all resources associated with it. If you want to keep an appspot.com URL that is based on the project ID, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
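If you reuse the project and only want to remove the resources created in this tutorial, you can drop the dataset, which also deletes the models, the embeddings table, and the vector index it contains. A minimal sketch:

-- Deletes the tutorial dataset and everything in it: the remote models,
-- the embeddings table, and its vector index.
DROP SCHEMA IF EXISTS `bqml_tutorial` CASCADE;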
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-14 UTC."],[[["This tutorial demonstrates creating and utilizing text embeddings with BigQuery ML, including setting up a remote model over a Vertex AI embedding model."],["It covers using the `ML.GENERATE_EMBEDDING` function to produce embeddings from text in a BigQuery table and indexing them with a vector index for improved search."],["The guide explains how to employ the `VECTOR_SEARCH` function for finding similar text based on embeddings and perform retrieval-augmented generation (RAG)."],["It showcases the use of `ML.GENERATE_TEXT` for generating text, augmented by vector search results to improve the quality of generated content, using the patents public dataset."],["The tutorial covers how to set up and use a Cloud Resource Connection with BigQuery and Vertex AI, covering also required roles and permissions, and the associated costs."]]],[]]
RetroSearch is an open source project built by @garambo | Open a GitHub Issue
Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo
HTML:
3.2
| Encoding:
UTF-8
| Version:
0.7.4