

Invoke online predictions from Cloud SQL instances



This page shows you how to invoke online predictions from a Cloud SQL instance.

Cloud SQL lets you get online predictions in your SQL code by calling the ml_predict_row() function. For more information, see Build generative AI applications using Cloud SQL.

Before you begin

Before you can invoke online predictions from a Cloud SQL instance, you must prepare your database and select an appropriate ML model.

Prepare your database
  1. To prepare your database, set up integration between Cloud SQL and Vertex AI.

  2. Grant permissions for database users to use the ml_predict_row() function to run predictions:

    1. Connect a psql client to the primary instance, as described in Connect using a psql client.

    2. At the psql command prompt, connect to the database and grant permissions:

      \c DB_NAME
      
      GRANT EXECUTE ON FUNCTION ml_predict_row TO USER_NAME;
      

      Replace the following:

      • DB_NAME: the name of the database for which you're granting permissions

      • USER_NAME: the name of the user for whom you're granting permissions
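
The integration setup in step 1 can be sketched with the gcloud CLI. The instance name below is a placeholder, and the flag names are assumptions based on the Cloud SQL Vertex AI integration; verify them against your environment before running:

```shell
# Sketch only: enable the Vertex AI integration on an existing
# Cloud SQL instance ("my-instance" is a placeholder name).
gcloud sql instances patch my-instance \
    --enable-google-ml-integration

# Enable the corresponding database flag (assumed flag name:
# cloudsql.enable_google_ml_integration). Caution: --database-flags
# replaces all existing flags, so include any flags already set.
gcloud sql instances patch my-instance \
    --database-flags=cloudsql.enable_google_ml_integration=on
```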

Select an ML model

When you call the ml_predict_row() function, you must specify the location of an ML model. The model that you specify can be one of the following:

  • A model in the Vertex AI Model Garden
  • A Vertex AI model endpoint running in a Google Cloud project

Invoke online predictions

You can use the ml_predict_row() SQL function to invoke online predictions against your data.

The format of the function's initial argument depends on whether the ML model that you want to use is in the Vertex AI Model Garden or is an endpoint running in a Google Cloud project.

Use a model in the Vertex AI Model Garden

To invoke an online prediction using an ML model that's running in the Vertex AI Model Garden, use the following syntax for the ml_predict_row() SQL function:

```sql
SELECT ML_PREDICT_ROW('publishers/google/models/MODEL_ID', '{ "instances": [ INSTANCES ], "parameters": PARAMETERS }');
```

Make the following replacements:

  • MODEL_ID: the ID of the ML model to use

  • INSTANCES: the inputs to the prediction call, in JSON format

  • PARAMETERS: the parameters for the prediction call, in JSON format

Note: You must store the ML model in the same project and region as your Cloud SQL instance.

For information about the model's JSON response messages, see Generative AI foundational model reference. For examples, see Example invocations.

Use a Vertex AI model endpoint

To invoke an online prediction using a Vertex AI model endpoint, use the following syntax for the ml_predict_row() SQL function:

```sql
SELECT ML_PREDICT_ROW('endpoints/ENDPOINT_ID', '{ "instances": [ INSTANCES ], "parameters": PARAMETERS }');
```

Make the following replacements:

  • ENDPOINT_ID: the ID of the model endpoint

  • INSTANCES: the inputs to the prediction call, in JSON format

  • PARAMETERS: the parameters for the prediction call, in JSON format

Note: The endpoint must be located in the same project and region as your Cloud SQL instance.

For information about the model's JSON response messages, see PredictResponse.
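
As a concrete sketch, a call against a deployed endpoint might look like the following. The endpoint ID and the instance payload are placeholders; the fields inside each instance must match what the deployed model expects:

```sql
-- Sketch only: "123456789012345" is a placeholder endpoint ID, and the
-- "prompt" field assumes a text-generation model deployed at the endpoint.
SELECT ML_PREDICT_ROW(
  'endpoints/123456789012345',
  '{ "instances": [ { "prompt": "Summarize the benefits of managed databases." } ],
     "parameters": { "temperature": 0.2 } }');
```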

Example invocations

The following example uses PaLM 2 for Text, available in the Model Garden, to generate text based on a short prompt that's provided as a literal argument to ml_predict_row():

```sql
SELECT ML_PREDICT_ROW(
  'projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison',
  '{"instances":[{"prompt": "What are three advantages of using Cloud SQL as my SQL database server?"}], "parameters":{"maxOutputTokens":1024, "topK": 40, "topP":0.8, "temperature":0.2}}');
```
Note: You can use PaLM 2 for Text foundation models only in the us-central1 region.

The response is a JSON object. For more information about the format of the object, see Response body.

The next example modifies the previous one in the following ways: it reads its prompts from the message column of a table named messages, and it builds the function's JSON argument with json_build_object() instead of passing a string literal.

```sql
SELECT ML_PREDICT_ROW(
  'projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison',
  json_build_object('instances', json_build_object('prompt', message),
                    'parameters', json_build_object('maxOutputTokens', 1024,
                                                    'topK', 40,
                                                    'topP', 0.8,
                                                    'temperature', 0.2)))
FROM messages;
```

For every row in the messages table, the returned JSON object now contains one entry in its predictions array.

Because the response is a JSON object, you can pull specific fields from it:
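
For example, PostgreSQL's -> and ->> JSON operators can extract just the generated text from each row's response. This sketch extends the per-row query above; the predictions/content field path is an assumption based on the PaLM text model's response format:

```sql
-- Sketch only: the -> 'predictions' -> 0 ->> 'content' path assumes the
-- PaLM text model's response shape; adjust it for other models.
SELECT ML_PREDICT_ROW(
  'projects/PROJECT_ID/locations/us-central1/publishers/google/models/text-bison',
  json_build_object('instances', json_build_object('prompt', message),
                    'parameters', json_build_object('maxOutputTokens', 1024)))
  -> 'predictions' -> 0 ->> 'content' AS generated_text
FROM messages;
```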

For more example arguments to ml_predict_row(), see Try the Vertex AI Gemini API.

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-08-14 UTC.


