The ML.ROC_CURVE function

This document describes the ML.ROC_CURVE function, which you can use to evaluate metrics that are specific to binary classification models.

Syntax
ML.ROC_CURVE(
  MODEL `PROJECT_ID.DATASET.MODEL_NAME`,
  { TABLE `PROJECT_ID.DATASET.TABLE` | (QUERY_STATEMENT) }
  [, GENERATE_ARRAY(THRESHOLDS)]
  [, STRUCT(TRIAL_ID AS trial_id)])
Arguments

ML.ROC_CURVE takes the following arguments:

PROJECT_ID: your project ID.
DATASET: the BigQuery dataset that contains the model.
MODEL_NAME: the name of the model.
TABLE: the name of the input table that contains the evaluation data. The column names and types in the table must match those of the data used to train the model.
QUERY_STATEMENT: a query that generates the evaluation data. The column names and types in the query result must match those of the data used to train the model.
THRESHOLDS: an array of custom threshold values to evaluate, typically produced with GENERATE_ARRAY. If you don't provide thresholds, they are chosen based on the percentile values of the prediction output.
TRIAL_ID: an INT64 value that identifies the hyperparameter tuning trial to evaluate. Specify this argument only for models trained with hyperparameter tuning.
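As a minimal sketch of the full signature, the following query (using hypothetical names, and assuming `mydataset.mytunedmodel` was trained with hyperparameter tuning) passes a query statement as the evaluation data, custom thresholds from GENERATE_ARRAY, and a trial ID:

SELECT
  *
FROM
  ML.ROC_CURVE(MODEL `mydataset.mytunedmodel`,
    (SELECT * FROM `mydataset.mytable`),
    GENERATE_ARRAY(0.4, 0.6, 0.01),
    STRUCT(2 AS trial_id))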

Output

ML.ROC_CURVE returns multiple rows with metrics for different threshold values for the model. The metrics include the following:

threshold: the threshold value used to compute the metrics in that row.
recall: the recall (true positive rate) at that threshold.
false_positive_rate: the false positive rate at that threshold.
true_positives: the number of true positives at that threshold.
false_positives: the number of false positives at that threshold.
true_negatives: the number of true negatives at that threshold.
false_negatives: the number of false negatives at that threshold.

For models trained with hyperparameter tuning, the output also includes a trial_id column that identifies the trial.
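For example, the following query (a minimal sketch that reuses the `mydataset.mymodel` and `mydataset.mytable` names from the examples below) selects only the threshold, recall, and false_positive_rate columns and returns one row per threshold:

SELECT
  threshold,
  recall,
  false_positive_rate
FROM
  ML.ROC_CURVE(MODEL `mydataset.mymodel`,
    TABLE `mydataset.mytable`)
ORDER BY threshold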

Examples

The following examples assume your model and input table are in your default project.

Evaluate the ROC curve of a binary class logistic regression model

The following query returns all of the output columns for ML.ROC_CURVE. You can graph the recall and false_positive_rate values for an ROC curve. The threshold values returned are chosen based on the percentile values of the prediction output.

SELECT
  *
FROM
  ML.ROC_CURVE(MODEL `mydataset.mymodel`,
    TABLE `mydataset.mytable`)
Evaluate an ROC curve with custom thresholds

The following query returns all of the output columns for ML.ROC_CURVE. The threshold values returned are chosen based on the output of the GENERATE_ARRAY function.

SELECT
  *
FROM
  ML.ROC_CURVE(MODEL `mydataset.mymodel`,
    TABLE `mydataset.mytable`,
    GENERATE_ARRAY(0.4,0.6,0.01))
Evaluate the precision-recall curve

Instead of plotting an ROC curve (recall versus false positive rate), the following query calculates a precision-recall curve by deriving precision from the true positive and false positive counts:

SELECT
  recall,
  true_positives / (true_positives + false_positives) AS precision
FROM
  ML.ROC_CURVE(MODEL `mydataset.mymodel`,
    TABLE `mydataset.mytable`)
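Because ML.ROC_CURVE returns the individual points of the curve, you can also derive other summary values from its output. The following query is a minimal sketch (reusing the same `mydataset.mymodel` and `mydataset.mytable` as above) that approximates the area under the ROC curve by applying the trapezoidal rule to the returned recall and false_positive_rate values:

WITH roc AS (
  SELECT
    false_positive_rate,
    recall
  FROM
    ML.ROC_CURVE(MODEL `mydataset.mymodel`,
      TABLE `mydataset.mytable`)
),
segments AS (
  SELECT
    -- Width and average height of the trapezoid between adjacent ROC points.
    false_positive_rate
      - LAG(false_positive_rate) OVER (ORDER BY false_positive_rate, recall) AS width,
    (recall + LAG(recall) OVER (ORDER BY false_positive_rate, recall)) / 2 AS avg_height
  FROM roc
)
SELECT
  SUM(width * avg_height) AS approximate_auc
FROM segments

Because the curve is reconstructed only from the returned threshold points, this is an approximation; evaluating more thresholds gives a closer estimate.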


[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[[["The `ML.ROC_CURVE` function is used to evaluate binary class classification models by returning metrics for different threshold values."],["This function requires a model, and either an input table or a query to generate evaluation data, ensuring that column names and types match those in the model."],["Users can specify custom threshold values using the `GENERATE_ARRAY` function to control the percentile values of the prediction output."],["The output includes metrics like threshold, recall, true positives, false positives, true negatives, and false negatives, which can be used to generate an ROC curve or a precision-recall curve."]]],[]]


RetroSearch is an open source project built by @garambo | Open a GitHub Issue

Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo

HTML: 3.2 | Encoding: UTF-8 | Version: 0.7.4