Hyperparameter tuning overview

In machine learning, hyperparameter tuning identifies a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a model argument whose value is set before the learning process begins. By contrast, the values of other parameters such as coefficients of a linear model are learned.

Hyperparameter tuning lets you spend less time manually iterating hyperparameters and more time focusing on exploring insights from data.

You can specify hyperparameter tuning options for the model types listed in the Hyperparameters and objectives table later in this topic.

For these model types, hyperparameter tuning is enabled when you specify a value for the NUM_TRIALS option in the CREATE MODEL statement.
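For example, the following CREATE MODEL statement is a minimal sketch of enabling hyperparameter tuning for a linear regression model. The dataset, table, and column names are placeholders:

-- `mydataset.tuned_linear_reg` and `mydataset.training_data` are placeholder names.
CREATE OR REPLACE MODEL `mydataset.tuned_linear_reg`
  OPTIONS (
    MODEL_TYPE = 'LINEAR_REG',
    INPUT_LABEL_COLS = ['label'],
    -- Setting NUM_TRIALS enables hyperparameter tuning; because no search
    -- spaces are specified, the default search spaces are used.
    NUM_TRIALS = 20
  ) AS
SELECT * FROM `mydataset.training_data`;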

To try running hyperparameter tuning on a linear regression model, see Use BigQuery ML hyperparameter tuning to improve model performance.

Some other model types also support hyperparameter tuning but don't allow you to specify particular hyperparameter values.

For information about the supported SQL statements and functions for each model type, see End-to-end user journey for each model.

Locations

For information about which locations support hyperparameter tuning, see BigQuery ML locations.

Set hyperparameters

To tune a hyperparameter, you must specify a range of values for that hyperparameter that the model can use for a set of trials. You do this by using one of the following keywords when setting the hyperparameter in the CREATE MODEL statement, instead of providing a single value:

HPARAM_RANGE: specifies a continuous range of values to search over, for example L1_REG = HPARAM_RANGE(0, 5).

HPARAM_CANDIDATES: specifies a discrete set of candidate values to search over, for example BOOSTER_TYPE = HPARAM_CANDIDATES(['GBTREE', 'DART']).
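The following sketch shows how these keywords fit into a CREATE MODEL statement for a boosted tree classifier. The dataset and table names are placeholders, and the search spaces shown are illustrative rather than recommended values:

CREATE OR REPLACE MODEL `mydataset.tuned_boosted_tree`
  OPTIONS (
    MODEL_TYPE = 'BOOSTED_TREE_CLASSIFIER',
    INPUT_LABEL_COLS = ['label'],   -- placeholder label column
    NUM_TRIALS = 30,
    -- Continuous search spaces: the tuner tries values between the bounds.
    LEARN_RATE = HPARAM_RANGE(0.01, 0.5),
    L1_REG = HPARAM_RANGE(0, 5),
    -- Discrete search space: the tuner tries each listed candidate.
    BOOSTER_TYPE = HPARAM_CANDIDATES(['GBTREE', 'DART'])
  ) AS
SELECT * FROM `mydataset.training_data`;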

Hyperparameters and objectives

The following table lists the supported hyperparameters and objectives for each model type that supports hyperparameter tuning:

| Model type | Hyperparameter objectives | Hyperparameter | Valid range | Default range | Scale type |
| --- | --- | --- | --- | --- | --- |
| LINEAR_REG | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, MEAN_SQUARED_LOG_ERROR, MEDIAN_ABSOLUTE_ERROR, R2_SCORE (default), EXPLAINED_VARIANCE | L1_REG | (0, ∞] | (0, 10] | LOG |
| | | L2_REG | (0, ∞] | (0, 10] | LOG |
| LOGISTIC_REG | PRECISION, RECALL, ACCURACY, F1_SCORE, LOG_LOSS, ROC_AUC (default) | L1_REG | (0, ∞] | (0, 10] | LOG |
| | | L2_REG | (0, ∞] | (0, 10] | LOG |
| KMEANS | DAVIES_BOULDIN_INDEX | NUM_CLUSTERS | [2, 100] | [2, 10] | LINEAR |
| MATRIX_FACTORIZATION (explicit) | MEAN_SQUARED_ERROR | NUM_FACTORS | [2, 200] | [2, 20] | LINEAR |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| MATRIX_FACTORIZATION (implicit) | MEAN_AVERAGE_PRECISION (default), MEAN_SQUARED_ERROR, NORMALIZED_DISCOUNTED_CUMULATIVE_GAIN, AVERAGE_RANK | NUM_FACTORS | [2, 200] | [2, 20] | LINEAR |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | WALS_ALPHA | [0, ∞) | [0, 100] | LINEAR |
| AUTOENCODER | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR (default), MEAN_SQUARED_LOG_ERROR | LEARN_RATE | [0, 1] | [0, 1] | LOG |
| | | BATCH_SIZE | (0, ∞) | [16, 1024] | LOG |
| | | L1_REG | (0, ∞) | (0, 10] | LOG |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | L1_REG_ACTIVATION | (0, ∞) | (0, 10] | LOG |
| | | DROPOUT | [0, 1) | [0, 0.8] | LINEAR |
| | | HIDDEN_UNITS | Array of [1, ∞) | N/A | N/A |
| | | OPTIMIZER | {ADAM, ADAGRAD, FTRL, RMSPROP, SGD} | {ADAM, ADAGRAD, FTRL, RMSPROP, SGD} | N/A |
| | | ACTIVATION_FN | {RELU, RELU6, CRELU, ELU, SELU, SIGMOID, TANH} | N/A | N/A |
| DNN_CLASSIFIER | PRECISION, RECALL, ACCURACY, F1_SCORE, LOG_LOSS, ROC_AUC (default) | BATCH_SIZE | (0, ∞) | [16, 1024] | LOG |
| | | DROPOUT | [0, 1) | [0, 0.8] | LINEAR |
| | | HIDDEN_UNITS | Array of [1, ∞) | N/A | N/A |
| | | LEARN_RATE | [0, 1] | [0, 1] | LINEAR |
| | | OPTIMIZER | {ADAM, ADAGRAD, FTRL, RMSPROP, SGD} | {ADAM, ADAGRAD, FTRL, RMSPROP, SGD} | N/A |
| | | L1_REG | (0, ∞) | (0, 10] | LOG |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | ACTIVATION_FN | {RELU, RELU6, CRELU, ELU, SELU, SIGMOID, TANH} | N/A | N/A |
| DNN_REGRESSOR | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, MEAN_SQUARED_LOG_ERROR, MEDIAN_ABSOLUTE_ERROR, R2_SCORE (default), EXPLAINED_VARIANCE | Same hyperparameters, ranges, and scale types as DNN_CLASSIFIER | | | |
| DNN_LINEAR_COMBINED_CLASSIFIER | PRECISION, RECALL, ACCURACY, F1_SCORE, LOG_LOSS, ROC_AUC (default) | BATCH_SIZE | (0, ∞) | [16, 1024] | LOG |
| | | DROPOUT | [0, 1) | [0, 0.8] | LINEAR |
| | | HIDDEN_UNITS | Array of [1, ∞) | N/A | N/A |
| | | L1_REG | (0, ∞) | (0, 10] | LOG |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | ACTIVATION_FN | {RELU, RELU6, CRELU, ELU, SELU, SIGMOID, TANH} | N/A | N/A |
| DNN_LINEAR_COMBINED_REGRESSOR | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, MEAN_SQUARED_LOG_ERROR, MEDIAN_ABSOLUTE_ERROR, R2_SCORE (default), EXPLAINED_VARIANCE | Same hyperparameters, ranges, and scale types as DNN_LINEAR_COMBINED_CLASSIFIER | | | |
| BOOSTED_TREE_CLASSIFIER | PRECISION, RECALL, ACCURACY, F1_SCORE, LOG_LOSS, ROC_AUC (default) | LEARN_RATE | [0, ∞) | [0, 1] | LINEAR |
| | | L1_REG | (0, ∞) | (0, 10] | LOG |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | DROPOUT | [0, 1] | N/A | LINEAR |
| | | MAX_TREE_DEPTH | [1, 20] | [1, 10] | LINEAR |
| | | SUBSAMPLE | (0, 1] | (0, 1] | LINEAR |
| | | MIN_SPLIT_LOSS | [0, ∞) | N/A | LINEAR |
| | | NUM_PARALLEL_TREE | [1, ∞) | N/A | LINEAR |
| | | MIN_TREE_CHILD_WEIGHT | [0, ∞) | N/A | LINEAR |
| | | COLSAMPLE_BYTREE | [0, 1] | N/A | LINEAR |
| | | COLSAMPLE_BYLEVEL | [0, 1] | N/A | LINEAR |
| | | COLSAMPLE_BYNODE | [0, 1] | N/A | LINEAR |
| | | BOOSTER_TYPE | {GBTREE, DART} | N/A | N/A |
| | | DART_NORMALIZE_TYPE | {TREE, FOREST} | N/A | N/A |
| | | TREE_METHOD | {AUTO, EXACT, APPROX, HIST} | N/A | N/A |
| BOOSTED_TREE_REGRESSOR | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, MEAN_SQUARED_LOG_ERROR, MEDIAN_ABSOLUTE_ERROR, R2_SCORE (default), EXPLAINED_VARIANCE | Same hyperparameters, ranges, and scale types as BOOSTED_TREE_CLASSIFIER | | | |
| RANDOM_FOREST_CLASSIFIER | PRECISION, RECALL, ACCURACY, F1_SCORE, LOG_LOSS, ROC_AUC (default) | L1_REG | (0, ∞) | (0, 10] | LOG |
| | | L2_REG | (0, ∞) | (0, 10] | LOG |
| | | MAX_TREE_DEPTH | [1, 20] | [1, 20] | LINEAR |
| | | SUBSAMPLE | (0, 1) | (0, 1) | LINEAR |
| | | MIN_SPLIT_LOSS | [0, ∞) | N/A | LINEAR |
| | | NUM_PARALLEL_TREE | [2, ∞) | [2, 200] | LINEAR |
| | | MIN_TREE_CHILD_WEIGHT | [0, ∞) | N/A | LINEAR |
| | | COLSAMPLE_BYTREE | [0, 1] | N/A | LINEAR |
| | | COLSAMPLE_BYLEVEL | [0, 1] | N/A | LINEAR |
| | | COLSAMPLE_BYNODE | [0, 1] | N/A | LINEAR |
| | | TREE_METHOD | {AUTO, EXACT, APPROX, HIST} | N/A | N/A |
| RANDOM_FOREST_REGRESSOR | MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, MEAN_SQUARED_LOG_ERROR, MEDIAN_ABSOLUTE_ERROR, R2_SCORE (default), EXPLAINED_VARIANCE | Same hyperparameters, ranges, and scale types as RANDOM_FOREST_CLASSIFIER | | | |

Most LOG scale hyperparameters use the open lower boundary of 0. You can still set 0 as the lower boundary by using the HPARAM_RANGE keyword to set the hyperparameter range. For example, in a boosted tree classifier model, you could set the range for the L1_REG hyperparameter as L1_REG = HPARAM_RANGE(0, 5). A value of 0 gets converted to 1e-14.

Conditional hyperparameters are supported. For example, in a boosted tree regressor model, you can only tune the DART_NORMALIZE_TYPE hyperparameter when the value of the BOOSTER_TYPE hyperparameter is DART. In this case, you specify both search spaces and the conditions are handled automatically, as shown in the following example:

BOOSTER_TYPE = HPARAM_CANDIDATES(['DART', 'GBTREE'])
DART_NORMALIZE_TYPE = HPARAM_CANDIDATES(['TREE', 'FOREST'])

Search starting point

If you don't specify a search space for a hyperparameter by using HPARAM_RANGE or HPARAM_CANDIDATES, the search starts from the default value of that hyperparameter, as documented in the CREATE MODEL topic for that model type. For example, if you are running hyperparameter tuning for a boosted tree model, and you don't specify a value for the L1_REG hyperparameter, then the search starts from 0, the default value.

If you specify a search space for a hyperparameter by using HPARAM_RANGE or HPARAM_CANDIDATES, the search starting point depends on whether the specified search space includes the default value for that hyperparameter, as documented in the CREATE MODEL topic for that model type.

Data split

When you specify a value for the NUM_TRIALS option, the service identifies that you are doing hyperparameter tuning and automatically performs a 3-way split on input data to divide it into training, evaluation, and test sets. By default, the input data is randomized and then split 80% for training, 10% for evaluation, and 10% for testing.

The training and evaluation sets are used in each trial's training, the same as in models that don't use hyperparameter tuning. The trial hyperparameter suggestions are calculated based on the model evaluation metrics for that model type. At the end of each trial's training, the test set is used to test the trial and record its metrics in the model. This ensures the objectivity of the final reported evaluation metrics, because the test data hasn't yet been analyzed by the model. Evaluation data is used to calculate the intermediate metrics for hyperparameter suggestions, while the test data is used to calculate the final, objective model metrics.

If you want to use only a training set, specify NO_SPLIT for the DATA_SPLIT_METHOD option of the CREATE MODEL statement.

If you want to use only training and evaluation sets, specify 0 for the DATA_SPLIT_TEST_FRACTION option of the CREATE MODEL statement. When the test set is empty, the evaluation set is used as the test set for the final evaluation metrics reporting.
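For example, the following sketch (with placeholder dataset and table names) trains a tuned model without a separate test set by setting DATA_SPLIT_TEST_FRACTION to 0:

CREATE OR REPLACE MODEL `mydataset.tuned_model_no_test`
  OPTIONS (
    MODEL_TYPE = 'LINEAR_REG',
    INPUT_LABEL_COLS = ['label'],
    NUM_TRIALS = 20,
    -- No separate test set: the evaluation set is also used for the
    -- final evaluation metrics.
    DATA_SPLIT_TEST_FRACTION = 0
    -- To use only a training set instead, set DATA_SPLIT_METHOD = 'NO_SPLIT'.
  ) AS
SELECT * FROM `mydataset.training_data`;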

The metrics from models that are generated by a normal training job and those from a hyperparameter tuning training job are only comparable when the data split fractions are equal. For example, a model from a normal training job and a hyperparameter tuning model that both use the default 80/10/10 split are comparable.

Performance

Model performance when using hyperparameter tuning with the default search space is typically no worse than model performance when not using hyperparameter tuning, because a tuning job that uses the default search space always uses the default hyperparameters in its first trial.

To confirm the model performance improvements provided by hyperparameter tuning, compare the optimal trial for the hyperparameter tuning model to the first trial for the non-hyperparameter tuning model.
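For example, you can inspect per-trial results with the ML.TRIAL_INFO function. The following is a minimal sketch with a placeholder model name; the exact output columns are described in the ML.TRIAL_INFO reference:

-- Returns one row per trial, including its hyperparameters and objective metrics.
SELECT *
FROM ML.TRIAL_INFO(MODEL `mydataset.tuned_boosted_tree`);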

Transfer learning

Transfer learning is enabled by default when you set the HPARAM_TUNING_ALGORITHM option in the CREATE MODEL statement to VIZIER_DEFAULT. The hyperparameter tuning for a model benefits from learning from previously tuned models if the new model meets the following requirements:

It has the same model type as the previously tuned models.

It is in the same project as the previously tuned models.

It uses the same hyperparameter search space, or a subset of it.

Transfer learning doesn't require that the input data be the same.

Transfer learning helps solve the cold start problem, where the system performs random exploration during the first trial batch. Transfer learning provides the system with some initial knowledge about the hyperparameters and their objectives. To continuously improve the model quality, always train a new hyperparameter tuning model with the same hyperparameters or a subset of them.

Transfer learning helps hyperparameter tuning converge faster, instead of helping submodels to converge.

Error handling

Hyperparameter tuning handles errors in the following ways:

Cancellation: if a tuning job is canceled while it is running, the trials that have already completed remain usable.

Invalid input: if the user input is invalid, the job fails in the same way as a normal training job.

Invalid hyperparameters: a trial that uses an invalid combination of hyperparameters is skipped.

Internal errors: if more than 10% of trials fail because of internal errors, the whole training job fails.

Model serving functions

You can use output models from hyperparameter tuning with a number of existing model serving functions. To use these functions, follow these rules:

The output from ML.FEATURE_INFO doesn't change, because all trials share the same input data.

Evaluation metrics from ML.EVALUATE and ML.TRIAL_INFO can be different because of the way input data is split. By default, ML.EVALUATE runs against the test data, while ML.TRIAL_INFO runs against the evaluation data. For more information, see Data split.
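For example, the following minimal sketch (with a placeholder model name) evaluates a tuned model; by default, ML.EVALUATE runs against the test data as described above:

-- Returns the evaluation metrics for the tuned model.
SELECT *
FROM ML.EVALUATE(MODEL `mydataset.tuned_boosted_tree`);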

Unsupported functions

The ML.TRAINING_INFO function returns information for each iteration, and iteration results aren't saved in hyperparameter tuning models. Trial results are saved instead. You can use the ML.TRIAL_INFO function to get information about trial results.

Model export

You can export models created with hyperparameter tuning to Cloud Storage locations using the EXPORT MODEL statement. You can export the default optimal trial or any specified trial.
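For example, the following minimal sketch exports a tuned model; the model name and Cloud Storage path are placeholders. To export a specific trial rather than the default optimal trial, use the trial-selection option described in the EXPORT MODEL reference:

EXPORT MODEL `mydataset.tuned_boosted_tree`
  OPTIONS (URI = 'gs://my-bucket/exports/tuned_boosted_tree/');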

Pricing

The cost of hyperparameter tuning training is the sum of the cost of all executed trials. The pricing of a trial is consistent with the existing BigQuery ML pricing model.

FAQ

This section provides answers to some frequently asked questions about hyperparameter tuning.

How many trials do I need to tune a model?

We recommend using at least 10 trials for one hyperparameter, so the total number of trials should be at least 10 * num_hyperparameters. If you are using the default search space, refer to the Hyperparameters column in the Hyperparameters and objectives table for the number of hyperparameters tuned by default for a given model type.

What if I don't see performance improvements by using hyperparameter tuning?

Make sure you follow the guidance in this document to get a fair comparison. If you still don't see performance improvements, it might mean the default hyperparameters already work well for you. You might want to focus on feature engineering or try other model types before trying another round of hyperparameter tuning.

What if I want to continue tuning a model?

Train a new hyperparameter tuning model with the same search space. The built-in transfer learning helps to continue tuning based on your previously tuned models.

Do I need to retrain the model with all data and the optimal hyperparameters?

It depends on several factors.

What's next

To try running hyperparameter tuning, see Use BigQuery ML hyperparameter tuning to improve model performance.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-08-07 UTC.

[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[[["Hyperparameter tuning optimizes machine learning models by identifying the best hyperparameter values before the learning process, which helps improve model performance and efficiency."],["Various model types support hyperparameter tuning, including linear and logistic regression, K-means, matrix factorization, autoencoder, boosted trees, random forest, and deep neural networks, with options for setting hyperparameter ranges or candidates."],["The `NUM_TRIALS` option in the `CREATE MODEL` statement enables hyperparameter tuning, automatically splitting data into training, evaluation, and test sets for each trial, but the split can be modified."],["Hyperparameter tuning can leverage transfer learning, allowing new models to benefit from the knowledge of previously tuned models, as long as the models have the same type, project, and hyperparameter search space, or subset of it."],["Error handling during hyperparameter tuning includes cancellation and invalid inputs, with the system skipping trials with invalid hyperparameters, or terminating if more than 10% of trials have internal errors."]]],[]]


RetroSearch is an open source project built by @garambo | Open a GitHub Issue

Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo

HTML: 3.2 | Encoding: UTF-8 | Version: 0.7.4