BigQuery Explainable AI overview

This document describes how BigQuery ML supports Explainable artificial intelligence (AI), sometimes called XAI.
Explainable AI helps you understand the results that your predictive machine learning model generates for classification and regression tasks by defining how each feature in a row of data contributed to the predicted result. This information is often referred to as feature attribution. You can use this information to verify that the model is behaving as expected, to recognize biases in your models, and to inform ways to improve your model and your training data.
BigQuery ML and Vertex AI both provide Explainable AI offerings that produce feature-based explanations. You can perform explainability in BigQuery ML, or you can register your model in Vertex AI and perform explainability there.
For information about the supported SQL statements and functions for each model type, see End-to-end user journey for each model.
Local versus global explainability

There are two types of explainability: local explainability and global explainability, also known respectively as local feature importance and global feature importance. Local explainability returns feature attribution values for each explained row, showing how much each feature contributed to that individual prediction. Global explainability returns each feature's overall influence on the model, obtained by aggregating feature attributions across a dataset.
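For example, the following queries contrast the two (a minimal sketch; the model and table names are placeholders). ML.EXPLAIN_PREDICT returns feature attributions for each input row, while ML.GLOBAL_EXPLAIN returns a single aggregated attribution per feature and requires the model to have been trained with the ENABLE_GLOBAL_EXPLAIN option.

```sql
-- Local explainability: per-row feature attributions for individual predictions.
-- `mydataset.mymodel` and `mydataset.mytable` are placeholder names.
SELECT *
FROM
  ML.EXPLAIN_PREDICT(
    MODEL `mydataset.mymodel`,
    (SELECT * FROM `mydataset.mytable`),
    STRUCT(3 AS top_k_features));

-- Global explainability: one aggregated attribution value per feature.
-- The model must have been trained with ENABLE_GLOBAL_EXPLAIN = TRUE.
SELECT *
FROM ML.GLOBAL_EXPLAIN(MODEL `mydataset.mymodel`);
```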
Explainable AI in BigQuery ML supports a variety of machine learning models, including both time series and non-time series models. Each model type uses a different explainability method, as shown in the following table.
| Model category | Model types | Explainability method | Basic explanation of the method | Local explain functions | Global explain functions |
| --- | --- | --- | --- | --- | --- |
| Supervised models | Linear & logistic regression | Shapley values | Shapley values for linear models are equal to model weight * feature value, where feature values are standardized and model weights are trained with the standardized feature values. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Linear & logistic regression | Standard errors and p-values | Standard errors and p-values are used for significance testing against the model weights. | N/A | ML.ADVANCED_WEIGHTS⁴ |
| Supervised models | Boosted trees | Tree SHAP | Tree SHAP is an algorithm to compute exact SHAP values for decision tree-based models. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees | Approximate feature contribution | Approximates the feature contribution values. It is faster and simpler than Tree SHAP. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees | Gini index-based feature importance | A global feature importance score that indicates how useful or valuable each feature was in the construction of the boosted tree or random forest model during training. | N/A | ML.FEATURE_IMPORTANCE |
| Supervised models | Deep neural network (DNN) | Integrated gradients | A gradients-based method that efficiently computes feature attributions with the same axiomatic properties as the Shapley value. It provides a sampling approximation of exact feature attributions; its accuracy is controlled by the integrated_gradients_num_steps parameter. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | AutoML Tables | Sampled Shapley | Sampled Shapley assigns credit for the model's outcome to each feature and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values. | N/A | ML.GLOBAL_EXPLAIN² |
| Time series models | ARIMA_PLUS | Time series decomposition | Decomposes the time series into multiple components if those components are present in the time series. The components include trend, seasonal, holiday, step changes, and spikes and dips. See the ARIMA_PLUS modeling pipeline for more details. | ML.EXPLAIN_FORECAST³ | N/A |
| Time series models | ARIMA_PLUS_XREG | Time series decomposition and Shapley values | Decomposes the time series as for ARIMA_PLUS, and attributes the external regressors using Shapley values equal to model weight * feature value. | ML.EXPLAIN_FORECAST³ | N/A |
¹ ML.EXPLAIN_PREDICT is an extended version of ML.PREDICT.

² ML.GLOBAL_EXPLAIN returns the global explainability obtained by taking the mean absolute attribution that each feature receives for all the rows in the evaluation dataset.

³ ML.EXPLAIN_FORECAST is an extended version of ML.FORECAST.

⁴ ML.ADVANCED_WEIGHTS is an extended version of ML.WEIGHTS.
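For the time series models in the preceding table, the explanation is a forecast decomposition rather than per-feature attributions. The following is a minimal sketch, assuming a hypothetical ARIMA_PLUS model named mydataset.my_arima_model:

```sql
-- Decompose the forecast of a (placeholder) ARIMA_PLUS model into its
-- components: trend, seasonality, holiday effects, step changes, spikes and dips.
SELECT *
FROM
  ML.EXPLAIN_FORECAST(
    MODEL `mydataset.my_arima_model`,
    STRUCT(30 AS horizon, 0.9 AS confidence_level));
```

The output includes the forecast values along with a column for each component that is present in the series.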
Explainable AI is available in Vertex AI for the following subset of exportable supervised learning models:
| Model type | Explainable AI method |
| --- | --- |
| dnn_classifier | Integrated gradients |
| dnn_regressor | Integrated gradients |
| dnn_linear_combined_classifier | Integrated gradients |
| dnn_linear_combined_regressor | Integrated gradients |
| boosted_tree_regressor | Sampled Shapley |
| boosted_tree_classifier | Sampled Shapley |
| random_forest_regressor | Sampled Shapley |
| random_forest_classifier | Sampled Shapley |

See Feature Attribution Methods to learn more about these methods.
Enable Explainable AI in Model Registry

When your BigQuery ML model is registered in Model Registry, and if it is a type of model that supports Explainable AI, you can enable Explainable AI on the model when deploying it to an endpoint. When you register your BigQuery ML model, all of the associated metadata is populated for you.
Note: Explainable AI incurs a minor additional cost. See Vertex AI pricing to learn more.
To learn how to use XAI on your models from the Model Registry, see Get an online explanation using your deployed model. To learn more about XAI in Vertex AI, see Get explanations.
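As a minimal sketch of requesting registration at training time (all project, dataset, and model names below are placeholders), the model_registry and vertex_ai_model_id training options register the model in Model Registry, and enable_global_explain keeps global attributions available:

```sql
-- Train a model, keep global explanations available, and register it in
-- Vertex AI Model Registry. Dataset, table, and ID values are placeholders.
CREATE OR REPLACE MODEL `mydataset.my_logistic_model`
OPTIONS (
  MODEL_TYPE = 'LOGISTIC_REG',
  INPUT_LABEL_COLS = ['label'],
  ENABLE_GLOBAL_EXPLAIN = TRUE,             -- enables ML.GLOBAL_EXPLAIN on this model
  MODEL_REGISTRY = 'VERTEX_AI',             -- registers the model in Model Registry
  VERTEX_AI_MODEL_ID = 'my_registered_model'
) AS
SELECT * FROM `mydataset.training_data`;
```

After the model appears in Model Registry, you can deploy it to an endpoint with feature attributions enabled.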