Create AWS Glue federated datasets

This document describes how to create a federated dataset in BigQuery that's linked to an existing database in AWS Glue.
A federated dataset is a connection between BigQuery and an external data source at the dataset level. The tables in a federated dataset are automatically populated from the tables in the corresponding external data source. You can query these tables directly in BigQuery, but you cannot make modifications, additions, or deletions. However, any updates that you make in the external data source are automatically reflected in BigQuery.
Before you begin

Ensure that you have a connection to access AWS Glue data.
To create or modify a connection, follow the instructions in Connect to Amazon S3. When you create that connection, include the following policy statement for AWS Glue in your AWS Identity and Access Management policy for BigQuery. Include this statement in addition to the other permissions on the Amazon S3 bucket where the data in your AWS Glue tables is stored.
{ "Effect": "Allow", "Action": [ "glue:GetDatabase", "glue:GetTable", "glue:GetTables", "glue:GetPartitions" ], "Resource": [ "arn:aws:glue:REGION:ACCOUNT_ID:catalog", "arn:aws:glue:REGION:ACCOUNT_ID:database/DATABASE_NAME", "arn:aws:glue:REGION:ACCOUNT_ID:table/DATABASE_NAME/*" ] }
Replace the following:

- REGION: the AWS region, for example, us-east-1
- ACCOUNT_ID: the 12-digit AWS account ID
- DATABASE_NAME: the AWS Glue database name
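If you manage the AWS role that your BigQuery connection assumes, one way to attach this statement is as an inline role policy with the AWS CLI. The following is a minimal sketch with placeholder role, policy, and file names; wrap the statement above in a full policy document (with a Version and a Statement array) before attaching it.

# Attach the AWS Glue statement (saved in glue-access.json as a full policy
# document) to the role that your BigQuery connection assumes.
aws iam put-role-policy \
  --role-name BIGQUERY_CONNECTION_ROLE \
  --policy-name bigquery-glue-access \
  --policy-document file://glue-access.json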
To get the permissions that you need to create a federated dataset, ask your administrator to grant you the BigQuery Admin (roles/bigquery.admin) IAM role. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to create a federated dataset.

Required permissions

The following permissions are required to create a federated dataset:
bigquery.datasets.create
bigquery.connections.use
bigquery.connections.delegate
You might also be able to get these permissions with custom roles or other predefined roles.
For more information about IAM roles and permissions in BigQuery, see Introduction to IAM.
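As an example of granting the predefined role mentioned above, an administrator could use the gcloud CLI at the project level; the project ID and user email below are placeholders.

# Grant the BigQuery Admin role on the project to a user.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/bigquery.admin"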
Create a federated dataset

To create a federated dataset, do the following:
Console

Open the BigQuery page in the Google Cloud console.
In the Explorer panel, select the project where you want to create the dataset.
Expand the more_vert Actions option and click Create dataset.
On the Create dataset page, do the following:

- Enter a name for your dataset.
- Select the dataset location, for example, aws-us-east-1. After you create a dataset, the location can't be changed.
- For External Dataset, do the following:
  - Select AWS Glue as the external dataset format.
  - For the external source, enter aws-glue:// followed by the Amazon Resource Name (ARN) of the AWS Glue database, for example, aws-glue://arn:aws:glue:us-east-1:123456789:database/test_database.
  - Leave the other default settings as they are.
Click Create dataset.
SQL

Use the CREATE EXTERNAL SCHEMA data definition language (DDL) statement.
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
CREATE EXTERNAL SCHEMA DATASET_NAME
WITH CONNECTION PROJECT_ID.CONNECTION_LOCATION.CONNECTION_NAME
  OPTIONS (
    external_source = 'AWS_GLUE_SOURCE',
    location = 'LOCATION');
Replace the following:

- DATASET_NAME: the name of your new dataset in BigQuery.
- PROJECT_ID: your project ID.
- CONNECTION_LOCATION: the location of your AWS connection, for example, aws-us-east-1.
- CONNECTION_NAME: the name of your AWS connection.
- AWS_GLUE_SOURCE: the Amazon Resource Name (ARN) of the AWS Glue database with a prefix identifying the source, for example, aws-glue://arn:aws:glue:us-east-1:123456789:database/test_database.
- LOCATION: the location of your new dataset in BigQuery, for example, aws-us-east-1. After you create a dataset, you can't change its location.

Click play_circle Run.
For more information about how to run queries, see Run an interactive query.
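If you prefer to run the statement from the command line rather than the query editor, you can pass it to the bq tool. The following sketch uses illustrative project, connection, dataset, and ARN values.

# Run the CREATE EXTERNAL SCHEMA statement with the bq CLI (illustrative values).
bq query --nouse_legacy_sql \
"CREATE EXTERNAL SCHEMA glue_dataset
 WITH CONNECTION \`my-project.aws-us-east-1.my-aws-connection\`
 OPTIONS (
   external_source = 'aws-glue://arn:aws:glue:us-east-1:123456789012:database/test_database',
   location = 'aws-us-east-1');"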
bq

In a command-line environment, create a dataset by using the bq mk command (a filled-in example appears after the placeholder descriptions):
bq --location=LOCATION mk --dataset \
    --external_source aws-glue://AWS_GLUE_SOURCE \
    --connection_id PROJECT_ID.CONNECTION_LOCATION.CONNECTION_NAME \
    DATASET_NAME
Replace the following:

- LOCATION: the location of your new dataset in BigQuery, for example, aws-us-east-1. After you create a dataset, you can't change its location. You can set a default location value by using the .bigqueryrc file.
- AWS_GLUE_SOURCE: the Amazon Resource Name (ARN) of the AWS Glue database, for example, arn:aws:glue:us-east-1:123456789:database/test_database.
- PROJECT_ID: your BigQuery project ID.
- CONNECTION_LOCATION: the location of your AWS connection, for example, aws-us-east-1.
- CONNECTION_NAME: the name of your AWS connection.
- DATASET_NAME: the name of your new dataset in BigQuery. To create a dataset in a project other than your default project, add the project ID to the dataset name in the following format: PROJECT_ID:DATASET_NAME.
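For reference, here is the same command with illustrative values substituted for the placeholders.

# Create a federated dataset linked to an AWS Glue database (illustrative values).
bq --location=aws-us-east-1 mk --dataset \
    --external_source aws-glue://arn:aws:glue:us-east-1:123456789012:database/test_database \
    --connection_id my-project.aws-us-east-1.my-aws-connection \
    my_glue_dataset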
Terraform

Use the google_bigquery_dataset resource.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following example creates an AWS Glue federated dataset:
resource "google_bigquery_dataset" "dataset" { provider = google-beta dataset_id = "example_dataset" friendly_name = "test" description = "This is a test description." location = "aws-us-east-1" external_dataset_reference { external_source = "aws-glue://arn:aws:glue:us-east-1:999999999999:database/database" connection = "projects/project/locations/aws-us-east-1/connections/connection" } }
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell

Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module).

In Cloud Shell, create a directory and a new file within that directory. The file name must have the .tf extension, for example, main.tf. In this tutorial, the file is referred to as main.tf.

mkdir DIRECTORY && cd DIRECTORY && touch main.tf
Copy the sample code into the newly created main.tf. If you are following a tutorial, you can copy the sample code in each section or step.
Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
Initialize Terraform:

terraform init

Optionally, to use the latest Google provider version, include the -upgrade option:

terraform init -upgrade
Review the proposed changes and verify that they match your expectations:

terraform plan

Make corrections to the configuration as necessary.
Apply the Terraform configuration by running the following command and entering yes at the prompt:

terraform apply
Wait until Terraform displays the "Apply complete!" message.
API

Call the datasets.insert method with a defined dataset resource and an externalDatasetReference field for your AWS Glue database.
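As a sketch of the request shape, and assuming the REST fields mirror the Terraform example above (externalSource and connection inside externalDatasetReference), a call with curl might look like the following. The project, dataset, connection, and ARN values are placeholders.

# Create the federated dataset by calling datasets.insert (illustrative values).
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://bigquery.googleapis.com/bigquery/v2/projects/my-project/datasets" \
  -d '{
    "datasetReference": {"datasetId": "my_glue_dataset"},
    "location": "aws-us-east-1",
    "externalDatasetReference": {
      "externalSource": "aws-glue://arn:aws:glue:us-east-1:123456789012:database/test_database",
      "connection": "projects/my-project/locations/aws-us-east-1/connections/my-aws-connection"
    }
  }'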
List tables

To list the tables that are available for query in your federated dataset, see Listing datasets.
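For a quick check from the command line, you can also list the tables with the bq tool; the dataset name below is a placeholder.

# List the tables that AWS Glue exposes in the federated dataset.
bq ls my_glue_dataset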
Get table information

To get information on the tables in your federated dataset, such as schema details, see Get table information.
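For example, with the bq tool you can inspect the schema and metadata of a single table; the dataset and table names below are placeholders.

# Show schema and metadata for one table in the federated dataset.
bq show --format=prettyjson my_glue_dataset.my_table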
Control access to tables

To manage access to the tables in your federated dataset, see Control access to resources with IAM.
Row-level security, column-level security, and data masking are also supported for tables in federated datasets.
Schema operations that might invalidate security policies, such as deleting a column in AWS Glue, can cause jobs to fail until the policies are updated. Additionally, if you delete a table in AWS Glue and recreate it, your security policies no longer apply to the recreated table.
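For example, you can grant a principal read access to a single table with the bq tool; the dataset, table, and user values below are placeholders.

# Grant a user read access to one table in the federated dataset.
bq add-iam-policy-binding \
  --member="user:USER_EMAIL" \
  --role="roles/bigquery.dataViewer" \
  my_glue_dataset.my_table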
Query AWS Glue data

Querying tables in federated datasets is the same as querying tables in any other BigQuery dataset.
You can query AWS Glue tables in several formats, including Avro, CSV, JSON, ORC, and Iceberg.
Every table that you grant access to in your AWS Glue database appears as an equivalent table in your BigQuery dataset.
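For example, assuming the federated dataset contains a table named logs (an illustrative name), you can query it like any other table:

# Query a table in the federated dataset like any other BigQuery table.
bq query --nouse_legacy_sql \
'SELECT COUNT(*) AS row_count FROM my_glue_dataset.logs'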
Format

The format of each BigQuery table is determined by the following fields of the respective AWS Glue table:

- InputFormat (Table.StorageDescriptor.InputFormat)
- OutputFormat (Table.StorageDescriptor.OutputFormat)
- SerializationLib (Table.StorageDescriptor.SerdeInfo.SerializationLibrary)

The only exception is Iceberg tables, which use the TableType (Table.Parameters["table_type"]) field.
For example, an AWS Glue table with the following fields is mapped to an ORC table in BigQuery:

- InputFormat = "org.apache.hadoop.hive.ql.io.orc.OrcInputFormat"
- OutputFormat = "org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat"
- SerializationLib = "org.apache.hadoop.hive.ql.io.orc.OrcSerde"
The location of each BigQuery table is determined by the following:

- For Iceberg tables, the Table.Parameters["metadata_location"] field in the AWS Glue table
- For all other tables, the Table.StorageDescriptor.Location field in the AWS Glue table

Additionally, some AWS Glue table properties are automatically mapped to format-specific options in BigQuery:
Format | SerializationLib | AWS Glue table value | BigQuery option
CSV | LazySimpleSerDe | Table.StorageDescriptor.SerdeInfo.Parameters["field.delim"] | CsvOptions.fieldDelimiter
CSV | LazySimpleSerDe | Table.StorageDescriptor.Parameters["serialization.encoding"] | CsvOptions.encoding
CSV | LazySimpleSerDe | Table.StorageDescriptor.Parameters["skip.header.line.count"] | CsvOptions.skipLeadingRows
CSV | OpenCsvSerDe | Table.StorageDescriptor.SerdeInfo.Parameters["separatorChar"] | CsvOptions.fieldDelimiter
CSV | OpenCsvSerDe | Table.StorageDescriptor.SerdeInfo.Parameters["quoteChar"] | CsvOptions.quote
CSV | OpenCsvSerDe | Table.StorageDescriptor.Parameters["serialization.encoding"] | CsvOptions.encoding
CSV | OpenCsvSerDe | Table.StorageDescriptor.Parameters["skip.header.line.count"] | CsvOptions.skipLeadingRows
JSON | Hive JsonSerDe | Table.StorageDescriptor.Parameters["serialization.encoding"] | JsonOptions.encoding

Create a view in a federated dataset

You can't create a view in a federated dataset. However, you can create a view in a standard dataset that's based on a table in a federated dataset. For more information, see Create views.
Delete a federated dataset

Deleting a federated dataset is the same as deleting any other BigQuery dataset. For more information, see Delete datasets.
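For example, with the bq tool; the dataset name below is a placeholder.

# Delete the federated dataset. The -r flag removes its contents in BigQuery;
# the underlying AWS Glue tables and data are not affected.
bq rm -r -d my_glue_dataset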
Pricing

For information about pricing, see BigQuery Omni pricing.
Limitations

- INFORMATION_SCHEMA views aren't supported.
- UNION isn't supported for Avro tables.