
Create Cloud Storage external tables

BigQuery supports querying Cloud Storage data in the following formats: comma-separated values (CSV), JSON (newline delimited), Avro, ORC, and Parquet.

BigQuery supports querying Cloud Storage data from these storage classes: Standard, Nearline, Coldline, and Archive.

To query a Cloud Storage external table, you must have permissions on both the external table and the Cloud Storage files. We recommend using a BigLake table instead if possible. BigLake tables provide access delegation, so that you only need permissions on the BigLake table in order to query the Cloud Storage data.

Be sure to consider the location of your dataset and Cloud Storage bucket when you query data stored in Cloud Storage.

Before you begin

Grant Identity and Access Management (IAM) roles that give users the necessary permissions to perform each task in this document. The permissions required to perform a task (if any) are listed in the "Required permissions" section of the task.

Required roles

To create an external table, you need the bigquery.tables.create BigQuery Identity and Access Management (IAM) permission.

Each of the following predefined Identity and Access Management roles includes this permission:

You also need the following permissions to access the Cloud Storage bucket that contains your data:

The Cloud Storage Storage Admin (roles/storage.admin) predefined Identity and Access Management role includes these permissions.

If you are not a principal in any of these roles, ask your administrator to grant you access or to create the external table for you.

For more information on Identity and Access Management roles and permissions in BigQuery, see Predefined roles and permissions.

Access scopes for Compute Engine instances

To query an external table that is linked to a Cloud Storage source from a Compute Engine instance, the instance must have at least the Cloud Storage read-only access scope (https://www.googleapis.com/auth/devstorage.read_only).

The scopes control the Compute Engine instance's access to Google Cloud products, including Cloud Storage. Applications running on the instance use the service account attached to the instance to call Google Cloud APIs.

If you set up a Compute Engine instance to run as the default Compute Engine service account, the instance is granted a number of default scopes, including https://www.googleapis.com/auth/devstorage.read_only.

If instead you set up the instance with a custom service account, make sure to explicitly grant the https://www.googleapis.com/auth/devstorage.read_only scope to the instance.

For information about applying scopes to a Compute Engine instance, see Changing the service account and access scopes for an instance. For more information about Compute Engine service accounts, see Service accounts.

Create external tables on unpartitioned data

You can create a permanent table linked to your external data source by using the Google Cloud console, the CREATE EXTERNAL TABLE SQL statement, the bq command-line tool, the API, or the client libraries.

Select one of the following options:

Console
  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer pane, expand your project and select a dataset.

  3. Expand the more_vert Actions option and click Create table.

  4. In the Source section, specify the following details:

    1. For Create table from, select Google Cloud Storage.

    2. For Select file from GCS bucket or use a URI pattern, browse to select a bucket and file to use, or type the path in the format gs://bucket_name/[folder_name/]file_name.

      You can't specify multiple URIs in the Google Cloud console, but you can select multiple files by specifying one asterisk (*) wildcard character. For example, gs://mybucket/file_name*. For more information, see Wildcard support for Cloud Storage URIs.

      The Cloud Storage bucket must be in the same location as the dataset that contains the table you're creating.

    3. For File format, select the format that matches your file.

  5. In the Destination section, specify the following details:

    1. For Project, choose the project in which to create the table.

    2. For Dataset, choose the dataset in which to create the table.

    3. For Table, enter the name of the table you are creating.

    4. For Table type, select External table.

  6. In the Schema section, you can either enable schema auto-detection or manually specify a schema if you have a source file. If you don't have a source file, you must manually specify a schema.

  7. To ignore rows with extra column values that do not match the schema, expand the Advanced options section and select Unknown values.

  8. Click Create table.

After the permanent table is created, you can run a query against the table as if it were a native BigQuery table. After your query completes, you can export the results as CSV or JSON files, save the results as a table, or save the results to Google Sheets.
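
As a quick illustration, the following is a minimal sketch of querying the new table with the Python client library. The table and column names (mydataset.sales, Region, Total_Sales) are hypothetical and match the examples elsewhere on this page.

from google.cloud import bigquery

# Create a client; credentials come from Application Default Credentials.
client = bigquery.Client()

# Query the external table exactly as if it were a native BigQuery table.
query = """
    SELECT Region, SUM(Total_Sales) AS total_sales
    FROM mydataset.sales
    GROUP BY Region
"""

for row in client.query(query).result():
    print(row.Region, row.total_sales)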

SQL

You can create a permanent external table by running the CREATE EXTERNAL TABLE DDL statement. You can specify the schema explicitly, or use schema auto-detection to infer the schema from the external data.

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    CREATE EXTERNAL TABLE `PROJECT_ID.DATASET.EXTERNAL_TABLE_NAME`
      OPTIONS (
        format ="TABLE_FORMAT",
        uris = ['BUCKET_PATH'[,...]]
        );

    Replace the following:

      • PROJECT_ID: your project ID.
      • DATASET: the name of the dataset in which to create the table.
      • EXTERNAL_TABLE_NAME: the name of the external table.
      • TABLE_FORMAT: the format of the external data, for example CSV.
      • BUCKET_PATH: the path to the Cloud Storage file or files that contain the data, in the format ['gs://bucket_name/[folder_name/]file_name']. You can specify multiple URIs, and you can use a wildcard.

  3. Click play_circle Run.

For more information about how to run queries, see Run an interactive query.

Examples

The following example uses schema auto-detection to create an external table named sales that is linked to a CSV file stored in Cloud Storage:

CREATE OR REPLACE EXTERNAL TABLE mydataset.sales
  OPTIONS (
  format = 'CSV',
  uris = ['gs://mybucket/sales.csv']);

The next example specifies a schema explicitly and skips the first row in the CSV file:

CREATE OR REPLACE EXTERNAL TABLE mydataset.sales (
  Region STRING,
  Quarter STRING,
  Total_Sales INT64
) OPTIONS (
    format = 'CSV',
    uris = ['gs://mybucket/sales.csv'],
    skip_leading_rows = 1);
bq

To create an external table, use the bq mk command with the --external_table_definition flag. This flag contains either a path to a table definition file or an inline table definition.

Option 1: Table definition file

Use the bq mkdef command to create a table definition file, and then pass the file path to the bq mk command as follows:

bq mkdef --source_format=SOURCE_FORMAT \
  BUCKET_PATH > DEFINITION_FILE

bq mk --table \
  --external_table_definition=DEFINITION_FILE \
  DATASET_NAME.TABLE_NAME \
  SCHEMA

Replace the following:

Example:

bq mkdef --source_format=CSV gs://mybucket/sales.csv > mytable_def

bq mk --table --external_table_definition=mytable_def \
  mydataset.mytable \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER

To use schema auto-detection, set the --autodetect=true flag in the mkdef command and omit the schema:

bq mkdef --source_format=CSV --autodetect=true \
  gs://mybucket/sales.csv > mytable_def

bq mk --table --external_table_definition=mytable_def \
  mydataset.mytable

Option 2: Inline table definition

Instead of creating a table definition file, you can pass the table definition directly to the bq mk command:

bq mk --table \
  --external_table_definition=@SOURCE_FORMAT=BUCKET_PATH \
  DATASET_NAME.TABLE_NAME \
  SCHEMA

Replace the following:

Example:

bq mk --table \
  --external_table_definition=@CSV=gs://mybucket/sales.csv \
  mydataset.mytable \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER
API

Call the tables.insert API method, and create an ExternalDataConfiguration in the Table resource that you pass in.

Specify the schema property or set the autodetect property to true to enable schema auto-detection for supported data sources.
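
For reference, here is a minimal sketch of the Table resource body you might pass to tables.insert. Field names follow the REST representation; the project, dataset, table, and bucket names are hypothetical.

import json

# Hypothetical Table resource for tables.insert. The externalDataConfiguration
# object points BigQuery at the Cloud Storage source.
table_body = {
    "tableReference": {
        "projectId": "your-project",
        "datasetId": "mydataset",
        "tableId": "sales",
    },
    "externalDataConfiguration": {
        "sourceFormat": "CSV",
        "sourceUris": ["gs://mybucket/sales.csv"],
        # Either set "autodetect" to True, or provide an explicit "schema" object.
        "autodetect": True,
    },
}

print(json.dumps(table_body, indent=2))

Because the body includes externalDataConfiguration, BigQuery creates the table as an external table rather than a native table.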

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
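
The original code sample is not reproduced here; the following is a minimal sketch using the google-cloud-bigquery client library, assuming a hypothetical CSV file at gs://mybucket/sales.csv and a dataset named mydataset.

from google.cloud import bigquery

# Create a client; credentials come from Application Default Credentials.
client = bigquery.Client()

# Fully qualified ID of the external table to create (hypothetical names).
table_id = "your-project.mydataset.sales"

# Describe the Cloud Storage source: format, URIs, and CSV-specific options.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://mybucket/sales.csv"]
external_config.autodetect = True              # or provide an explicit schema instead
external_config.options.skip_leading_rows = 1  # skip the CSV header row

# Attach the external configuration to a Table object and create the table.
table = bigquery.Table(table_id)
table.external_data_configuration = external_config
table = client.create_table(table)  # API request

print(f"Created external table {table.project}.{table.dataset_id}.{table.table_id}")

If you prefer an explicit schema, construct the table with bigquery.Table(table_id, schema=[...]) and leave autodetect unset.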

Create external tables on partitioned data

You can create an external table for Hive partitioned data that resides in Cloud Storage. After you create an externally partitioned table, you can't change the partition key. You need to recreate the table to change the partition key.

To create an external table for Hive partitioned data, choose one of the following options:

Console
  1. In the Google Cloud console, go to BigQuery.

    Go to BigQuery

  2. In the Explorer pane, expand your project and select a dataset.
  3. Click more_vert View actions, and then click Create table. This opens the Create table pane.
  4. In the Source section, specify the following details:
    1. For Create table from, select Google Cloud Storage.
    2. For Select file from Cloud Storage bucket, enter the path to the Cloud Storage folder, using wildcards. For example, my_bucket/my_files*. The Cloud Storage bucket must be in the same location as the dataset that contains the table you want to create, append, or overwrite.
    3. From the File format list, select the file type.
    4. Select the Source data partitioning checkbox, and then for Select Source URI Prefix, enter the Cloud Storage URI prefix. For example, gs://my_bucket/my_files.
    5. In the Partition inference mode section, select one of the following options:
      • Automatically infer types: set the partition schema detection mode to AUTO.
      • All columns are strings: set the partition schema detection mode to STRINGS.
      • Provide my own: set the partition schema detection mode to CUSTOM and manually enter the schema information for the partition keys. For more information, see Provide a custom partition key schema.
    6. Optional: To require a partition filter on all queries for this table, select the Require partition filter checkbox. Requiring a partition filter can reduce cost and improve performance. For more information, see Requiring predicate filters on partition keys in queries.
  5. In the Destination section, specify the following details:
    1. For Project, select the project in which you want to create the table.
    2. For Dataset, select the dataset in which you want to create the table.
    3. For Table, enter the name of the table that you want to create.
    4. For Table type, select External table.
  6. In the Schema section, enter the schema definition.
  7. To enable the auto detection of schema, select Auto detect.
  8. To ignore rows with extra column values that do not match the schema, expand the Advanced options section and select Unknown values.
  9. Click Create table.
SQL

Use the CREATE EXTERNAL TABLE DDL statement.

The following example uses automatic detection of Hive partition keys:

CREATE EXTERNAL TABLE `PROJECT_ID.DATASET.EXTERNAL_TABLE_NAME`
WITH PARTITION COLUMNS
OPTIONS (
format = 'SOURCE_FORMAT',
uris = ['GCS_URIS'],
hive_partition_uri_prefix = 'GCS_URI_SHARED_PREFIX',
require_hive_partition_filter = BOOLEAN);

Replace the following:

The following example uses custom Hive partition keys and types by listing them in the WITH PARTITION COLUMNS clause:

CREATE EXTERNAL TABLE `PROJECT_ID.DATASET.EXTERNAL_TABLE_NAME`
WITH PARTITION COLUMNS (PARTITION_COLUMN_LIST)
OPTIONS (
format = 'SOURCE_FORMAT',
uris = ['GCS_URIS'],
hive_partition_uri_prefix = 'GCS_URI_SHARED_PREFIX',
require_hive_partition_filter = BOOLEAN);

Replace the following:

PARTITION_COLUMN_LIST: a list of the partition columns, in the same order in which they appear in the path of the Cloud Storage folder, using the format:

KEY1 TYPE1, KEY2 TYPE2

The following example creates an externally partitioned table. It uses schema auto-detection to detect both the file schema and the hive partitioning layout. If the external path is gs://bucket/path/field_1=first/field_2=1/data.parquet, the partition columns are detected as field_1 (STRING) and field_2 (INT64).

CREATE EXTERNAL TABLE dataset.AutoHivePartitionedTable
WITH PARTITION COLUMNS
OPTIONS (
uris = ['gs://bucket/path/*'],
format = 'PARQUET',
hive_partition_uri_prefix = 'gs://bucket/path',
require_hive_partition_filter = false);

The following example creates an externally partitioned table by explicitly specifying the partition columns. This example assumes that the external file path has the pattern gs://bucket/path/field_1=first/field_2=1/data.parquet.

CREATE EXTERNAL TABLE dataset.CustomHivePartitionedTable
WITH PARTITION COLUMNS (
field_1 STRING, -- column order must match the external path
field_2 INT64)
OPTIONS (
uris = ['gs://bucket/path/*'],
format = 'PARQUET',
hive_partition_uri_prefix = 'gs://bucket/path',
require_hive_partition_filter = false);
bq

First, use the bq mkdef command to create a table definition file:

bq mkdef \
--source_format=SOURCE_FORMAT \
--hive_partitioning_mode=PARTITIONING_MODE \
--hive_partitioning_source_uri_prefix=GCS_URI_SHARED_PREFIX \
--require_hive_partition_filter=BOOLEAN \
 GCS_URIS > DEFINITION_FILE

Replace the following:

If PARTITIONING_MODE is CUSTOM, include the partition key schema in the source URI prefix, using the following format:

--hive_partitioning_source_uri_prefix=GCS_URI_SHARED_PREFIX/{KEY1:TYPE1}/{KEY2:TYPE2}/...

After you create the table definition file, use the bq mk command to create the external table:

bq mk --external_table_definition=DEFINITION_FILE \
DATASET_NAME.TABLE_NAME \
SCHEMA

Replace the following:

Examples

The following example uses AUTO Hive partitioning mode:

bq mkdef --source_format=CSV \
  --hive_partitioning_mode=AUTO \
  --hive_partitioning_source_uri_prefix=gs://myBucket/myTable \
  gs://myBucket/myTable/* > mytable_def

bq mk --external_table_definition=mytable_def \
  mydataset.mytable \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER

The following example uses STRING Hive partitioning mode:

bq mkdef --source_format=CSV \
  --hive_partitioning_mode=STRING \
  --hive_partitioning_source_uri_prefix=gs://myBucket/myTable \
  gs://myBucket/myTable/* > mytable_def

bq mk --external_table_definition=mytable_def \
  mydataset.mytable \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER

The following example uses CUSTOM Hive partitioning mode:

bq mkdef --source_format=CSV \
  --hive_partitioning_mode=CUSTOM \
  --hive_partitioning_source_uri_prefix=gs://myBucket/myTable/{dt:DATE}/{val:STRING} \
  gs://myBucket/myTable/* > mytable_def

bq mk --external_table_definition=mytable_def \
  mydataset.mytable \
  Region:STRING,Quarter:STRING,Total_sales:INTEGER
API

To set Hive partitioning using the BigQuery API, include a hivePartitioningOptions object in the ExternalDataConfiguration object when you create the table definition file.

If you set the hivePartitioningOptions.mode field to CUSTOM, you must encode the partition key schema in the hivePartitioningOptions.sourceUriPrefix field as follows: gs://BUCKET/PATH_TO_TABLE/{KEY1:TYPE1}/{KEY2:TYPE2}/...

To enforce the use of a predicate filter at query time, set the hivePartitioningOptions.requirePartitionFilter field to true.
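
The same options can be set through the Python client library; the following is a minimal sketch that mirrors the hivePartitioningOptions fields described above. The bucket, path, and table names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.mydataset.hive_partitioned_table"  # hypothetical

# External configuration for Parquet files in a Hive-partitioned layout.
external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://mybucket/path/*"]
external_config.autodetect = True

# Mirror of hivePartitioningOptions in ExternalDataConfiguration.
hive_opts = bigquery.HivePartitioningOptions()
hive_opts.mode = "AUTO"  # or "STRINGS" / "CUSTOM"
hive_opts.source_uri_prefix = "gs://mybucket/path"
hive_opts.require_partition_filter = True  # enforce predicate filters at query time
external_config.hive_partitioning = hive_opts

table = bigquery.Table(table_id)
table.external_data_configuration = external_config
table = client.create_table(table)  # API request

print(f"Created hive-partitioned external table {table_id}")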

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

Query external tables

For more information, see Query Cloud Storage data in external tables.

Upgrade external tables to BigLake

You can upgrade tables based on Cloud Storage to BigLake tables by associating the external table to a connection. If you want to use metadata caching with the BigLake table, you can specify settings for this at the same time. To get table details such as source format and source URI, see Get table information.

To update an external table to a BigLake table, select one of the following options:

SQL

Use the CREATE OR REPLACE EXTERNAL TABLE DDL statement to update a table:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    CREATE OR REPLACE EXTERNAL TABLE
      `PROJECT_ID.DATASET.EXTERNAL_TABLE_NAME`
      WITH CONNECTION {`REGION.CONNECTION_ID` | DEFAULT}
      OPTIONS(
        format ="TABLE_FORMAT",
        uris = ['BUCKET_PATH'],
        max_staleness = STALENESS_INTERVAL,
        metadata_cache_mode = 'CACHE_MODE'
        );

    Replace the following:

  3. Click play_circle Run.

For more information about how to run queries, see Run an interactive query.

bq

Use the bq mkdef and bq update commands to update a table:

  1. Generate an external table definition that describes the aspects of the table to change:

    bq mkdef --connection_id=PROJECT_ID.REGION.CONNECTION_ID \
    --source_format=TABLE_FORMAT \
    --metadata_cache_mode=CACHE_MODE \
    "BUCKET_PATH" > /tmp/DEFINITION_FILE

    Replace the following:

  2. Update the table using the new external table definition:

    bq update --max_staleness=STALENESS_INTERVAL \
    --external_table_definition=/tmp/DEFINITION_FILE \
    PROJECT_ID:DATASET.EXTERNAL_TABLE_NAME

    Replace the following:

Cloud Storage resource path

When you create an external table based on a Cloud Storage data source, you must provide the path to the data.

The Cloud Storage resource path contains your bucket name and your object (filename). For example, if the Cloud Storage bucket is named mybucket and the data file is named myfile.csv, the resource path would be gs://mybucket/myfile.csv.

BigQuery does not support Cloud Storage resource paths that include multiple consecutive slashes after the initial double slash. Cloud Storage object names can contain multiple consecutive slash ("/") characters. However, BigQuery converts multiple consecutive slashes into a single slash. For example, the following resource path, though valid in Cloud Storage, does not work in BigQuery: gs://bucket/my//object//name.

To retrieve the Cloud Storage resource path:

  1. Open the Cloud Storage console.

    Cloud Storage console

  2. Browse to the location of the object (file) that contains the source data.

  3. Click on the name of the object.

    The Object details page opens.

  4. Copy the value provided in the gsutil URI field, which begins with gs://.

Note: You can also use the gcloud storage ls command to list buckets or objects.

Wildcard support for Cloud Storage URIs

If your data is separated into multiple files, you can use an asterisk (*) wildcard to select multiple files. Use of the asterisk wildcard must follow these rules:

Examples:

When using the bq command-line tool, you might need to escape the asterisk on some platforms.

You can't use an asterisk wildcard when you create external tables linked to Datastore or Firestore exports.

Pricing

The following Cloud Storage retrieval and data transfer fees apply to BigQuery requests:

Limitations

For information about limitations that apply to external tables, see External table limitations.

What's next


