GoogleSQL data definition language | Spanner


Use the GoogleSQL data definition language (DDL) to create, alter, and drop the objects in a Spanner database schema, such as tables, indexes, views, change streams, sequences, and roles.

Notation

The syntax notation in this reference uses square brackets [ ] for optional items, curly braces { } for a set of alternatives, a vertical bar | to separate alternatives, and an ellipsis ... to indicate that the preceding item can be repeated.

Reserved keywords

Some words have special meaning in the GoogleSQL language and are reserved in its DDL. To use a reserved keyword as an identifier in your schema, enclose it in backticks (`). For the full list of reserved keywords in GoogleSQL, see GoogleSQL lexical structure and syntax.

For example:

CREATE TABLE MyTable (
  RowId INT64 NOT NULL PRIMARY KEY,
  `Order` INT64
);
Names

The following rules apply to database IDs: they must match the pattern shown under CREATE DATABASE, starting with a lowercase letter, containing only lowercase letters, numbers, underscores, and hyphens, and ending with a lowercase letter or number.

The following rules apply to names for schemas, tables, change streams, columns, constraints, indexes, roles, sequences, and views: they must start with a letter and can contain letters, numbers, and underscores, as in the name patterns shown under CREATE INDEX.

Data types

The following are the data types used in GoogleSQL.

Scalars

The syntax for using a scalar type in DDL is:

{
  BOOL
  | INT64
  | FLOAT32
  | FLOAT64
  | NUMERIC
  | STRING( length )
  | JSON
  | BYTES( length )
  | DATE
  | TIMESTAMP
}

length:
    { int64_value | MAX }

int64_value:
    { decimal_value | hex_value }

decimal_value:
    [-]0—9+

hex_value:
    [-]0x{0—9|a—f|A—F}+

An int64_value must correspond to an integer from -9,223,372,036,854,775,808 (-2^63) to 9,223,372,036,854,775,807 (2^63 - 1). It can be specified with decimal or hexadecimal notation. The hexadecimal form requires a 0x prefix, with a lowercase x.

STRING

STRING is a variable length Unicode character string. Its value must be a valid Unicode string. Length is required, and represents the maximum number of Unicode characters (not bytes) that can be stored in the field.

Notes:

JSON

JSON is a variable length Unicode character string representing a JSON object. The string must be UTF-8 encoded on receipt at the server. The maximum length of the JSON value is 10 MB.

For more information, see Working with JSON and Data types.

BYTES

BYTES is a variable length binary string. Length is required, and represents the maximum number of bytes that can be stored in the field.
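
For example, the following sketch shows how STRING and BYTES lengths are declared, including the MAX keyword; the Users table and its columns are illustrative:

CREATE TABLE Users (
  UserId INT64 NOT NULL PRIMARY KEY,
  Name   STRING(100),   -- at most 100 Unicode characters
  Bio    STRING(MAX),
  Avatar BYTES(MAX)
);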

Notes:

DATE

TIMESTAMP

Arrays

The syntax for using the ARRAY type in DDL is:

ARRAY<scalar_type>

GoogleSQL supports arrays of scalars. The primary purpose of arrays is to store a collection of values in a space efficient way. Arrays are not designed to provide access to individual elements; to read or write a single element, you must read or write the entire array.

If your application uses data structures like vectors or repeated fields, you can persist their state in a GoogleSQL array.

Here's an example of an alternate definition of Singers that uses multiple columns of ARRAY type:

CREATE TABLE Singers (
  SingerId INT64,
  FeaturedSingerIds ARRAY<INT64>,
  SongNames ARRAY<STRING(MAX)>
) PRIMARY KEY (SingerId) ...;

Notes:

Protocol buffers

The syntax for using the protocol buffers (PROTO) data type in DDL is:

proto_type_name;

GoogleSQL supports PROTO and arrays of PROTO. Protocol buffers are a flexible, efficient mechanism for serializing structured data. For more information, see Work with protocol buffers in GoogleSQL.

The following is an example of a table named Singers with a SingerInfo proto message column and a SingerInfoArray proto message array column:

CREATE TABLE Singers (
  SingerId        INT64 NOT NULL PRIMARY KEY,
  FirstName       STRING(1024),
  LastName        STRING(1024),
  SingerInfo      googlesql.example.SingerInfo,
  SingerInfoArray ARRAY<googlesql.example.SingerInfo>
);

It has the following definition of the SingerInfo proto type:

package googlesql.example;

message SingerInfo {
  optional string    nationality = 1;
  repeated Residence residence   = 2;

  message Residence {
    required int64  start_year   = 1;
    optional int64  end_year     = 2;
    optional string city         = 3;
    optional string country      = 4;
  }
}
SCHEMA statements

This section has information about the CREATE SCHEMA and DROP SCHEMA statements.

CREATE SCHEMA

Creates a new schema and assigns a name.

CREATE SCHEMA [schema_name]
Parameters

schema_name
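
For example, the following sketch creates a named schema and a table inside it; the schema and table names are illustrative, and objects in a named schema are referenced with the schema name as a prefix:

CREATE SCHEMA Analytics;

CREATE TABLE Analytics.PageViews (
  ViewId INT64 NOT NULL PRIMARY KEY,
  Url    STRING(MAX)
);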

DROP SCHEMA

Removes a named schema.

DROP SCHEMA schema_name
Parameters

schema_name

Parameters

schema_name

DATABASE statements

This section has information about the CREATE DATABASE and ALTER DATABASE statements.

CREATE DATABASE

When creating a GoogleSQL database, you must provide a CREATE DATABASE statement, which defines the ID of the database:

CREATE DATABASE database_id

where database_id
    {a—z}[{a—z|0—9|_|-}+]{a—z|0—9}
Parameters

database_id
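
For example, the following statement creates a database whose ID matches the pattern shown above; the ID is illustrative:

CREATE DATABASE example-db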

ALTER DATABASE

Changes the definition of a database.

Syntax
ALTER DATABASE database_id
    action

where database_id is:
    {a—z}[{a—z|0—9|_|-}+]{a—z|0—9}

and action is:
    SET OPTIONS ( options_def [, ... ] )

and options_def is:
    { default_leader = { 'region' | null } |
      optimizer_version = { 1 ... 8 | null } |
      optimizer_statistics_package = { 'package_name' | null } |
      version_retention_period = { 'duration' | null } |
      default_sequence_kind = { 'bit_reversed_positive' | null } |
      default_time_zone = { 'time_zone_name' | null } }

Description

ALTER DATABASE changes the definition of an existing database.

SET OPTIONS

Parameters

database_id

options_def
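
For example, the following minimal sketch changes the version retention period of a database; the database ID and option value are illustrative:

ALTER DATABASE example-db
    SET OPTIONS ( version_retention_period = '3d' );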

LOCALITY GROUP statements

This section has information about the CREATE LOCALITY GROUP, ALTER LOCALITY GROUP, and DROP LOCALITY GROUP statements.

CREATE LOCALITY GROUP

Use the CREATE LOCALITY GROUP statement to define a locality group to store some columns separately or to use tiered storage. For more information, see Locality groups and Tiered storage overview.

Syntax
CREATE LOCALITY GROUP locality_group_name [ storage_def ]

where storage_def is:
    { OPTIONS ( storage = '{ ssd | hdd }' [, ssd_to_hdd_spill_timespan='duration' ] ) }
Description

CREATE LOCALITY GROUP defines a new locality group in the current database.

Parameters

locality_group_name

OPTIONS
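
For example, the following sketch creates a locality group that keeps data on SSD storage for 10 days before spilling it to HDD; the locality group name and duration are illustrative:

CREATE LOCALITY GROUP ssd_then_hdd
    OPTIONS ( storage = 'ssd', ssd_to_hdd_spill_timespan = '10d' );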

ALTER LOCALITY GROUP

Use the ALTER LOCALITY GROUP statement to change the storage option or age-based policy of a locality group.

Syntax
ALTER LOCALITY GROUP locality_group_name [ storage_def ]

where storage_def is:
    { SET OPTIONS ( [ storage = '{ ssd | hdd }' ] [, ssd_to_hdd_spill_timespan = 'duration' ] ) }
Description

ALTER LOCALITY GROUP changes the storage option or age-based policy of a locality group. You can change these options together or individually.

Parameters

locality_group_name

OPTIONS
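
For example, the following sketch, using the locality group from the previous example, changes the age-based spill policy:

ALTER LOCALITY GROUP ssd_then_hdd
    SET OPTIONS ( ssd_to_hdd_spill_timespan = '30d' );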

DROP LOCALITY GROUP

Use the DROP LOCALITY GROUP statement to drop the locality group. You can't drop a locality group if it contains data. You must first move all data that's in the locality group to another locality group.

Syntax
DROP LOCALITY GROUP locality_group_name
Description

DROP LOCALITY GROUP drops the locality group.

PLACEMENT statements

Preview — Geo-partitioning

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

This section has information about PLACEMENT statements.

CREATE PLACEMENT

Use the CREATE PLACEMENT statement to define a placement to partition row data in your database. For more information, see the Geo-partitioning overview.

Syntax
CREATE PLACEMENT placement_name [ partition_def ]

where partition_def is:
    { OPTIONS ( instance_partition="partition_id" [, default_leader="leader_region_id" ] ) }
Description

CREATE PLACEMENT defines a new placement in the current database.

Parameters

placement_name

partition_id

leader_region_id
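
For example, the following sketch creates a placement backed by a specific instance partition; the placement name, instance partition ID, and leader region are illustrative:

CREATE PLACEMENT EuropePlacement
    OPTIONS ( instance_partition = "europe-partition", default_leader = "europe-west1" );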

DROP PLACEMENT

Use the DROP PLACEMENT statement to delete a placement.

Syntax
DROP PLACEMENT placement_name
Description

DROP PLACEMENT drops a placement.

Parameters

placement_name

PROTO BUNDLE statements

The PROTO files you create need to be loaded into your database schema using PROTO BUNDLE, making the PROTO files available for use by tables and queries keyed by PROTO and ENUM fields.

CREATE PROTO BUNDLE

Use the CREATE PROTO BUNDLE statement to load types available from imported proto files into the schema.

Syntax
CREATE PROTO BUNDLE ("
                      (<proto_type_name>) ("," <proto_type_name>)*
                    ")
Description

CREATE PROTO BUNDLE loads types available from imported proto files.

Parameters

proto_type_name

Notes:
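
For example, the following sketch, using the SingerInfo message defined earlier on this page, loads a message type and one of its nested types into the schema:

CREATE PROTO BUNDLE (
    googlesql.example.SingerInfo,
    googlesql.example.SingerInfo.Residence
)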

ALTER PROTO BUNDLE

The ALTER PROTO BUNDLE statement is used to update the proto information stored in the schema.

Syntax
ALTER PROTO BUNDLE
    [ INSERT ( proto_type_name [, ... ] ) ]
    [ UPDATE ( proto_type_name [, ... ] ) ]
    [ DELETE ( proto_type_name [, ... ] ) ]
Description

ALTER PROTO BUNDLE updates the proto information already stored in the schema.

Parameters

proto_type_name

Notes:
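
For example, the following sketch adds one type to the bundle and refreshes another; googlesql.example.AlbumInfo is a hypothetical type assumed to exist in the imported proto files:

ALTER PROTO BUNDLE
    INSERT ( googlesql.example.AlbumInfo )
    UPDATE ( googlesql.example.SingerInfo )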

DROP PROTO BUNDLE

The DROP PROTO BUNDLE statement is used to drop all proto type information stored in the schema.

Syntax
DROP PROTO BUNDLE
Description

DROP PROTO BUNDLE drops all proto type information stored in the schema.

Notes:

TABLE statements

This section has information about the CREATE TABLE, ALTER TABLE, DROP TABLE, and RENAME TABLE statements.

CREATE TABLE

Defines a new table.

Syntax
CREATE TABLE [ IF NOT EXISTS ] table_name ( [
   { column_name data_type [NOT NULL]
     [ { DEFAULT ( expression ) | AS ( expression ) [ STORED ]
       | GENERATED BY DEFAULT AS IDENTITY [ ( sequence_option_clause ... ) ]
       | AUTO_INCREMENT } ]
     [ PRIMARY KEY ]
     [ options_def ]
   | location_name STRING(MAX) NOT NULL PLACEMENT KEY
   | table_constraint
   | synonym_definition }
   [, ... ]
] ) [ PRIMARY KEY ( [column_name [ { ASC | DESC } ], ...] ) ]
[, INTERLEAVE IN [PARENT] table_name [ ON DELETE { CASCADE | NO ACTION } ] ]
[, ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ) ) ]
[, OPTIONS ( locality_group = 'locality_group_name' ) ]

where data_type is:
    { scalar_type | array_type | proto_type_name }

and options_def is:
    { OPTIONS ( allow_commit_timestamp = { true | null } |
                locality_group = 'locality_group_name' ) }

and table_constraint is:
    [ CONSTRAINT constraint_name ]
    { CHECK ( expression ) |
      FOREIGN KEY ( column_name [, ... ] ) REFERENCES  ref_table  ( ref_column [, ... ] )
        [ ON DELETE { CASCADE | NO ACTION } ] [ { ENFORCED | NOT ENFORCED } ]
    }

and synonym_definition is:
    [ SYNONYM (synonym) ]

and sequence_option_clause is:
    { BIT_REVERSED_POSITIVE
    | SKIP RANGE skip_range_min, skip_range_max
    | START COUNTER WITH start_with_counter }
Description

CREATE TABLE defines a new table in the current database.

Parameters

IF NOT EXISTS

table_name

column_name

data_type

timestamp_column

num_days

NOT NULL

DEFAULT (expression)

GENERATED BY DEFAULT AS IDENTITY [ ( sequence_option_clause ... )]

AS (expression) [STORED]

For examples on how to work with generated columns, see Creating and managing generated columns.

AUTO_INCREMENT

For examples of how to work with AUTO_INCREMENT, see Primary key default values management.

location_name STRING(MAX) NOT NULL PLACEMENT KEY

PRIMARY KEY in column definition or PRIMARY KEY ( [ column_name [ { ASC | DESC } ], ... ] ) in table definition

[, INTERLEAVE IN PARENT table_name [ ON DELETE { CASCADE | NO ACTION } ] ]

    For more details, see Schema and data model.

INTERLEAVE IN parent_table_name

CONSTRAINT constraint_name

CHECK (expression)

FOREIGN KEY ( column_name [, ... ] ) REFERENCES ref_table ( ref_column [, ... ] ) [ ON DELETE { CASCADE | NO ACTION } ] [ { ENFORCED | NOT ENFORCED } ]

    For more information, see Foreign keys.

OPTIONS ( allow_commit_timestamp = { true | null } )

[, ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ) ) ]

OPTIONS ( locality_group = 'locality_group_name' )

SYNONYM (synonym)
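
For example, the following sketch, which assumes the Singers table shown earlier on this page, creates an interleaved child table with a commit timestamp column:

CREATE TABLE Albums (
  SingerId  INT64 NOT NULL,
  AlbumId   INT64 NOT NULL,
  Title     STRING(MAX),
  CreatedAt TIMESTAMP OPTIONS ( allow_commit_timestamp = true )
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;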

ALTER TABLE

Changes the definition of a table.

Syntax
ALTER TABLE table_name
    action

where action is:
    ADD SYNONYM synonym
    DROP SYNONYM synonym
    RENAME TO new_table_name [, ADD SYNONYM synonym]
    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ column_expression ] [ options_def ]
    DROP [ COLUMN ] column_name
    ADD table_constraint
    DROP CONSTRAINT constraint_name
    SET ON DELETE { CASCADE | NO ACTION }
    SET INTERLEAVE IN [ PARENT ] parent_table_name [ ON DELETE { CASCADE | NO ACTION } ]
    ALTER [ COLUMN ] column_name
      {
        data_type  [ NOT NULL ] [ DEFAULT ( expression )
        | AS ( expression )
        | GENERATED BY DEFAULT AS IDENTITY [ ( sequence_option_clause ... ) ] ]
        | SET OPTIONS ( options_def )
        | SET DEFAULT ( expression )
        | DROP DEFAULT
        | ALTER IDENTITY
          {
            SET { SKIP RANGE skip_range_min, skip_range_max | NO SKIP RANGE }
            | RESTART COUNTER WITH counter_restart
          }
      }
    ADD ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ))
    DROP ROW DELETION POLICY
    REPLACE ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ))
    OPTIONS ( locality_group = 'locality_group_name' )

and data_type is:
    { scalar_type | array_type }

and column_expression is:
    [ NOT NULL ] [ { DEFAULT ( expression ) | AS ( expression ) STORED
    | GENERATED BY DEFAULT AS IDENTITY [ ( sequence_option_clause ) ] } ]

and options_def is:
    allow_commit_timestamp = { true | null } |
    locality_group = 'locality_group_name'

and table_constraint is:
    [ CONSTRAINT constraint_name ]
    { CHECK ( expression ) |
      FOREIGN KEY ( column_name [, ... ] ) REFERENCES ref_table ( ref_column [, ... ] )
        [ ON DELETE { CASCADE | NO ACTION } ] [ { ENFORCED | NOT ENFORCED } ]
    }
Description

ALTER TABLE changes the definition of an existing table.

ADD SYNONYM synonym

DROP SYNONYM synonym

RENAME TO new_table_name

RENAME TO new_table_name [, ADD SYNONYM synonym]

ADD COLUMN

DROP COLUMN

ADD table_constraint

DROP CONSTRAINT constraint_name

SET ON DELETE { CASCADE | NO ACTION }

SET INTERLEAVE IN [ PARENT ] parent_table_name [ ON DELETE { CASCADE | NO ACTION } ]

ALTER COLUMN

ADD ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ) )

DROP ROW DELETION POLICY

REPLACE ROW DELETION POLICY ( OLDER_THAN ( timestamp_column, INTERVAL num_days DAY ) )

SET OPTIONS ( locality_group = 'locality_group_name' )

Parameters

table_name

column_name

data_type

options_def

table_constraint

constraint_name

ref_table

ref_column
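
For example, the following sketch, using the hypothetical Albums table from the CREATE TABLE example above, adds a column and a check constraint:

ALTER TABLE Albums ADD COLUMN IF NOT EXISTS ReleaseDate DATE;

ALTER TABLE Albums ADD CONSTRAINT TitleNotEmpty CHECK ( Title != '' );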

DROP TABLE

Removes a table.

Syntax
DROP TABLE [ IF EXISTS ] table_name
Description

Use the DROP TABLE statement to remove a table from the database.

Parameters

IF EXISTS

table_name

RENAME TABLE

Renames a table or multiple tables at once.

Syntax
RENAME TABLE old_table_name TO new_table_name ...
   [, old_table_name2 TO new_table_name2 ...]
Description

Renames a table or multiple tables simultaneously, for example, if the table name is misspelled. For more information, see Rename a table.

Parameters

old_table_name

new_table_name

Example

This example shows how to change the names of multiple tables atomically.

RENAME TABLE Singers TO Artists, Albums TO Recordings;
INDEX statements

This section has information about the CREATE INDEX, ALTER INDEX, and DROP INDEX statements.

CREATE INDEX

Use the CREATE INDEX statement to define secondary indexes.

Syntax
CREATE [ UNIQUE ] [ NULL_FILTERED ] INDEX [ IF NOT EXISTS ] index_name
ON table_name ( key_part [, ...] ) [ storing_clause ] [ where_clause ] [ , interleave_clause ]
[ OPTIONS ( locality_group = 'locality_group_name' ) ]

where index_name is:
    {a—z|A—Z}[{a—z|A—Z|0—9|_}+]

and key_part is:
    column_name [ { ASC | DESC } ]

and storing_clause is:
    STORING ( column_name [, ...] )

and where_clause is:
    WHERE column_name IS NOT NULL [AND ...]

and interleave_clause is:
    INTERLEAVE IN table_name

Description

Spanner automatically indexes the primary key columns of each table.

You can use CREATE INDEX to create secondary indexes for other columns. Adding a secondary index on a column makes it more efficient to look up data in that column. For more details, see secondary indexes.

Note: Spanner has a hard size limit of 8 KB for the total size of a table key or an index key. To work around this limitation, you can create a stored generated column with expressions and an index column.

Parameters

UNIQUE

NULL_FILTERED

IF NOT EXISTS

index_name

table_name

WHERE IS NOT NULL

INTERLEAVE IN

DESC

STORING

[ OPTIONS ( locality_group = 'locality_group_name' ) ]
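
For example, the following sketch, based on the Singers table shown earlier, creates a plain secondary index and a null-filtered descending index that stores an extra column:

CREATE INDEX SingersByLastName ON Singers (LastName);

CREATE NULL_FILTERED INDEX SingersByFirstName
ON Singers (FirstName DESC) STORING (LastName);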

ALTER INDEX

Use the ALTER INDEX statement to add additional columns or remove stored columns from the secondary indexes.

Syntax
ALTER INDEX index_name {ADD|DROP} STORED COLUMN column_name
Description

Add an additional column into an index or remove a column from an index.

Parameters

index_name

column_name
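
For example, the following sketch, using the index from the previous example, adds a stored column to an existing index:

ALTER INDEX SingersByLastName ADD STORED COLUMN FirstName;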

DROP INDEX

Removes a secondary index.

Syntax
DROP INDEX [ IF EXISTS ] index_name
Description

Use the DROP INDEX statement to drop a secondary index.

Parameters

IF EXISTS

index_name

SEARCH INDEX statements

This section has information about the CREATE SEARCH INDEX, ALTER SEARCH INDEX, and DROP SEARCH INDEX statements.

CREATE SEARCH INDEX

Use the CREATE SEARCH INDEX statement to define search indexes. For more information, see Search indexes.

Syntax
CREATE SEARCH INDEX index_name
ON table_name ( token_column_list )
[ storing_clause ] [ partition_clause ]
[ orderby_clause ] [ where_clause ]
[ interleave_clause ] [ options_clause ]

where index_name is:
    {a—z|A—Z}[{a—z|A—Z|0—9|_}+]

and token_column_list is:
    column_name [, ...]

and storing_clause is:
    STORING ( column_name [, ...] )

and partition_clause is:
    PARTITION BY column_name [, ...]

and orderby_clause is:
    ORDER BY column_name [ {ASC | DESC} ] [, column_name [ {ASC | DESC} ]]

and where_clause is:
    WHERE column_name IS NOT NULL [AND ...]

and interleave_clause is:
    , INTERLEAVE IN table_name

and options_clause is:
    OPTIONS ( option_name=option_value [, ...] )

Description

You can use CREATE SEARCH INDEX to create search indexes for TOKENLIST columns. Adding a search index on a column makes it more efficient to search data in the source column of the TOKENLIST.

Parameters

index_name

table_name

token_column_list

STORING

PARTITION BY

ORDER BY

WHERE IS NOT NULL

INTERLEAVE IN

OPTIONS
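
For example, the following statement is a minimal sketch; it assumes an Albums table with a TOKENLIST column named TitleTokens (for instance, a generated column over a tokenization function such as TOKENIZE_FULLTEXT), which is outside the syntax shown here:

CREATE SEARCH INDEX AlbumsTitleIndex
ON Albums (TitleTokens)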

ALTER SEARCH INDEX

Use the ALTER SEARCH INDEX statement to add or remove columns from the search indexes.

Syntax
ALTER SEARCH INDEX index_name {ADD|DROP} [STORED] COLUMN column_name
Description

Add a TOKENLIST column into a search index or remove an existing TOKENLIST column from a search index. Use STORED COLUMN to add or remove stored columns from a search index.

Parameters

index_name

column_name
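
For example, the following sketch adds a column to the search index from the previous example; DescriptionTokens is a hypothetical TOKENLIST column:

ALTER SEARCH INDEX AlbumsTitleIndex ADD COLUMN DescriptionTokens;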

DROP SEARCH INDEX

Removes a search index.

Syntax
DROP SEARCH INDEX [ IF EXISTS ] index_name
Description

Use the DROP SEARCH INDEX statement to drop a search index.

Parameters

IF EXISTS

index_name

VIEW statements

This section has information about the CREATE VIEW, CREATE OR REPLACE VIEW, and DROP VIEW statements.

CREATE VIEW and CREATE OR REPLACE VIEW

Use the CREATE VIEW or CREATE OR REPLACE VIEW statement to define a view.

Syntax
{ CREATE VIEW | CREATE OR REPLACE VIEW } view_name
SQL SECURITY { INVOKER | DEFINER }
AS query

Description

CREATE VIEW defines a new view in the current database. If a view named view_name exists, the CREATE VIEW statement fails.

CREATE OR REPLACE VIEW defines a new view in the current database. If a view named view_name exists, its definition is replaced. Use this statement to replace the security type of a view.

Parameters

view_name

SQL SECURITY

AS query
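
For example, the following sketch, based on the Singers table shown earlier, defines an invoker's rights view:

CREATE OR REPLACE VIEW SingerNames
SQL SECURITY INVOKER
AS SELECT s.SingerId, s.FirstName, s.LastName FROM Singers AS s;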

DROP VIEW

Removes a view.

Syntax
DROP VIEW view_name
Description

Use the DROP VIEW statement to remove a view from the database.

Parameters

view_name

CHANGE STREAM statements

This section has information about the CREATE CHANGE STREAM, ALTER CHANGE STREAM, and DROP CHANGE STREAM statements.

CREATE CHANGE STREAM

Defines a new change stream.

Syntax
CREATE CHANGE STREAM change_stream_name
[ FOR { table_columns [, ... ] | ALL } ]
[ OPTIONS ( change_stream_option [, ... ] ) ]

where table_columns is:
    table_name [ ( [ column_name, ... ] ) ]

and change_stream_option is:
    { retention_period = 'duration' |
      value_capture_type = { 'OLD_AND_NEW_VALUES' | 'NEW_ROW' | 'NEW_VALUES' | 'NEW_ROW_AND_OLD_VALUES' } |
      exclude_ttl_deletes = { false | true } |
      exclude_insert = { false | true } |
      exclude_update = { false | true } |
      exclude_delete = { false | true } |
      allow_txn_exclusion = { false | true } }
Description

CREATE CHANGE STREAM defines a new change stream in the current database. For more information, see Create a change stream.

Parameters

change_stream_name

FOR { table_columns [, ... ] | ALL }

OPTIONS ( change_stream_option [, ... ] )
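
For example, the following sketch, based on the Singers table shown earlier, creates a change stream that watches two columns and keeps records for seven days:

CREATE CHANGE STREAM SingersNameStream
FOR Singers ( FirstName, LastName )
OPTIONS ( retention_period = '7d', value_capture_type = 'NEW_VALUES' );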

ALTER CHANGE STREAM

Changes the definition of a change stream.

Syntax
ALTER CHANGE STREAM change_stream_name
    action

where action is:
    { SET FOR { table_columns [, ... ] | ALL } |
      DROP FOR ALL |
      SET OPTIONS ( change_stream_option [, ... ] ) }

and table_columns is:
    table_name [ ( [ column_name, ... ] ) ]

and change_stream_option is:
    { retention_period = { 'duration' | null } |
      value_capture_type = { 'OLD_AND_NEW_VALUES' | 'NEW_ROW' | 'NEW_VALUES' | 'NEW_ROW_AND_OLD_VALUES' | null } |
      exclude_ttl_deletes = { false | true | null } |
      exclude_insert = { false | true | null } |
      exclude_update = { false | true | null } |
      exclude_delete = { false | true | null } |
      allow_txn_exclusion = { false | true | null } }
Description

ALTER CHANGE STREAM changes the definition of an existing change stream. For more information, see Modify a change stream.

Parameters

change_stream_name

SET FOR { table_columns [, ... ] | ALL }

DROP FOR ALL

SET OPTIONS
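
For example, the following sketch, using the change stream from the previous example, widens its scope to the whole database and then shortens its retention period:

ALTER CHANGE STREAM SingersNameStream SET FOR ALL;

ALTER CHANGE STREAM SingersNameStream
    SET OPTIONS ( retention_period = '36h' );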

DROP CHANGE STREAM

Removes a change stream.

Syntax
DROP CHANGE STREAM change_stream_name
Description

Use the DROP CHANGE STREAM statement to remove a change stream from the database and delete its data change records.

Parameters

change_stream_name

ROLE statements

This section has information about the CREATE ROLE and DROP ROLE statements.

CREATE ROLE

Defines a new database role.

Syntax
CREATE ROLE database_role_name
Description

CREATE ROLE defines a new database role. Database roles are collections of fine-grained access control privileges. You can create only one role with this statement.

Parameters

database_role_name

Example

This example creates the database role hr_manager.

CREATE ROLE hr_manager

DROP ROLE

Drops a database role.

Syntax
DROP ROLE database_role_name
Description

DROP ROLE drops a database role. You can drop only one role with this statement.

You can't drop a database role if it has any privileges granted to it. All privileges granted to a database role must be revoked before the role can be dropped. You can drop a database role whether or not access to it is granted to IAM principals.

Dropping a role automatically revokes its membership in other roles and revokes the membership of its members.

You can't drop system roles.

Parameters

database_role_name

Example

This example drops the database role hr_manager.

DROP ROLE hr_manager

GRANT and REVOKE statements

This section has information about the GRANT and REVOKE statements.

GRANT

Grants privileges that allow database roles to access database objects.

Syntax
GRANT { SELECT | INSERT | UPDATE | DELETE } ON TABLE table_list TO ROLE role_list

GRANT { SELECT | INSERT | UPDATE }(column_list)
   ON TABLE table_list | ON ALL TABLES IN SCHEMA schema_name [, ...]
   TO ROLE role_list

GRANT SELECT
    ON CHANGE STREAM change_stream_list
        | ON ALL CHANGE STREAMS IN SCHEMA schema_name [, ...]
    TO ROLE role_list

GRANT SELECT ON VIEW view_list | ON ALL VIEWS IN SCHEMA schema_name [, ...]
    TO ROLE role_list

GRANT EXECUTE ON TABLE FUNCTION function_list
    TO ROLE role_list

GRANT ROLE role_list
    TO ROLE role_list

GRANT USAGE ON SCHEMA [DEFAULT | schema_name_list] TO ROLE role_list

where table_list is:
      table_name [, ...]

and column_list is:
    column_name [,...]

and view_list is:
    view_name [, ...]

and change_stream_list is:
    change_stream_name [, ...]

and function_list is:
    change_stream_read_function_name [, ...]

and schema_name_list is:
    schema_name [, ...]

and role_list is:
    database_role_name [, ...]
Description

For fine-grained access control, grants privileges on one or more tables, views, change streams, or change stream read functions to database roles. Also grants database roles to other database roles to create a database role hierarchy with inheritance. When granting SELECT, INSERT, or UPDATE on a table, optionally grants privileges on only a subset of table columns.

Parameters

table_name

column_name

view_name

change_stream_name

change_stream_read_function_name

schema_name

database_role_name

Notes and restrictions

Examples

The following example grants SELECT on the employees table to the hr_rep role. Grantees of the hr_rep role can read all columns of employees.

GRANT SELECT ON TABLE employees TO ROLE hr_rep;

The next example grants SELECT on a subset of columns of the contractors table to the hr_rep role. Grantees of the hr_rep role can read only the named columns.

GRANT SELECT(name, address, phone) ON TABLE contractors TO ROLE hr_rep;

The next example mixes table-level and column-level grants. hr_manager can read all table columns, but can update only the location column.

GRANT SELECT, UPDATE(location) ON TABLE employees TO ROLE hr_manager;

The next example makes column-level grants on two tables. Both tables must contain the name, level, and location columns.

GRANT SELECT(name, level, location), UPDATE(location) ON TABLE employees, contractors TO ROLE hr_manager;

The next example grants INSERT on a subset of columns of the employees table.

GRANT INSERT(name, cost_center, location, manager) ON TABLE employees TO ROLE hr_manager;

The next example grants the database role pii_access to the roles hr_manager and hr_director. The hr_manager and hr_director roles are members of pii_access and inherit the privileges that were granted to pii_access. For more information, see Database role hierarchies and inheritance.

GRANT ROLE pii_access TO ROLE hr_manager, hr_director;
REVOKE

Revokes privileges that allow database roles access to database objects.

Syntax
REVOKE { SELECT | INSERT | UPDATE | DELETE } ON TABLE table_list FROM ROLE role_list

REVOKE { SELECT | INSERT | UPDATE }(column_list) ON TABLE table_list FROM ROLE role_list

REVOKE SELECT ON VIEW view_list FROM ROLE role_list

REVOKE SELECT ON CHANGE STREAM change_stream_list FROM ROLE role_list

REVOKE EXECUTE ON TABLE FUNCTION function_list FROM ROLE role_list

REVOKE ROLE role_list FROM ROLE role_list

and table_list is:
    table_name [, ...]

and column_list is:
    column_name [,...]

and view_list is:
    view_name [, ...]

and change_stream_list is:
    change_stream_name [, ...]

and function_list is:
    change_stream_read_function_name [, ...]

and role_list is:
    database_role_name [, ...]
Description

For fine-grained access control, revokes privileges on one or more tables, views, change streams, or change stream read functions from database roles. Also revokes database roles from other database roles. When revoking SELECT, INSERT, or UPDATE on a table, optionally revokes privileges on only a subset of table columns.

Parameters

table_name

column_name

view_name

change_stream_name

change_stream_read_function_name

database_role_name

Notes and restrictions

Examples

The following example revokes SELECT on the employees table from the role hr_rep.

REVOKE SELECT ON TABLE employees FROM ROLE hr_rep;

The next example revokes SELECT on a subset of columns of the contractors table from the role hr_rep.

REVOKE SELECT(name, address, phone) ON TABLE contractors FROM ROLE hr_rep;

The next example shows revoking both table-level and column-level privileges in a single statement.

REVOKE SELECT, UPDATE(location) ON TABLE employees FROM ROLE hr_manager;

The next example revokes column-level grants on two tables. Both tables must contain the name, level, and location columns.

REVOKE SELECT(name, level, location), UPDATE(location) ON TABLE employees, contractors FROM ROLE hr_manager;

The next example revokes INSERT on a subset of columns.

REVOKE INSERT(name, cost_center, location, manager) ON TABLE employees FROM ROLE hr_manager;

The following example revokes the database role pii_access from the hr_manager and hr_director database roles. The hr_manager and hr_director roles lose any privileges that they inherited from pii_access.

REVOKE ROLE pii_access FROM ROLE hr_manager, hr_director;

SEQUENCE statements

This section has information about the CREATE SEQUENCE, ALTER SEQUENCE, and DROP SEQUENCE statements.

CREATE SEQUENCE

Creates a sequence object.

Syntax
CREATE SEQUENCE
    [ IF NOT EXISTS ] sequence_name
    [ sequence_option_clause ... ]
    [ OPTIONS ( sequence_options ) ]

where sequence_option_clause is:
    BIT_REVERSED_POSITIVE
    | SKIP RANGE skip_range_min, skip_range_max
    | START COUNTER WITH start_with_counter
Description

When you use a CREATE SEQUENCE statement, Spanner creates a schema object that you can poll for values using the GET_NEXT_SEQUENCE_VALUE function.

Parameters

IF NOT EXISTS

sequence_name

OPTIONS (sequence_options)

Examples
# Create a positive bit-reversed sequence to use in a primary key.

CREATE SEQUENCE MySequence OPTIONS (
    sequence_kind='bit_reversed_positive',
    skip_range_min = 1,
    skip_range_max = 1000,
    start_with_counter = 50);

# Create a table that uses the sequence for a key column.
CREATE TABLE Singers (
  SingerId INT64 DEFAULT (GET_NEXT_SEQUENCE_VALUE(SEQUENCE MySequence)),
  FirstName  STRING(1024),
  LastName   STRING(1024),
  SingerInfo googlesql.example.SingerInfo,
  BirthDate  DATE
) PRIMARY KEY (SingerId);

Use the following SQL to query information about sequences.

SELECT * FROM information_schema.sequences;
SELECT * FROM information_schema.sequence_options;
ALTER SEQUENCE

Makes changes to the sequence object.

Syntax
ALTER SEQUENCE sequence_name
    { SET OPTIONS sequence_options | sequence_option_clause ...  }

where sequence_option_clause is:
    { { SKIP RANGE skip_range_min, skip_range_max | NO SKIP RANGE }
      | RESTART COUNTER WITH counter_restart }
Description

ALTER SEQUENCE makes changes to the specified sequence schema object. Executing this statement doesn't affect values the sequence already generated. If the ALTER SEQUENCE statement doesn't include an option, the current value of the option remains the same.

Parameters

sequence_name

SET OPTIONS (sequence_options)

Examples
# Alter the sequence to include a skipped range. This is useful when you are
# migrating from a regular sequence with sequential data
ALTER SEQUENCE MySequence
SET OPTIONS (skip_range_min=1, skip_range_max=1234567);
DROP SEQUENCE

Drops a specific sequence.

Syntax
DROP SEQUENCE [IF EXISTS] sequence_name
Description

DROP SEQUENCE drops a specific sequence. Spanner can't drop a sequence if its name appears in a sequence function that is used in a column default value or a view.

Parameters

sequence_name

STATISTICS statements

This section has information about the ALTER STATISTICS and ANALYZE statements.

ALTER STATISTICS

Changes the definition of a query optimizer statistics package.

Syntax
ALTER STATISTICS package_name
    action

where package_name is:
    {a—z}[{a—z|0—9|_|-}+]{a—z|0—9}

and action is:
    SET OPTIONS ( options_def )

and options_def is:
    { allow_gc = { true | false } }
Description

ALTER STATISTICS changes the definition of a query optimizer statistics package.

SET OPTIONS

Parameters

package_name

options_def
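
For example, the following statement is a sketch that pins a statistics package by exempting it from garbage collection; the package name is illustrative, since statistics package names are generated by Spanner:

ALTER STATISTICS auto_20250809_12_34_56utc
    SET OPTIONS ( allow_gc = false );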

ANALYZE

Start a new query optimizer statistics package construction.

Syntax
ANALYZE
Description

ANALYZE starts a new query optimizer statistics package construction.

MODEL statements

This section has information about the CREATE MODEL, ALTER MODEL, and DROP MODEL statements.

CREATE MODEL and CREATE OR REPLACE MODEL

Use the CREATE MODEL or CREATE OR REPLACE MODEL statement to define an ML model.

Note: Spanner Vertex AI integration supports only classifier, regression, and text ML models.

Syntax
{ CREATE MODEL | CREATE OR REPLACE MODEL | CREATE MODEL IF NOT EXISTS } model_name
[INPUT ( column_list ) OUTPUT ( column_list )]
REMOTE
[OPTIONS ( model_options )]

where column_list is:
   { column_name data_type [OPTIONS ( model_column_options )] [, ... ] }

and model_column_options is:
    {
      required = { true | false }
    }

and model_options is:
    {
      endpoint = '{endpoint_address}',
      endpoints = [ '{endpoint_address}' [, ...] ],
      default_batch_size = int64_value
    }
Description

CREATE MODEL registers a reference to the Vertex AI ML model in the current database. If a model named model_name already exists, the CREATE MODEL statement fails.

CREATE OR REPLACE MODEL registers a reference to the Vertex AI ML model in the current database. If a model named model_name already exists, its definition is replaced.

CREATE MODEL IF NOT EXISTS registers a reference to the Vertex AI ML model in the current database. If a model named model_name already exists, the CREATE MODEL IF NOT EXISTS statement does not have any effect and no error is generated.

After the model reference is registered in a database, it can be used in queries that call the ML.PREDICT function.

Model registration doesn't copy the model from Vertex AI into the database; it only creates a reference to the model's endpoint hosted in Vertex AI. If the model's endpoint is removed from Vertex AI, Spanner queries that reference the model fail.

Model endpoint access control

To be able to access a registered Vertex AI model endpoint from Spanner, you need to grant access permission to Spanner's service agent account.

Spanner creates the service agent and grants the necessary permissions when Spanner executes the first MODEL DDL statement. If both the Spanner database and the Vertex AI endpoint are in the same project, then no additional setup is required.

If the Spanner service agent account doesn't exist for your Spanner project, create it by running the following command:

gcloud beta services identity create --service=spanner.googleapis.com --project={PROJECT}

Follow the steps described in the following tutorial to grant the Spanner API Service Agent role to the Spanner service agent account service-{PROJECT}@gcp-sa-spanner.iam.gserviceaccount.com on your Vertex AI project.

Parameters

model_name

INPUT ( column_list ) OUTPUT ( column_list )

model_column_options

model_options
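
For example, the following statement is a sketch; the model name, column lists, and endpoint path placeholders (PROJECT_ID, LOCATION_ID, ENDPOINT_ID) are illustrative and must match the schema of the Vertex AI endpoint you registered:

CREATE MODEL IF NOT EXISTS MyClassifier
INPUT ( length FLOAT64, material STRING(MAX) )
OUTPUT ( scores ARRAY<FLOAT64>, classes ARRAY<STRING(MAX)> )
REMOTE
OPTIONS (
  endpoint = '//aiplatform.googleapis.com/projects/PROJECT_ID/locations/LOCATION_ID/endpoints/ENDPOINT_ID'
);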

ALTER MODEL

Changes the definition of a model.

Note: Spanner Vertex AI integration supports only classifier, regression, and text ML models.

Syntax
ALTER MODEL [ IF EXISTS ] model_name
SET OPTIONS ( model_options )

where model_options is:
    {
      endpoint = '{endpoint_address}',
      endpoints = [ '{endpoint_address}' [, ...] ],
      default_batch_size = int64_value
    }
Description

ALTER MODEL changes the definition of an existing model.

Parameters

model_name

SET OPTIONS
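
For example, the following sketch, using the hypothetical model from the CREATE MODEL example, changes the default batch size used when the model is called from queries:

ALTER MODEL MyClassifier
SET OPTIONS ( default_batch_size = 32 );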

DROP MODEL

Removes a model.

Syntax
DROP MODEL [ IF EXISTS ] model_name
Description

Use the DROP MODEL statement to remove a model definition from the database. Unless the IF EXISTS clause is specified, the statement fails if the model doesn't exist.

After you delete a model definition, all SQL queries referencing the deleted model fail. Dropping a model definition doesn't affect the underlying Vertex AI endpoint that the model is attached to.

Parameters

model_name

VECTOR INDEX statements

If you have a table with a large amount of vector data, you can use a vector index to perform similarity searches and nearest neighbor queries efficiently, with the trade-off of reduced recall and more approximate results.

CREATE VECTOR INDEX

Creates a new vector index on a column of a table.

Syntax
CREATE VECTOR INDEX [ IF NOT EXISTS ] index_name
ON table_name(column_name)
[ STORING ( column_name [, ...] ) ]
[ WHERE column_name IS NOT NULL ]
OPTIONS(index_option_list)
Parameters

IF NOT EXISTS

index_name

table_name

column_name

WHERE IS NOT NULL

STORING

index_option_list

Description

You can only create a vector index on a vector embedding column of a table, such as the ARRAY<FLOAT32> columns shown in the examples that follow.

index_option_list

The index option list specifies options for the vector index. Spanner creates tree-based vector indexes which use a tree-like structure to partition vector data. Using index_option_list, you can define the specific distance metric and search tree specification used to create the vector index. Specify the options in the following format: NAME=VALUE, ....

The following index options are supported:

distance_type (STRING): Required. The distance metric used to build the vector index. This value can be COSINE, DOT_PRODUCT, or EUCLIDEAN.

tree_depth (INT): The tree depth (level). This value can be either 2 or 3. A tree with 2 levels only has leaves (num_leaves) as nodes. If the dataset has more than 100 million rows, then you can use a tree with 3 levels and add branches (num_branches) to further partition the dataset.

num_leaves (INT): The number of leaves (that is, potential partitions) for the vector data. You can designate num_leaves for trees with 2 or 3 levels. We recommend setting the number of leaves to number_of_rows_in_dataset / 1000.

num_branches (INT): The number of branches used to further partition the vector data. You can only designate num_branches for trees with 3 levels. The number of branches must be fewer than the number of leaves. We recommend setting the number of branches to between 1000 and sqrt(number_of_rows_in_dataset).

Examples

The following example creates a vector index Singer_vector_index on the embedding column of the Singers table and defines the distance type:

CREATE TABLE Singers(id INT64, genre STRING(MAX), embedding ARRAY<FLOAT32>(vector_length=>128))
PRIMARY KEY(id);

CREATE VECTOR INDEX Singer_vector_index ON Singers(embedding)
STORING (genre)
WHERE embedding IS NOT NULL
OPTIONS(distance_type = 'COSINE');

The following example creates a vector index Singer_vector_index on the embedding column of the Singers table and defines the distance type and search tree specifications, which are optional:

CREATE TABLE Singers(id INT64, genre STRING(MAX), embedding ARRAY<FLOAT32>(vector_length=>128))
PRIMARY KEY(id);

CREATE VECTOR INDEX Singer_vector_index ON Singers(embedding)
STORING (genre)
WHERE embedding IS NOT NULL
OPTIONS(distance_type = 'COSINE', tree_depth = 3, num_branches = 1000, num_leaves = 1000000);
ALTER VECTOR INDEX statement

Use the ALTER VECTOR INDEX statement to add additional stored columns or remove stored columns from the vector index.

Syntax
ALTER VECTOR INDEX index_name
{ADD|DROP} STORED COLUMN column_name
Parameters

index_name

column_name

Description

Add an additional stored column into a vector index or remove a stored column from the index.

Examples

The following ALTER VECTOR INDEX statement modifies the vector index by removing the stored column genre:

ALTER VECTOR INDEX Singer_vector_index
DROP STORED COLUMN genre;
DROP VECTOR INDEX statement

Deletes a vector index on a table.

Syntax
DROP [ VECTOR ] INDEX index_name;
Parameters

Example

The following example deletes the vector index Singer_vector_index:

DROP VECTOR INDEX Singer_vector_index;

Last updated 2025-08-09 UTC.

[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-09 UTC."],[],[]]


RetroSearch is an open source project built by @garambo | Open a GitHub Issue

Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo

HTML: 3.2 | Encoding: UTF-8 | Version: 0.7.4