API documentation for the bigquery.enums module.
AutoRowIDs: How to handle automatic insert IDs when inserting rows as a stream.
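A minimal sketch of how this enum might be passed to Client.insert_rows_json; the table ID and rows below are hypothetical placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    rows = [{"name": "Ada", "age": 30}]

    # Generate a UUID insert ID for each row (placeholder table ID).
    errors = client.insert_rows_json(
        "my-project.my_dataset.my_table",
        rows,
        row_ids=bigquery.enums.AutoRowIDs.GENERATE_UUID,
    )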
BigLakeFileFormat: API documentation for bigquery.enums.BigLakeFileFormat class.
BigLakeTableFormat: API documentation for bigquery.enums.BigLakeTableFormat class.
Compression: The compression type to use for exported files. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.
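As a rough sketch, the enum can be set on an extract job configuration; the table and bucket names below are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Export a table as GZIP-compressed CSV (placeholder resource names).
    job_config = bigquery.ExtractJobConfig(
        compression=bigquery.Compression.GZIP,
    )
    extract_job = client.extract_table(
        "my-project.my_dataset.my_table",
        "gs://my-bucket/export-*.csv.gz",
        job_config=job_config,
    )
    extract_job.result()  # Wait for the export to finish.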
CreateDisposition: Specifies whether the job is allowed to create new tables. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.
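A hedged example of setting the disposition on a load job; the resource names are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Fail the load if the destination table does not already exist.
    job_config = bigquery.LoadJobConfig(
        create_disposition=bigquery.CreateDisposition.CREATE_NEVER,
    )
    client.load_table_from_uri(
        "gs://my-bucket/data.csv",
        "my-project.my_dataset.existing_table",
        job_config=job_config,
    )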
DatasetView: Specifies which dataset information is returned.
DecimalTargetType
DefaultPandasDTypes: Default pandas DataFrame dtypes used to convert BigQuery data. These sentinel values are used instead of None to maintain backward compatibility and to work when the pandas package is not available. For more information: https://stackoverflow.com/a/60605919/101923
DestinationFormat: The exported file format. The default value is CSV. Tables with nested or repeated fields cannot be exported as CSV.
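For example, a table with nested fields might be exported as newline-delimited JSON instead of CSV; this is a sketch with placeholder names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Nested fields cannot be exported as CSV, so use JSON instead.
    job_config = bigquery.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON,
    )
    client.extract_table(
        "my-project.my_dataset.nested_table",
        "gs://my-bucket/export-*.json",
        job_config=job_config,
    ).result()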
DeterminismLevel
Encoding: The character encoding of the data. The default is UTF_8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
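A sketch of loading a Latin-1 encoded file, assuming the Encoding constants are re-exported at the package top level like the other job enums; resource names are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Load a Latin-1 encoded CSV file.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        encoding=bigquery.Encoding.ISO_8859_1,
    )
    client.load_table_from_uri(
        "gs://my-bucket/latin1.csv",
        "my-project.my_dataset.my_table",
        job_config=job_config,
    )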
EntityTypes: Enum of allowed entity type names in AccessEntry.
JobCreationMode: Documented values for Job Creation Mode.
KeyResultStatementKind
QueryApiMethod: API method used to start the query. The default value is INSERT.
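A sketch of choosing the jobs.query API method, assuming a recent client version where Client.query accepts an api_method argument:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Use the jobs.query API method instead of the default jobs.insert.
    job = client.query(
        "SELECT 1",
        api_method=bigquery.enums.QueryApiMethod.QUERY,
    )
    print(list(job.result()))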
QueryPriority: Specifies a priority for the query. The default value is INTERACTIVE.
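For example, a query can be submitted at batch priority; a minimal sketch:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Run the query at batch (non-interactive) priority.
    job_config = bigquery.QueryJobConfig(priority=bigquery.QueryPriority.BATCH)
    job = client.query(
        "SELECT COUNT(*) FROM `bigquery-public-data.samples.shakespeare`",
        job_config=job_config,
    )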
RoundingMode: Rounding mode options that can be used when storing NUMERIC or BIGNUMERIC values.
ROUNDING_MODE_UNSPECIFIED: defaults to ROUND_HALF_AWAY_FROM_ZERO.
ROUND_HALF_AWAY_FROM_ZERO: rounds half values away from zero when applying precision and scale upon writing of NUMERIC and BIGNUMERIC values.
ROUND_HALF_EVEN: rounds half values to the nearest even value when applying precision and scale upon writing of NUMERIC and BIGNUMERIC values.
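A hedged sketch, assuming a recent client version in which SchemaField accepts a rounding_mode argument:

    from google.cloud import bigquery

    # Store NUMERIC values with banker's rounding (assumes SchemaField
    # supports rounding_mode in this client version).
    schema = [
        bigquery.SchemaField(
            "price",
            "NUMERIC",
            rounding_mode=bigquery.enums.RoundingMode.ROUND_HALF_EVEN,
        ),
    ]
    job_config = bigquery.LoadJobConfig(schema=schema)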
SchemaUpdateOption: Specifies an update to the destination table schema as a side effect of a load job.
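For instance, a load job that appends rows may also be allowed to add new columns; a sketch with placeholder names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Allow the load job to add new columns to the destination schema.
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
    )
    client.load_table_from_uri(
        "gs://my-bucket/new_rows.json",
        "my-project.my_dataset.my_table",
        job_config=job_config,
    )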
SourceColumnMatch: Uses sensible defaults based on how the schema is provided. If autodetect is used, then columns are matched by name. Otherwise, columns are matched by position. This is done to keep the behavior backward-compatible.
SourceFormat: The format of the data files. The default value is CSV. Note that the set of allowed values for loading data is different than the set used for external data sources (see ExternalSourceFormat).
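A minimal sketch of loading Parquet files instead of the default CSV; the names below are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Load Parquet files rather than the default CSV.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
    )
    client.load_table_from_uri(
        "gs://my-bucket/data/*.parquet",
        "my-project.my_dataset.my_table",
        job_config=job_config,
    )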
SqlTypeNames: Enum of allowed SQL type names in schema.SchemaField. Datatype used in legacy SQL.
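For example, these names can be used when building a schema; the table ID is a placeholder:

    from google.cloud import bigquery
    from google.cloud.bigquery.enums import SqlTypeNames

    # Build a schema using the legacy SQL type names.
    schema = [
        bigquery.SchemaField("full_name", SqlTypeNames.STRING, mode="REQUIRED"),
        bigquery.SchemaField("age", SqlTypeNames.INTEGER),
    ]
    table = bigquery.Table("my-project.my_dataset.people", schema=schema)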
StandardSqlTypeNames: Enum of allowed SQL type names in schema.SchemaField. Datatype used in GoogleSQL.
UpdateMode: Specifies the kind of information to update in a dataset.
WriteDisposition: Specifies the action that occurs if the destination table already exists. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
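A sketch of overwriting a destination table with query results; the table ID is a placeholder:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Overwrite the destination table with the query results.
    job_config = bigquery.QueryJobConfig(
        destination="my-project.my_dataset.results",
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    client.query("SELECT 1 AS x", job_config=job_config).result()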