Error conditions in Databricks

Applies to: Databricks SQL and Databricks Runtime 12.2 and above

Error conditions are descriptive, human-readable strings that are unique to the error they describe.

You can use error conditions to programmatically handle errors in your application without the need to parse the error message.

This is a list of common, named error conditions returned by Databricks.
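For example, in PySpark you can catch the typed exception and branch on the error condition rather than on message text. A minimal sketch (the table name is hypothetical; on Spark 4.0+ runtimes, getCondition() is the preferred accessor over getErrorClass()):

```python
from pyspark.errors import AnalysisException

try:
    # "main.default.no_such_table" is a hypothetical, non-existent table.
    spark.table("main.default.no_such_table").show()
except AnalysisException as e:
    # Match on the stable error condition instead of parsing the message.
    if e.getErrorClass() == "TABLE_OR_VIEW_NOT_FOUND":
        print(f"Table is missing (SQLSTATE {e.getSqlState()}); creating it instead.")
    else:
        raise
```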

Databricks Runtime and Databricks SQL

ADD_DEFAULT_UNSUPPORTED

SQLSTATE: 42623

Failed to execute <statementType> command because DEFAULT values are not supported when adding new columns to previously existing target data source with table provider: "<dataSource>".

AGGREGATE_FUNCTION_IN_NESTED_REFERENCES_NOT_SUPPORTED​

SQLSTATE: 0A000

Detected aggregate functions in outer scope references <expression>, which is not supported.

AGGREGATE_FUNCTION_WITH_NONDETERMINISTIC_EXPRESSION​

SQLSTATE: 42845

Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.

AGGREGATE_OUT_OF_MEMORY​

SQLSTATE: 82001

Not enough memory for aggregation.

AI_FUNCTION_HTTP_PARSE_CAST_ERROR​

SQLSTATE: 2203G

Failed to parse model output when casting to the specified returnType: "<dataType>", response JSON was: "<responseString>". Please update the returnType to match the contents of the type represented by the response JSON and then retry the query. Exception: <errorMessage>

AI_FUNCTION_HTTP_PARSE_COLUMNS_ERROR​

SQLSTATE: 2203G

The actual model output has more than one column "<responseString>". However, the specified return type ["<dataType>"] has only one column. Please update the returnType to contain the same number of columns as the model output and then retry the query.

AI_FUNCTION_HTTP_REQUEST_ERROR​

SQLSTATE: 08000

Error occurred while making an HTTP request for function <funcName>: <errorMessage>

AI_FUNCTION_INVALID_HTTP_RESPONSE​

SQLSTATE: 08000

Invalid HTTP response for function <funcName>: <errorMessage>

AI_FUNCTION_INVALID_MAX_WORDS​

SQLSTATE: 22032

The maximum number of words must be a non-negative integer, but got <maxWords>.

AI_FUNCTION_INVALID_MODEL_PARAMETERS​

SQLSTATE: 22023

The provided model parameters (<modelParameters>) are invalid in the AI_QUERY function for serving endpoint "<endpointName>".

For more details see AI_FUNCTION_INVALID_MODEL_PARAMETERS

AI_FUNCTION_INVALID_RESPONSE_FORMAT_TYPE​

SQLSTATE: 0A000

AI function: "<functionName>" requires a valid <format> string for the responseFormat parameter, but found the following response format: "<invalidResponseFormat>". Exception: <errorMessage>

AI_FUNCTION_JSON_PARSE_ERROR​

SQLSTATE: 22000

Error occurred while parsing the JSON response for function <funcName>: <errorMessage>

AI_FUNCTION_MODEL_SCHEMA_PARSE_ERROR​

SQLSTATE: 2203G

Failed to parse the schema for the serving endpoint "<endpointName>": <errorMessage>, response JSON was: "<responseJson>".

Set the returnType parameter manually in the AI_QUERY function to override schema resolution.
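A hedged sketch of pinning returnType explicitly (the endpoint, table, and column names here are hypothetical):

```python
# Set returnType explicitly so AI_QUERY skips endpoint schema resolution.
# "my-chat-endpoint", main.default.reviews, and review_text are hypothetical.
df = spark.sql("""
    SELECT ai_query(
        'my-chat-endpoint',
        'Summarize this review: ' || review_text,
        returnType => 'STRING'
    ) AS summary
    FROM main.default.reviews
""")
```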

AI_FUNCTION_UNSUPPORTED_ERROR​

SQLSTATE: 56038

The function <funcName> is not supported in the current environment. It is only available in Databricks SQL Pro and Serverless.

AI_FUNCTION_UNSUPPORTED_REQUEST​

SQLSTATE: 0A000

Failed to evaluate the SQL function "<functionName>" because the provided argument <invalidValue> has type "<invalidDataType>", but only the following types are supported: <supportedDataTypes>. Please update the function call to provide an argument of string type and retry the query.

AI_FUNCTION_UNSUPPORTED_RESPONSE_FORMAT​

SQLSTATE: 0A000

Found unsupported response format.

For more details see AI_FUNCTION_UNSUPPORTED_RESPONSE_FORMAT

AI_FUNCTION_UNSUPPORTED_RETURN_TYPE​

SQLSTATE: 0A000

AI function: "<functionName>" does not support the following type as return type: "<typeName>". The return type must be a valid SQL type understood by Catalyst and supported by AI functions. Currently supported types include: <supportedValues>

AI_INVALID_ARGUMENT_VALUE_ERROR​

SQLSTATE: 22032

Provided value "<argValue>" is not supported by argument "<argName>". Supported values are: <supportedValues>

AI_QUERY_ENDPOINT_NOT_SUPPORT_STRUCTURED_OUTPUT​

SQLSTATE: 0A000

Expected the serving endpoint task type to be "Chat" for structured output support, but found "<taskType>" for the endpoint "<endpointName>".

AI_QUERY_RETURN_TYPE_COLUMN_TYPE_MISMATCH​

SQLSTATE: 0A000

Provided "<sqlExpr>" is not supported by the argument returnType.

AI_SEARCH_CONFLICTING_QUERY_PARAM_SUPPLY_ERROR​

SQLSTATE: 0A000

Conflicting parameters detected for the vector_search SQL function: <conflictParamNames>. <hint>

AI_SEARCH_EMBEDDING_COLUMN_TYPE_UNSUPPORTED_ERROR​

SQLSTATE: 0A000

vector_search SQL function with embedding column type <embeddingColumnType> is not supported.

AI_SEARCH_EMPTY_QUERY_PARAM_ERROR​

SQLSTATE: 0A000

The vector_search SQL function is missing a query input parameter; please specify at least one of: <parameterNames>.

AI_SEARCH_HYBRID_QUERY_PARAM_DEPRECATION_ERROR​

SQLSTATE: 0A000

The parameter query to vector_search SQL function is not supported for hybrid vector search. Please use query_text instead.

AI_SEARCH_HYBRID_TEXT_NOT_FOUND_ERROR​

SQLSTATE: 0A000

Query text not found in the vector_search SQL function for hybrid vector search. Please provide query_text.

AI_SEARCH_INDEX_TYPE_UNSUPPORTED_ERROR​

SQLSTATE: 0A000

vector_search SQL function with index type <indexType> is not supported.

AI_SEARCH_MISSING_EMBEDDING_INPUT_ERROR​

SQLSTATE: 0A000

query_vector must be specified for index <indexName> because it is not associated with an embedding model endpoint.

AI_SEARCH_QUERY_TYPE_CONVERT_ENCODE_ERROR​

SQLSTATE: 0A000

Failed to materialize the vector_search SQL function query from Spark type <dataType> to Scala-native objects during request encoding. Error: <errorMessage>.

AI_SEARCH_QUERY_TYPE_UNSUPPORTED_ERROR​

SQLSTATE: 0A000

vector_search SQL function with query type <unexpectedQueryType> is not supported. Please specify one from: <supportedQueryTypes>.

AI_SEARCH_UNSUPPORTED_NUM_RESULTS_ERROR​

SQLSTATE: 0A000

vector_search SQL function with num_results larger than <maxLimit> is not supported. The limit specified was <requestedLimit>. Please try again with num_results <= <maxLimit>.
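Taken together, the vector_search entries above constrain the call shape: exactly one query input, query_text for hybrid search, and num_results within the limit. A sketch of a conforming call (the index name is hypothetical):

```python
# One query parameter (query_text), a supported index, and a small num_results.
hits = spark.sql("""
    SELECT *
    FROM vector_search(
        index => 'main.default.docs_index',
        query_text => 'how do I rotate my credentials',
        num_results => 5
    )
""")
hits.show()
```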

AI_TOP_DRIVERS_PARAM_OUT_OF_RANGE​

SQLSTATE: 22003

The ai_top_drivers parameter <param> must be between <lo> and <hi>.

AI_TOP_DRIVERS_UNSUPPORTED_AGGREGATION_TYPE​

SQLSTATE: 0A000

ai_top_drivers does not support the <aggName> aggregate. Choose one of the following supported aggregates: <allowed>.

AI_TOP_DRIVERS_UNSUPPORTED_DIMENSION_TYPE​

SQLSTATE: 0A000

ai_top_drivers does not support numeric, map, or struct dimension columns. The column <colName> has type <dataType>. Remove this dimension or cast it to a supported type.

AI_TOP_DRIVERS_UNSUPPORTED_LABEL_TYPE​

SQLSTATE: 0A000

ai_top_drivers requires the label column type to be boolean. The column <colName> has type <dataType>. Change the label column or cast it to a supported type.

AI_TOP_DRIVERS_UNSUPPORTED_METRIC_TYPE​

SQLSTATE: 0A000

ai_top_drivers requires the metric column type to be numeric. The column <colName> has type <dataType>. Change the metric column or cast it to a supported type.

ALL_PARAMETERS_MUST_BE_NAMED​

SQLSTATE: 07001

Using named parameterized queries requires all parameters to be named. Parameters missing names: <exprs>.

ALL_PARTITION_COLUMNS_NOT_ALLOWED​

SQLSTATE: KD005

Cannot use all columns for partition columns.

ALTER_SCHEDULE_DOES_NOT_EXIST​

SQLSTATE: 42704

Cannot alter <scheduleType> on a table without an existing schedule or trigger. Please add a schedule or trigger to the table before attempting to alter it.

ALTER_TABLE_COLUMN_DESCRIPTOR_DUPLICATE​

SQLSTATE: 42710

ALTER TABLE <type> column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.

AMBIGUOUS_ALIAS_IN_NESTED_CTE​

SQLSTATE: 42KD0

Name <name> is ambiguous in nested CTE.

Please set <config> to "CORRECTED" so that the name defined in the inner CTE takes precedence. If set to "LEGACY", outer CTE definitions will take precedence.

See 'https://spark.apache.org/docs/latest/sql-migration-guide.html#query-engine'.

AMBIGUOUS_COLUMN_OR_FIELD​

SQLSTATE: 42702

Column or field <name> is ambiguous and has <n> matches.

AMBIGUOUS_COLUMN_REFERENCE​

SQLSTATE: 42702

Column <name> is ambiguous. This is because you joined several DataFrames together, and some of these DataFrames are the same.

This column points to one of the DataFrames, but Spark is unable to figure out which one.

Please alias the DataFrames with different names via DataFrame.alias before joining them,

and specify the column using a qualified name, e.g. df.alias("a").join(df.alias("b"), col("a.id") > col("b.id")).
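A minimal runnable sketch of that fix, using a toy DataFrame:

```python
from pyspark.sql.functions import col

df = spark.range(5)  # toy DataFrame with a single "id" column

# Alias both sides of the self-join so every column reference is qualified.
joined = (
    df.alias("a")
      .join(df.alias("b"), col("a.id") > col("b.id"))
      .select(col("a.id").alias("left_id"), col("b.id").alias("right_id"))
)
joined.show()
```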

AMBIGUOUS_CONSTRAINT​

SQLSTATE: 42K0C

Ambiguous reference to constraint <constraint>.

AMBIGUOUS_LATERAL_COLUMN_ALIAS​

SQLSTATE: 42702

Lateral column alias <name> is ambiguous and has <n> matches.

AMBIGUOUS_REFERENCE​

SQLSTATE: 42704

Reference <name> is ambiguous, could be: <referenceNames>.

AMBIGUOUS_REFERENCE_TO_FIELDS​

SQLSTATE: 42000

Ambiguous reference to the field <field>. It appears <count> times in the schema.

ANALYZE_CONSTRAINTS_NOT_SUPPORTED​

SQLSTATE: 0A000

ANALYZE CONSTRAINTS is not supported.

ANSI_CONFIG_CANNOT_BE_DISABLED​

SQLSTATE: 56038

The ANSI SQL configuration <config> cannot be disabled in this product.

AQE_THREAD_INTERRUPTED​

SQLSTATE: HY008

AQE thread is interrupted, probably due to query cancellation by user.

ARGUMENT_NOT_CONSTANT​

SQLSTATE: 42K08

The function <functionName> includes a parameter <parameterName> at position <pos> that requires a constant argument. Please compute the argument <sqlExpr> separately and pass the result as a constant.

ARITHMETIC_OVERFLOW​

SQLSTATE: 22003

<message>.<alternative> If necessary set <config> to "false" to bypass this error.

For more details see ARITHMETIC_OVERFLOW

ARROW_TYPE_MISMATCH​

SQLSTATE: 42K0G

Invalid schema from <operation>: expected <outputTypes>, got <actualDataTypes>.

ARTIFACT_ALREADY_EXISTS​

SQLSTATE: 42713

The artifact <normalizedRemoteRelativePath> already exists. Please choose a different name for the new artifact because it cannot be overwritten.

ASSIGNMENT_ARITY_MISMATCH​

SQLSTATE: 42802

The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.

AS_OF_JOIN​

SQLSTATE: 42604

Invalid as-of join.

For more details see AS_OF_JOIN

AVRO_CANNOT_WRITE_NULL_FIELD​

SQLSTATE: 22004

Cannot write null value for field <name> defined as non-null Avro data type <dataType>.

To allow null value for this field, specify its avro schema as a union type with "null" using avroSchema option.
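A sketch of that workaround for a single nullable string field (the record and field names are illustrative, as is the output path):

```python
# Declare the field as a union with "null" so null values can be written.
avro_schema = """
{
  "type": "record",
  "name": "Event",
  "fields": [
    {"name": "name", "type": ["null", "string"], "default": null}
  ]
}
"""

df = spark.createDataFrame([("click",), (None,)], ["name"])
df.write.format("avro").option("avroSchema", avro_schema).save("/tmp/events_avro")
```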

AVRO_DEFAULT_VALUES_UNSUPPORTED​

SQLSTATE: 0A000

The use of default values is not supported when rescuedDataColumn is enabled. You may be able to remove this check by setting spark.databricks.sql.avro.rescuedDataBlockUserDefinedSchemaDefaultValue to false, but the default values will not apply and null values will still be used.

AVRO_INCOMPATIBLE_READ_TYPE​

SQLSTATE: 22KD3

Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>, however you're trying to read the field as <sqlType>, which would lead to an incorrect answer.

To allow reading this field, enable the SQL configuration: "spark.sql.legacy.avro.allowIncompatibleSchema".

AVRO_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE​

SQLSTATE: 22KD3

Cannot call the <functionName> SQL function because the Avro data source is not loaded.

Please restart your job or session with the 'spark-avro' package loaded, such as by using the --packages argument on the command line, and then retry your query or command.

AVRO_POSITIONAL_FIELD_MATCHING_UNSUPPORTED​

SQLSTATE: 0A000

The use of positional field matching is not supported when either rescuedDataColumn or failOnUnknownFields is enabled. Remove these options to proceed.

BATCH_METADATA_NOT_FOUND​

SQLSTATE: 42K03

Unable to find batch <batchMetadataFile>.

BIGQUERY_OPTIONS_ARE_MUTUALLY_EXCLUSIVE​

SQLSTATE: 42616

BigQuery connection credentials must be specified with either the 'GoogleServiceAccountKeyJson' parameter or all of 'projectId', 'OAuthServiceAcctEmail', and 'OAuthPvtKey'.

BINARY_ARITHMETIC_OVERFLOW​

SQLSTATE: 22003

<value1> <symbol> <value2> caused overflow. Use <functionName> to ignore the overflow problem and return NULL instead.
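For example, when <functionName> is try_add (a minimal sketch):

```python
# try_add returns NULL on integer overflow instead of raising an error.
spark.sql("SELECT try_add(2147483647, CAST(1 AS INT)) AS safe_sum").show()
# safe_sum is NULL because the addition overflows INT.
```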

BUILT_IN_CATALOG​

SQLSTATE: 42832

<operation> doesn't support built-in catalogs.

CALL_ON_STREAMING_DATASET_UNSUPPORTED​

SQLSTATE: 42KDE

The method <methodName> cannot be called on a streaming Dataset/DataFrame.

CANNOT_ALTER_COLLATION_BUCKET_COLUMN​

SQLSTATE: 428FR

ALTER TABLE (ALTER|CHANGE) COLUMN cannot change collation of type/subtypes of bucket columns, but found the bucket column <columnName> in the table <tableName>.

CANNOT_ALTER_PARTITION_COLUMN​

SQLSTATE: 428FR

ALTER TABLE (ALTER|CHANGE) COLUMN is not supported for partition columns, but found the partition column <columnName> in the table <tableName>.

CANNOT_ASSIGN_EVENT_TIME_COLUMN_WITHOUT_WATERMARK​

SQLSTATE: 42611

A watermark needs to be defined to reassign the event time column. Failed to find a watermark definition in the streaming query.

CANNOT_CAST_DATATYPE​

SQLSTATE: 42846

Cannot cast <sourceType> to <targetType>.

CANNOT_CONVERT_PROTOBUF_FIELD_TYPE_TO_SQL_TYPE​

SQLSTATE: 42846

Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE​

SQLSTATE: 42846

Unable to convert <protobufType> of Protobuf to SQL type <toType>.

CANNOT_CONVERT_SQL_TYPE_TO_PROTOBUF_FIELD_TYPE​

SQLSTATE: 42846

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

CANNOT_CONVERT_SQL_VALUE_TO_PROTOBUF_ENUM_TYPE​

SQLSTATE: 42846

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in defined values for enum: <enumString>.

CANNOT_COPY_STATE​

SQLSTATE: 0AKD0

Cannot copy catalog state like current database and temporary views from Unity Catalog to a legacy catalog.

CANNOT_CREATE_DATA_SOURCE_TABLE​

SQLSTATE: 42KDE

Failed to create data source table <tableName>:

For more details see CANNOT_CREATE_DATA_SOURCE_TABLE

CANNOT_DECODE_URL​

SQLSTATE: 22546

The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.

CANNOT_DELETE_SYSTEM_OWNED​

SQLSTATE: 42832

System owned <resourceType> cannot be deleted.

CANNOT_DROP_AMBIGUOUS_CONSTRAINT​

SQLSTATE: 42K0C

Cannot drop the constraint with the name <constraintName> shared by a CHECK constraint

and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or

FOREIGN KEY constraint by queries:

ALTER TABLE .. DROP PRIMARY KEY or

ALTER TABLE .. DROP FOREIGN KEY ..

CANNOT_ESTABLISH_CONNECTION​

SQLSTATE: 08001

Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please check your workspace's network setup and ensure it does not have outbound restrictions to the host. Please also check that the host does not block inbound connections from the network where the workspace's Spark clusters are deployed. ** Detailed error message: <causeErrorMessage>.

CANNOT_ESTABLISH_CONNECTION_SERVERLESS​

SQLSTATE: 08001

Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please allow inbound traffic from the Internet to your host, as you are using Serverless Compute. If your network policies do not allow inbound Internet traffic, please use non Serverless Compute, or you may reach out to your Databricks representative to learn about Serverless Private Networking. ** Detailed error message: <causeErrorMessage>.

CANNOT_INVOKE_IN_TRANSFORMATIONS​

SQLSTATE: 0A000

Dataset transformations and actions can only be invoked by the driver, not inside of other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the dataset1.map transformation. For more information, see SPARK-28702.

CANNOT_LOAD_CHECKPOINT_FILE_MANAGER​

SQLSTATE: 58030

Error loading streaming checkpoint file manager for path=<path>.

For more details see CANNOT_LOAD_CHECKPOINT_FILE_MANAGER

CANNOT_LOAD_FUNCTION_CLASS​

SQLSTATE: 46103

Cannot load class <className> when registering the function <functionName>, please make sure it is on the classpath.

CANNOT_LOAD_PROTOBUF_CLASS​

SQLSTATE: 42K03

Could not load Protobuf class with name <protobufClassName>. <explanation>.

CANNOT_LOAD_STATE_STORE​

SQLSTATE: 58030

An error occurred during loading state.

For more details see CANNOT_LOAD_STATE_STORE

CANNOT_MERGE_INCOMPATIBLE_DATA_TYPE​

SQLSTATE: 42825

Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.

CANNOT_MERGE_SCHEMAS​

SQLSTATE: 42KD9

Failed merging schemas:

Initial schema:

<left>

Schema that cannot be merged with the initial schema:

<right>.

CANNOT_MODIFY_CONFIG​

SQLSTATE: 46110

Cannot modify the value of the Spark config: <key>.

See also 'https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements'.

CANNOT_PARSE_DECIMAL​

SQLSTATE: 22018

Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.

CANNOT_PARSE_INTERVAL​

SQLSTATE: 22006

Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double check that the input value is not null or empty and try again.

CANNOT_PARSE_JSON_FIELD​

SQLSTATE: 2203G

Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to target Spark data type <dataType>.

CANNOT_PARSE_PROTOBUF_DESCRIPTOR​

SQLSTATE: 22018

Error parsing descriptor bytes into Protobuf FileDescriptorSet.

CANNOT_PARSE_TIME​

SQLSTATE: 22010

The input string <input> cannot be parsed to a TIME value because it does not match the datetime format <format>.

CANNOT_PARSE_TIMESTAMP​

SQLSTATE: 22007

<message>. Use <func> to tolerate an invalid input string and return NULL instead.
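For example, when the failing function is to_timestamp, the <func> alternative is try_to_timestamp (a minimal sketch):

```python
# try_to_timestamp returns NULL for unparseable input instead of failing.
spark.sql("""
    SELECT try_to_timestamp('2024-99-99 12:00:00', 'yyyy-MM-dd HH:mm:ss') AS ts
""").show()
```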

CANNOT_QUERY_TABLE_DURING_INITIALIZATION​

SQLSTATE: 55019

Cannot query MV/ST during initialization.

For more details see CANNOT_QUERY_TABLE_DURING_INITIALIZATION

CANNOT_READ_ARCHIVED_FILE​

SQLSTATE: KD003

Cannot read file at path <path> because it has been archived. Please adjust your query filters to exclude archived files.

CANNOT_READ_FILE​

SQLSTATE: KD003

Cannot read <format> file at path: <path>.

For more details see CANNOT_READ_FILE

CANNOT_READ_SENSITIVE_KEY_FROM_SECURE_PROVIDER​

SQLSTATE: 42501

Cannot read sensitive key '<key>' from secure provider.

CANNOT_RECOGNIZE_HIVE_TYPE​

SQLSTATE: 429BB

Cannot recognize Hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their format. If the data type is correct, please ensure that you are using a supported version of Spark SQL.

CANNOT_REFERENCE_UC_IN_HMS​

SQLSTATE: 0AKD0

Cannot reference a Unity Catalog <objType> in Hive Metastore objects.

CANNOT_REMOVE_RESERVED_PROPERTY​

SQLSTATE: 42000

Cannot remove reserved property: <property>.

CANNOT_RENAME_ACROSS_CATALOG​

SQLSTATE: 0AKD0

Renaming a <type> across catalogs is not allowed.

CANNOT_RENAME_ACROSS_SCHEMA​

SQLSTATE: 0AKD0

Renaming a <type> across schemas is not allowed.

CANNOT_RESOLVE_DATAFRAME_COLUMN​

SQLSTATE: 42704

Cannot resolve dataframe column <name>. It's probably because of illegal references like df1.select(df2.col("a")).

CANNOT_RESOLVE_STAR_EXPAND​

SQLSTATE: 42704

Cannot resolve <targetString>.* given input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.

CANNOT_RESTORE_PERMISSIONS_FOR_PATH​

SQLSTATE: 58030

Failed to set permissions on created path <path> back to <permission>.

CANNOT_SHALLOW_CLONE_ACROSS_UC_AND_HMS​

SQLSTATE: 0AKD0

Cannot shallow-clone tables across Unity Catalog and Hive Metastore.

CANNOT_SHALLOW_CLONE_NESTED​

SQLSTATE: 0AKUC

Cannot shallow-clone a table <table> that is already a shallow clone.

CANNOT_SHALLOW_CLONE_NON_UC_MANAGED_TABLE_AS_SOURCE_OR_TARGET​

SQLSTATE: 0AKUC

Shallow clone is only supported for the MANAGED table type. The table <table> is not a MANAGED table.

CANNOT_UPDATE_FIELD​

SQLSTATE: 0A000

Cannot update <table> field <fieldName> type:

For more details see CANNOT_UPDATE_FIELD

CANNOT_UP_CAST_DATATYPE​

SQLSTATE: 42846

Cannot up cast <expression> from <sourceType> to <targetType>.

<details>

CANNOT_USE_KRYO​

SQLSTATE: 22KD3

Cannot load Kryo serialization codec. Kryo serialization cannot be used in the Spark Connect client. Use Java serialization, provide a custom Codec, or use Spark Classic instead.

CANNOT_VALIDATE_CONNECTION​

SQLSTATE: 08000

Validation of <jdbcDialectName> connection is not supported. Please contact Databricks support for alternative solutions, or set "spark.databricks.testConnectionBeforeCreation" to "false" to skip connection testing before creating a connection object.

CANNOT_WRITE_STATE_STORE​

SQLSTATE: 58030

Error writing state store files for provider <providerClass>.

For more details see CANNOT_WRITE_STATE_STORE

CAST_INVALID_INPUT​

SQLSTATE: 22018

The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead.

For more details see CAST_INVALID_INPUT
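A minimal sketch of the try_cast alternative:

```python
# try_cast yields NULL for malformed input rather than raising CAST_INVALID_INPUT.
spark.sql("SELECT try_cast('abc' AS INT) AS bad, try_cast('42' AS INT) AS good").show()
```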

CAST_OVERFLOW​

SQLSTATE: 22003

The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead.

CAST_OVERFLOW_IN_TABLE_INSERT​

SQLSTATE: 22003

Fail to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.

CATALOG_NOT_FOUND​

SQLSTATE: 42P08

The catalog <catalogName> was not found. Consider setting the SQL config <config> to a catalog plugin.

CATALOG_OWNED_TABLE_CREATION_NOT_ALLOWED​

SQLSTATE: 0A000

Creating Delta tables with <feature> table feature is not allowed. Please contact Databricks support.

CHECKPOINT_RDD_BLOCK_ID_NOT_FOUND​

SQLSTATE: 56000

Checkpoint block <rddBlockId> not found!

Either the executor that originally checkpointed this partition is no longer alive, or the original RDD is unpersisted.

If this problem persists, you may consider using rdd.checkpoint() instead, which is slower than local checkpointing but more fault-tolerant.

CIRCULAR_CLASS_REFERENCE​

SQLSTATE: 42602

Cannot have circular references in class, but got the circular reference of class <t>.

CLASS_NOT_OVERRIDE_EXPECTED_METHOD​

SQLSTATE: 38000

<className> must override either <method1> or <method2>.

CLASS_UNSUPPORTED_BY_MAP_OBJECTS​

SQLSTATE: 0A000

MapObjects does not support the class <cls> as resulting collection.

CLEANROOM_COMMANDS_NOT_SUPPORTED​

SQLSTATE: 0A000

Clean Room commands are not supported.

CLEANROOM_INVALID_SHARED_DATA_OBJECT_NAME

SQLSTATE: 42K05

Invalid name to reference a <type> inside a Clean Room. Use a <type>'s name inside the clean room following the format of [catalog].[schema].[<type>].

If you are unsure about what name to use, you can run "SHOW ALL IN CLEANROOM [clean_room]" and use the value in the "name" column.

CLONING_WITH_HISTORY_INVALID_OPTION​

SQLSTATE: 42613

Cloning with history is specified with an invalid option: <invalidOption>.

Valid syntax: CREATE (OR REPLACE) TABLE ... DEEP CLONE ... WITH HISTORY.

CLONING_WITH_HISTORY_UNSUPPORTED​

SQLSTATE: 0A000

Cloning with history is not supported.

CLOUD_FILE_SOURCE_FILE_NOT_FOUND​

SQLSTATE: 42K03

A file notification was received for file: <filePath> but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration <config> to true.

CLOUD_PROVIDER_ERROR​

SQLSTATE: 58000

Cloud provider error: <message>

CLUSTERING_COLUMNS_MISMATCH​

SQLSTATE: 42P10

Specified clustering does not match that of the existing table <tableName>.

Specified clustering columns: [<specifiedClusteringString>].

Existing clustering columns: [<existingClusteringString>].

CLUSTERING_NOT_SUPPORTED​

SQLSTATE: 42000

'<operation>' does not support clustering.

CLUSTER_BY_AUTO_FEATURE_NOT_ENABLED​

SQLSTATE: 0A000

Please contact your Databricks representative to enable the cluster-by-auto feature.

CLUSTER_BY_AUTO_REQUIRES_CLUSTERING_FEATURE_ENABLED​

SQLSTATE: 56038

Please enable clusteringTable.enableClusteringTableFeature to use CLUSTER BY AUTO.

CLUSTER_BY_AUTO_REQUIRES_PREDICTIVE_OPTIMIZATION​

SQLSTATE: 56038

CLUSTER BY AUTO requires Predictive Optimization to be enabled.

CLUSTER_BY_AUTO_UNSUPPORTED_TABLE_TYPE_ERROR​

SQLSTATE: 56038

CLUSTER BY AUTO is only supported on UC Managed tables.

CODEC_NOT_AVAILABLE​

SQLSTATE: 56038

The codec <codecName> is not available.

For more details see CODEC_NOT_AVAILABLE

CODEC_SHORT_NAME_NOT_FOUND​

SQLSTATE: 42704

Cannot find a short name for the codec <codecName>.

COLLATION_INVALID_NAME​

SQLSTATE: 42704

The value <collationName> does not represent a correct collation name. Suggested valid collation names: [<proposals>].

COLLATION_INVALID_PROVIDER​

SQLSTATE: 42704

The value <provider> does not represent a correct collation provider. Supported providers are: [<supportedProviders>].

COLLATION_MISMATCH​

SQLSTATE: 42P21

Could not determine which collation to use for string functions and operators.

For more details see COLLATION_MISMATCH

COLLECTION_SIZE_LIMIT_EXCEEDED​

SQLSTATE: 54000

Can't create an array with <numberOfElements> elements, which exceeds the array size limit <maxRoundedArrayLength>.

For more details see COLLECTION_SIZE_LIMIT_EXCEEDED

COLUMN_ALIASES_NOT_ALLOWED​

SQLSTATE: 42601

Column aliases are not allowed in <op>.

COLUMN_ALREADY_EXISTS​

SQLSTATE: 42711

The column <columnName> already exists. Choose another name or rename the existing column.

COLUMN_ARRAY_ELEMENT_TYPE_MISMATCH​

SQLSTATE: 0A000

Some values in field <pos> are incompatible with the column array type. Expected type <type>.

COLUMN_MASKS_ABAC_MISMATCH​

SQLSTATE: 0A000

Column masks could not be resolved on <tableName> because there was a mismatch between column masks inherited from policies and explicitly defined column masks. To proceed, please disable Attribute Based Access Control (ABAC) and contact Databricks support.

COLUMN_MASKS_CHECK_CONSTRAINT_UNSUPPORTED​

SQLSTATE: 0A000

Creating CHECK constraint on table <tableName> with column mask policies is not supported.

COLUMN_MASKS_DUPLICATE_USING_COLUMN_NAME​

SQLSTATE: 42734

A <statementType> statement attempted to assign a column mask policy to a column which included two or more other referenced columns in the USING COLUMNS list with the same name <columnName>, which is invalid.

COLUMN_MASKS_FEATURE_NOT_SUPPORTED​

SQLSTATE: 0A000

Column mask policies for <tableName> are not supported:

For more details see COLUMN_MASKS_FEATURE_NOT_SUPPORTED

COLUMN_MASKS_INCOMPATIBLE_SCHEMA_CHANGE​

SQLSTATE: 0A000

Unable to <statementType> <columnName> from table <tableName> because it's referenced in a column mask policy for column <maskedColumn>. The table owner must remove or alter this policy before proceeding.

COLUMN_MASKS_MERGE_UNSUPPORTED_SOURCE​

SQLSTATE: 0A000

MERGE INTO operations do not support column mask policies in source table <tableName>.

COLUMN_MASKS_MERGE_UNSUPPORTED_TARGET​

SQLSTATE: 0A000

MERGE INTO operations do not support writing into table <tableName> with column mask policies.

COLUMN_MASKS_MULTI_PART_TARGET_COLUMN_NAME​

SQLSTATE: 42K05

This statement attempted to assign a column mask policy to a column <columnName> with multiple name parts, which is invalid.

COLUMN_MASKS_MULTI_PART_USING_COLUMN_NAME​

SQLSTATE: 42K05

This statement attempted to assign a column mask policy to a column and the USING COLUMNS list included the name <columnName> with multiple name parts, which is invalid.

COLUMN_MASKS_NOT_ENABLED​

SQLSTATE: 56038

Support for defining column masks is not enabled.

COLUMN_MASKS_REQUIRE_UNITY_CATALOG​

SQLSTATE: 0A000

Column mask policies are only supported in Unity Catalog.

COLUMN_MASKS_SHOW_PARTITIONS_UNSUPPORTED​

SQLSTATE: 0A000

The SHOW PARTITIONS command is not supported for <format> tables with column masks.

COLUMN_MASKS_TABLE_CLONE_SOURCE_NOT_SUPPORTED​

SQLSTATE: 0A000

<mode> clone from table <tableName> with column mask policies is not supported.

COLUMN_MASKS_TABLE_CLONE_TARGET_NOT_SUPPORTED​

SQLSTATE: 0A000

<mode> clone to table <tableName> with column mask policies is not supported.

COLUMN_MASKS_UNSUPPORTED_CONSTANT_AS_PARAMETER​

SQLSTATE: 0AKD1

Using a constant as a parameter in a column mask policy is not supported. Please update your SQL command to remove the constant from the column mask definition and then retry the command.

COLUMN_MASKS_UNSUPPORTED_DATA_TYPE​

SQLSTATE: 0AKDC

Function <functionName> used as a column mask policy contains parameter with unsupported data type <dataType>.

COLUMN_MASKS_UNSUPPORTED_PROVIDER​

SQLSTATE: 0A000

Failed to execute <statementType> command because assigning column mask policies is not supported for target data source with table provider: "<provider>".

COLUMN_MASKS_USING_COLUMN_NAME_SAME_AS_TARGET_COLUMN​

SQLSTATE: 42734

The column <columnName> had the same name as the target column, which is invalid; please remove the column from the USING COLUMNS list and retry the command.

COLUMN_NOT_DEFINED_IN_TABLE​

SQLSTATE: 42703

<colType> column <colName> is not defined in table <tableName>, defined table columns are: <tableCols>.

COLUMN_NOT_FOUND​

SQLSTATE: 42703

The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.

COLUMN_ORDINAL_OUT_OF_BOUNDS​

SQLSTATE: 22003

Column ordinal out of bounds. The number of columns in the table is <attributesLength>, but the column ordinal is <ordinal>.

Attributes are the following: <attributes>.

COMMA_PRECEDING_CONSTRAINT_ERROR​

SQLSTATE: 42601

Unexpected ',' before constraint(s) definition. Ensure that the constraint clause does not start with a comma when columns (and expectations) are not defined.

COMMENT_ON_CONNECTION_NOT_IMPLEMENTED_YET

SQLSTATE: 42000

The COMMENT ON CONNECTION command is not implemented yet.

COMPARATOR_RETURNS_NULL​

SQLSTATE: 22004

The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>.

It should return a positive integer for "greater than", 0 for "equal" and a negative integer for "less than".

To revert to deprecated behavior where NULL is treated as 0 (equal), you must set "spark.sql.legacy.allowNullComparisonResultInArraySort" to "true".

COMPLEX_EXPRESSION_UNSUPPORTED_INPUT​

SQLSTATE: 42K09

Cannot process input data types for the expression: <expression>.

For more details see COMPLEX_EXPRESSION_UNSUPPORTED_INPUT

CONCURRENT_QUERY​

SQLSTATE: 0A000

Another instance of this query [id: <queryId>] was just started by a concurrent session [existing runId: <existingQueryRunId> new runId: <newQueryRunId>].

CONCURRENT_SCHEDULER_INSUFFICIENT_SLOT​

SQLSTATE: 53000

The minimum number of free slots required in the cluster is <numTasks>; however, the cluster only has <numSlots> slots free. The query will stall or fail. Increase the cluster size to proceed.

CONCURRENT_STREAM_LOG_UPDATE​

SQLSTATE: 40000

Concurrent update to the log. Multiple streaming jobs detected for <batchId>.

Please make sure only one streaming job runs on a specific checkpoint location at a time.

CONFIG_NOT_AVAILABLE​

SQLSTATE: 42K0I

Configuration <config> is not available.

CONFLICTING_CLUSTER_CONFIGURATION​

SQLSTATE: 22023

The following configuration(s) conflict with spark.databricks.streaming.realTimeMode.enabled: <confNames>. Remove these configurations from your cluster configuration and restart your Spark cluster.

CONFLICTING_DIRECTORY_STRUCTURES​

SQLSTATE: KD009

Conflicting directory structures detected.

Suspicious paths:

<discoveredBasePaths>

If provided paths are partition directories, please set "basePath" in the options of the data source to specify the root directory of the table.

If there are multiple root directories, please load them separately and then union them.
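A sketch of the basePath option (the paths are hypothetical):

```python
# basePath tells partition discovery where the table root is, so the
# year=... subdirectories are treated as partitions of one table.
df = (
    spark.read
         .option("basePath", "/data/events")
         .parquet("/data/events/year=2024", "/data/events/year=2025")
)
```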

CONFLICTING_PARTITION_COLUMN_NAMES​

SQLSTATE: KD009

Conflicting partition column names detected:

<distinctPartColLists>

For partitioned table directories, data files should only live in leaf directories.

And directories at the same level should have the same partition column name.

Please check the following directories for unexpected files or inconsistent partition column names:

<suspiciousPaths>

CONFLICTING_PARTITION_COLUMN_NAME_WITH_RESERVED​

SQLSTATE: KD009

Partition column name '<partitionColumnName>' conflicts with reserved column name.

The schema of <tableName> is Hive-incompatible, Spark automatically generates a reserved column '<partitionColumnName>' to store the table in a specific way.

Please use a different name for the partition column.

CONFLICTING_PROVIDER​

SQLSTATE: 22023

The specified provider <provider> is inconsistent with the existing catalog provider <expectedProvider>. Please use 'USING <expectedProvider>' and retry the command.

CONFLICTING_SQL_CONFIGURATION​

SQLSTATE: 22023

The following configuration(s) conflict with spark.databricks.streaming.realTimeMode.enabled: <confNames>. Remove these configurations from your SparkSession configuration.

CONNECT​

SQLSTATE: 56K00

Generic Spark Connect error.

For more details see CONNECT

CONNECTION_ALREADY_EXISTS​

SQLSTATE: 42000

Cannot create connection <connectionName> because it already exists.

Choose a different name, drop or replace the existing connection, or add the IF NOT EXISTS clause to tolerate pre-existing connections.

CONNECTION_NAME_CANNOT_BE_EMPTY​

SQLSTATE: 42000

Cannot execute this command because the connection name must be non-empty.

CONNECTION_NOT_FOUND​

SQLSTATE: 42000

Cannot execute this command because the connection name <connectionName> was not found.

CONNECTION_OPTION_NOT_SUPPORTED​

SQLSTATE: 42000

Connections of type '<connectionType>' do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.

CONNECTION_TYPE_NOT_SUPPORTED​

SQLSTATE: 42000

Cannot create a connection of type '<connectionType>'. Supported connection types: <allowedTypes>.

CONNECTION_TYPE_NOT_SUPPORTED_FOR_OPTIONS_INJECTION​

SQLSTATE: 42000

Connection with name <connectionName> and type <connectionType> is not supported in dataframe options.

CONNECTION_TYPE_NOT_SUPPORTED_FOR_REMOTE_QUERY_FUNCTION​

SQLSTATE: 42000

Connection with name '<connectionName>' of type '<connectionType>' is not supported for remote query function execution.

CONNECT_SESSION_MIGRATION​

SQLSTATE: 56K00

Generic Session Migration error (userId: <userId>, sessionId: <sessionId>, serverSessionId: <serverSessionId>).

For more details see CONNECT_SESSION_MIGRATION

CONSTRAINTS_REQUIRE_UNITY_CATALOG​

SQLSTATE: 0A000

Table constraints are only supported in Unity Catalog.

CONVERSION_INVALID_INPUT​

SQLSTATE: 22018

The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.

COPY_INTO_COLUMN_ARITY_MISMATCH​

SQLSTATE: 21S01

Cannot write to <tableName>, the reason is

For more details see COPY_INTO_COLUMN_ARITY_MISMATCH

COPY_INTO_CREDENTIALS_NOT_ALLOWED_ON​

SQLSTATE: 0A000

Invalid scheme <scheme>. COPY INTO source credentials currently only supports s3/s3n/s3a/wasbs/abfss.

COPY_INTO_CREDENTIALS_REQUIRED​

SQLSTATE: 42601

COPY INTO source credentials must specify <keyList>.

COPY_INTO_DUPLICATED_FILES_COPY_NOT_ALLOWED​

SQLSTATE: 25000

Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.

COPY_INTO_ENCRYPTION_NOT_ALLOWED_ON​

SQLSTATE: 0A000

Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.

COPY_INTO_ENCRYPTION_NOT_SUPPORTED_FOR_AZURE​

SQLSTATE: 0A000

COPY INTO encryption only supports ADLS Gen2, i.e. the abfss:// file scheme.

COPY_INTO_ENCRYPTION_REQUIRED​

SQLSTATE: 42601

COPY INTO source encryption must specify '<key>'.

COPY_INTO_ENCRYPTION_REQUIRED_WITH_EXPECTED​

SQLSTATE: 42601

Invalid encryption option <requiredKey>. COPY INTO source encryption must specify '<requiredKey>' = '<keyValue>'.

COPY_INTO_FEATURE_INCOMPATIBLE_SETTING​

SQLSTATE: 42613

The COPY INTO feature '<feature>' is not compatible with '<incompatibleSetting>'.

COPY_INTO_NON_BLIND_APPEND_NOT_ALLOWED​

SQLSTATE: 25000

COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.

COPY_INTO_ROCKSDB_MAX_RETRY_EXCEEDED​

SQLSTATE: 25000

COPY INTO failed to load its state, maximum retries exceeded.

COPY_INTO_SCHEMA_MISMATCH_WITH_TARGET_TABLE​

SQLSTATE: 42KDG

A schema mismatch was detected while copying into the Delta table (Table: <table>).

This may indicate an issue with the incoming data, or the Delta table schema can be evolved automatically according to the incoming data by setting:

COPY_OPTIONS ('mergeSchema' = 'true')

Schema difference:

<schemaDiff>

COPY_INTO_SOURCE_FILE_FORMAT_NOT_SUPPORTED​

SQLSTATE: 0A000

The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.

COPY_INTO_SOURCE_SCHEMA_INFERENCE_FAILED​

SQLSTATE: 42KD9

The source directory did not contain any parsable files of type <format>. Please check the contents of '<source>'.

The error can be silenced by setting '<config>' to 'false'.

COPY_INTO_STATE_INTERNAL_ERROR​

SQLSTATE: 55019

An internal error occurred while processing COPY INTO state.

For more details see COPY_INTO_STATE_INTERNAL_ERROR

COPY_INTO_SYNTAX_ERROR​

SQLSTATE: 42601

Failed to parse the COPY INTO command.

For more details see COPY_INTO_SYNTAX_ERROR

COPY_INTO_UNSUPPORTED_FEATURE​

SQLSTATE: 0A000

The COPY INTO feature '<feature>' is not supported.

COPY_UNLOAD_FORMAT_TYPE_NOT_SUPPORTED​

SQLSTATE: 42000

Cannot unload data in format '<formatType>'. Supported formats for <connectionType> are: <allowedFormats>.

CORRUPTED_CATALOG_FUNCTION​

SQLSTATE: 0A000

Cannot convert the catalog function '<identifier>' into a SQL function due to corrupted function information in catalog. If the function is not a SQL function, please make sure the class name '<className>' is loadable.

CREATE_FOREIGN_SCHEMA_NOT_IMPLEMENTED_YET​

SQLSTATE: 42000

The CREATE FOREIGN SCHEMA command is not implemented yet.

CREATE_FOREIGN_TABLE_NOT_IMPLEMENTED_YET​

SQLSTATE: 42000

The CREATE FOREIGN TABLE command is not implemented yet.

CREATE_OR_REFRESH_MV_NOT_SUPPORTED​

SQLSTATE: 42601

CREATE OR REFRESH MATERIALIZED VIEW is not supported. Use CREATE OR REPLACE MATERIALIZED VIEW instead.

CREATE_OR_REFRESH_MV_ST_ASYNC​

SQLSTATE: 0A000

Cannot CREATE OR REFRESH materialized views or streaming tables with ASYNC specified. Please remove ASYNC from the CREATE OR REFRESH statement or use REFRESH ASYNC to refresh existing materialized views or streaming tables asynchronously.

CREATE_PERMANENT_VIEW_WITHOUT_ALIAS​

SQLSTATE: 0A000

Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.
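A sketch of the fix: alias every computed expression in the view definition (the object names are hypothetical):

```python
# Aliasing price * quantity avoids CREATE_PERMANENT_VIEW_WITHOUT_ALIAS.
spark.sql("""
    CREATE OR REPLACE VIEW main.default.order_totals AS
    SELECT order_id, price * quantity AS total_price
    FROM main.default.orders
""")
```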

CREATE_TABLE_COLUMN_DESCRIPTOR_DUPLICATE​

SQLSTATE: 42710

CREATE TABLE column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.

CREATE_VIEW_COLUMN_ARITY_MISMATCH​

SQLSTATE: 21S01

Cannot create view <viewName>, the reason is

For more details see CREATE_VIEW_COLUMN_ARITY_MISMATCH

CREDENTIAL_MISSING​

SQLSTATE: 42601

Please provide credentials when creating or updating external locations.

CREDENTIAL_PURPOSE_MISMATCH​

SQLSTATE: 42809

The credential <credentialName> has purpose <actualPurpose> but the purpose given in the command is <expectedPurpose>.

CSV_ENFORCE_SCHEMA_NOT_SUPPORTED​

SQLSTATE: 0A000

The CSV option enforceSchema cannot be set when using rescuedDataColumn or failOnUnknownFields, as columns are read by name rather than ordinal.

CYCLIC_FUNCTION_REFERENCE​

SQLSTATE: 42887

Cyclic function reference detected: <path>.

DATABRICKS_DELTA_NOT_ENABLED​

SQLSTATE: 56038

Databricks Delta is not enabled in your account. <hints>

DATATYPE_MISMATCH​

SQLSTATE: 42K09

Cannot resolve <sqlExpr> due to data type mismatch:

For more details see DATATYPE_MISMATCH

DATATYPE_MISSING_SIZE​

SQLSTATE: 42K01

DataType <type> requires a length parameter, for example <type>(10). Please specify the length.

DATA_LINEAGE_SECURE_VIEW_LEAF_NODE_HAS_NO_RELATION​

SQLSTATE: 25000

Write Lineage unsuccessful: missing corresponding relation with policies for CLM/RLS.

DATA_SOURCE_ALREADY_EXISTS​

SQLSTATE: 42710

Data source '<provider>' already exists. Please choose a different name for the new data source.

DATA_SOURCE_EXTERNAL_ERROR​

SQLSTATE: KD010

Encountered error when saving to external data source.

DATA_SOURCE_NOT_EXIST​

SQLSTATE: 42704

Data source '<provider>' not found. Please make sure the data source is registered.

DATA_SOURCE_NOT_FOUND​

SQLSTATE: 42K02

Failed to find the data source: <provider>. Make sure the provider name is correct and the package is properly registered and compatible with your Spark version.

DATA_SOURCE_OPTION_CONTAINS_INVALID_CHARACTERS​

SQLSTATE: 42602

Option <option> must not be empty and should not contain invalid characters, query strings, or parameters.

DATA_SOURCE_OPTION_IS_REQUIRED​

SQLSTATE: 42601

Option <option> is required.

DATA_SOURCE_OPTION_VALUE_NOT_VALID​

SQLSTATE: 42602

Provided data source option '<option>' contains invalid value ('<value>').

For more details see DATA_SOURCE_OPTION_VALUE_NOT_VALID

DATA_SOURCE_TABLE_SCHEMA_MISMATCH​

SQLSTATE: 42K03

The schema of the data source table does not match the expected schema. If you are using the DataFrameReader.schema API or creating a table, avoid specifying the schema.

Data Source schema: <dsSchema>

Expected schema: <expectedSchema>

DATA_SOURCE_URL_NOT_ALLOWED​

SQLSTATE: 42KDB

JDBC URL is not allowed in data source options, please specify 'host', 'port', and 'database' options instead.

DATETIME_FIELD_OUT_OF_BOUNDS​

SQLSTATE: 22023

<rangeMessage>.

For more details see DATETIME_FIELD_OUT_OF_BOUNDS

DATETIME_OVERFLOW​

SQLSTATE: 22008

Datetime operation overflow: <operation>.

DC_API_QUOTA_EXCEEDED​

SQLSTATE: KD000

You have exceeded the API quota for the data source <sourceName>.

For more details see DC_API_QUOTA_EXCEEDED

DC_CONNECTION_ERROR​

SQLSTATE: KD000

Failed to make a connection to the <sourceName> source. Error code: <errorCode>.

For more details see DC_CONNECTION_ERROR

DC_DYNAMICS_API_ERROR​

SQLSTATE: KD000

Error happened in Dynamics API calls, errorCode: <errorCode>.

For more details see DC_DYNAMICS_API_ERROR

DC_NETSUITE_ERROR​

SQLSTATE: KD000

Error happened in Netsuite JDBC calls, errorCode: <errorCode>.

For more details see DC_NETSUITE_ERROR

DC_SCHEMA_CHANGE_ERROR​

SQLSTATE: none assigned

A schema change has occurred in table <tableName> of the <sourceName> source.

For more details see DC_SCHEMA_CHANGE_ERROR

DC_SERVICENOW_API_ERROR​

SQLSTATE: KD000

Error happened in ServiceNow API calls, errorCode: <errorCode>.

For more details see DC_SERVICENOW_API_ERROR

DC_SFDC_BULK_QUERY_JOB_INCOMPLETE​

SQLSTATE: KD000

Ingestion for object <objName> is incomplete because the Salesforce API query job took too long, failed, or was manually cancelled.

To try again, you can either re-run the entire pipeline or refresh this specific destination table. If the error persists, file a ticket. Job ID: <jobId>. Job status: <jobStatus>.

DC_SHAREPOINT_API_ERROR

SQLSTATE: KD000

Error happened in Sharepoint API calls, errorCode: <errorCode>.

For more details see DC_SHAREPOINT_API_ERROR

DC_SOURCE_API_ERROR​

SQLSTATE: KD000

An error occurred in the <sourceName> API call. Source API type: <apiType>. Error code: <errorCode>.

This can sometimes happen when you've reached a <sourceName> API limit. If you haven't exceeded your API limit, try re-running the connector. If the issue persists, please file a ticket.

DC_UNSUPPORTED_ERROR​

SQLSTATE: 0A000

Unsupported error happened in data source <sourceName>.

For more details see DC_UNSUPPORTED_ERROR

DC_WORKDAY_RAAS_API_ERROR​

SQLSTATE: KD000

Error happened in Workday RAAS API calls, errorCode: <errorCode>.

For more details see DC_WORKDAY_RAAS_API_ERROR

DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION​

SQLSTATE: 22003

Decimal precision <precision> exceeds max precision <maxPrecision>.

DEFAULT_DATABASE_NOT_EXISTS​

SQLSTATE: 42704

Default database <defaultDatabase> does not exist, please create it first or change default database to <defaultDatabase>.

DEFAULT_FILE_NOT_FOUND​

SQLSTATE: 42K03

It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If disk cache is stale or the underlying files have been removed, you can invalidate disk cache manually by restarting the cluster.
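A minimal sketch of the suggested remediation (the table name is hypothetical):

```python
# Invalidate cached file listings for the table, then re-run the query.
spark.sql("REFRESH TABLE main.default.events")
```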

DEFAULT_PLACEMENT_INVALID​

SQLSTATE: 42608

A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression.

For example: UPDATE SET c1 = DEFAULT is allowed, but UPDATE T SET c1 = DEFAULT + 1 is not allowed.

DEFAULT_UNSUPPORTED​

SQLSTATE: 42623

Failed to execute <statementType> command because DEFAULT values are not supported for target data source with table provider: "<dataSource>".

DESCRIBE_JSON_NOT_EXTENDED​

SQLSTATE: 0A000

DESCRIBE TABLE ... AS JSON only supported when [EXTENDED|FORMATTED] is specified.

For example: DESCRIBE EXTENDED <tableName> AS JSON is supported but DESCRIBE <tableName> AS JSON is not.

DIFFERENT_DELTA_TABLE_READ_BY_STREAMING_SOURCE​

SQLSTATE: 55019

The streaming query was reading from an unexpected Delta table (id = '<newTableId>').

It used to read from another Delta table (id = '<oldTableId>') according to checkpoint.

This may happen when you changed the code to read from a new table or you deleted and

re-created a table. Please revert your change or delete your streaming query checkpoint

to restart from scratch.

DISTINCT_WINDOW_FUNCTION_UNSUPPORTED​

SQLSTATE: 0A000

Distinct window functions are not supported: <windowExpr>.

DIVIDE_BY_ZERO​

SQLSTATE: 22012

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead. If necessary set <config> to "false" to bypass this error.

For more details see DIVIDE_BY_ZERO
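A minimal sketch of the try_divide alternative:

```python
# try_divide returns NULL for a zero divisor instead of raising DIVIDE_BY_ZERO.
spark.sql("SELECT try_divide(10, 0) AS quotient").show()
```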

DLT_EXPECTATIONS_NOT_SUPPORTED​

SQLSTATE: 56038

Expectations are only supported within Lakeflow Declarative Pipelines.

DLT_VIEW_CLUSTER_BY_NOT_SUPPORTED​

SQLSTATE: 56038

MATERIALIZED VIEWs with a CLUSTER BY clause are supported only in Lakeflow Declarative Pipelines.

DLT_VIEW_LOCATION_NOT_SUPPORTED​

SQLSTATE: 56038

Materialized view locations are supported only in Lakeflow Declarative Pipelines.

DLT_VIEW_SCHEMA_WITH_TYPE_NOT_SUPPORTED​

SQLSTATE: 56038

Materialized view schemas with a specified type are supported only in Lakeflow Declarative Pipelines.

DLT_VIEW_TABLE_CONSTRAINTS_NOT_SUPPORTED​

SQLSTATE: 56038

CONSTRAINT clauses in a view are only supported in Lakeflow Declarative Pipelines.

DROP_SCHEDULE_DOES_NOT_EXIST​

SQLSTATE: 42000

Cannot drop SCHEDULE on a table without an existing schedule or trigger.

DUPLICATED_CTE_NAMES​

SQLSTATE: 42602

CTE definition can't have duplicate names: <duplicateNames>.

DUPLICATED_FIELD_NAME_IN_ARROW_STRUCT​

SQLSTATE: 42713

Duplicated field names in Arrow Struct are not allowed, got <fieldNames>.

DUPLICATED_MAP_KEY​

SQLSTATE: 23505

Duplicate map key <key> was found, please check the input data.

If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to "LAST_WIN" so that the key inserted at last takes precedence.
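In open-source Spark the <mapKeyDedupPolicy> placeholder resolves to the spark.sql.mapKeyDedupPolicy configuration; a minimal sketch:

```python
# LAST_WIN keeps the value of the key inserted last: map(1,'a', 1,'b') -> {1 -> 'b'}
spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WIN")
spark.sql("SELECT map(1, 'a', 1, 'b') AS m").show()
```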

DUPLICATED_METRICS_NAME​

SQLSTATE: 42710

The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results.

However, multiple instances of metrics with the same result and name are allowed (e.g. self-joins).

DUPLICATE_ASSIGNMENTS​

SQLSTATE: 42701

The columns or variables <nameList> appear more than once as assignment targets.

DUPLICATE_CLAUSES​

SQLSTATE: 42614

Found duplicate clauses: <clauseName>. Please, remove one of them.

DUPLICATE_CONDITION_IN_SCOPE​

SQLSTATE: 42734

Found duplicate condition <condition> in the scope. Please, remove one of them.

DUPLICATE_EXCEPTION_HANDLER​

SQLSTATE: 42734

Found duplicate handlers. Please, remove one of them.

For more details see DUPLICATE_EXCEPTION_HANDLER

DUPLICATE_KEY​

SQLSTATE: 23505

Found duplicate keys <keyColumn>.

DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT​

SQLSTATE: 4274K

Call to routine <routineName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.

For more details see DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT

DUPLICATE_ROUTINE_PARAMETER_NAMES​

SQLSTATE: 42734

Found duplicate name(s) in the parameter list of the user-defined routine <routineName>: <names>.

DUPLICATE_ROUTINE_RETURNS_COLUMNS​

SQLSTATE: 42711

Found duplicate column(s) in the RETURNS clause column list of the user-defined routine <routineName>: <columns>.

EMITTING_ROWS_OLDER_THAN_WATERMARK_NOT_ALLOWED​

SQLSTATE: 42815

Previous node emitted a row with eventTime=<emittedRowEventTime>, which is older than current_watermark_value=<currentWatermark>.

This can lead to correctness issues in the stateful operators downstream in the execution pipeline.

Please correct the operator logic to emit rows after the current global watermark value.

EMPTY_JSON_FIELD_VALUE​

SQLSTATE: 42604

Failed to parse an empty string for data type <dataType>.

EMPTY_LOCAL_FILE_IN_STAGING_ACCESS_QUERY​

SQLSTATE: 22023

Empty local file in staging <operation> query.

EMPTY_SCHEMA_NOT_SUPPORTED_FOR_DATASOURCE​

SQLSTATE: 0A000

The <format> datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one or more column(s).

ENCODER_NOT_FOUND​

SQLSTATE: 42704

Could not find an encoder of the type <typeName> for the Spark SQL internal representation.

Consider changing the input type to one of the types supported at '<docroot>/sql-ref-datatypes.html'.

END_LABEL_WITHOUT_BEGIN_LABEL​

SQLSTATE: 42K0L

End label <endLabel> cannot exist without a begin label.

END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_LATEST_WITH_TRIGGER_AVAILABLENOW​

SQLSTATE: KD000

Some partitions in Kafka topic(s) report an available offset which is less than the end offset while running a query with Trigger.AvailableNow. The error could be transient - restart your query, and report if you still see the same issue.

latest offset: <latestOffset>, end offset: <endOffset>

END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_PREFETCHED​

SQLSTATE: KD000

For a Kafka data source with Trigger.AvailableNow, the end offset should be lower than or equal to the pre-fetched offset for each topic partition. The error could be transient - restart your query, and report if you still see the same issue.

pre-fetched offset: <prefetchedOffset>, end offset: <endOffset>.

ERROR_READING_AVRO_UNKNOWN_FINGERPRINT​

SQLSTATE: KD00B

Error reading Avro data: encountered an unknown fingerprint <fingerprint>; not sure what schema to use.

This could happen if you registered additional schemas after starting your Spark context.

EVENT_LOG_EMPTY​

SQLSTATE: 55019

The event log for <tableOrPipeline> has no schema and contains no events. Try again later after events are generated.

EVENT_LOG_REQUIRES_SHARED_COMPUTE

SQLSTATE: 42601

Cannot query event logs from an Assigned or No Isolation Shared cluster; please use a Shared cluster or a Databricks SQL warehouse instead.

EVENT_LOG_TVF_UNSUPPORTED_FOR_PIPELINE​

SQLSTATE: 0A000

The EVENT_LOG Table-Valued Function is not supported for pipelines using the 'schema' field or pipelines publishing to default storage.

To query the event log, publish it to the metastore by specifying the event_log field in the pipeline settings.

For more information, see the Monitor Lakeflow Declarative Pipelines documentation: https://docs.databricks.com/aws/en/delta-live-tables/observability.

EVENT_LOG_UNAVAILABLE​

SQLSTATE: 55019

No event logs available for <tableOrPipeline>. Please try again later after events are generated.

EVENT_LOG_UNSUPPORTED_TABLE_TYPE​

SQLSTATE: 42832

The table type of <tableIdentifier> is <tableType>.

Querying event logs only supports materialized views, streaming tables, or Lakeflow Declarative Pipelines.

EVENT_TIME_IS_NOT_ON_TIMESTAMP_TYPE​

SQLSTATE: 42K09

The event time <eventName> has the invalid type <eventType>, but expected "TIMESTAMP".

EXCEED_LIMIT_LENGTH​

SQLSTATE: 54006

Exceeds char/varchar type length limitation: <limit>.

EXCEL_INVALID_WRITE_OPTION_VALUE​

SQLSTATE: 0A000

The Excel data source does not support the value '<value>' for the write option '<option>'.

For more details see EXCEL_INVALID_WRITE_OPTION_VALUE

EXCEL_INVALID_WRITE_SCHEMA​

SQLSTATE: 42000

The Excel data source does not support the schema '<schema>' for writes.

<hint>

EXCEL_UNSUPPORTED_WRITE_OPTION​

SQLSTATE: 42616

The Excel data source does not support the write option '<option>'.

<hint>

EXCEPT_NESTED_COLUMN_INVALID_TYPE​

SQLSTATE: 428H2

EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.

EXCEPT_OVERLAPPING_COLUMNS​

SQLSTATE: 42702

Columns in an EXCEPT list must be distinct and non-overlapping, but got (<columns>).

EXCEPT_RESOLVED_COLUMNS_WITHOUT_MATCH​

SQLSTATE: 42703

EXCEPT columns [<exceptColumns>] were resolved, but do not match any of the columns [<expandedColumns>] from the star expansion.

EXCEPT_UNRESOLVED_COLUMN_IN_STRUCT_EXPANSION​

SQLSTATE: 42703

The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?

Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.

EXECUTOR_BROADCAST_JOIN_OOM​

SQLSTATE: 53200

There is not enough memory to build the broadcast relation <relationClassName>. Relation Size = <relationSize>. Total memory used by this task = <taskMemoryUsage>. Executor Memory Manager Metrics: onHeapExecutionMemoryUsed = <onHeapExecutionMemoryUsed>, offHeapExecutionMemoryUsed = <offHeapExecutionMemoryUsed>, onHeapStorageMemoryUsed = <onHeapStorageMemoryUsed>, offHeapStorageMemoryUsed = <offHeapStorageMemoryUsed>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using 'set spark.sql.autoBroadcastJoinThreshold=-1' or using join hint to force shuffle join.

EXECUTOR_BROADCAST_JOIN_STORE_OOM​

SQLSTATE: 53200

There is not enough memory to store the broadcast relation <relationClassName>. Relation Size = <relationSize>. StorageLevel = <storageLevel>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using 'set spark.sql.autoBroadcastJoinThreshold=-1' or using join hint to force shuffle join.

EXEC_IMMEDIATE_DUPLICATE_ARGUMENT_ALIASES​

SQLSTATE: 42701

The USING clause of this EXECUTE IMMEDIATE command contained multiple arguments with the same alias (<aliases>), which is invalid; please update the command to specify unique aliases and then try again.
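A sketch of a valid invocation where each USING argument carries a distinct alias:

```python
# Each parameter marker (:lo, :hi) is bound by a uniquely aliased argument.
spark.sql("""
    EXECUTE IMMEDIATE
      'SELECT :lo AS lower_bound, :hi AS upper_bound'
      USING 1 AS lo, 10 AS hi
""").show()
```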

EXPECT_PERMANENT_TABLE_NOT_TEMP​

SQLSTATE: 42809

'<operation>' expects a permanent table but <tableName> is a temporary table. Please specify a permanent table instead.

EXPECT_PERMANENT_VIEW_NOT_TEMP​

SQLSTATE: 42809

'<operation>' expects a permanent view but <viewName> is a temp view.

EXPECT_TABLE_NOT_VIEW​

SQLSTATE: 42809

'<operation>' expects a table but <viewName> is a view.

For more details see EXPECT_TABLE_NOT_VIEW

EXPECT_VIEW_NOT_TABLE​

SQLSTATE: 42809

The table <tableName> does not support <operation>.

For more details see EXPECT_VIEW_NOT_TABLE

EXPRESSION_DECODING_FAILED​

SQLSTATE: 42846

Failed to decode a row to a value of the expressions: <expressions>.

EXPRESSION_ENCODING_FAILED​

SQLSTATE: 42846

Failed to encode a value of the expressions: <expressions> to a row.

EXPRESSION_TYPE_IS_NOT_ORDERABLE​

SQLSTATE: 42822

Column expression <expr> cannot be sorted because its type <exprType> is not orderable.

EXTERNAL_SHALLOW_CLONE_STILL_EXISTS​

SQLSTATE: 42893

Failed to run the operation on the source table <sourceTable> because the shallow clone <targetTable> still exists and its status is invalid. If you indeed want to proceed with this operation, please clean up the shallow clone by explicitly running the DROP command.

EXTERNAL_TABLE_INVALID_SCHEME​

SQLSTATE: 0A000

External tables don't support the <scheme> scheme.

FABRIC_REFRESH_INVALID_SCOPE​

SQLSTATE: 0A000

Error running 'REFRESH FOREIGN <scope> <name>'. Cannot refresh a Fabric <scope> directly, please use 'REFRESH FOREIGN CATALOG <catalogName>' to refresh the Fabric Catalog instead.

FAILED_EXECUTE_UDF​

SQLSTATE: 39000

User defined function (<functionName>: (<signature>) => <result>) failed due to: <reason>.

FAILED_FUNCTION_CALL​

SQLSTATE: 38000

Failed to prepare the function <funcName> for the call. Please double-check the function's arguments.

FAILED_JDBC​

SQLSTATE: HV000

Failed JDBC <url> on the operation:

For more details see FAILED_JDBC

FAILED_PARSE_STRUCT_TYPE​

SQLSTATE: 22018

Failed parsing struct: <raw>.

FAILED_READ_FILE​

SQLSTATE: KD001

Error while reading file <path>.

For more details see FAILED_READ_FILE

FAILED_REGISTER_CLASS_WITH_KRYO​

SQLSTATE: KD000

Failed to register classes with Kryo.

FAILED_RENAME_PATH​

SQLSTATE: 42K04

Failed to rename <sourcePath> to <targetPath> as destination already exists.

FAILED_RENAME_TEMP_FILE​

SQLSTATE: 58030

Failed to rename temp file <srcPath> to <dstPath> as FileSystem.rename returned false.

FAILED_ROW_TO_JSON​

SQLSTATE: 2203G

Failed to convert the row value <value> of the class <class> to the target SQL type <sqlType> in the JSON format.

FAILED_TO_LOAD_ROUTINE​

SQLSTATE: 38000

Failed to load routine <routineName>.

FAILED_TO_PARSE_TOO_COMPLEX​

SQLSTATE: 54001

The statement, including potential SQL functions and referenced views, was too complex to parse.

To mitigate this error, divide the statement into multiple, less complex chunks.

FEATURE_NOT_ENABLED​

SQLSTATE: 56038

The feature <featureName> is not enabled. Consider setting the config <configKey> to <configValue> to enable this capability.

FEATURE_NOT_ON_CLASSIC_WAREHOUSE​

SQLSTATE: 56038

<feature> is not supported on Classic SQL warehouses. To use this feature, use a Pro or Serverless SQL warehouse.

FEATURE_REQUIRES_UC​

SQLSTATE: 0AKUD

<feature> is not supported without Unity Catalog. To use this feature, enable Unity Catalog.

FEATURE_UNAVAILABLE​

SQLSTATE: 56038

<feature> is not supported in your environment. To use this feature, please contact Databricks Support.

FGAC_ON_DEDICATED_COMPUTE_FAILED​

SQLSTATE: KD011

Fine-grained access control (FGAC) on dedicated compute failed due to the following exception: <message>

FIELD_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot <op> column because <fieldNames> already exists in <struct>.

FIELD_NOT_FOUND​

SQLSTATE: 42704

No such struct field <fieldName> in <fields>.

FILE_IN_STAGING_PATH_ALREADY_EXISTS​

SQLSTATE: 42K04

File in staging path <path> already exists, but OVERWRITE is not set.

FLATMAPGROUPSWITHSTATE_USER_FUNCTION_ERROR​

SQLSTATE: 39000

An error occurred in the user provided function in flatMapGroupsWithState. Reason: <reason>

FORBIDDEN_DATASOURCE_IN_SERVERLESS​

SQLSTATE: 0A000

Querying the data source <source> in serverless compute is not allowed. Only <allowlist> data sources are supported in serverless compute.

FORBIDDEN_KEYWORD_IN_JDBC_QUERY​

SQLSTATE: 42000

The query option <queryOption> cannot contain forbidden keywords. Please remove the following keywords from the query: <keywords>

FORBIDDEN_OPERATION​

SQLSTATE: 42809

The operation <statement> is not allowed on the <objectType>: <objectName>.

FOREACH_BATCH_USER_FUNCTION_ERROR​

SQLSTATE: 39000

An error occurred in the user provided function in foreach batch sink. Reason: <reason>

FOREACH_USER_FUNCTION_ERROR​

SQLSTATE: 39000

An error occurred in the user provided function in foreach sink. Reason: <reason>

FOREIGN_KEY_MISMATCH​

SQLSTATE: 42830

Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.

FOREIGN_OBJECT_NAME_CANNOT_BE_EMPTY​

SQLSTATE: 42000

Cannot execute this command because the foreign <objectType> name must be non-empty.

FOREIGN_TABLE_CONVERSION_UNSUPPORTED​

SQLSTATE: 0AKUC

Table is not eligible for upgrade from UC Foreign to UC External. Reason:

For more details see FOREIGN_TABLE_CONVERSION_UNSUPPORTED

FOUND_MULTIPLE_DATA_SOURCES​

SQLSTATE: 42710

Detected multiple data sources with the name '<provider>'. Please check that the data source isn't simultaneously registered and located in the classpath.

FROM_JSON_CONFLICTING_SCHEMA_UPDATES​

SQLSTATE: 42601

from_json inference encountered conflicting schema updates at: <location>

FROM_JSON_CORRUPT_RECORD_COLUMN_IN_SCHEMA​

SQLSTATE: 42601

from_json found columnNameOfCorruptRecord (<columnNameOfCorruptRecord>) present in a JSON object and can no longer proceed. Please configure a different value for the option 'columnNameOfCorruptRecord'.

FROM_JSON_CORRUPT_SCHEMA​

SQLSTATE: 42601

from_json inference could not read the schema stored at: <location>

FROM_JSON_INFERENCE_FAILED​

SQLSTATE: 42601

from_json was unable to infer the schema. Please provide one instead.

FROM_JSON_INFERENCE_NOT_SUPPORTED​

SQLSTATE: 0A000

from_json inference is only supported when defining streaming tables

FROM_JSON_INVALID_CONFIGURATION​

SQLSTATE: 42601

from_json configuration is invalid:

For more details see FROM_JSON_INVALID_CONFIGURATION

FROM_JSON_SCHEMA_EVOLUTION_FAILED​

SQLSTATE: 22KD3

from_json could not evolve from <old> to <new>

FUNCTION_PARAMETERS_MUST_BE_NAMED​

SQLSTATE: 07001

The function <function> requires named parameters. Parameters missing names: <exprs>. Please update the function call to add names for all parameters, e.g., <function>(param_name => ...).
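
For illustration, the named-argument syntax the message asks for looks like this; the built-in mask function is one function that accepts it (the argument values here are arbitrary):

SELECT mask('AbCD123-@$#', lowerChar => 'q', upperChar => 'Q', digitChar => 'd');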

GENERATED_COLUMN_WITH_DEFAULT_VALUE​

SQLSTATE: 42623

A column cannot have both a default value and a generation expression but column <colName> has default value: (<defaultValue>) and generation expression: (<genExpr>).

GET_TABLES_BY_TYPE_UNSUPPORTED_BY_HIVE_VERSION​

SQLSTATE: 56038

Hive 2.2 and lower versions don't support getTablesByType. Please use Hive 2.3 or higher version.

GET_WARMUP_TRACING_FAILED​

SQLSTATE: 42601

Failed to get warmup tracing. Cause: <cause>.

GET_WARMUP_TRACING_FUNCTION_NOT_ALLOWED​

SQLSTATE: 42601

Function get_warmup_tracing() not allowed.

GRAPHITE_SINK_INVALID_PROTOCOL​

SQLSTATE: KD000

Invalid Graphite protocol: <protocol>.

GRAPHITE_SINK_PROPERTY_MISSING​

SQLSTATE: KD000

Graphite sink requires '<property>' property.

GROUPING_COLUMN_MISMATCH​

SQLSTATE: 42803

The grouping column (<grouping>) can't be found in the grouping columns <groupingColumns>.

GROUPING_ID_COLUMN_MISMATCH​

SQLSTATE: 42803

The columns of grouping_id (<groupingIdColumn>) do not match the grouping columns (<groupByColumns>).

GROUPING_SIZE_LIMIT_EXCEEDED​

SQLSTATE: 54000

Grouping sets size cannot be greater than <maxSize>.

GROUP_BY_AGGREGATE​

SQLSTATE: 42903

Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.

For more details see GROUP_BY_AGGREGATE

GROUP_BY_POS_AGGREGATE​

SQLSTATE: 42903

GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.

GROUP_BY_POS_OUT_OF_RANGE​

SQLSTATE: 42805

GROUP BY position <index> is not in select list (valid range is [1, <size>]).

GROUP_EXPRESSION_TYPE_IS_NOT_ORDERABLE​

SQLSTATE: 42822

The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.

HDFS_HTTP_ERROR​

SQLSTATE: KD00F

When attempting to read from HDFS, the HTTP request failed.

For more details see HDFS_HTTP_ERROR

HINT_UNSUPPORTED_FOR_JDBC_DIALECT​

SQLSTATE: 42822

The option hint is not supported for <jdbcDialect> in JDBC data source. Supported dialects are MySQLDialect, OracleDialect and DatabricksDialect.

HIVE_METASTORE_INVALID_PLACEHOLDER_PATH​

SQLSTATE: 42K06

The query or command failed to execute because the 'spark.databricks.hive.metastore.tablePlaceholderPath' configuration provided an invalid Hive Metastore table placeholder path. Please update this configuration with a valid path and then re-run the query or command.

HIVE_METASTORE_TABLE_PLACEHOLDER_PATH_NOT_SET​

SQLSTATE: 42000

The query or command failed because the Hive Metastore table placeholder path is not set, which is required when the schema location is on DBFS and the table location is an object/file. Please set spark.databricks.hive.metastore.tablePlaceholderPath to a path you have access to and then re-run the query or command.

HLL_INVALID_INPUT_SKETCH_BUFFER​

SQLSTATE: 22546

Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).

HLL_INVALID_LG_K​

SQLSTATE: 22546

Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.

HLL_UNION_DIFFERENT_LG_K​

SQLSTATE: 22000

Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
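
A minimal sketch of the suggested fix, assuming a hypothetical table sketches with two HLL sketch columns s1 and s2:

-- The third argument sets allowDifferentLgConfigK to true so the union
-- tolerates mismatched lgConfigK values:
SELECT hll_sketch_estimate(hll_union(s1, s2, true)) FROM sketches;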

HMS_FEDERATION_SHALLOW_CLONE_NOT_FOUND_IN_UC​

SQLSTATE: 22KD1

The shallow clone path '<path>' could not be resolved to a table in Unity Catalog. Please ensure the table exists and is federated to Unity Catalog.

HYBRID_ANALYZER_EXCEPTION​

SQLSTATE: 0A000

A failure occurred when attempting to resolve a query or command with both the legacy fixed-point analyzer and the single-pass resolver.

For more details see HYBRID_ANALYZER_EXCEPTION

IDENTIFIER_TOO_MANY_NAME_PARTS​

SQLSTATE: 42601

<identifier> is not a valid identifier as it has more than 2 name parts.

IDENTITY_COLUMNS_DUPLICATED_SEQUENCE_GENERATOR_OPTION​

SQLSTATE: 42601

Duplicated IDENTITY column sequence generator option: <sequenceGeneratorOption>.

IDENTITY_COLUMNS_ILLEGAL_STEP​

SQLSTATE: 42611

IDENTITY column step cannot be 0.

IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE​

SQLSTATE: 428H2

DataType <dataType> is not supported for IDENTITY columns.

ILLEGAL_DAY_OF_WEEK​

SQLSTATE: 22009

Illegal input for day of week: <string>.

ILLEGAL_STATE_STORE_VALUE​

SQLSTATE: 42601

Illegal value provided to the State Store.

For more details see ILLEGAL_STATE_STORE_VALUE

INAPPROPRIATE_URI_SCHEME_OF_CONNECTION_OPTION​

SQLSTATE: 42616

Connection can't be created due to inappropriate scheme of URI <uri> provided for the connection option '<option>'.

Allowed scheme(s): <allowedSchemes>.

Please add a scheme if it is not present in the URI, or specify a scheme from the allowed values.

INCOMPARABLE_PIVOT_COLUMN​

SQLSTATE: 42818

Invalid pivot column <columnName>. Pivot columns must be comparable.

INCOMPATIBLE_COLUMN_TYPE​

SQLSTATE: 42825

<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is <dataType1> type which is not compatible with <dataType2> at the same column of the first table.<hint>.

INCOMPATIBLE_DATASOURCE_REGISTER​

SQLSTATE: 56038

Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: <message>

INCOMPATIBLE_DATA_FOR_TABLE​

SQLSTATE: KD000

Cannot write incompatible data for the table <tableName>:

For more details see INCOMPATIBLE_DATA_FOR_TABLE

INCOMPATIBLE_JOIN_TYPES​

SQLSTATE: 42613

The join types <joinType1> and <joinType2> are incompatible.

INCOMPATIBLE_VIEW_SCHEMA_CHANGE​

SQLSTATE: 51024

The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>.

Please try to re-create the view by running: <suggestion>.

INCOMPLETE_TYPE_DEFINITION​

SQLSTATE: 42K01

Incomplete complex type:

For more details see INCOMPLETE_TYPE_DEFINITION

INCONSISTENT_BEHAVIOR_CROSS_VERSION​

SQLSTATE: 42K0B

You may get a different result due to the upgrading to

For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION

INCORRECT_NUMBER_OF_ARGUMENTS​

SQLSTATE: 42605

<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.

INCORRECT_RAMP_UP_RATE​

SQLSTATE: 22003

Max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but 'rampUpTimeSeconds' is <rampUpTimeSeconds>.

INDETERMINATE_COLLATION​

SQLSTATE: 42P22

Could not determine which collation to use for string operation. Use COLLATE clause to set the collation explicitly.
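
A minimal sketch of the suggested fix (t, col1, and col2 are hypothetical; UTF8_BINARY is one available collation):

SELECT * FROM t WHERE col1 = col2 COLLATE UTF8_BINARY;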

INDETERMINATE_COLLATION_IN_EXPRESSION​

SQLSTATE: 42P22

Data type of <expr> has indeterminate collation. Use COLLATE clause to set the collation explicitly.

INDETERMINATE_COLLATION_IN_SCHEMA​

SQLSTATE: 42P22

Schema contains indeterminate collation at: [<columnPaths>]. Use COLLATE clause to set the collation explicitly.

INDEX_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot create the index <indexName> on table <tableName> because it already exists.

INDEX_NOT_FOUND​

SQLSTATE: 42704

Cannot find the index <indexName> on table <tableName>.

INFINITE_STREAMING_TRIGGER_NOT_SUPPORTED​

SQLSTATE: 0A000

Trigger type <trigger> is not supported for this cluster type.

Use a different trigger type, e.g. AvailableNow or Once.

INSERT_COLUMN_ARITY_MISMATCH​

SQLSTATE: 21S01

Cannot write to <tableName>; the reason is:

For more details see INSERT_COLUMN_ARITY_MISMATCH

INSERT_PARTITION_COLUMN_ARITY_MISMATCH​

SQLSTATE: 21S01

Cannot write to '<tableName>', <reason>:

Table columns: <tableColumns>.

Partition columns with static values: <staticPartCols>.

Data columns: <dataColumns>.

INSERT_REPLACE_USING_INVALID_SET_OF_COLUMNS​

SQLSTATE: 42000

Table must be partitioned and all specified columns must represent the full set of the partition columns of the table.

The following columns are not partition columns: <nonPartitionColumns>

The following partition columns are missing: <missingPartitionsColumns>

INSERT_REPLACE_USING_NOT_ENABLED​

SQLSTATE: 0A000

Please contact your Databricks representative to enable the INSERT INTO ... REPLACE USING (...) feature.

INSUFFICIENT_PERMISSIONS​

SQLSTATE: 42501

Insufficient privileges:

<report>

INSUFFICIENT_PERMISSIONS_EXT_LOC​

SQLSTATE: 42501

User <user> has insufficient privileges for external location <location>.

INSUFFICIENT_PERMISSIONS_NO_OWNER​

SQLSTATE: 42501

There is no owner for <securableName>. Ask your administrator to set an owner.

INSUFFICIENT_PERMISSIONS_OWNERSHIP_SECURABLE​

SQLSTATE: 42501

User does not own <securableName>.

INSUFFICIENT_PERMISSIONS_SECURABLE​

SQLSTATE: 42501

User does not have permission <action> on <securableName>.

INSUFFICIENT_PERMISSIONS_SECURABLE_PARENT_OWNER​

SQLSTATE: 42501

The owner of <securableName> is different from the owner of <parentSecurableName>.

INSUFFICIENT_PERMISSIONS_SPARK_CONNECT_CLIENT_SET_CLOUDFETCH_RETENTION_TIMEOUT​

SQLSTATE: 42501

Client does not have permission to set a custom retention timeout for CloudFetch results.

INSUFFICIENT_PERMISSIONS_STORAGE_CRED​

SQLSTATE: 42501

Storage credential <credentialName> has insufficient privileges.

INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES​

SQLSTATE: 42501

User cannot <action> on <securableName> because of permissions on underlying securables.

INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES_VERBOSE​

SQLSTATE: 42501

User cannot <action> on <securableName> because of permissions on underlying securables:

<underlyingReport>

INTERVAL_ARITHMETIC_OVERFLOW​

SQLSTATE: 22015

Integer overflow while operating with intervals.

For more details see INTERVAL_ARITHMETIC_OVERFLOW

INTERVAL_DIVIDED_BY_ZERO​

SQLSTATE: 22012

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead.
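
For example, with an interval dividend:

SELECT try_divide(INTERVAL '2' YEAR, 0);  -- returns NULL instead of raising this error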

INVALID_AGGREGATE_FILTER​

SQLSTATE: 42903

The FILTER expression <filterExpr> in an aggregate function is invalid.

For more details see INVALID_AGGREGATE_FILTER

INVALID_AGNOSTIC_ENCODER​

SQLSTATE: 42001

Found an invalid agnostic encoder. Expects an instance of AgnosticEncoder but got <encoderType>. For more information consult '<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html'.

INVALID_ALGORITHM_VALUE​

SQLSTATE: 22023

Invalid or unsupported edge interpolation algorithm value <alg>.

INVALID_ARRAY_INDEX​

SQLSTATE: 22003

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing an element at an invalid index and return NULL instead.

For more details see INVALID_ARRAY_INDEX
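
For example, get() uses 0-based indexing and returns NULL for an out-of-bounds index:

SELECT get(array(1, 2, 3), 5);  -- returns NULL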

INVALID_ARRAY_INDEX_IN_ELEMENT_AT​

SQLSTATE: 22003

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing an element at an invalid index and return NULL instead.

For more details see INVALID_ARRAY_INDEX_IN_ELEMENT_AT
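
For example, try_element_at uses 1-based indexing and returns NULL for an out-of-bounds index:

SELECT try_element_at(array(1, 2, 3), 5);  -- returns NULL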

INVALID_ATTRIBUTE_NAME_SYNTAX​

SQLSTATE: 42601

Syntax error in the attribute name: <name>. Check that backticks appear in pairs, a quoted string is a complete name part and use a backtick only inside quoted name parts.

INVALID_AWS_AUTHENTICATION​

SQLSTATE: 42000

Please choose exactly one of the following authentication methods:

INVALID_AWS_AUTHENTICATION_EXPLICIT_OPTIONS​

SQLSTATE: 42000

Please provide either the name of your Databricks service credential (<serviceCredential>), OR both <awsAccessKey> and <awsSecretKey>.

INVALID_BITMAP_POSITION​

SQLSTATE: 22003

The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).

INVALID_BOOLEAN_STATEMENT​

SQLSTATE: 22546

A Boolean statement is expected in the condition, but <invalidStatement> was found.

INVALID_BOUNDARY​

SQLSTATE: 22003

The boundary <boundary> is invalid: <invalidValue>.

For more details see INVALID_BOUNDARY

INVALID_BUCKET_COLUMN_DATA_TYPE​

SQLSTATE: 42601

Cannot use <type> for bucket column. Collated data types are not supported for bucketing.

INVALID_BUCKET_COUNT​

SQLSTATE: 22003

Number of buckets should be greater than 0 but less than or equal to bucketing.maxBuckets (<bucketingMaxBuckets>). Got <numBuckets>.

INVALID_BUCKET_FILE​

SQLSTATE: 58030

Invalid bucket file: <path>.

INVALID_BYTE_STRING​

SQLSTATE: 22P03

The expected format is ByteString, but was <unsupported> (<class>).

INVALID_COLUMN_NAME_AS_PATH​

SQLSTATE: 46121

The datasource <datasource> cannot save the column <columnName> because its name contains some characters that are not allowed in file paths. Please use an alias to rename it.

INVALID_COLUMN_OR_FIELD_DATA_TYPE​

SQLSTATE: 42000

Column or field <name> is of type <type> while it's required to be <expectedType>.

INVALID_CONF_VALUE​

SQLSTATE: 22022

The value '<confValue>' in the config "<confName>" is invalid.

For more details see INVALID_CONF_VALUE

INVALID_CONSTRAINT_CHARACTERISTICS​

SQLSTATE: 42613

Constraint characteristics [<characteristics>] are duplicated or conflict with each other.

INVALID_CORRUPT_RECORD_TYPE​

SQLSTATE: 42804

The column <columnName> for corrupt records must have the nullable STRING type, but got <actualType>.

INVALID_CRS_VALUE​

SQLSTATE: 22023

Invalid or unsupported CRS (coordinate reference system) value <crs>.

INVALID_CURRENT_RECIPIENT_USAGE​

SQLSTATE: 42887

The current_recipient function can only be used in a CREATE VIEW or ALTER VIEW statement to define a share-only view in Unity Catalog.

INVALID_CURSOR​

SQLSTATE: HY109

The cursor is invalid.

For more details see INVALID_CURSOR

INVALID_DATASOURCE_FORMAT_FOR_CONNECTION_OPTIONS_INJECTION​

SQLSTATE: 42000

Connection with name <connectionName> and type <connectionType> does not support the format <actualFormat>. Supported format: <expectedFormat>.

INVALID_DATETIME_PATTERN​

SQLSTATE: 22007

Unrecognized datetime pattern: <pattern>.

For more details see INVALID_DATETIME_PATTERN

INVALID_DEFAULT_VALUE​

SQLSTATE: 42623

Failed to execute <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>,

For more details see INVALID_DEFAULT_VALUE

INVALID_DELIMITER_VALUE​

SQLSTATE: 42602

Invalid value for delimiter.

For more details see INVALID_DELIMITER_VALUE

INVALID_DEST_CATALOG​

SQLSTATE: 42809

Destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.

INVALID_DRIVER_MEMORY​

SQLSTATE: F0000

System memory <systemMemory> must be at least <minSystemMemory>.

Please increase heap size using the --driver-memory option or "<config>" in Spark configuration.

INVALID_DYNAMIC_OPTIONS​

SQLSTATE: 42K10

Options passed <option_list> are forbidden for foreign table <table_name>.

INVALID_EMPTY_LOCATION​

SQLSTATE: 42K05

The location name cannot be an empty string, but <location> was given.

INVALID_ENVIRONMENT_SETTINGS_DEPENDENCIES​

SQLSTATE: 42000

The environment settings dependencies parameter is missing or couldn't be parsed as a list of strings. Expected format: ["dep1", "dep2"]

INVALID_ERROR_CONDITION_DECLARATION​

SQLSTATE: 42K0R

Invalid condition declaration.

For more details see INVALID_ERROR_CONDITION_DECLARATION

INVALID_ESC​

SQLSTATE: 42604

Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.

INVALID_ESCAPE_CHAR​

SQLSTATE: 42604

EscapeChar should be a string literal of length one, but got <sqlExpr>.

INVALID_EXECUTOR_MEMORY​

SQLSTATE: F0000

Executor memory <executorMemory> must be at least <minSystemMemory>.

Please increase executor memory using the --executor-memory option or "<config>" in Spark configuration.

INVALID_EXPRESSION_ENCODER​

SQLSTATE: 42001

Found an invalid expression encoder. Expects an instance of ExpressionEncoder but got <encoderType>. For more information consult '<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html'.

INVALID_EXTERNAL_TYPE​

SQLSTATE: 42K0N

The external type <externalType> is not valid for the type <type> at the expression <expr>.

INVALID_EXTRACT_BASE_FIELD_TYPE​

SQLSTATE: 42000

Can't extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.

INVALID_EXTRACT_FIELD​

SQLSTATE: 42601

Cannot extract <field> from <expr>.

INVALID_EXTRACT_FIELD_TYPE​

SQLSTATE: 42000

Field name should be a non-null string literal, but it's <extraction>.

INVALID_FIELD_NAME​

SQLSTATE: 42000

Field name <fieldName> is invalid: <path> is not a struct.

INVALID_FORMAT​

SQLSTATE: 42601

The format is invalid: <format>.

For more details see INVALID_FORMAT

INVALID_FRACTION_OF_SECOND​

SQLSTATE: 22023

Valid range for seconds is [0, 60] (inclusive), but the provided value is <secAndMicros>. To avoid this error, use try_make_timestamp, which returns NULL on error.

If you do not want to use the session default timestamp version of this function, use try_make_timestamp_ntz or try_make_timestamp_ltz.
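
For example, a seconds value of 60.5 falls outside the valid range:

SELECT try_make_timestamp(2024, 1, 1, 12, 30, 60.5);  -- returns NULL instead of raising this error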

INVALID_GET_DIAGNOSTICS_USAGE​

SQLSTATE: 42612

Invalid usage of GET DIAGNOSTICS statement.

For more details see INVALID_GET_DIAGNOSTICS_USAGE

INVALID_GET_DIAGNOSTICS_USAGE_CONDITION_NUMBER_MUST_BE_ONE​

SQLSTATE: 35000

Invalid usage of GET DIAGNOSTICS statement. The only supported value for a condition number in the GET DIAGNOSTICS statement is 1.

INVALID_HANDLE​

SQLSTATE: HY000

The handle <handle> is invalid.

For more details see INVALID_HANDLE

INVALID_HANDLER_DECLARATION​

SQLSTATE: 42K0Q

Invalid handler declaration.

For more details see INVALID_HANDLER_DECLARATION

INVALID_HTTP_REQUEST_METHOD​

SQLSTATE: 22023

The input parameter: method, value: <paramValue> is not a valid parameter for http_request because it is not a valid HTTP method.

INVALID_HTTP_REQUEST_PATH​

SQLSTATE: 22023

The input parameter: path, value: <paramValue> is not a valid parameter for http_request because path traversal is not allowed.

INVALID_IDENTIFIER​

SQLSTATE: 42602

The unquoted identifier <ident> is invalid and must be back-quoted as: `<ident>`.

Unquoted identifiers can only contain ASCII letters ('a' - 'z', 'A' - 'Z'), digits ('0' - '9'), and the underscore ('_').

Unquoted identifiers must also not start with a digit.

Different data sources and metastores may impose additional restrictions on valid identifiers.
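
For example, an identifier containing a hyphen must be back-quoted (my-table is a hypothetical table name):

SELECT * FROM `my-table`;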

INVALID_INDEX_OF_ZERO​

SQLSTATE: 22003

The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).

INVALID_INLINE_TABLE​

SQLSTATE: 42000

Invalid inline table.

For more details see INVALID_INLINE_TABLE

INVALID_INTERVAL_FORMAT​

SQLSTATE: 22006

Error parsing '<input>' to interval. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format.

For more details see INVALID_INTERVAL_FORMAT

INVALID_INTERVAL_WITH_MICROSECONDS_ADDITION​

SQLSTATE: 22006

Cannot add an interval to a date because its microseconds part is not 0. To resolve this, cast the input date to a timestamp, which supports the addition of intervals with non-zero microseconds.

INVALID_JAVA_IDENTIFIER_AS_FIELD_NAME​

SQLSTATE: 46121

<fieldName> is not a valid Java identifier and cannot be used as a field name <walkedTypePath>.

INVALID_JDBC_CONNECTION_OPTION​

SQLSTATE: 42616

The option <optionKey> is not a valid parameter for this JDBC connection.

INVALID_JDBC_CONNECTION_OPTION_VALUE​

SQLSTATE: 42616

The option <optionKey> with the value <optionValue> is not a valid option for this JDBC connection.

INVALID_JOIN_TYPE_FOR_JOINWITH​

SQLSTATE: 42613

Invalid join type in joinWith: <joinType>.

INVALID_JSON_DATA_TYPE​

SQLSTATE: 2203G

Failed to convert the JSON string '<invalidType>' to a data type. Please enter a valid data type.

INVALID_JSON_DATA_TYPE_FOR_COLLATIONS​

SQLSTATE: 2203G

Collations can only be applied to string types, but the JSON data type is <jsonType>.

INVALID_JSON_RECORD_TYPE​

SQLSTATE: 22023

Detected an invalid type of a JSON record while inferring a common schema in the mode <failFastMode>. Expected a STRUCT type, but found <invalidType>.

INVALID_JSON_ROOT_FIELD​

SQLSTATE: 22032

Cannot convert JSON root field to target Spark type.

INVALID_JSON_SCHEMA_MAP_TYPE​

SQLSTATE: 22032

Input schema <jsonSchema> can only contain STRING as a key type for a MAP.

INVALID_KRYO_SERIALIZER_BUFFER_SIZE​

SQLSTATE: F0000

The value of the config "<bufferSizeConfKey>" must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.

INVALID_LABEL_USAGE​

SQLSTATE: 42K0L

The usage of the label <labelName> is invalid.

For more details see INVALID_LABEL_USAGE

INVALID_LAMBDA_FUNCTION_CALL​

SQLSTATE: 42K0D

Invalid lambda function call.

For more details see INVALID_LAMBDA_FUNCTION_CALL

INVALID_LATERAL_JOIN_TYPE​

SQLSTATE: 42613

The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation or use an INNER JOIN, or LEFT OUTER JOIN instead.

INVALID_LIMIT_LIKE_EXPRESSION​

SQLSTATE: 42K0E

The limit like expression <expr> is invalid.

For more details see INVALID_LIMIT_LIKE_EXPRESSION

INVALID_LOG_VERSION​

SQLSTATE: KD002

UnsupportedLogVersion.

For more details see INVALID_LOG_VERSION

INVALID_NON_ABSOLUTE_PATH​

SQLSTATE: 22KD1

The provided non-absolute path <path> cannot be qualified. Please update the path to be a valid DBFS mount location.

INVALID_NON_DETERMINISTIC_EXPRESSIONS​

SQLSTATE: 42K0E

The operator expects a deterministic expression, but the actual expression is <sqlExprs>.

INVALID_NUMERIC_LITERAL_RANGE​

SQLSTATE: 22003

Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName> with minimum value of <minValue> and maximum value of <maxValue>. Please adjust the value accordingly.

INVALID_OBSERVED_METRICS​

SQLSTATE: 42K0E

Invalid observed metrics.

For more details see INVALID_OBSERVED_METRICS

INVALID_OPTIONS​

SQLSTATE: 42K06

Invalid options:

For more details see INVALID_OPTIONS

INVALID_PANDAS_UDF_PLACEMENT​

SQLSTATE: 0A000

The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.

INVALID_PARAMETER_MARKER_VALUE​

SQLSTATE: 22023

An invalid parameter mapping was provided:

For more details see INVALID_PARAMETER_MARKER_VALUE

INVALID_PARAMETER_VALUE​

SQLSTATE: 22023

The value of parameter(s) <parameter> in <functionName> is invalid:

For more details see INVALID_PARAMETER_VALUE

INVALID_PARTITION_COLUMN_DATA_TYPE​

SQLSTATE: 0A000

Cannot use <type> for partition column.

INVALID_PARTITION_OPERATION​

SQLSTATE: 42601

The partition command is invalid.

For more details see INVALID_PARTITION_OPERATION

INVALID_PARTITION_VALUE​

SQLSTATE: 42846

Failed to cast value <value> to data type <dataType> for partition column <columnName>. Ensure the value matches the expected data type for this partition column.

INVALID_PIPELINE_ID​

SQLSTATE: 42604

Pipeline id <pipelineId> is not valid.

A pipeline id should be a UUID in the format of 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.

INVALID_PRIVILEGE​

SQLSTATE: 42852

Privilege <privilege> is not valid for <securable>.

INVALID_PROPERTY_KEY​

SQLSTATE: 42602

<key> is an invalid property key; please use quotes, e.g. SET `<key>`=<value>.

INVALID_PROPERTY_VALUE​

SQLSTATE: 42602

<value> is an invalid property value; please use quotes, e.g. SET <key>=`<value>`.

INVALID_QUALIFIED_COLUMN_NAME​

SQLSTATE: 42000

The column name <columnName> is invalid because it is not qualified with a table name or consists of more than 4 name parts.

INVALID_QUERY_MIXED_QUERY_PARAMETERS​

SQLSTATE: 42613

A parameterized query must use either positional or named parameters, but not both.

INVALID_RECURSIVE_CTE​

SQLSTATE: 42836

Invalid recursive definition found. Recursive queries must contain a UNION or a UNION ALL statement with 2 children. The first child needs to be the anchor term without any recursive references.

INVALID_RECURSIVE_REFERENCE​

SQLSTATE: 42836

Invalid recursive reference found inside WITH RECURSIVE clause.

For more details see INVALID_RECURSIVE_REFERENCE

INVALID_REGEXP_REPLACE​

SQLSTATE: 22023

Could not perform regexp_replace for source = "<source>", pattern = "<pattern>", replacement = "<replacement>" and position = <position>.

INVALID_RESET_COMMAND_FORMAT​

SQLSTATE: 42000

Expected format is 'RESET' or 'RESET key'. If you want to include special characters in the key, please use quotes, e.g., RESET `key`.

INVALID_RESIGNAL_USAGE​

SQLSTATE: 0K000

RESIGNAL when no handler is active. The RESIGNAL statement can only be used inside an exception handler body.

INVALID_S3_COPY_CREDENTIALS​

SQLSTATE: 42501

COPY INTO credentials must include AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN.

INVALID_SAVE_MODE​

SQLSTATE: 42000

The specified save mode <mode> is invalid. Valid save modes include "append", "overwrite", "ignore", "error", "errorifexists", and "default".

INVALID_SCHEMA​

SQLSTATE: 42K07

The input schema <inputSchema> is not a valid schema string.

For more details see INVALID_SCHEMA

INVALID_SCHEMA_OR_RELATION_NAME​

SQLSTATE: 42602

<name> is not a valid name for tables/schemas. Valid names only contain alphabet characters, numbers and _.

INVALID_SCHEMA_TYPE_NON_STRUCT​

SQLSTATE: 42K09

Invalid schema type. Expect a struct type, but got <dataType>.

INVALID_SCHEME​

SQLSTATE: 0AKUC

Unity Catalog does not support <name> as the default file scheme.

INVALID_SECRET_LOOKUP​

SQLSTATE: 22531

Invalid secret lookup:

For more details see INVALID_SECRET_LOOKUP

INVALID_SET_SYNTAX​

SQLSTATE: 42000

Expected format is 'SET', 'SET key', or 'SET key=value'. If you want to include special characters in the key, or include a semicolon in the value, please use backquotes, e.g., SET `key`=`value`.

INVALID_SHARED_ALIAS_NAME​

SQLSTATE: 42601

The <sharedObjectType> alias name must be of the form "schema.name".

INVALID_SINGLE_VARIANT_COLUMN​

SQLSTATE: 42613

User specified schema <schema> is invalid when the singleVariantColumn option is enabled. The schema must either be a variant field, or a variant field plus a corrupt column field.

INVALID_SOURCE_CATALOG​

SQLSTATE: 42809

Source catalog must not be within Unity Catalog for the SYNC command. Found <catalog>.

INVALID_SOURCE_FOR_FILTERING_SERVICE_MERGE_COMMAND​

SQLSTATE: 42KDH

The source of filtering service MERGE operation can only contain projections and filters.

Please adjust the MERGE command or use a staging table as the source instead.

<stmt>

INVALID_SPARK_CONFIG​

SQLSTATE: 42616

Invalid Spark config:

For more details see INVALID_SPARK_CONFIG

INVALID_SQLSTATE​

SQLSTATE: 428B3

Invalid SQLSTATE value: '<sqlState>'. SQLSTATE must be exactly 5 characters long and contain only A-Z and 0-9. SQLSTATE must not start with '00', '01', or 'XX'.

INVALID_SQL_ARG​

SQLSTATE: 42K08

The argument <name> of sql() is invalid. Consider replacing it with a SQL literal or with collection constructor functions such as map(), array(), struct().

INVALID_SQL_SYNTAX​

SQLSTATE: 42000

Invalid SQL syntax:

For more details see INVALID_SQL_SYNTAX

INVALID_STAGING_PATH_IN_STAGING_ACCESS_QUERY​

SQLSTATE: 42604

Invalid staging path in staging <operation> query: <path>

INVALID_STATEMENT_FOR_EXECUTE_INTO​

SQLSTATE: 07501

The INTO clause of EXECUTE IMMEDIATE is only valid for queries but the given statement is not a query: <sqlString>.

INVALID_STATEMENT_OR_CLAUSE​

SQLSTATE: 42601

The statement or clause: <operation> is not valid.

INVALID_STREAMING_RATE_SOURCE_VERSION​

SQLSTATE: 22023

Invalid version for rate source: <version>. The version must be either 1 or 2.

INVALID_STREAMING_REAL_TIME_MODE_TRIGGER_INTERVAL​

SQLSTATE: 22023

The real-time trigger interval is set to <interval> ms. This is less than the <minBatchDuration> ms minimum specified by spark.databricks.streaming.realTimeMode.minBatchDuration.

INVALID_STREAMING_REAL_TIME_MODE_TRIGGER_OVERRIDE_INTERVAL​

SQLSTATE: 22023

The real-time trigger's checkpoint interval of <interval> could not be parsed. Please verify you have passed a positive integer.

INVALID_SUBQUERY_EXPRESSION​

SQLSTATE: 42823

Invalid subquery:

For more details see INVALID_SUBQUERY_EXPRESSION

INVALID_TARGET_FOR_ALTER_COMMAND​

SQLSTATE: 42809

ALTER <commandTableType> ... <command> does not support <tableName>. Please use ALTER <targetTableType> ... <command> instead.

INVALID_TARGET_FOR_SET_TBLPROPERTIES_COMMAND​

SQLSTATE: 42809

ALTER <commandTableType> ... SET TBLPROPERTIES does not support <tableName>. Please use ALTER <targetTableType> ... SET TBLPROPERTIES instead.

INVALID_TEMP_OBJ_REFERENCE​

SQLSTATE: 42K0F

Cannot create the persistent object <objName> of the type <obj> because it references the temporary object <tempObjName> of the type <tempObj>. Please make the temporary object <tempObjName> persistent, or make the persistent object <objName> temporary.

INVALID_TIMESTAMP_FORMAT​

SQLSTATE: 22000

The provided timestamp <timestamp> doesn't match the expected syntax <format>.

INVALID_TIMEZONE​

SQLSTATE: 22009

The timezone: <timeZone> is invalid. The timezone must be either a region-based zone ID or a zone offset. Region IDs must have the form 'area/city', such as 'America/Los_Angeles'. Zone offsets must be in the format '(+|-)HH', '(+|-)HH:mm' or '(+|-)HH:mm:ss', e.g. '-08', '+01:00' or '-13:33:33', and must be in the range from -18:00 to +18:00. 'Z' and 'UTC' are accepted as synonyms for '+00:00'.

INVALID_TIME_TRAVEL_SPEC​

SQLSTATE: 42K0E

Cannot specify both version and timestamp when time travelling the table.
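
For example, pick one time travel specification or the other (t is a hypothetical Delta table):

SELECT * FROM t VERSION AS OF 5;

SELECT * FROM t TIMESTAMP AS OF '2024-06-01T00:00:00';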

INVALID_TIME_TRAVEL_TIMESTAMP_EXPR​

SQLSTATE: 42K0E

The time travel timestamp expression <expr> is invalid.

For more details see INVALID_TIME_TRAVEL_TIMESTAMP_EXPR

INVALID_TYPED_LITERAL​

SQLSTATE: 42604

The value of the typed literal <valueType> is invalid: <value>.

INVALID_UDF_IMPLEMENTATION​

SQLSTATE: 38000

Function <funcName> does not implement a ScalarFunction or AggregateFunction.

INVALID_UPGRADE_SYNTAX​

SQLSTATE: 42809

<command> <supportedOrNot> the source table is in Hive Metastore and the destination table is in Unity Catalog.

INVALID_URL​

SQLSTATE: 22P02

The url is invalid: <url>. Use try_parse_url to tolerate an invalid URL and return NULL instead.
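
For example:

SELECT try_parse_url('not a valid url', 'HOST');  -- returns NULL instead of raising this error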

INVALID_USAGE_OF_STAR_OR_REGEX​

SQLSTATE: 42000

Invalid usage of <elem> in <prettyName>.

INVALID_UTF8_STRING​

SQLSTATE: 22029

Invalid UTF8 byte sequence found in string: <str>.

INVALID_UUID​

SQLSTATE: 42604

Input <uuidInput> is not a valid UUID.

The UUID should be in the format of 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

Please check the format of the UUID.

INVALID_VARIABLE_DECLARATION​

SQLSTATE: 42K0M

Invalid variable declaration.

For more details see INVALID_VARIABLE_DECLARATION

INVALID_VARIABLE_TYPE_FOR_QUERY_EXECUTE_IMMEDIATE​

SQLSTATE: 42K09

Variable type must be string type but got <varType>.

INVALID_VARIANT_CAST​

SQLSTATE: 22023

The variant value <value> cannot be cast into <dataType>. Please use try_variant_get instead.
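
For example, a variant string value cannot be cast to INT, but try_variant_get returns NULL instead of failing:

SELECT try_variant_get(parse_json('{"a": "hello"}'), '$.a', 'INT');  -- returns NULL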

INVALID_VARIANT_FROM_PARQUET​

SQLSTATE: 22023

Invalid variant.

For more details see INVALID_VARIANT_FROM_PARQUET

INVALID_VARIANT_GET_PATH​

SQLSTATE: 22023

The path <path> is not a valid variant extraction path in <functionName>.

A valid path should start with $ and be followed by zero or more segments like [123], .name, ['name'], or ["name"].
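
For example, with a hypothetical VARIANT column v in table t:

SELECT variant_get(v, '$.items[0].name', 'STRING') FROM t;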

INVALID_VARIANT_SHREDDING_SCHEMA​

SQLSTATE: 22023

The schema <schema> is not a valid variant shredding schema.

INVALID_WHERE_CONDITION​

SQLSTATE: 42903

The WHERE condition <condition> contains invalid expressions: <expressionList>.

Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE clause.

INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC​

SQLSTATE: 42601

Cannot specify ORDER BY or a window frame for <aggFunc>.

INVALID_WITHIN_GROUP_EXPRESSION​

SQLSTATE: 42K0K

Invalid function <funcName> with WITHIN GROUP.

For more details see INVALID_WITHIN_GROUP_EXPRESSION

INVALID_WRITER_COMMIT_MESSAGE​

SQLSTATE: 42KDE

The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>.

INVALID_WRITE_DISTRIBUTION​

SQLSTATE: 42000

The requested write distribution is invalid.

For more details see INVALID_WRITE_DISTRIBUTION

ISOLATED_COMMAND_FAILURE​

SQLSTATE: 39000

Failed to execute <command>. Command output:

<output>

ISOLATED_COMMAND_UNKNOWN_FAILURE​

SQLSTATE: 39000

Failed to execute <command>.

JDBC_EXTERNAL_ENGINE_SYNTAX_ERROR​

SQLSTATE: 42000

JDBC external engine syntax error. The error was caused by the query <jdbcQuery>.

For more details see JDBC_EXTERNAL_ENGINE_SYNTAX_ERROR

JOIN_CONDITION_IS_NOT_BOOLEAN_TYPE​

SQLSTATE: 42K0E

The join condition <joinCondition> has the invalid type <conditionType>, expected "BOOLEAN".

KAFKA_DATA_LOSS​

SQLSTATE: 22000

Some data may have been lost because it is no longer available in Kafka; either the data was aged out by Kafka or the topic may have been deleted before all the data in the topic was processed.

If you don't want your streaming query to fail in such cases, set the source option failOnDataLoss to false.

Reason:

For more details see KAFKA_DATA_LOSS

KINESIS_COULD_NOT_READ_SHARD_UNTIL_END_OFFSET​

SQLSTATE: 22000

Could not read until the desired sequence number <endSeqNum> for shard <shardId> in Kinesis stream <stream> with consumer mode <consumerMode>. The query will fail due to potential data loss. The last read record was at sequence number <lastSeqNum>.

This can happen if the data with endSeqNum has already been aged out, or the Kinesis stream was deleted and reconstructed with the same name. The failure behavior can be overridden by setting spark.databricks.kinesis.failOnDataLoss to false in the Spark configuration.

KINESIS_EFO_CONSUMER_NOT_FOUND​

SQLSTATE: 51000

For kinesis stream <streamId>, the previously registered EFO consumer <consumerId> of the stream has been deleted.

Restart the query so that a new consumer will be registered.

KINESIS_EFO_SUBSCRIBE_LIMIT_EXCEEDED​

SQLSTATE: 51000

For shard <shard>, the previous call of the subscribeToShard API was within 5 seconds of the next call.

Restart the query after 5 seconds or more.

KINESIS_FETCHED_SHARD_LESS_THAN_TRACKED_SHARD​

SQLSTATE: 42K04

The minimum fetched shardId from Kinesis (<fetchedShardId>) is less than the minimum tracked shardId (<trackedShardId>).

This is unexpected and occurs when a Kinesis stream is deleted and recreated with the same name, and a streaming query using this Kinesis stream is restarted using an existing checkpoint location.

Restart the streaming query with a new checkpoint location, or create a stream with a new name.

KINESIS_POLLING_MODE_UNSUPPORTED​

SQLSTATE: 0A000

Kinesis polling mode is unsupported.

KINESIS_RECORD_SEQ_NUMBER_ORDER_VIOLATION​

SQLSTATE: 22000

For shard <shard>, the last record read from Kinesis in previous fetches has sequence number <lastSeqNum>, which is greater than the record read in the current fetch with sequence number <recordSeqNum>.

This is unexpected and can happen when the start position of the retry or next fetch is incorrectly initialized, and may result in duplicate records downstream.

KINESIS_SOURCE_MUST_BE_IN_EFO_MODE_TO_CONFIGURE_CONSUMERS​

SQLSTATE: 42KDF

To read from Kinesis Streams with consumer configurations (consumerName, consumerNamePrefix, or registeredConsumerId), consumerMode must be efo.

KINESIS_SOURCE_MUST_SPECIFY_REGISTERED_CONSUMER_ID_AND_TYPE​

SQLSTATE: 42KDF

To read from Kinesis Streams with registered consumers, you must specify both the registeredConsumerId and registeredConsumerIdType options.

KINESIS_SOURCE_MUST_SPECIFY_STREAM_NAMES_OR_ARNS​

SQLSTATE: 42KDF

To read from Kinesis Streams, you must configure either the streamName or the streamARN option (but not both) as a comma-separated list of stream names/ARNs.

KINESIS_SOURCE_NO_CONSUMER_OPTIONS_WITH_REGISTERED_CONSUMERS​

SQLSTATE: 42KDF

To read from Kinesis Streams with registered consumers, do not configure consumerName or consumerNamePrefix options as they will not take effect.

KINESIS_SOURCE_REGISTERED_CONSUMER_ID_COUNT_MISMATCH​

SQLSTATE: 22023

The number of registered consumer ids should be equal to the number of Kinesis streams but got <numConsumerIds> consumer ids and <numStreams> streams.

KINESIS_SOURCE_REGISTERED_CONSUMER_NOT_FOUND​

SQLSTATE: 22023

The registered consumer <consumerId> provided cannot be found for streamARN <streamARN>. Verify that you have registered the consumer or do not provide the registeredConsumerId option.

KINESIS_SOURCE_REGISTERED_CONSUMER_TYPE_INVALID​

SQLSTATE: 22023

The registered consumer type <consumerType> is invalid. It must be either name or ARN.

KRYO_BUFFER_OVERFLOW​

SQLSTATE: 54006

Kryo serialization failed: <exceptionMsg>. To avoid this, increase "<bufferSizeConfKey>" value.

LABELS_MISMATCH​

SQLSTATE: 42K0L

Begin label <beginLabel> does not match the end label <endLabel>.

LABEL_ALREADY_EXISTS​

SQLSTATE: 42K0L

The label <label> already exists. Choose another name or rename the existing label.

LABEL_NAME_FORBIDDEN​

SQLSTATE: 42K0L

The label name <label> is forbidden.

LAKEHOUSE_FEDERATION_DATA_SOURCE_REQUIRES_NEWER_DBR_VERSION​

SQLSTATE: 0A000

Lakehouse federation data source '<provider>' requires newer Databricks Runtime version.

For more details see LAKEHOUSE_FEDERATION_DATA_SOURCE_REQUIRES_NEWER_DBR_VERSION

LOAD_DATA_PATH_NOT_EXISTS​

SQLSTATE: 42K03

LOAD DATA input path does not exist: <path>.

LOCAL_MUST_WITH_SCHEMA_FILE​

SQLSTATE: 42601

LOCAL must be used together with the schema of file, but got: <actualSchema>.

LOCATION_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot name the managed table as <identifier>, as its associated location <location> already exists. Please pick a different table name, or remove the existing location first.

LOST_TOPIC_PARTITIONS_IN_END_OFFSET_WITH_TRIGGER_AVAILABLENOW​

SQLSTATE: KD000

Some partitions in the Kafka topic(s) have been lost while running the query with Trigger.AvailableNow. The error could be transient - restart your query, and report if you still see the same issue.

topic-partitions for latest offset: <tpsForLatestOffset>, topic-partitions for end offset: <tpsForEndOffset>

MALFORMED_AVRO_MESSAGE​

SQLSTATE: KD000

Malformed Avro messages are detected in message deserialization. Parse Mode: <mode>. To process malformed Avro messages as null results, try setting the option 'mode' to 'PERMISSIVE'.

MALFORMED_CHARACTER_CODING​

SQLSTATE: 22000

Invalid value found when performing <function> with <charset>.

MALFORMED_CSV_RECORD​

SQLSTATE: KD000

Malformed CSV record: <badRecord>

MALFORMED_LOG_FILE​

SQLSTATE: KD002

Log file was malformed: failed to read correct log version from <text>.

MALFORMED_PROTOBUF_MESSAGE​

SQLSTATE: 42K0G

Malformed Protobuf messages are detected in message deserialization. Parse Mode: <failFastMode>. To process malformed Protobuf messages as null results, try setting the option 'mode' to 'PERMISSIVE'.

MALFORMED_RECORD_IN_PARSING​

SQLSTATE: 22023

Malformed records are detected in record parsing: <badRecord>.

Parse Mode: <failFastMode>. To process malformed records as null results, try setting the option 'mode' to 'PERMISSIVE'.

For more details see MALFORMED_RECORD_IN_PARSING

MALFORMED_STATE_IN_RATE_PER_MICRO_BATCH_SOURCE​

SQLSTATE: 22000

Malformed state in RatePerMicroBatch source.

For more details see MALFORMED_STATE_IN_RATE_PER_MICRO_BATCH_SOURCE

MALFORMED_VARIANT​

SQLSTATE: 22023

Variant binary is malformed. Please check the data source is valid.

MANAGED_ICEBERG_ATTEMPTED_TO_ENABLE_CLUSTERING_WITHOUT_DISABLING_DVS_OR_ROW_TRACKING​

SQLSTATE: 0A000

Attempted to enable Liquid clustering on a Managed Apache Iceberg table without disabling both deletion vectors and row tracking. Deletion vectors and row tracking are not supported for Managed Apache Iceberg tables, but are required for row-level concurrency with Liquid tables. To enable Liquid clustering on a Managed Apache Iceberg table with reduced concurrency control, deletion vectors and row tracking must be disabled for this table.

MANAGED_ICEBERG_OPERATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Managed Apache Iceberg tables do not support <operation>.

MANAGED_TABLE_WITH_CRED​

SQLSTATE: 42613

Creating a managed table with a storage credential is not supported.

MATERIALIZED_VIEW_MESA_REFRESH_WITHOUT_PIPELINE_ID​

SQLSTATE: 55019

Cannot <refreshType> the materialized view because it predates having a pipelineId. To enable <refreshType> please drop and recreate the materialized view.

MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED​

SQLSTATE: 56038

The materialized view operation <operation> is not allowed:

For more details see MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED

MATERIALIZED_VIEW_OUTPUT_WITHOUT_EXPLICIT_ALIAS​

SQLSTATE: 0A000

Output expression <expression> in a materialized view must be explicitly aliased.

MATERIALIZED_VIEW_OVER_STREAMING_QUERY_INVALID​

SQLSTATE: 42000

The materialized view <name> could not be created with a streaming query. Please use CREATE [OR REFRESH] STREAMING TABLE or remove the STREAM keyword from your FROM clause to turn this relation into a batch query instead.

MATERIALIZED_VIEW_UNSUPPORTED_OPERATION​

SQLSTATE: 0A000

Operation <operation> is currently not supported on materialized views.

MAX_NUMBER_VARIABLES_IN_SESSION_EXCEEDED​

SQLSTATE: 54KD1

Cannot create the new variable <variableName> because the number of variables in the session exceeds the maximum allowed number (<maxNumVariables>).

MAX_RECORDS_PER_FETCH_INVALID_FOR_KINESIS_SOURCE​

SQLSTATE: 22023

maxRecordsPerFetch needs to be a positive integer less than or equal to <kinesisRecordLimit>.

MERGE_CARDINALITY_VIOLATION​

SQLSTATE: 23K01

The ON search condition of the MERGE statement matched a single row from the target table with multiple rows of the source table.

This could result in the target row being operated on more than once with an update or delete operation and is not allowed.
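
A common fix is to deduplicate the source so that each target row matches at most one source row. A minimal sketch with hypothetical target/source tables and key/val columns:

MERGE INTO target t
USING (SELECT key, max(val) AS val FROM source GROUP BY key) s
ON t.key = s.key
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (key, val) VALUES (s.key, s.val);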

MERGE_WITHOUT_WHEN​

SQLSTATE: 42601

There must be at least one WHEN clause in a MERGE statement.

METRIC_CONSTRAINT_NOT_SUPPORTED​

SQLSTATE: 0A000

METRIC CONSTRAINT is not enabled.

METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR​

SQLSTATE: 22023

Provided value "<argValue>" is not supported by argument "<argName>" for the METRIC_STORE table function.

For more details see METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR

METRIC_STORE_UNSUPPORTED_ERROR​

SQLSTATE: 56038

Metric Store routine <routineName> is currently disabled in this environment.

METRIC_VIEW_AMBIGUOUS_JOIN_CRITERIA​

SQLSTATE: 42K0E

The metric view definition contains a join with an ambiguous criteria: <expr>. [Either use the using join criteria or explicitly qualify columns with the <sourceAlias> alias.]

METRIC_VIEW_CACHE_TABLE_NOT_SUPPORTED​

SQLSTATE: 42K0E

The metric view is not allowed to use cache tables.

METRIC_VIEW_FEATURE_DISABLED​

SQLSTATE: 42K0E

The metric view feature is disabled. Please make sure "spark.databricks.sql.metricView.enabled" is set to true.

METRIC_VIEW_INVALID_MEASURE_FUNCTION_INPUT​

SQLSTATE: 42K0E

The MEASURE() function only takes an attribute as input, but got <expr>.

METRIC_VIEW_INVALID_VIEW_DEFINITION​

SQLSTATE: 42K0E

The metric view definition is invalid. Reason: <reason>.

METRIC_VIEW_IN_CTE_NOT_SUPPORTED​

SQLSTATE: 42K0E

The metric view is not allowed in CTE definitions. plan: <plan>

METRIC_VIEW_JOIN_NOT_SUPPORTED​

SQLSTATE: 42K0E

The metric view is not allowed to use joins. plan: <plan>

METRIC_VIEW_MATERIALIZATIONS_DISABLED​

SQLSTATE: 42K0E

Metric view materializations are disabled. Please make sure "spark.databricks.sql.metricView.materializations.enabled" is set to true.

METRIC_VIEW_MISSING_MEASURE_FUNCTION​

SQLSTATE: 42K0E

The usage of measure column <column> of a metric view requires a MEASURE() function to produce results.
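
For example, with a hypothetical metric view sales_metrics that defines a region dimension and a revenue measure:

SELECT region, MEASURE(revenue) FROM sales_metrics GROUP BY region;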

METRIC_VIEW_RENAME_DIFFERENT_CATALOG_AND_SCHEMA​

SQLSTATE: 42602

The metric view <oldName> is not allowed to rename to a different catalog or schema: <newName>.

METRIC_VIEW_SNOWFLAKE_JOIN_FEATURE_DISABLED​

SQLSTATE: 42K0E

The metric view snowflake join feature is disabled. Please make sure "spark.databricks.sql.metricView.snowflake.join.enable" is set to true.

METRIC_VIEW_UNSUPPORTED_USAGE​

SQLSTATE: 42K0E

The metric view usage is not supported. plan: <plan>

METRIC_VIEW_WINDOW_FUNCTION_NOT_SUPPORTED​

SQLSTATE: 42K0E

The metric view is not allowed to use window function <expr>.

MIGRATION_NOT_SUPPORTED​

SQLSTATE: 42601

<table> is not supported for migration to a UC managed table because it is not a <tableKind> table.

Make sure that the table being migrated is a UC external Delta table and that it is referenced by its name instead of its path.

MIGRATION_ROLLBACK_NOT_SUPPORTED​

SQLSTATE: 42809

<table> is not supported for rollback from a managed to an external table because it is not a <tableKind> table.

MISMATCHED_TOPIC_PARTITIONS_BETWEEN_END_OFFSET_AND_PREFETCHED​

SQLSTATE: KD000

Kafka data source in Trigger.AvailableNow should provide the same topic partitions in pre-fetched offset to end offset for each microbatch. The error could be transient - restart your query, and report if you still see the same issue.

topic-partitions for pre-fetched offset: <tpsForPrefetched>, topic-partitions for end offset: <tpsForEndOffset>.

MISSING_AGGREGATION​

SQLSTATE: 42803

The non-aggregating expression <expression> is based on columns which are not participating in the GROUP BY clause.

Add the columns or the expression to the GROUP BY, aggregate the expression, or use <expressionAnyValue> if you do not care which of the values within a group is returned.

For more details see MISSING_AGGREGATION

MISSING_CLAUSES_FOR_OPERATION​

SQLSTATE: 42601

Missing clause <clauses> for operation <operation>. Please add the required clauses.

MISSING_CONNECTION_OPTION​

SQLSTATE: 42000

Connections of type '<connectionType>' must include the following option(s): <requiredOptions>.

MISSING_DATABASE_FOR_V1_SESSION_CATALOG​

SQLSTATE: 3F000

Database name is not specified in the v1 session catalog. Please make sure to provide a valid database name when interacting with the v1 catalog.

MISSING_GROUP_BY​

SQLSTATE: 42803

The query does not include a GROUP BY clause. Add a GROUP BY clause, or turn the aggregates into window functions using OVER clauses.
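
A minimal sketch of both options, assuming a hypothetical emp table with dept and salary columns:

-- Option 1: add a GROUP BY clause.
SELECT dept, sum(salary) FROM emp GROUP BY dept;

-- Option 2: use a window function to keep every row.
SELECT dept, salary, sum(salary) OVER (PARTITION BY dept) AS dept_total FROM emp;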

MISSING_NAME_FOR_CHECK_CONSTRAINT​

SQLSTATE: 42000

CHECK constraint must have a name.

MISSING_PARAMETER_FOR_KAFKA​

SQLSTATE: 42KDF

Parameter <parameterName> is required for Kafka, but is not specified in <functionName>.

MISSING_PARAMETER_FOR_ROUTINE​

SQLSTATE: 42KDF

Parameter <parameterName> is required, but is not specified in <functionName>.

MISSING_SCHEDULE_DEFINITION​

SQLSTATE: 42000

A schedule definition must be provided following SCHEDULE.

MISSING_TIMEOUT_CONFIGURATION​

SQLSTATE: HY000

The operation has timed out, but no timeout duration is configured. To set a processing time-based timeout, use 'GroupState.setTimeoutDuration()' in your 'mapGroupsWithState' or 'flatMapGroupsWithState' operation. For event-time-based timeout, use 'GroupState.setTimeoutTimestamp()' and define a watermark using 'Dataset.withWatermark()'.

MISSING_WINDOW_SPECIFICATION​

SQLSTATE: 42P20

Window specification is not defined in the WINDOW clause for <windowName>. For more information about WINDOW clauses, please refer to '<docroot>/sql-ref-syntax-qry-select-window.html'.

MODIFY_BUILTIN_CATALOG​

SQLSTATE: 42832

Modifying built-in catalog <catalogName> is not supported.

MULTIPLE_LOAD_PATH​

SQLSTATE: 42000

Databricks Delta does not support multiple input paths in the load() API. Paths: <pathList>.

To build a single DataFrame by loading multiple paths from the same Delta table, please load the root path of the Delta table with the corresponding partition filters. If the multiple paths are from different Delta tables, please use Dataset's union()/unionByName() APIs to combine the DataFrames generated by separate load() API calls.

MULTIPLE_MATCHING_CONSTRAINTS​

SQLSTATE: 42891

Found at least two matching constraints with the given condition.

MULTIPLE_QUERY_RESULT_CLAUSES_WITH_PIPE_OPERATORS​

SQLSTATE: 42000

<clause1> and <clause2> cannot coexist in the same SQL pipe operator using '|>'. Please separate the multiple result clauses into separate pipe operators and then retry the query.

MULTIPLE_TIME_TRAVEL_SPEC​

SQLSTATE: 42K0E

Cannot specify time travel in both the time travel clause and options.

MULTIPLE_XML_DATA_SOURCE​

SQLSTATE: 42710

Detected multiple data sources with the name <provider> (<sourceNames>). Please specify the fully qualified class name or remove <externalSource> from the classpath.

MULTI_ALIAS_WITHOUT_GENERATOR​

SQLSTATE: 42K0E

Multi-part aliasing (<names>) is not supported with <expr> as it is not a generator function.

MULTI_SOURCES_UNSUPPORTED_FOR_EXPRESSION​

SQLSTATE: 42K0E

The expression <expr> does not support more than one source.

MULTI_STATEMENT_TRANSACTION_CDF_SCHEMA_WITH_RESERVED_COLUMN_NAME​

SQLSTATE: 42939

Change Data Feed cannot be enabled in a Multi Statement Transaction because a table contains a reserved column name (<column_name>).

To proceed, make sure the table uses only non-reserved column names.

MULTI_STATEMENT_TRANSACTION_CDF_SETTING_HIGH_WATERMARK_NOT_ALLOWED​

SQLSTATE: 25000

Manually setting the CDC Identities high watermark is not allowed.

MULTI_STATEMENT_TRANSACTION_CONCURRENT_CATALOG_METADATA_CHANGE​

SQLSTATE: 40000

A concurrent metadata change has been detected on table/view <table>. Please run ROLLBACK and then retry this transaction. Details:

For more details see MULTI_STATEMENT_TRANSACTION_CONCURRENT_CATALOG_METADATA_CHANGE

MULTI_STATEMENT_TRANSACTION_CONTEXT_MISMATCH​

SQLSTATE: 25000

Transaction context inconsistency was detected between the current thread and the Spark session. This typically occurs when a Spark session is shared across multiple threads. Please use a dedicated session and thread for each transaction, and commit/rollback the transaction in its thread before reusing the session and thread for a new transaction. Details:

For more details see MULTI_STATEMENT_TRANSACTION_CONTEXT_MISMATCH

MULTI_STATEMENT_TRANSACTION_NOT_SUPPORTED​

SQLSTATE: 0A000

Failed to execute the statement.

For more details see MULTI_STATEMENT_TRANSACTION_NOT_SUPPORTED

MULTI_STATEMENT_TRANSACTION_NO_ACTIVE_TRANSACTION​

SQLSTATE: 25000

There is no active transaction to <action>.

MULTI_STATEMENT_TRANSACTION_ROLLBACK_REQUIRED_AFTER_ABORT​

SQLSTATE: 40000

The current transaction has been aborted. Please run ROLLBACK TRANSACTION before continuing. Abort reason:

For more details see MULTI_STATEMENT_TRANSACTION_ROLLBACK_REQUIRED_AFTER_ABORT

MULTI_UDF_INTERFACE_ERROR​

SQLSTATE: 0A000

Not allowed to implement multiple UDF interfaces, UDF class <className>.

MUTUALLY_EXCLUSIVE_CLAUSES​

SQLSTATE: 42613

Mutually exclusive clauses or options <clauses>. Please remove one of these clauses.

MV_ST_ALTER_QUERY_INCORRECT_BACKING_TYPE​

SQLSTATE: 42601

The input query expects a <expectedType>, but the underlying table is a <givenType>.

NAMED_PARAMETERS_NOT_SUPPORTED​

SQLSTATE: 4274K

Named parameters are not supported for function <functionName>; please retry the query with positional arguments to the function call instead.

NAMED_PARAMETERS_NOT_SUPPORTED_FOR_SQL_UDFS​

SQLSTATE: 0A000

Cannot call function <functionName> because named argument references are not supported. In this case, the named argument reference was <argument>.

NAMED_PARAMETER_SUPPORT_DISABLED​

SQLSTATE: 0A000

Cannot call function <functionName> because named argument references are not enabled here.

In this case, the named argument reference was <argument>.

Set "spark.sql.allowNamedFunctionArguments" to "true" to turn on feature.

NAMESPACE_ALREADY_EXISTS​

SQLSTATE: 42000

Cannot create namespace <nameSpaceName> because it already exists.

Choose a different name, drop the existing namespace, or add the IF NOT EXISTS clause to tolerate pre-existing namespace.

NAMESPACE_NOT_EMPTY​

SQLSTATE: 42000

Cannot drop a namespace <nameSpaceName> because it contains objects.

Use DROP NAMESPACE ... CASCADE to drop the namespace and all its objects.

NAMESPACE_NOT_FOUND​

SQLSTATE: 42000

The namespace <nameSpaceName> cannot be found. Verify the spelling and correctness of the namespace.

If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.

To tolerate the error on drop use DROP NAMESPACE IF EXISTS.

NATIVE_IO_ERROR​

SQLSTATE: KD00F

Native request failed. requestId: <requestId>, cloud: <cloud>, operation: <operation>

request: [https: <https>, method = <method>, path = <path>, params = <params>, host = <host>, headers = <headers>, bodyLen = <bodyLen>],

error: <error>

NATIVE_XML_DATA_SOURCE_NOT_ENABLED​

SQLSTATE: 56038

Native XML Data Source is not enabled in this cluster.

NEGATIVE_SCALE_DISALLOWED​

SQLSTATE: 0A000

Negative scale is not allowed: '<scale>'. Set the config <sqlConf> to "true" to allow it.

NEGATIVE_VALUES_IN_FREQUENCY_EXPRESSION​

SQLSTATE: 22003

Found a negative value in <frequencyExpression>: <negativeValue>, but expected a positive integral value.

NESTED_AGGREGATE_FUNCTION​

SQLSTATE: 42607

It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.
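For example, a minimal PySpark sketch of the rewrite (the view and column names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("""
    CREATE OR REPLACE TEMP VIEW t AS
    SELECT * FROM VALUES (1, 10), (1, 20), (2, 30) AS v(g, x)
""")

# Invalid: SELECT max(avg(x)) FROM t GROUP BY g  -- raises NESTED_AGGREGATE_FUNCTION
# Valid: compute the inner aggregate in a subquery first.
spark.sql("""
    SELECT max(avg_x)
    FROM (SELECT g, avg(x) AS avg_x FROM t GROUP BY g)
""").show()
```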

NESTED_EXECUTE_IMMEDIATE​

SQLSTATE: 07501

Nested EXECUTE IMMEDIATE commands are not allowed. Please ensure that the SQL query provided (<sqlString>) does not contain another EXECUTE IMMEDIATE command.

NESTED_REFERENCES_IN_SUBQUERY_NOT_SUPPORTED​

SQLSTATE: 0A000

Detected outer scope references <expression> in the subquery which is not supported.

NONEXISTENT_FIELD_NAME_IN_LIST​

SQLSTATE: HV091

Field(s) <nonExistFields> do(es) not exist. Available fields: <fieldNames>

NON_FOLDABLE_ARGUMENT​

SQLSTATE: 42K08

The function <funcName> requires the parameter <paramName> to be a foldable expression of the type <paramType>, but the actual argument is non-foldable.

NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42613

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.
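For illustration, a hedged PySpark sketch of a MERGE statement that satisfies this rule, assuming hypothetical Delta tables named target and source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Every MATCHED clause except the last one carries a condition.
spark.sql("""
    MERGE INTO target t
    USING source s
    ON t.id = s.id
    WHEN MATCHED AND s.op = 'delete' THEN DELETE
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```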

NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42613

When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.

NON_LAST_NOT_MATCHED_BY_TARGET_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42613

When there is more than one NOT MATCHED [BY TARGET] clause in a MERGE statement, only the last NOT MATCHED [BY TARGET] clause can omit the condition.

NON_LITERAL_PIVOT_VALUES​

SQLSTATE: 42K08

Literal expressions required for pivot values, found <expression>.

NON_PARTITION_COLUMN​

SQLSTATE: 42000

PARTITION clause cannot contain the non-partition column: <columnName>.

NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING​

SQLSTATE: 42KDE

Window function is not supported in <windowFunc> (as column <columnName>) on streaming DataFrames/Datasets.

Structured Streaming only supports time-window aggregation using the WINDOW function. (window specification: <windowSpec>)
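For example, a minimal PySpark sketch of a supported time-window aggregation, using the built-in rate source for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.getOrCreate()
# The built-in rate source provides a `timestamp` column we can window on.
events = spark.readStream.format("rate").load()

counts = (events
          .withWatermark("timestamp", "10 minutes")
          .groupBy(window("timestamp", "5 minutes"))
          .count())
```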

NOT_ALLOWED_IN_FROM​

SQLSTATE: 42601

Not allowed in the FROM clause:

For more details see NOT_ALLOWED_IN_FROM

NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE​

SQLSTATE: 42601

Not allowed in the pipe WHERE clause:

For more details see NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE

NOT_A_CONSTANT_STRING​

SQLSTATE: 42601

The expression <expr> used for the routine or clause <name> must be a constant STRING which is NOT NULL.

For more details see NOT_A_CONSTANT_STRING

NOT_A_PARTITIONED_TABLE​

SQLSTATE: 42809

Operation <operation> is not allowed for <tableIdentWithDB> because it is not a partitioned table.

NOT_A_SCALAR_FUNCTION​

SQLSTATE: 42887

<functionName> appears as a scalar expression here, but the function was defined as a table function. Please update the query to move the function call into the FROM clause, or redefine <functionName> as a scalar function instead.

NOT_A_TABLE_FUNCTION​

SQLSTATE: 42887

<functionName> appears as a table function here, but the function was defined as a scalar function. Please update the query to move the function call outside the FROM clause, or redefine <functionName> as a table function instead.
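For illustration, a small PySpark sketch contrasting the two placements, using the built-in range table function and the upper scalar function:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A table function belongs in the FROM clause (built-in `range` shown here) ...
spark.sql("SELECT * FROM range(3)").show()

# ... while a scalar function belongs in the select list.
spark.sql("SELECT upper('abc') AS u").show()
```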

NOT_NULL_ASSERT_VIOLATION​

SQLSTATE: 42000

NULL value appeared in non-nullable field: <walkedTypePath>. If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (such as java.lang.Integer instead of int/scala.Int).

NOT_NULL_CONSTRAINT_VIOLATION​

SQLSTATE: 42000

Assigning a NULL is not allowed here.

For more details see NOT_NULL_CONSTRAINT_VIOLATION

NOT_SUPPORTED_CHANGE_COLUMN​

SQLSTATE: 0A000

ALTER TABLE ALTER/CHANGE COLUMN is not supported for changing <table>'s column <originName> with type <originType> to <newName> with type <newType>.

NOT_SUPPORTED_CHANGE_SAME_COLUMN​

SQLSTATE: 0A000

ALTER TABLE ALTER/CHANGE COLUMN is not supported for changing <table>'s column <fieldName> including its nested fields multiple times in the same command.

NOT_SUPPORTED_COMMAND_FOR_V2_TABLE​

SQLSTATE: 0A000

<cmd> is not supported for v2 tables.

NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT​

SQLSTATE: 0A000

<cmd> is not supported. To enable it, set "spark.sql.catalogImplementation" to "hive".

NOT_SUPPORTED_IN_JDBC_CATALOG​

SQLSTATE: 0A000

Unsupported command in the JDBC catalog:

For more details see NOT_SUPPORTED_IN_JDBC_CATALOG

NOT_SUPPORTED_WITH_DB_SQL​

SQLSTATE: 0A000

<operation> is not supported on a SQL <endpoint>.

NOT_SUPPORTED_WITH_SERVERLESS​

SQLSTATE: 0A000

<operation> is not supported on serverless compute.

NOT_UNRESOLVED_ENCODER​

SQLSTATE: 42601

Unresolved encoder expected, but <attr> was found.

NO_DEFAULT_COLUMN_VALUE_AVAILABLE​

SQLSTATE: 42608

Can't determine the default value for <colName> because it is not nullable and it has no default value.

NO_HANDLER_FOR_UDAF​

SQLSTATE: 42000

No handler for UDAF '<functionName>'. Use sparkSession.udf.register(...) instead.

NO_MERGE_ACTION_SPECIFIED​

SQLSTATE: 42K0E

df.mergeInto needs to be followed by at least one of whenMatched/whenNotMatched/whenNotMatchedBySource.
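For illustration, a hedged PySpark sketch, assuming a runtime that provides the DataFrame mergeInto API (Spark 4.0 and later) and a hypothetical existing table named target:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()
source = spark.createDataFrame([(1, "a")], ["id", "v"]).alias("src")

# At least one whenMatched/whenNotMatched/whenNotMatchedBySource action
# is required before calling merge().
(source.mergeInto("target", expr("target.id = src.id"))
       .whenMatched().updateAll()
       .whenNotMatched().insertAll()
       .merge())
```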

NO_PARENT_EXTERNAL_LOCATION_FOR_PATH​

SQLSTATE: none assigned

No parent external location was found for path '<path>'. Please create an external location on one of the parent paths and then retry the query or command again.

NO_SQL_TYPE_IN_PROTOBUF_SCHEMA​

SQLSTATE: 42S22

Cannot find <catalystFieldPath> in Protobuf schema.

NO_STORAGE_LOCATION_FOR_TABLE​

SQLSTATE: none assigned

No storage location was found for table '<tableId>' when generating table credentials. Please verify the table type and the table location URL and then retry the query or command again.

NO_SUCH_CATALOG_EXCEPTION​

SQLSTATE: 42704

Catalog '<catalog>' was not found. Please verify the catalog name and then retry the query or command again.

NO_SUCH_CLEANROOM_EXCEPTION​

SQLSTATE: none assigned

The clean room '<cleanroom>' does not exist. Please verify that the clean room name is spelled correctly and matches the name of a valid existing clean room and then retry the query or command again.

NO_SUCH_CREDENTIAL_EXCEPTION​

SQLSTATE: KD000

The credential '<credential>' does not exist. Please verify that the credential name is spelled correctly and matches the name of a valid existing credential and then retry the query or command again.

NO_SUCH_EXTERNAL_LOCATION_EXCEPTION​

SQLSTATE: none assigned

The external location '<externalLocation>' does not exist. Please verify that the external location name is correct and then retry the query or command again.

NO_SUCH_METASTORE_EXCEPTION​

SQLSTATE: none assigned

The metastore was not found. Please ask your account administrator to assign a metastore to the current workspace and then retry the query or command again.

NO_SUCH_PROVIDER_EXCEPTION​

SQLSTATE: none assigned

The share provider '<providerName>' does not exist. Please verify that the share provider name is spelled correctly and matches the name of a valid existing provider and then retry the query or command again.

NO_SUCH_RECIPIENT_EXCEPTION​

SQLSTATE: none assigned

The recipient '<recipient>' does not exist. Please verify that the recipient name is spelled correctly and matches the name of a valid existing recipient and then retry the query or command again.

NO_SUCH_SHARE_EXCEPTION​

SQLSTATE: none assigned

The share '<share>' does not exist. Please verify that the share name is spelled correctly and matches the name of a valid existing share and then retry the query or command again.

NO_SUCH_STORAGE_CREDENTIAL_EXCEPTION​

SQLSTATE: none assigned

The storage credential '<storageCredential>' does not exist. Please verify that the storage credential name is spelled correctly and matches the name of a valid existing storage credential and then retry the query or command again.

NO_SUCH_USER_EXCEPTION​

SQLSTATE: none assigned

The user '<userName>' does not exist. Please verify that the user to whom you grant permission or alter ownership is spelled correctly and matches the name of a valid existing user and then retry the query or command again.

NO_UDF_INTERFACE​

SQLSTATE: 38000

UDF class <className> doesn't implement any UDF interface.

NULLABLE_COLUMN_OR_FIELD​

SQLSTATE: 42000

Column or field <name> is nullable while it's required to be non-nullable.

NULLABLE_ROW_ID_ATTRIBUTES​

SQLSTATE: 42000

Row ID attributes cannot be nullable: <nullableRowIdAttrs>.

NULL_DATA_SOURCE_OPTION​

SQLSTATE: 22024

Data source read/write option <option> cannot have null value.

NULL_MAP_KEY​

SQLSTATE: 2200E

Cannot use null as map key.

NULL_QUERY_STRING_EXECUTE_IMMEDIATE​

SQLSTATE: 22004

EXECUTE IMMEDIATE requires a non-null variable as the query string, but the provided variable <varName> is null.
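For example, a minimal sketch using SQL session variables (available in recent runtimes); the variable name and query text are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Declare a session variable and ensure it is non-null before executing it.
spark.sql("DECLARE OR REPLACE VARIABLE sql_text STRING")
spark.sql("SET VAR sql_text = 'SELECT 42 AS answer'")
spark.sql("EXECUTE IMMEDIATE sql_text").show()
```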

NULL_VALUE_SIGNAL_STATEMENT​

SQLSTATE: 22004

Signal statement arguments require non-null values, but <argument> received a null value.

NUMERIC_OUT_OF_SUPPORTED_RANGE​

SQLSTATE: 22003

The value <value> cannot be interpreted as a numeric because it has more than 38 digits.

NUMERIC_VALUE_OUT_OF_RANGE​

SQLSTATE: 22003

For more details see NUMERIC_VALUE_OUT_OF_RANGE

NUM_COLUMNS_MISMATCH​

SQLSTATE: 42826

<operator> can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns> columns and the <invalidOrdinalNum> input has <invalidNumColumns> columns.

NUM_TABLE_VALUE_ALIASES_MISMATCH​

SQLSTATE: 42826

Number of given aliases does not match number of output columns.

Function name: <funcName>; number of aliases: <aliasesNum>; number of output columns: <outColsNum>.

OAUTH_CUSTOM_IDENTITY_CLAIM_NOT_PROVIDED​

SQLSTATE: 22KD2

No custom identity claim was provided.

ONLY_SECRET_FUNCTION_SUPPORTED_HERE​

SQLSTATE: 42K0E

Calling function <functionName> is not supported in this <location>; <supportedFunctions> supported here.

ONLY_SUPPORTED_WITH_UC_SQL_CONNECTOR​

SQLSTATE: 0A000

SQL operation <operation> is only supported on Databricks SQL connectors with Unity Catalog support.

OPERATION_CANCELED​

SQLSTATE: HY008

Operation has been canceled.

OPERATION_REQUIRES_UNITY_CATALOG​

SQLSTATE: 0AKUD

Operation <operation> requires Unity Catalog to be enabled.

OP_NOT_SUPPORTED_READ_ONLY​

SQLSTATE: 42KD1

<plan> is not supported in read-only session mode.

ORDER_BY_POS_OUT_OF_RANGE​

SQLSTATE: 42805

ORDER BY position <index> is not in select list (valid range is [1, <size>]).

PARQUET_CONVERSION_FAILURE​

SQLSTATE: 42846

Unable to create a Parquet converter for the data type <dataType> whose Parquet type is <parquetType>.

For more details see PARQUET_CONVERSION_FAILURE

PARQUET_TYPE_ILLEGAL​

SQLSTATE: 42846

Illegal Parquet type: <parquetType>.

PARQUET_TYPE_NOT_RECOGNIZED​

SQLSTATE: 42846

Unrecognized Parquet type: <field>.

PARQUET_TYPE_NOT_SUPPORTED​

SQLSTATE: 42846

Parquet type not yet supported: <parquetType>.

PARSE_EMPTY_STATEMENT​

SQLSTATE: 42617

Syntax error, unexpected empty statement.

PARSE_MODE_UNSUPPORTED​

SQLSTATE: 42601

The function <funcName> doesn't support the <mode> mode. Acceptable modes are PERMISSIVE and FAILFAST.

PARSE_SYNTAX_ERROR​

SQLSTATE: 42601

Syntax error at or near <error> <hint>.

PARTITIONS_ALREADY_EXIST​

SQLSTATE: 428FT

Cannot ADD or RENAME TO partition(s) <partitionList> in table <tableName> because they already exist.

Choose a different name, drop the existing partition, or add the IF NOT EXISTS clause to tolerate a pre-existing partition.
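For example, a minimal sketch tolerating a pre-existing partition (the table and partition values are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical table `sales` partitioned by `date`.
spark.sql("ALTER TABLE sales ADD IF NOT EXISTS PARTITION (date = '2024-01-01')")
```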

PARTITIONS_NOT_FOUND​

SQLSTATE: 428FT

The partition(s) <partitionList> cannot be found in table <tableName>.

Verify the partition specification and table name.

To tolerate the error on drop use ALTER TABLE … DROP IF EXISTS PARTITION.

PARTITION_COLUMN_NOT_FOUND_IN_SCHEMA​

SQLSTATE: 42000

Partition column <column> not found in schema <schema>. Please provide the existing column for partitioning.

PARTITION_LOCATION_ALREADY_EXISTS​

SQLSTATE: 42K04

Partition location <locationPath> already exists in table <tableName>.

PARTITION_LOCATION_IS_NOT_UNDER_TABLE_DIRECTORY​

SQLSTATE: 42KD5

Failed to execute the ALTER TABLE SET PARTITION LOCATION statement because the partition location <location> is not under the table directory <table>.

To fix it, please set the partition location to a subdirectory of <table>.

PARTITION_METADATA​

SQLSTATE: 0AKUC

<action> is not allowed on table <tableName> because storing partition metadata is not supported in Unity Catalog.

PARTITION_NUMBER_MISMATCH​

SQLSTATE: KD009

Number of values (<partitionNumber>) did not match schema size (<partitionSchemaSize>): values are <partitionValues>, schema is <partitionSchema>, file path is <urlEncodedPath>.

Please re-materialize the table or contact the owner.

PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY​

SQLSTATE: 42S23

The expression <expression> must be inside 'partitionedBy'.

PATH_ALREADY_EXISTS​

SQLSTATE: 42K04

Path <outputPath> already exists. Set mode as "overwrite" to overwrite the existing path.
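For example, a minimal PySpark sketch (the output path is hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10)

# The default mode ("error") raises PATH_ALREADY_EXISTS on a second write;
# "overwrite" replaces the existing path instead.
df.write.mode("overwrite").parquet("/tmp/example_output")
```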

PATH_NOT_FOUND​

SQLSTATE: 42K03

Path does not exist: <path>.

PHOTON_DESERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED​

SQLSTATE: 54000

Deserializing the Photon protobuf plan requires at least <size> bytes, which exceeds the limit of <limit> bytes. This could be due to a very large plan or the presence of a very wide schema. Try to simplify the query, remove unnecessary columns, or disable Photon.

PHOTON_SERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED​

SQLSTATE: 54000

The serialized Photon protobuf plan has size <size> bytes, which exceeds the limit of <limit> bytes. The serialized size of data types in the plan is <dataTypeSize> bytes. This could be due to a very large plan or the presence of a very wide schema. Consider rewriting the query to remove unwanted operations and columns or disable Photon.

PIPELINE_DOES_NOT_EXIST​

SQLSTATE: 42K03

Pipeline '<pipelineId>' does not exist

For more details see PIPELINE_DOES_NOT_EXIST

PIPE_OPERATOR_AGGREGATE_EXPRESSION_CONTAINS_NO_AGGREGATE_FUNCTION​

SQLSTATE: 0A000

Non-grouping expression <expr> is provided as an argument to the |> AGGREGATE pipe operator but does not contain any aggregate function; please update it to include an aggregate function and then retry the query again.

PIPE_OPERATOR_CONTAINS_AGGREGATE_FUNCTION​

SQLSTATE: 0A000

Aggregate function <expr> is not allowed when using the pipe operator |> <clause> clause; please use the pipe operator |> AGGREGATE clause instead.

PIVOT_VALUE_DATA_TYPE_MISMATCH​

SQLSTATE: 42K09

Invalid pivot value '<value>': value data type <valueType> does not match pivot column data type <pivotType>.

POINTER_ARRAY_OUT_OF_MEMORY​

SQLSTATE: 82002

Not enough memory to grow pointer array

POLICY_ALREADY_EXISTS​

SQLSTATE: 42000

Cannot create policy <policyName> because it already exists.

Choose a different name or drop the existing policy.

POLICY_NOT_FOUND​

SQLSTATE: 22023

Cannot execute <commandType> command because the policy <policyName> on <securableFullname> cannot be found.

Please verify the spelling and correctness.

POLICY_ON_SECURABLE_TYPE_NOT_SUPPORTED​

SQLSTATE: 42000

Cannot create policy on securable type <securableType>. Supported securable types: <allowedTypes>.

PROCEDURE_ARGUMENT_NUMBER_MISMATCH​

SQLSTATE: 42605

Procedure <procedureName> expects <expected> arguments, but <actual> were provided.

PROCEDURE_CREATION_EMPTY_ROUTINE​

SQLSTATE: 0A000

CREATE PROCEDURE with an empty routine definition is not allowed.

PROCEDURE_CREATION_PARAMETER_OUT_INOUT_WITH_DEFAULT​

SQLSTATE: 42601

The parameter <parameterName> is defined with parameter mode <parameterMode>. OUT and INOUT parameter cannot be omitted when invoking a routine and therefore do not support a DEFAULT expression. To proceed, remove the DEFAULT clause or change the parameter mode to IN.

PROCEDURE_NOT_SUPPORTED​

SQLSTATE: 0A000

Stored procedures are not supported.

PROCEDURE_NOT_SUPPORTED_WITH_HMS​

SQLSTATE: 0A000

Stored procedures are not supported with Hive Metastore. Please use Unity Catalog instead.

PROTOBUF_DEPENDENCY_NOT_FOUND​

SQLSTATE: 42K0G

Could not find dependency: <dependencyName>.

PROTOBUF_DESCRIPTOR_FILE_NOT_FOUND​

SQLSTATE: 42K0G

Error reading Protobuf descriptor file at path: <filePath>.

PROTOBUF_FIELD_MISSING​

SQLSTATE: 42K0G

Searching for <field> in Protobuf schema at <protobufSchema> gave <matchSize> matches. Candidates: <matches>.

PROTOBUF_FIELD_MISSING_IN_SQL_SCHEMA​

SQLSTATE: 42K0G

Found <field> in Protobuf schema but there is no match in the SQL schema.

PROTOBUF_FIELD_TYPE_MISMATCH​

SQLSTATE: 42K0G

Type mismatch encountered for field: <field>.

PROTOBUF_JAVA_CLASSES_NOT_SUPPORTED​

SQLSTATE: 0A000

Java classes are not supported for <protobufFunction>. Contact Databricks Support about alternate options.

PROTOBUF_MESSAGE_NOT_FOUND​

SQLSTATE: 42K0G

Unable to locate Message <messageName> in Descriptor.

PROTOBUF_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE​

SQLSTATE: 22KD3

Cannot call the <functionName> SQL function because the Protobuf data source is not loaded.

Please restart your job or session with the 'spark-protobuf' package loaded, such as by using the --packages argument on the command line, and then retry your query or command again.

PROTOBUF_TYPE_NOT_SUPPORT​

SQLSTATE: 42K0G

Protobuf type not yet supported: <protobufType>.

PS_FETCH_RETRY_EXCEPTION​

SQLSTATE: 22000

Task in pubsub fetch stage cannot be retried. Partition <partitionInfo> in stage <stageInfo>, TID <taskId>.

PS_INVALID_EMPTY_OPTION​

SQLSTATE: 42000

<key> cannot be an empty string.

PS_INVALID_KEY_TYPE​

SQLSTATE: 22000

Invalid key type for PubSub dedup: <key>.

PS_INVALID_OPTION​

SQLSTATE: 42000

The option <key> is not supported by PubSub. It can only be used in testing.

PS_INVALID_OPTION_TYPE​

SQLSTATE: 42000

Invalid type for <key>. Expected type of <key> to be type <type>.

PS_INVALID_READ_LIMIT​

SQLSTATE: 42000

Invalid read limit on PubSub stream: <limit>.

PS_INVALID_UNSAFE_ROW_CONVERSION_FROM_PROTO​

SQLSTATE: 22000

Invalid UnsafeRow to decode to PubSubMessageMetadata, the desired proto schema is: <protoSchema>. The input UnsafeRow might be corrupted: <unsafeRow>.

PS_INVALID_WORKLOAD_IDENTITY_FEDERATION_AUDIENCE_OPTION​

SQLSTATE: 42000

The query or command failed because of an invalid read option: in spark.readStream.format("pubsub").option("workloadIdentityFederation.audience", <audience>). Update <audience> to match the following format: //iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{POOL_ID}/providers/{PROVIDER_ID} and then retry the query or command again.

PS_MISSING_AUTH_INFO​

SQLSTATE: 42000

Failed to find complete PubSub authentication information.

PS_MISSING_REQUIRED_OPTION​

SQLSTATE: 42000

Could not find required option: <key>.

PS_MOVING_CHECKPOINT_FAILURE​

SQLSTATE: 22000

Failed to move raw data checkpoint files from <src> to destination directory: <dest>.

PS_MULTIPLE_AUTH_OPTIONS​

SQLSTATE: 42000

Please provide either your Databricks service credential or your GCP service account credentials.

PS_MULTIPLE_FAILED_EPOCHS​

SQLSTATE: 22000

PubSub stream cannot be started as there is more than one failed fetch: <failedEpochs>.

PS_OPTION_NOT_IN_BOUNDS​

SQLSTATE: 22000

<key> must be within the following bounds (<min>, <max>) exclusive of both bounds.

PS_PROVIDE_CREDENTIALS_WITH_OPTION​

SQLSTATE: 42000

Shared clusters do not support authentication with instance profiles. Provide credentials to the stream directly using .option().

PS_SPARK_SPECULATION_NOT_SUPPORTED​

SQLSTATE: 0A000

The PubSub source connector is only available on clusters with spark.speculation disabled.

PS_UNABLE_TO_CREATE_SUBSCRIPTION​

SQLSTATE: 42000

An error occurred while trying to create subscription <subId> on topic <topicId>. Please check that there are sufficient permissions to create a subscription and try again.

PS_UNABLE_TO_PARSE_PROTO​

SQLSTATE: 22000

Unable to parse serialized bytes to generate proto.

PS_UNSUPPORTED_GET_OFFSET_CALL​

SQLSTATE: 0A000

getOffset is not supported without supplying a limit.

PYTHON_DATA_SOURCE_ERROR​

SQLSTATE: 38000

Failed to <action> Python data source <type>: <msg>

PYTHON_STREAMING_DATA_SOURCE_RUNTIME_ERROR​

SQLSTATE: 38000

Failed when the Python streaming data source performs <action>: <msg>

QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY​

SQLSTATE: 428HD

Unable to access referenced table because a previously assigned column mask is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:

For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY

QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY​

SQLSTATE: 428HD

Unable to access referenced table because a previously assigned row level security policy is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:

For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY

QUERY_EXECUTION_TIMEOUT_EXCEEDED​

SQLSTATE: 57KD0

Query execution was cancelled due to exceeding the timeout (<timeoutSec>s). You can increase the limit in seconds by setting <config>.

QUERY_REJECTED​

SQLSTATE: 08004

Query execution was rejected.

QUERY_RESULT_WRITE_TO_CLOUD_STORE_PERMISSION_ERROR​

SQLSTATE: 42501

The workspace internal storage configuration prevents Databricks from accessing the cloud store.

READ_CURRENT_FILE_NOT_FOUND​

SQLSTATE: 42K03

<message>

It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running the 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.

READ_FILES_AMBIGUOUS_ROUTINE_PARAMETERS​

SQLSTATE: 4274K

The invocation of function <functionName> has <parameterName> and <alternativeName> set, which are aliases of each other. Please set only one of them.

READ_FILES_CREDENTIALS_PARSE_ERROR​

SQLSTATE: 42000

An error occurred while parsing the temporary credentials of the read_files() function.

For more details see READ_FILES_CREDENTIALS_PARSE_ERROR

READ_TVF_UNEXPECTED_REQUIRED_PARAMETER​

SQLSTATE: 4274K

The required parameter <parameterName> of the function <functionName> must be assigned at position <expectedPos> without the name.

RECIPIENT_EXPIRATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Only TIMESTAMP/TIMESTAMP_LTZ/TIMESTAMP_NTZ types are supported for recipient expiration timestamp.

RECURSION_LEVEL_LIMIT_EXCEEDED​

SQLSTATE: 42836

Recursion level limit <levelLimit> reached but the query has not been exhausted; try increasing the limit, for example 'WITH RECURSIVE t(col) MAX RECURSION LEVEL 200'.
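For illustration, a hedged sketch of raising the limit, assuming a runtime that supports recursive CTEs; the clause placement follows the example in the message:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("""
    WITH RECURSIVE t(n) MAX RECURSION LEVEL 200 AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM t WHERE n < 150
    )
    SELECT count(*) FROM t
""").show()
```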

RECURSION_ROW_LIMIT_EXCEEDED​

SQLSTATE: 42836

Recursion row limit <rowLimit> reached but the query has not been exhausted; try setting a larger LIMIT value when querying the CTE relation.

RECURSIVE_CTE_IN_LEGACY_MODE​

SQLSTATE: 42836

Recursive definitions cannot be used in legacy CTE precedence mode (spark.sql.legacy.ctePrecedencePolicy=LEGACY).

RECURSIVE_CTE_WITH_LEGACY_INLINE_FLAG​

SQLSTATE: 42836

Recursive definitions cannot be used when legacy inline flag is set to true (spark.sql.legacy.inlineCTEInCommands=true).

RECURSIVE_PROTOBUF_SCHEMA​

SQLSTATE: 42K0G

Found recursive reference in Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>. Try setting the option recursive.fields.max.depth to a value between 1 and 10. Going beyond 10 levels of recursion is not allowed.
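For illustration, a hedged PySpark sketch, assuming the spark-protobuf package is loaded; the descriptor path and message name are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.protobuf.functions import from_protobuf

spark = SparkSession.builder.getOrCreate()
# Hypothetical binary payloads and a descriptor with a recursive message type.
raw = spark.read.format("binaryFile").load("/data/proto_payloads")
parsed = raw.select(
    from_protobuf(
        raw.content,
        "com.example.TreeNode",
        "/data/tree.desc",
        options={"recursive.fields.max.depth": "3"},  # allowed range: 1 to 10
    ).alias("node")
)
```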

RECURSIVE_VIEW​

SQLSTATE: 42K0H

Recursive view <viewIdent> detected (cycle: <newPath>).

REF_DEFAULT_VALUE_IS_NOT_ALLOWED_IN_PARTITION​

SQLSTATE: 42601

References to DEFAULT column values are not allowed within the PARTITION clause.

RELATION_LARGER_THAN_8G​

SQLSTATE: 54000

Cannot build a <relationName> that is larger than 8G.

REMOTE_FUNCTION_HTTP_FAILED_ERROR​

SQLSTATE: 57012

The remote HTTP request failed with code <errorCode>, and error message <errorMessage>

REMOTE_FUNCTION_HTTP_RESULT_PARSE_ERROR​

SQLSTATE: 22032

Failed to evaluate the <functionName> SQL function due to inability to parse the JSON result from the remote HTTP response; the error message is <errorMessage>. Check API documentation: <docUrl>. Please fix the problem indicated in the error message and retry the query again.

REMOTE_FUNCTION_HTTP_RESULT_UNEXPECTED_ERROR​

SQLSTATE: 57012

Failed to evaluate the <functionName> SQL function due to inability to process the unexpected remote HTTP response; the error message is <errorMessage>. Check API documentation: <docUrl>. Please fix the problem indicated in the error message and retry the query again.

REMOTE_FUNCTION_HTTP_RETRY_TIMEOUT​

SQLSTATE: 57012

The remote request failed after retrying <N> times; the last failed HTTP error code was <errorCode> and the message was <errorMessage>

REMOTE_FUNCTION_MISSING_REQUIREMENTS_ERROR​

SQLSTATE: 57012

Failed to evaluate the <functionName> SQL function because <errorMessage>. Check requirements in <docUrl>. Please fix the problem indicated in the error message and retry the query again.

REMOTE_QUERY_FUNCTION_UNSUPPORTED_CONNECTOR_PARAMETERS​

SQLSTATE: 4274K

Parameters <parameterNames> are not supported for the remote_query function, which queries a connection of type '<connectionType>'.

For more details see REMOTE_QUERY_FUNCTION_UNSUPPORTED_CONNECTOR_PARAMETERS

RENAME_SRC_PATH_NOT_FOUND​

SQLSTATE: 42K03

Failed to rename as <sourcePath> was not found.

REPEATED_CLAUSE​

SQLSTATE: 42614

The <clause> clause may be used at most once per <operation> operation.

REQUIRED_PARAMETER_ALREADY_PROVIDED_POSITIONALLY​

SQLSTATE: 4274K

The routine <routineName> required parameter <parameterName> has been assigned at position <positionalIndex> without the name.

Please update the function call to either remove the named argument with <parameterName> for this parameter or remove the positional argument at <positionalIndex> and then try the query again.

REQUIRED_PARAMETER_NOT_FOUND​

SQLSTATE: 4274K

Cannot invoke routine <routineName> because the parameter named <parameterName> is required, but the routine call did not supply a value. Please update the routine call to supply an argument value (either positionally at index <index> or by name) and retry the query again.

REQUIRES_SINGLE_PART_NAMESPACE​

SQLSTATE: 42K05

<sessionCatalog> requires a single-part namespace, but got <namespace>.

RESCUED_DATA_COLUMN_CONFLICT_WITH_SINGLE_VARIANT​

SQLSTATE: 4274K

The 'rescuedDataColumn' DataFrame API reader option is mutually exclusive with the 'singleVariantColumn' DataFrame API option.

Please remove one of them and then retry the DataFrame operation again.

RESERVED_CDC_COLUMNS_ON_WRITE​

SQLSTATE: 42939

The write contains reserved columns <columnList> that are used internally as metadata for Change Data Feed. To write to the table, either rename/drop these columns or disable Change Data Feed on the table by setting <config> to false.

RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED​

SQLSTATE: 0A000

The option <option> has restricted values on Shared clusters for the <source> source.

For more details see RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED

ROUTINE_ALREADY_EXISTS​

SQLSTATE: 42723

Cannot create the <newRoutineType> <routineName> because a <existingRoutineType> of that name already exists.

Choose a different name, drop or replace the existing <existingRoutineType>, or add the IF NOT EXISTS clause to tolerate a pre-existing <newRoutineType>.

ROUTINE_NOT_FOUND​

SQLSTATE: 42883

The routine <routineName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP ... IF EXISTS.

ROUTINE_PARAMETER_NOT_FOUND​

SQLSTATE: 42000

The routine <routineName> does not support the parameter <parameterName> specified at position <pos>.<suggestion>

ROUTINE_USES_SYSTEM_RESERVED_CLASS_NAME​

SQLSTATE: 42939

The function <routineName> cannot be created because the specified classname '<className>' is reserved for system use. Please rename the class and try again.

ROW_LEVEL_SECURITY_ABAC_MISMATCH​

SQLSTATE: 0A000

Row filters could not be resolved on <tableName> because there was a mismatch between row filters inherited from policies and explicitly defined row filters. To proceed, please disable Attribute Based Access Control (ABAC) and contact Databricks support.

ROW_LEVEL_SECURITY_CHECK_CONSTRAINT_UNSUPPORTED​

SQLSTATE: 0A000

Creating CHECK constraint on table <tableName> with row level security policies is not supported.

ROW_LEVEL_SECURITY_COLUMN_MASK_UNRESOLVED_REFERENCE_COLUMN​

SQLSTATE: 42703

A column with name <objectName> referenced in a row filter or column mask function parameter cannot be resolved.

This may happen if the underlying table schema has changed and the referenced column no longer exists.

For example, this can occur if the column was removed in an external system (for example, a federated table), or if a REPLACE operation on the table dropped the column.

To resolve this, users with management privileges on the table can inspect the current row filters and column masks using DESCRIBE TABLE EXTENDED, and drop or re-create any that reference non-existent columns using ALTER TABLE ... SET/DROP ROW FILTER or MASK.

Note: Databricks introduced a safety improvement to preserve column masks during REPLACE operations when the new schema includes the same column, even if the mask is not specified. This prevents unintentional policy loss on tables.

ROW_LEVEL_SECURITY_DUPLICATE_COLUMN_NAME​

SQLSTATE: 42734

A <statementType> statement attempted to assign a row level security policy to a table, but two or more referenced columns had the same name <columnName>, which is invalid.

ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED​

SQLSTATE: 0A000

Row level security policies for <tableName> are not supported:

For more details see ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED

ROW_LEVEL_SECURITY_INCOMPATIBLE_SCHEMA_CHANGE​

SQLSTATE: 0A000

Unable to <statementType> <columnName> from table <tableName> because it's referenced in a row level security policy. The table owner must remove or alter this policy before proceeding.

ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_SOURCE​

SQLSTATE: 0A000

MERGE INTO operations do not support row level security policies in source table <tableName>.

ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_TARGET​

SQLSTATE: 0A000

MERGE INTO operations do not support writing into table <tableName> with row level security policies.

ROW_LEVEL_SECURITY_MULTI_PART_COLUMN_NAME​

SQLSTATE: 42K05

This statement attempted to assign a row level security policy to a table, but referenced column <columnName> had multiple name parts, which is invalid.

ROW_LEVEL_SECURITY_REQUIRE_UNITY_CATALOG​

SQLSTATE: 0A000

Row level security policies are only supported in Unity Catalog.

ROW_LEVEL_SECURITY_SHOW_PARTITIONS_UNSUPPORTED​

SQLSTATE: 0A000

The SHOW PARTITIONS command is not supported for <format> tables with a row level security policy.

ROW_LEVEL_SECURITY_TABLE_CLONE_SOURCE_NOT_SUPPORTED​

SQLSTATE: 0A000

<mode> clone from table <tableName> with row level security policy is not supported.

ROW_LEVEL_SECURITY_TABLE_CLONE_TARGET_NOT_SUPPORTED​

SQLSTATE: 0A000

<mode> clone to table <tableName> with row level security policy is not supported.

ROW_LEVEL_SECURITY_UNSUPPORTED_CONSTANT_AS_PARAMETER​

SQLSTATE: 0AKD1

Using a constant as a parameter in a row level security policy is not supported. Please update your SQL command to remove the constant from the row filter definition and then retry the command again.

ROW_LEVEL_SECURITY_UNSUPPORTED_DATA_TYPE​

SQLSTATE: 0AKDC

Function <functionName> used for row level security policy has parameter with unsupported data type <dataType>.

ROW_LEVEL_SECURITY_UNSUPPORTED_PROVIDER​

SQLSTATE: 0A000

Failed to execute <statementType> command because assigning row level security policy is not supported for target data source with table provider: "<provider>".

ROW_SUBQUERY_TOO_MANY_ROWS​

SQLSTATE: 21000

More than one row returned by a subquery used as a row.

ROW_VALUE_IS_NULL​

SQLSTATE: 22023

Found NULL in a row at the index <index>, expected a non-NULL value.

RULE_ID_NOT_FOUND​

SQLSTATE: 22023

No id was found for the rule name "<ruleName>". Please modify RuleIdCollection.scala if you are adding a new rule.

SALESFORCE_DATA_SHARE_API_AUTHORIZATION_FAILED​

SQLSTATE: 42505

Authorization to the Salesforce Data Share API failed. Verify that the Databricks connection details are provided to the appropriate Salesforce data share target.

SAMPLE_TABLE_PERMISSIONS​

SQLSTATE: 42832

Permissions not supported on sample databases/tables.

SCALAR_FUNCTION_NOT_COMPATIBLE​

SQLSTATE: 42K0O

ScalarFunction <scalarFunc> does not override method 'produceResult(InternalRow)' with a custom implementation.

SCALAR_FUNCTION_NOT_FULLY_IMPLEMENTED​

SQLSTATE: 42K0P

ScalarFunction <scalarFunc> does not implement or override method 'produceResult(InternalRow)'.

SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION​

SQLSTATE: 0A000

The correlated scalar subquery '<sqlExpr>' is neither present in GROUP BY, nor in an aggregate function.

Add it to GROUP BY using ordinal position or wrap it in first() (or first_value) if you don't care which value you get.

SCALAR_SUBQUERY_TOO_MANY_ROWS​

SQLSTATE: 21000

More than one row returned by a subquery used as an expression.

SCHEDULE_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot add <scheduleType> to a table that already has <existingScheduleType>. Please drop the existing schedule or use ALTER TABLE ... ALTER <scheduleType> ... to alter it.

SCHEDULE_PERIOD_INVALID​

SQLSTATE: 22003

The schedule period for <timeUnit> must be an integer value between 1 and <upperBound> (inclusive). Received: <actual>.

SCHEMA_ALREADY_EXISTS​

SQLSTATE: 42P06

Cannot create schema <schemaName> because it already exists.

Choose a different name, drop the existing schema, or add the IF NOT EXISTS clause to tolerate pre-existing schema.

SCHEMA_NOT_EMPTY​

SQLSTATE: 2BP01

Cannot drop a schema <schemaName> because it contains objects.

Use DROP SCHEMA ... CASCADE to drop the schema and all its objects.

SCHEMA_NOT_FOUND​

SQLSTATE: 42704

The schema <schemaName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.

To tolerate the error on drop use DROP SCHEMA IF EXISTS.

SCHEMA_REGISTRY_CONFIGURATION_ERROR​

SQLSTATE: 42K0G

Schema from schema registry could not be initialized. <reason>.

SECOND_FUNCTION_ARGUMENT_NOT_INTEGER​

SQLSTATE: 22023

The second argument of <functionName> function needs to be an integer.

SECRET_FUNCTION_INVALID_LOCATION​

SQLSTATE: 42K0E

Cannot execute the <commandType> command with one or more non-encrypted references to the SECRET function; please encrypt the result of each such function call with AES_ENCRYPT and try the command again.

SEED_EXPRESSION_IS_UNFOLDABLE​

SQLSTATE: 42K08

The seed expression <seedExpr> of the expression <exprWithSeed> must be foldable.

SERVER_IS_BUSY​

SQLSTATE: 08KD1

The server is busy and could not handle the request. Please wait a moment and try again.

SFTP_DEPRECATED_SSH_RSA_KEY_ALGORITHM​

SQLSTATE: 08006

The SFTP server <host>:<port> is using the deprecated SSH RSA algorithm for key exchange.

Consider upgrading the SFTP server to use a more secure algorithm such as ECDSA or ED25519.

Alternatively, bypass this error by setting <escapeHatchConf> to true.

SFTP_UNABLE_TO_CONNECT​

SQLSTATE: 08006

Failed to connect to SFTP server <host> on port <port> with username <user>.

<error>

SFTP_UNKNOWN_HOST_KEY​

SQLSTATE: 08006

The host key of the SFTP server <host> is unknown or changed.

Verify the SSH fingerprint in the error message below from the connection attempt:

<error>

Then extract the fingerprint hash and provide it as part of the connection creation options with option name key_fingerprint.

For example, if the message states 'ECDSA key fingerprint is SHA256:XXX/YYY', submit 'SHA256:XXX/YYY' as part of the connection options.

SFTP_USER_DOES_NOT_MATCH​

SQLSTATE: 08006

The user retrieved from the credential <credentialUser> does not match the one specified in the SFTP path <path>.

SHOW_COLUMNS_WITH_CONFLICT_NAMESPACE​

SQLSTATE: 42K05

SHOW COLUMNS with conflicting namespaces: <namespaceA> != <namespaceB>.

SORT_BY_WITHOUT_BUCKETING​

SQLSTATE: 42601

sortBy must be used together with bucketBy.
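For example, a minimal PySpark sketch; note that bucketed writes must also go through saveAsTable rather than a path-based save:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(100).withColumnRenamed("id", "user_id")

# sortBy is only valid together with bucketBy.
(df.write
   .bucketBy(8, "user_id")
   .sortBy("user_id")
   .mode("overwrite")
   .saveAsTable("users_bucketed"))
```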

SPARK_JOB_CANCELLED​

SQLSTATE: HY008

Job <jobId> cancelled <reason>

SPECIFY_BUCKETING_IS_NOT_ALLOWED​

SQLSTATE: 42601

A CREATE TABLE without explicit column list cannot specify bucketing information.

Please use the form with explicit column list and specify bucketing information.

Alternatively, allow bucketing information to be inferred by omitting the clause.

SPECIFY_CLUSTER_BY_WITH_BUCKETING_IS_NOT_ALLOWED​

SQLSTATE: 42908

Cannot specify both CLUSTER BY and CLUSTERED BY INTO BUCKETS.

SPECIFY_CLUSTER_BY_WITH_PARTITIONED_BY_IS_NOT_ALLOWED​

SQLSTATE: 42908

Cannot specify both CLUSTER BY and PARTITIONED BY.

SPECIFY_PARTITION_IS_NOT_ALLOWED​

SQLSTATE: 42601

A CREATE TABLE without explicit column list cannot specify PARTITIONED BY.

Please use the form with explicit column list and specify PARTITIONED BY.

Alternatively, allow partitioning to be inferred by omitting the PARTITIONED BY clause.

SPILL_OUT_OF_MEMORY​

SQLSTATE: 82003

Error while calling spill() on <consumerToSpill> : <message>

SQL_CONF_NOT_FOUND​

SQLSTATE: 42K0I

The SQL config <sqlConf> cannot be found. Please verify that the config exists.

SQL_SCRIPT_IN_EXECUTE_IMMEDIATE​

SQLSTATE: 07501

SQL Scripts in EXECUTE IMMEDIATE commands are not allowed. Please ensure that the SQL query provided (<sqlString>) is not a SQL Script. Make sure the sql_string is a well-formed SQL statement and does not contain BEGIN and END.

SQL_SCRIPT_MAX_NUMBER_OF_CHARACTERS_EXCEEDED​

SQLSTATE: 54000

Maximum number of characters in a SQL Script (id: <scriptId>) has been exceeded. The maximum number of characters allowed is <maxChars>, and the script had <chars> characters.

SQL_SCRIPT_MAX_NUMBER_OF_LINES_EXCEEDED​

SQLSTATE: 54000

Maximum number of lines in a SQL Script (id: <scriptId>) has been exceeded. The maximum number of lines allowed is <maxLines>, and the script had <lines> lines.

SQL_SCRIPT_MAX_NUMBER_OF_LOCAL_VARIABLE_DECLARATIONS_EXCEEDED​

SQLSTATE: 54KD1

Maximum number of local variable declarations in a SQL Script (id: <scriptId>) has been exceeded. The maximum number of declarations allowed is <maxDeclarations>, and the script had <declarations>.

SQL_STORED_PROCEDURES_NESTED_CALLS_LIMIT_EXCEEDED​

SQLSTATE: 54000

Maximum number of nested procedure calls has been exceeded with procedure (name: <procedureName>, callId: <procedureCallId>). The maximum allowed number of nested procedure calls is <limit>.

STAGING_PATH_CURRENTLY_INACCESSIBLE​

SQLSTATE: 22000

Transient error while accessing target staging path <path>, please try again in a few minutes.

STAR_GROUP_BY_POS​

SQLSTATE: 0A000

Star (*) is not allowed in a select list when GROUP BY an ordinal position is used.

STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_HANDLE_STATE​

SQLSTATE: 42802

Failed to perform stateful processor operation=<operationType> with invalid handle state=<handleState>.

STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_TIME_MODE​

SQLSTATE: 42802

Failed to perform stateful processor operation=<operationType> with invalid timeMode=<timeMode>

STATEFUL_PROCESSOR_DUPLICATE_STATE_VARIABLE_DEFINED​

SQLSTATE: 42802

State variable with name <stateVarName> has already been defined in the StatefulProcessor.

STATEFUL_PROCESSOR_INCORRECT_TIME_MODE_TO_ASSIGN_TTL​

SQLSTATE: 42802

Cannot use TTL for state=<stateName> in timeMode=<timeMode>, use TimeMode.ProcessingTime() instead.

STATEFUL_PROCESSOR_TTL_DURATION_MUST_BE_POSITIVE​

SQLSTATE: 42802

TTL duration must be greater than zero for State store operation=<operationType> on state=<stateName>.

STATEFUL_PROCESSOR_UNKNOWN_TIME_MODE​

SQLSTATE: 42802

Unknown time mode <timeMode>. Accepted timeMode values are 'none', 'processingTime', and 'eventTime'.

STATE_STORE_CANNOT_CREATE_COLUMN_FAMILY_WITH_RESERVED_CHARS​

SQLSTATE: 42802

Failed to create column family with unsupported starting character and name=<colFamilyName>.

STATE_STORE_CANNOT_USE_COLUMN_FAMILY_WITH_INVALID_NAME​

SQLSTATE: 42802

Failed to perform column family operation=<operationName> with invalid name=<colFamilyName>. Column family name cannot be empty or include leading/trailing spaces or use the reserved keyword=default

STATE_STORE_COLUMN_FAMILY_SCHEMA_INCOMPATIBLE​

SQLSTATE: 42802

Incompatible schema transformation with column family=<colFamilyName>, oldSchema=<oldSchema>, newSchema=<newSchema>.

STATE_STORE_DOES_NOT_SUPPORT_REUSABLE_ITERATOR​

SQLSTATE: 42K06

StateStore <inputClass> does not support reusable iterator.

STATE_STORE_HANDLE_NOT_INITIALIZED​

SQLSTATE: 42802

The handle has not been initialized for this StatefulProcessor.

Please only use the StatefulProcessor within the transformWithState operator.

STATE_STORE_INCORRECT_NUM_ORDERING_COLS_FOR_RANGE_SCAN​

SQLSTATE: 42802

Incorrect number of ordering ordinals=<numOrderingCols> for range scan encoder. The number of ordering ordinals cannot be zero or greater than the number of schema columns.

STATE_STORE_INCORRECT_NUM_PREFIX_COLS_FOR_PREFIX_SCAN​

SQLSTATE: 42802

Incorrect number of prefix columns=<numPrefixCols> for prefix scan encoder. The number of prefix columns cannot be zero, or greater than or equal to the number of schema columns.

STATE_STORE_INVALID_CONFIG_AFTER_RESTART​

SQLSTATE: 42K06

Cannot change <configName> from <oldConfig> to <newConfig> between restarts. Please set <configName> to <oldConfig>, or restart with a new checkpoint directory.

STATE_STORE_INVALID_PROVIDER​

SQLSTATE: 42K06

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.StateStoreProvider.

STATE_STORE_INVALID_VARIABLE_TYPE_CHANGE​

SQLSTATE: 42K06

Cannot change <stateVarName> to <newType> between query restarts. Please set <stateVarName> to <oldType>, or restart with a new checkpoint directory.

STATE_STORE_KEY_SCHEMA_NOT_COMPATIBLE​

SQLSTATE: 42000

The provided key schema does not match existing schema in operator state.

Existing schema=<storedKeySchema>; provided schema=<newKeySchema>.

To run the query without schema validation, set spark.sql.streaming.stateStore.stateSchemaCheck to false.

Note that running without schema validation can have non-deterministic behavior.

STATE_STORE_NATIVE_ROCKSDB_TIMEOUT​

SQLSTATE: 58030

When accessing the RocksDB state store for a stateful streaming operation, calling the native RocksDB function <funcName> timed out after waiting timeout=<timeoutMs> ms. Please try again, and restart the cluster if the error persists.

STATE_STORE_NULL_TYPE_ORDERING_COLS_NOT_SUPPORTED​

SQLSTATE: 42802

Null type ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

STATE_STORE_PROVIDER_DOES_NOT_SUPPORT_FINE_GRAINED_STATE_REPLAY​

SQLSTATE: 42K06

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.SupportsFineGrainedReplay.

Therefore, it does not support option snapshotStartBatchId or readChangeFeed in state data source.

STATE_STORE_STATE_SCHEMA_FILES_THRESHOLD_EXCEEDED​

SQLSTATE: 42K06

The number of state schema files <numStateSchemaFiles> exceeds the maximum number of state schema files for this query: <maxStateSchemaFiles>.

Added: <addedColumnFamilies>, Removed: <removedColumnFamilies>

Please set 'spark.sql.streaming.stateStore.stateSchemaFilesThreshold' to a higher number, or revert state schema modifications

STATE_STORE_UNSUPPORTED_OPERATION_ON_MISSING_COLUMN_FAMILY​

SQLSTATE: 42802

State store operation=<operationType> not supported on missing column family=<colFamilyName>.

STATE_STORE_VALUE_SCHEMA_EVOLUTION_THRESHOLD_EXCEEDED​

SQLSTATE: 42K06

The number of state schema evolutions <numSchemaEvolutions> exceeds the maximum number of state schema evolutions, <maxSchemaEvolutions>, allowed for this column family.

Offending column family: <colFamilyName>

Please set 'spark.sql.streaming.stateStore.valueStateSchemaEvolutionThreshold' to a higher number, or revert state schema modifications

STATE_STORE_VALUE_SCHEMA_NOT_COMPATIBLE​

SQLSTATE: 42000

The provided value schema does not match existing schema in operator state.

Existing schema=<storedValueSchema>; provided schema=<newValueSchema>.

To run the query without schema validation, set spark.sql.streaming.stateStore.stateSchemaCheck to false.

Note that running without schema validation can have non-deterministic behavior.

STATE_STORE_VARIABLE_SIZE_ORDERING_COLS_NOT_SUPPORTED​

SQLSTATE: 42802

Variable size ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

STATIC_PARTITION_COLUMN_IN_INSERT_COLUMN_LIST​

SQLSTATE: 42713

Static partition column <staticName> is also specified in the column list.

STDS_COMMITTED_BATCH_UNAVAILABLE​

SQLSTATE: KD006

No committed batch found, checkpoint location: <checkpointLocation>. Ensure that the query has run and committed any microbatch before stopping.

STDS_CONFLICT_OPTIONS​

SQLSTATE: 42613

The options <options> cannot be specified together. Please specify only one of them.

STDS_FAILED_TO_READ_OPERATOR_METADATA​

SQLSTATE: 42K03

Failed to read the operator metadata for checkpointLocation=<checkpointLocation> and batchId=<batchId>.

Either the file does not exist, or the file is corrupted.

Rerun the streaming query to construct the operator metadata, and report to the corresponding communities or vendors if the error persists.

STDS_FAILED_TO_READ_STATE_SCHEMA​

SQLSTATE: 42K03

Failed to read the state schema. Either the file does not exist, or the file is corrupted. options: <sourceOptions>.

Rerun the streaming query to construct the state schema, and report to the corresponding communities or vendors if the error persists.

STDS_INVALID_OPTION_VALUE​

SQLSTATE: 42616

Invalid value for source option '<optionName>':

For more details see STDS_INVALID_OPTION_VALUE

STDS_NO_PARTITION_DISCOVERED_IN_STATE_STORE​

SQLSTATE: KD006

The state does not have any partition. Please double check that the query points to the valid state. options: <sourceOptions>

STDS_OFFSET_LOG_UNAVAILABLE​

SQLSTATE: KD006

The offset log for <batchId> does not exist, checkpoint location: <checkpointLocation>.

Please specify a batch ID that is available for querying - you can query the available batch IDs via the state metadata data source.

STDS_OFFSET_METADATA_LOG_UNAVAILABLE​

SQLSTATE: KD006

Metadata is not available for offset log for <batchId>, checkpoint location: <checkpointLocation>.

The checkpoint seems to have been run only with older Spark version(s). Run the streaming query with a recent Spark version, so that Spark constructs the state metadata.

STDS_REQUIRED_OPTION_UNSPECIFIED​

SQLSTATE: 42601

'<optionName>' must be specified.

STREAMING_AQE_NOT_SUPPORTED_FOR_STATEFUL_OPERATORS​

SQLSTATE: 0A000

Adaptive Query Execution is not supported for stateful operators in Structured Streaming.

STREAMING_FROM_MATERIALIZED_VIEW​

SQLSTATE: 0A000

Cannot stream from materialized view <viewName>. Streaming from materialized views is not supported.

STREAMING_OUTPUT_MODE​

SQLSTATE: 42KDE

Invalid streaming output mode: <outputMode>.

For more details see STREAMING_OUTPUT_MODE

STREAMING_RATE_SOURCE_OFFSET_VERSION_MISMATCH​

SQLSTATE: KD002

Expect rate source offset version <expectedVersion>, but got version <actualVersion>. To continue, set the option "version" to <expectedVersion> in the rate source options. For example, spark.readStream.format("rate").option("version", "<expectedVersion>").

STREAMING_RATE_SOURCE_V2_RAMPUP_TIME_UNSUPPORTED​

SQLSTATE: 0A000

The option "rampUpTime" is not supported by rate version 2. To use this option, set option "version" to 1. For example, spark.readStream.format("rate").option("version", "1").

STREAMING_REAL_TIME_MODE​

SQLSTATE: 0A000

Streaming real-time mode has the following limitation:

For more details see STREAMING_REAL_TIME_MODE

STREAMING_SINK_DELIVERY_MODE​

SQLSTATE: 42KDE

Invalid streaming sink delivery mode: <deliveryMode>.

For more details see STREAMING_SINK_DELIVERY_MODE

STREAMING_STATEFUL_OPERATOR_NOT_MATCH_IN_STATE_METADATA​

SQLSTATE: 42K03

The streaming stateful operator name does not match the operator in the state metadata. This is likely to happen when a user adds, removes, or changes the stateful operator of an existing streaming query.

Stateful operators in the metadata: [<OpsInMetadataSeq>]; Stateful operators in current batch: [<OpsInCurBatchSeq>].

STREAMING_TABLE_NEEDS_REFRESH​

SQLSTATE: 55019

Streaming table <tableName> needs to be refreshed to execute <operation>.

If the table is created from DBSQL, please run REFRESH STREAMING TABLE.

If the table is created by a pipeline in DLT, please run a pipeline update.

STREAMING_TABLE_NOT_SUPPORTED​

SQLSTATE: 56038

Streaming tables can only be created and refreshed in DLT and Databricks SQL Warehouses.

STREAMING_TABLE_OPERATION_NOT_ALLOWED​

SQLSTATE: 42601

The operation <operation> is not allowed:

For more details see STREAMING_TABLE_OPERATION_NOT_ALLOWED

STREAMING_TABLE_QUERY_INVALID​

SQLSTATE: 42000

Streaming table <tableName> can only be created from a streaming query. Please add the STREAM keyword to your FROM clause to turn this relation into a streaming query.

STREAM_NOT_FOUND_FOR_KINESIS_SOURCE​

SQLSTATE: 42K02

Kinesis stream <streamName> in <region> not found.

Please start a new query pointing to the correct stream name.

STRUCT_ARRAY_LENGTH_MISMATCH​

SQLSTATE: 2201E

Input row doesn't have the expected number of values required by the schema. <expected> fields are required while <actual> values are provided.

SUM_OF_LIMIT_AND_OFFSET_EXCEEDS_MAX_INT​

SQLSTATE: 22003

The sum of the LIMIT clause and the OFFSET clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>, offset = <offset>.

SYNC_METADATA_DELTA_ONLY​

SQLSTATE: 0AKDD

The repair table sync metadata command is only supported for Delta tables.

SYNC_SRC_TARGET_TBL_NOT_SAME​

SQLSTATE: 42KD2

Source table name <srcTable> must be the same as destination table name <destTable>.

SYNTAX_DISCONTINUED​

SQLSTATE: 42601

Support of the clause or keyword: <clause> has been discontinued in this context.

For more details see SYNTAX_DISCONTINUED

TABLE_OR_VIEW_ALREADY_EXISTS​

SQLSTATE: 42P07

Cannot create table or view <relationName> because it already exists.

Choose a different name, drop the existing object, add the IF NOT EXISTS clause to tolerate pre-existing objects, add the OR REPLACE clause to replace the existing materialized view, or add the OR REFRESH clause to refresh the existing streaming table.

TABLE_OR_VIEW_NOT_FOUND​

SQLSTATE: 42P01

The table or view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.

For more details see TABLE_OR_VIEW_NOT_FOUND

TABLE_VALUED_ARGUMENTS_NOT_YET_IMPLEMENTED_FOR_SQL_FUNCTIONS​

SQLSTATE: 0A000

Cannot <action> SQL user-defined function <functionName> with TABLE arguments because this functionality is not yet implemented.

TABLE_VALUED_FUNCTION_FAILED_TO_ANALYZE_IN_PYTHON​

SQLSTATE: 38000

Failed to analyze the Python user defined table function: <msg>

TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INCOMPATIBLE_WITH_CALL​

SQLSTATE: 22023

Failed to evaluate the table function <functionName> because its table metadata <requestedMetadata>, but the function call <invalidFunctionCallProperty>.

TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INVALID​

SQLSTATE: 22023

Failed to evaluate the table function <functionName> because its table metadata was invalid; <reason>.

TABLE_VALUED_FUNCTION_TOO_MANY_TABLE_ARGUMENTS​

SQLSTATE: 54023

There are too many table arguments for the table-valued function.

It allows one table argument, but got <num>.

If you want to allow it, please set "spark.sql.allowMultipleTableArguments.enabled" to "true".

TABLE_WITH_ID_NOT_FOUND​

SQLSTATE: 42P01

Table with ID <tableId> cannot be found. Verify the correctness of the UUID.

TASK_WRITE_FAILED​

SQLSTATE: 58030

Task failed while writing rows to <path>.

TEMP_CHECKPOINT_LOCATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Implicit temporary streaming checkpoint locations are not supported in the current workspace, please specify a checkpoint location explicitly.

For display(), set the checkpoint location using:

display(df, checkpointLocation = "your_path")

For all other streaming queries, use:

.option("checkpointLocation", "your_path").

TEMP_TABLE_CREATION_LEGACY_WITH_QUERY​

SQLSTATE: 0A000

CREATE TEMPORARY TABLE ... AS ... is not supported here; please use CREATE TEMPORARY VIEW instead.

TEMP_TABLE_CREATION_MUTUAL_EXCLUSIVE_SPECS​

SQLSTATE: 0A000

CREATE TEMPORARY TABLE does not support specifying <unsupportedSpec>, please create a permanent table instead.

TEMP_TABLE_CREATION_REQUIRES_SINGLE_PART_NAME​

SQLSTATE: 42K05

Creating a session-local temporary table requires a single part table name, but got <tableName>. Please update the command to use a single part table name and retry again.

TEMP_TABLE_DELETION_MUTUAL_EXCLUSIVE_SPECS​

SQLSTATE: 0A000

DROP TEMPORARY TABLE does not support specifying <unsupportedSpec>, please remove this specification, or drop a permanent table instead using DROP TABLE command.

TEMP_TABLE_DELETION_REQUIRES_SINGLE_PART_NAME​

SQLSTATE: 42K05

Dropping a session-local temporary table requires a single-part table name, but got <tableName>. Please update the DROP TEMPORARY TABLE command to use a single-part table name to drop a temporary table, or use the DROP TABLE command instead to drop a permanent table.

TEMP_TABLE_DELETION_REQUIRES_V2_COMMAND​

SQLSTATE: 0A000

DROP TEMPORARY TABLE requires turning on V2 commands. Please set configuration "spark.sql.legacy.useV1Command" to false and retry.

TEMP_TABLE_NOT_FOUND​

SQLSTATE: 42P01

Temporary table <tableName> cannot be found in the current session. Verify the spelling and correctness of the table name, and retry the query or command again.

To tolerate the error on drop use DROP TEMP TABLE IF EXISTS.

TEMP_TABLE_NOT_SUPPORTED_WITH_DATABRICKS_JOBS​

SQLSTATE: 0A000

Temporary tables are not yet supported in Databricks Jobs. Please use them in Databricks Notebooks instead, and contact Databricks Support for more information.

TEMP_TABLE_NOT_SUPPORTED_WITH_HMS​

SQLSTATE: 0A000

Temporary table operation <operation> is not supported in Hive Metastore.

TEMP_TABLE_OPERATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Operations on the session-local temporary table <tableName> are not supported:

For more details see TEMP_TABLE_OPERATION_NOT_SUPPORTED

TEMP_TABLE_OR_VIEW_ALREADY_EXISTS​

SQLSTATE: 42P07

Cannot create the temporary table or view <relationName> because it already exists.

Choose a different name, drop the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

TEMP_TABLE_REQUIRES_DELTA​

SQLSTATE: 0AKDD

Operations on session-local temporary tables require the Delta catalog to be enabled. Please turn on the Delta catalog and retry.

TEMP_TABLE_REQUIRES_UC​

SQLSTATE: 0AKUD

Operations on session-local temporary tables require Unity Catalog. Please enable Unity Catalog in your running environment and retry.

TEMP_VIEW_NAME_TOO_MANY_NAME_PARTS​

SQLSTATE: 428EK

CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>.

TRAILING_COMMA_IN_SELECT​

SQLSTATE: 42601

Trailing comma detected in SELECT clause. Remove the trailing comma before the FROM clause.
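
For example, assuming a hypothetical table t:

-- Fails: trailing comma before FROM
SELECT id, name, FROM t;

-- Works
SELECT id, name FROM t;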

TRANSACTION_MAX_COMMIT_TIMESTAMP_EXCEEDED​

SQLSTATE: 25000

Transaction cannot commit as max commit timestamp is exceeded. maxCommitTimestamp:<maxCommitTimestampMs> commitTimestamp:<commitTimestampMs>

TRANSFORM_WITH_STATE_USER_FUNCTION_ERROR​

SQLSTATE: 39000

An error occurred in the user-defined function <function> of the StatefulProcessor. Reason: <reason>.

TRANSPOSE_EXCEED_ROW_LIMIT​

SQLSTATE: 54006

Number of rows exceeds the allowed limit of <maxValues> for TRANSPOSE. If this was intended, set <config> to at least the current row count.

TRANSPOSE_INVALID_INDEX_COLUMN​

SQLSTATE: 42804

Invalid index column for TRANSPOSE because: <reason>

TRANSPOSE_NO_LEAST_COMMON_TYPE​

SQLSTATE: 42K09

Transpose requires non-index columns to share a least common type, but <dt1> and <dt2> do not.

TRIGGER_INTERVAL_INVALID​

SQLSTATE: 22003

The trigger interval must be a positive duration that can be converted into whole seconds. Received: <actual> seconds.

TUPLE_IS_EMPTY​

SQLSTATE: 22004

Due to Scala's limited support for tuples, the empty tuple is not supported.

TUPLE_SIZE_EXCEEDS_LIMIT​

SQLSTATE: 54011

Due to Scala's limited support for tuples, tuples with more than 22 elements are not supported.

UC_BUCKETED_TABLES​

SQLSTATE: 0AKUC

Bucketed tables are not supported in Unity Catalog.

UC_CATALOG_NAME_NOT_PROVIDED​

SQLSTATE: 3D000

For Unity Catalog, please specify the catalog name explicitly, for example: SHOW GRANT your.address@email.com ON CATALOG main.

UC_COMMAND_NOT_SUPPORTED​

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported in Unity Catalog.

For more details see UC_COMMAND_NOT_SUPPORTED

UC_COMMAND_NOT_SUPPORTED_IN_SERVERLESS​

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported for Unity Catalog clusters in serverless. Use single user or shared clusters instead.

UC_COMMAND_NOT_SUPPORTED_IN_SHARED_ACCESS_MODE​

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported for Unity Catalog clusters in shared access mode. Use single user access mode instead.

UC_CONNECTION_NOT_FOUND_FOR_FILE_SYSTEM_SOURCE_ACCESS​

SQLSTATE: 42704

Could not find a valid UC connection for accessing <path> after evaluating <connectionNames>.

Ensure that at least one valid UC connection is available for accessing the target path.

Detailed errors for the connections evaluated:

<connectionErrors>

UC_CREDENTIAL_PURPOSE_NOT_SUPPORTED​

SQLSTATE: 0AKUC

The specified credential kind is not supported.

UC_DATASOURCE_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Data source format <dataSourceFormatName> is not supported in Unity Catalog.

UC_DATASOURCE_OPTIONS_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Data source options are not supported in Unity Catalog.

UC_DEPENDENCY_DOES_NOT_EXIST​

SQLSTATE: 42P01

Dependency does not exist in Unity Catalog:

<errorMessage>

UC_EXTERNAL_VOLUME_MISSING_LOCATION​

SQLSTATE: 42601

LOCATION clause must be present for external volume. Please check the syntax 'CREATE EXTERNAL VOLUME ... LOCATION ...' for creating an external volume.

UC_FAILED_PROVISIONING_STATE​

SQLSTATE: 0AKUC

The query failed because it attempted to refer to table <tableName> but was unable to do so: <failureReason>. Please update the table <tableName> to ensure it is in an Active provisioning state and then retry the query again.

UC_FILE_SCHEME_FOR_TABLE_CREATION_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Creating table in Unity Catalog with file scheme <schemeName> is not supported.

Instead, please create a federated data source connection using the CREATE CONNECTION command for the same table provider, then create a catalog based on the connection with a CREATE FOREIGN CATALOG command to reference the tables therein.

UC_HIVE_METASTORE_DISABLED_EXCEPTION​

SQLSTATE: 0A000

The operation attempted to use Hive Metastore, which is disabled due to legacy access being turned off in your account or workspace. Please double-check the default catalog in the current session and the default namespace setting. If you need to access the Hive Metastore, please ask your admin to set up Hive Metastore federation through Unity Catalog.

UC_HIVE_METASTORE_FEDERATION_CROSS_CATALOG_VIEW_NOT_SUPPORTED​

SQLSTATE: 56038

Hive Metastore Federation view does not support dependencies across multiple catalogs. View <view> in Hive Metastore Federation catalog must use dependency from hive_metastore or spark_catalog catalog but its dependency <dependency> is in another catalog <referencedCatalog>. Please update the dependencies to satisfy this constraint and then retry your query or command again.

UC_HIVE_METASTORE_FEDERATION_NOT_ENABLED​

SQLSTATE: 0A000

Hive Metastore federation is not enabled on this cluster.

Accessing the catalog <catalogName> is not supported on this cluster.

UC_INVALID_DEPENDENCIES​

SQLSTATE: 56098

Dependencies of <viewName> are recorded as <storedDeps> while being parsed as <parsedDeps>. This likely occurred through improper use of a non-SQL API. You can repair dependencies in Databricks Runtime by running ALTER VIEW <viewName> AS <viewText>.

UC_INVALID_NAMESPACE​

SQLSTATE: 0AKUC

Nested or empty namespaces are not supported in Unity Catalog.

UC_INVALID_REFERENCE​

SQLSTATE: 0AKUC

Non-Unity-Catalog object <name> can't be referenced in Unity Catalog objects.

UC_LAKEHOUSE_FEDERATION_WRITES_NOT_ALLOWED​

SQLSTATE: 56038

Unity Catalog Lakehouse Federation write support is not enabled for provider <provider> on this cluster.

UC_LOCATION_FOR_MANAGED_VOLUME_NOT_SUPPORTED​

SQLSTATE: 42601

Managed volume does not accept LOCATION clause. Please check the syntax 'CREATE VOLUME ...' for creating a managed volume.

UC_NOT_ENABLED​

SQLSTATE: 56038

Unity Catalog is not enabled on this cluster.

UC_QUERY_FEDERATION_NOT_ENABLED​

SQLSTATE: 56038

Unity Catalog Query Federation is not enabled on this cluster.

UC_RESOLVED_DBFS_PATH_MISMATCH​

SQLSTATE: 0AKUC

The query failed because it attempted to refer to <objectType> <name> but was unable to do so: Resolved DBFS path <resolvedHmsPath> does not match Unity Catalog storage location <ucStorageLocation>.

UC_SERVICE_CREDENTIALS_NOT_ENABLED​

SQLSTATE: 56038

Service credentials are not enabled on this cluster.

UC_VOLUMES_NOT_ENABLED​

SQLSTATE: 56038

Support for Unity Catalog Volumes is not enabled on this instance.

UC_VOLUMES_SHARING_NOT_ENABLED​

SQLSTATE: 56038

Support for Volume Sharing is not enabled on this instance.

UC_VOLUME_NOT_FOUND​

SQLSTATE: 42704

Volume <name> does not exist. Please use 'SHOW VOLUMES' to list available volumes.

UDF_ENVIRONMENT_ERROR​

SQLSTATE: 39000

Failed to install UDF dependencies for <udfName> due to a system error.

For more details see UDF_ENVIRONMENT_ERROR

UDF_ENVIRONMENT_USER_ERROR​

SQLSTATE: 39000

Failed to install UDF dependencies for <udfName>.

For more details see UDF_ENVIRONMENT_USER_ERROR

UDF_ERROR​

SQLSTATE: none assigned

Execution of function <fn> failed

For more details see UDF_ERROR

UDF_LIMITS​

SQLSTATE: 54KD0

One or more UDF limits were breached.

For more details see UDF_LIMITS

UDF_MAX_COUNT_EXCEEDED​

SQLSTATE: 54KD0

Exceeded query-wide UDF limit of <maxNumUdfs> UDFs (limited during public preview). Found <numUdfs>. The UDFs were: <udfNames>.

UDF_PYSPARK_ERROR​

SQLSTATE: 39000

Python worker exited unexpectedly

For more details see UDF_PYSPARK_ERROR

UDF_PYSPARK_UNSUPPORTED_TYPE​

SQLSTATE: 0A000

PySpark UDF <udf> (<eval-type>) is not supported on clusters in Shared access mode.

UDF_PYSPARK_USER_CODE_ERROR​

SQLSTATE: 39000

Execution failed.

For more details see UDF_PYSPARK_USER_CODE_ERROR

UDF_UNSUPPORTED_PARAMETER_DEFAULT_VALUE​

SQLSTATE: 0A000

Parameter default value is not supported for user-defined <functionType> function.

UDF_USER_CODE_ERROR​

SQLSTATE: 39000

Execution of function <fn> failed.

For more details see UDF_USER_CODE_ERROR

UDTF_ALIAS_NUMBER_MISMATCH​

SQLSTATE: 42802

The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF.

Expected <aliasesSize> aliases, but got <aliasesNames>.

Please ensure that the number of aliases provided matches the number of columns output by the UDTF.

UDTF_INVALID_ALIAS_IN_REQUESTED_ORDERING_STRING_FROM_ANALYZE_METHOD​

SQLSTATE: 42802

Failed to evaluate the user-defined table function because its 'analyze' method returned a requested OrderingColumn whose column name expression included an unnecessary alias <aliasName>; please remove this alias and then try the query again.

UDTF_INVALID_REQUESTED_SELECTED_EXPRESSION_FROM_ANALYZE_METHOD_REQUIRES_ALIAS​

SQLSTATE: 42802

Failed to evaluate the user-defined table function because its 'analyze' method returned a requested 'select' expression (<expression>) that does not include a corresponding alias; please update the UDTF to specify an alias there and then try the query again.

UNABLE_TO_ACQUIRE_MEMORY​

SQLSTATE: 53200

Unable to acquire <requestedBytes> bytes of memory, got <receivedBytes>.

UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE​

SQLSTATE: 42K0G

Unable to convert SQL type <toType> to Protobuf type <protobufType>.

UNABLE_TO_FETCH_HIVE_TABLES​

SQLSTATE: 58030

Unable to fetch tables of Hive database: <dbName>. Error Class Name: <className>.

UNABLE_TO_INFER_SCHEMA​

SQLSTATE: 42KD9

Unable to infer schema for <format>. It must be specified manually.

UNAUTHORIZED_ACCESS​

SQLSTATE: 42501

Unauthorized access:

<report>

UNBOUND_SQL_PARAMETER​

SQLSTATE: 42P02

Found the unbound parameter: <name>. Please fix args and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as map(), array(), struct().
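
A sketch of binding a named parameter, assuming a hypothetical table t (EXECUTE IMMEDIATE with a USING clause is one way to supply the value):

EXECUTE IMMEDIATE 'SELECT * FROM t WHERE id = :id' USING 42 AS id;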

UNCLOSED_BRACKETED_COMMENT​

SQLSTATE: 42601

Found an unclosed bracketed comment. Please append */ at the end of the comment.

UNEXPECTED_INPUT_TYPE​

SQLSTATE: 42K09

Parameter <paramIndex> of function <functionName> requires the <requiredType> type, however <inputSql> has the type <inputType>.

UNEXPECTED_INPUT_TYPE_OF_NAMED_PARAMETER​

SQLSTATE: 42K09

The <namedParamKey> parameter of function <functionName> requires the <requiredType> type, however <inputSql> has the type <inputType>.<hint>

UNEXPECTED_OPERATOR_IN_STREAMING_VIEW​

SQLSTATE: 42KDD

Unexpected operator <op> in the CREATE VIEW statement as a streaming source.

A streaming view query must consist only of SELECT, WHERE, and UNION ALL operations.

UNEXPECTED_POSITIONAL_ARGUMENT​

SQLSTATE: 4274K

Cannot invoke routine <routineName> because it contains positional argument(s) following the named argument assigned to <parameterName>; please rearrange them so the positional arguments come first and then retry the query again.
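
For example, with a hypothetical routine my_func(a, b, c):

-- Fails: the positional argument 3 follows the named argument b => 2
SELECT my_func(1, b => 2, 3);

-- Works: positional arguments first, then named arguments
SELECT my_func(1, b => 2, c => 3);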

UNEXPECTED_SERIALIZER_FOR_CLASS​

SQLSTATE: 42846

The class <className> has an unexpected expression serializer. Expects "STRUCT" or "IF" which returns "STRUCT" but found <expr>.

UNION_NOT_SUPPORTED_IN_RECURSIVE_CTE​

SQLSTATE: 42836

The UNION operator is not yet supported within recursive common table expressions (WITH clauses that refer to themselves, directly or indirectly). Please use UNION ALL instead.

UNIQUE_CONSTRAINT_DISABLED​

SQLSTATE: 0A000

Unique constraint feature is disabled. To enable it, set "spark.databricks.sql.dsv2.unique.enabled" to "true".

UNKNOWN_FIELD_EXCEPTION​

SQLSTATE: KD003

Encountered <changeType> during parsing: <unknownFieldBlob>, which can be fixed by an automatic retry: <isRetryable>

For more details see UNKNOWN_FIELD_EXCEPTION

UNKNOWN_POSITIONAL_ARGUMENT​

SQLSTATE: 4274K

The invocation of routine <routineName> contains an unknown positional argument <sqlExpr> at position <pos>. This is invalid.

UNKNOWN_PRIMITIVE_TYPE_IN_VARIANT​

SQLSTATE: 22023

Unknown primitive type with id <id> was found in a variant value.

UNKNOWN_PROTOBUF_MESSAGE_TYPE​

SQLSTATE: 42K0G

Attempting to treat <descriptorName> as a Message, but it was <containingType>.

UNPIVOT_REQUIRES_ATTRIBUTES​

SQLSTATE: 42K0A

UNPIVOT requires all given <given> expressions to be columns when no <empty> expressions are given. These are not columns: [<expressions>].

UNPIVOT_REQUIRES_VALUE_COLUMNS​

SQLSTATE: 42K0A

At least one value column needs to be specified for UNPIVOT; all columns were specified as ids.
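
A minimal sketch, assuming a hypothetical table sales(id, q1, q2), where q1 and q2 are unpivoted into value and name columns:

SELECT * FROM sales UNPIVOT (amount FOR quarter IN (q1, q2));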

UNPIVOT_VALUE_DATA_TYPE_MISMATCH​

SQLSTATE: 42K09

Unpivot value columns must share a least common type; some types do not: [<types>].

UNPIVOT_VALUE_SIZE_MISMATCH​

SQLSTATE: 428C4

All unpivot value columns must have the same size as there are value column names (<names>).

UNRECOGNIZED_PARAMETER_NAME​

SQLSTATE: 4274K

Cannot invoke routine <routineName> because the routine call included a named argument reference for the argument named <argumentName>, but this routine does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>].

UNRECOGNIZED_SQL_TYPE​

SQLSTATE: 42704

Unrecognized SQL type - name: <typeName>, id: <jdbcType>.

UNRECOGNIZED_STATISTIC​

SQLSTATE: 42704

The statistic <stats> is not recognized. Valid statistics include count, count_distinct, approx_count_distinct, mean, stddev, min, max, and percentile values. Percentile must be a numeric value followed by '%', within the range 0% to 100%.

UNRESOLVABLE_TABLE_VALUED_FUNCTION​

SQLSTATE: 42883

Could not resolve <name> to a table-valued function.

Please make sure that <name> is defined as a table-valued function and that all required parameters are provided correctly.

If <name> is not defined, please create the table-valued function before using it.

For more information about defining table-valued functions, please refer to the Apache Spark documentation.

UNRESOLVED_ALL_IN_GROUP_BY​

SQLSTATE: 42803

Cannot infer grouping columns for GROUP BY ALL based on the select clause. Please explicitly specify the grouping columns.

UNRESOLVED_COLUMN​

SQLSTATE: 42703

A column, variable, or function parameter with name <objectName> cannot be resolved.

For more details see UNRESOLVED_COLUMN

UNRESOLVED_FIELD​

SQLSTATE: 42703

A field with name <fieldName> cannot be resolved with the struct-type column <columnPath>.

For more details see UNRESOLVED_FIELD

UNRESOLVED_INSERT_REPLACE_USING_COLUMN​

SQLSTATE: 42703

REPLACE USING column <colName> cannot be resolved in the <relationType>.

Did you mean one of the following column(s)? [<suggestion>].

UNRESOLVED_MAP_KEY​

SQLSTATE: 42703

Cannot resolve column <objectName> as a map key. If the key is a string literal, add the single quotes '' around it.

For more details see UNRESOLVED_MAP_KEY
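
For example, assuming a hypothetical table t with a map column m, quoting the key resolves it as a string literal rather than a column reference:

SELECT m['id'] FROM t;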

UNRESOLVED_ROUTINE​

SQLSTATE: 42883

Cannot resolve routine <routineName> on search path <searchPath>.

Verify the spelling of <routineName>, check that the routine exists, and confirm you have USE privilege on the catalog and schema, and EXECUTE on the routine.

For more details see UNRESOLVED_ROUTINE

UNRESOLVED_USING_COLUMN_FOR_JOIN​

SQLSTATE: 42703

USING column <colName> cannot be resolved on the <side> side of the join. The <side>-side columns: [<suggestion>].

UNRESOLVED_VARIABLE​

SQLSTATE: 42883

Cannot resolve variable <variableName> on search path <searchPath>.

UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_FILE_FORMAT​

SQLSTATE: 0A000

Unstructured file format <format> is not supported. Supported file formats are <supportedFormats>.

Please update the format from your <expr> expression to one of the supported formats and then retry the query again.

UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_MODEL_OPTION​

SQLSTATE: 0A000

Unstructured model option ('<option>' -> '<value>') is not supported. Supported values are: <supportedValues>.

Switch to one of the supported values and then retry the query again.

UNSTRUCTURED_OCR_COLUMN_NOT_ALLOWED_WITH_METADATA_MODEL_OPTION​

SQLSTATE: 42000

The function parameter 'ocrText' must be NULL or omitted when the 'metadataModel' option is specified. A specified 'metadataModel' option triggers metadata extraction, where a provided 'ocrText' is forbidden.

UNSUPPORTED_ADD_FILE​

SQLSTATE: 0A000

Adding files is not supported.

For more details see UNSUPPORTED_ADD_FILE

UNSUPPORTED_ALTER_COLUMN_PARAMETER​

SQLSTATE: 0A000

Specifying <parameter> with ALTER <commandTableType> is not supported.

UNSUPPORTED_ARROWTYPE​

SQLSTATE: 0A000

Unsupported arrow type <typeName>.

UNSUPPORTED_BATCH_TABLE_VALUED_FUNCTION​

SQLSTATE: 42000

The function <funcName> does not support batch queries.

UNSUPPORTED_CALL​

SQLSTATE: 0A000

Cannot call the method "<methodName>" of the class "<className>".

For more details see UNSUPPORTED_CALL

UNSUPPORTED_CHAR_OR_VARCHAR_AS_STRING​

SQLSTATE: 0A000

The char/varchar type can't be used in the table schema.

If you want Spark to treat them as string type, the same as Spark 3.0 and earlier, please set "spark.sql.legacy.charVarcharAsString" to "true".

UNSUPPORTED_CHAR_OR_VARCHAR_COLLATION​

SQLSTATE: 0A000

The char/varchar type <type> cannot have collation specified.

UNSUPPORTED_CLAUSE_FOR_OPERATION​

SQLSTATE: 0A000

The <clause> is not supported for <operation>.

UNSUPPORTED_COLLATION​

SQLSTATE: 0A000

Collation <collationName> is not supported for:

For more details see UNSUPPORTED_COLLATION

UNSUPPORTED_COMMON_ANCESTOR_LOC_FOR_FILE_STREAM_SOURCE​

SQLSTATE: 42616

The common ancestor of source path and sourceArchiveDir should be registered with UC.

If you see this error message, it's likely that you registered the source path and sourceArchiveDir in different external locations.

Please put them into a single external location.

UNSUPPORTED_CONNECT_FEATURE​

SQLSTATE: 0A000

Feature is not supported in Spark Connect:

For more details see UNSUPPORTED_CONNECT_FEATURE

UNSUPPORTED_CONSTRAINT_CHARACTERISTIC​

SQLSTATE: 0A000

Constraint characteristic '<characteristic>' is not supported for constraint type '<constraintType>'.

UNSUPPORTED_CONSTRAINT_CLAUSES​

SQLSTATE: 0A000

Constraint clauses <clauses> are unsupported.

UNSUPPORTED_CONSTRAINT_TYPE​

SQLSTATE: 42000

Unsupported constraint type. Only <supportedConstraintTypes> are supported

UNSUPPORTED_DATASOURCE_FOR_DIRECT_QUERY​

SQLSTATE: 0A000

Unsupported data source type for direct query on files: <dataSourceType>

UNSUPPORTED_DATATYPE​

SQLSTATE: 0A000

Unsupported data type <typeName>.

UNSUPPORTED_DATA_SOURCE_SAVE_MODE​

SQLSTATE: 0A000

The data source "<source>" cannot be written in the <createMode> mode. Please use either the "Append" or "Overwrite" mode instead.

UNSUPPORTED_DATA_TYPE_FOR_DATASOURCE​

SQLSTATE: 0A000

The <format> datasource doesn't support the column <columnName> of the type <columnType>.

UNSUPPORTED_DATA_TYPE_FOR_ENCODER​

SQLSTATE: 0A000

Cannot create encoder for <dataType>. Please use a different output data type for your UDF or DataFrame.

UNSUPPORTED_DEFAULT_VALUE​

SQLSTATE: 0A000

DEFAULT column values are not supported.

For more details see UNSUPPORTED_DEFAULT_VALUE

UNSUPPORTED_DESERIALIZER​

SQLSTATE: 0A000

The deserializer is not supported:

For more details see UNSUPPORTED_DESERIALIZER

UNSUPPORTED_EXPRESSION_GENERATED_COLUMN​

SQLSTATE: 42621

Cannot create generated column <fieldName> with generation expression <expressionStr> because <reason>.

UNSUPPORTED_EXPR_FOR_OPERATOR​

SQLSTATE: 42K0E

A query operator contains one or more unsupported expressions.

Consider rewriting it to avoid window functions, aggregate functions, and generator functions in the WHERE clause.

Invalid expressions: [<invalidExprSqls>]

UNSUPPORTED_EXPR_FOR_PARAMETER​

SQLSTATE: 42K0E

A query parameter contains an unsupported expression.

Parameters can either be variables or literals.

Invalid expression: [<invalidExprSql>]

UNSUPPORTED_EXPR_FOR_WINDOW​

SQLSTATE: 42P20

Expression <sqlExpr> not supported within a window function.

UNSUPPORTED_FEATURE​

SQLSTATE: 0A000

The feature is not supported:

For more details see UNSUPPORTED_FEATURE

UNSUPPORTED_FN_TYPE​

SQLSTATE: 0A000

Unsupported user defined function type: <language>

UNSUPPORTED_GENERATOR​

SQLSTATE: 42K0E

The generator is not supported:

For more details see UNSUPPORTED_GENERATOR

UNSUPPORTED_GROUPING_EXPRESSION​

SQLSTATE: 42K0E

grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.

UNSUPPORTED_INITIAL_POSITION_AND_TRIGGER_PAIR_FOR_KINESIS_SOURCE​

SQLSTATE: 42616

<trigger> with initial position <initialPosition> is not supported with the Kinesis source

UNSUPPORTED_INSERT​

SQLSTATE: 42809

Can't insert into the target.

For more details see UNSUPPORTED_INSERT

UNSUPPORTED_JOIN_TYPE​

SQLSTATE: 0A000

Unsupported join type '<typ>'. Supported join types include: <supported>.

UNSUPPORTED_MANAGED_TABLE_CREATION​

SQLSTATE: 0AKDD

Creating a managed table <tableName> using datasource <dataSource> is not supported. You need to use datasource DELTA or create an external table using CREATE EXTERNAL TABLE <tableName> ... USING <dataSource> ...

UNSUPPORTED_MERGE_CONDITION​

SQLSTATE: 42K0E

MERGE operation contains unsupported <condName> condition.

For more details see UNSUPPORTED_MERGE_CONDITION

UNSUPPORTED_NESTED_ROW_OR_COLUMN_ACCESS_POLICY​

SQLSTATE: 0A000

Table <tableName> has a row level security policy or column mask which indirectly refers to another table with a row level security policy or column mask; this is not supported. Call sequence: <callSequence>

UNSUPPORTED_OPERATION_FOR_CONTINUOUS_MEMORY_SINK​

SQLSTATE: 0A000

Operation <operation> is not supported for continuous memory sink. If you are writing a test for Streaming Real-Time Mode, consider using CheckAnswerWithTimeout over other checks.

UNSUPPORTED_OVERWRITE​

SQLSTATE: 42902

Can't overwrite the target that is also being read from.

For more details see UNSUPPORTED_OVERWRITE

UNSUPPORTED_PARTITION_TRANSFORM​

SQLSTATE: 0A000

Unsupported partition transform: <transform>. The supported transforms are identity, bucket, and clusterBy. Ensure your transform expression uses one of these.

UNSUPPORTED_PROCEDURE_COLLATION​

SQLSTATE: 0A000

Procedure <procedureName> must specify or inherit DEFAULT COLLATION UTF8_BINARY. Use CREATE PROCEDURE <procedureName> (...) DEFAULT COLLATION UTF8_BINARY ....

UNSUPPORTED_SAVE_MODE​

SQLSTATE: 0A000

The save mode <saveMode> is not supported for:

For more details see UNSUPPORTED_SAVE_MODE

UNSUPPORTED_SHOW_CREATE_TABLE​

SQLSTATE: 0A000

Unsupported SHOW CREATE TABLE command.

For more details see UNSUPPORTED_SHOW_CREATE_TABLE

UNSUPPORTED_SINGLE_PASS_ANALYZER_FEATURE​

SQLSTATE: 0A000

The single-pass analyzer cannot process this query or command because it does not yet support <feature>.

UNSUPPORTED_SQL_UDF_USAGE​

SQLSTATE: 0A000

Using SQL function <functionName> in <nodeName> is not supported.

UNSUPPORTED_STREAMING_OPERATOR_WITHOUT_WATERMARK​

SQLSTATE: 0A000

<outputMode> output mode is not supported for <statefulOperator> on streaming DataFrames/DataSets without watermark.

UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW​

SQLSTATE: 0A000

Streaming from a view is not supported. Reason:

For more details see UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW

UNSUPPORTED_STREAMING_OPTIONS_PERMISSION_ENFORCED​

SQLSTATE: 0A000

Streaming options <options> are not supported for data source <source> on a shared cluster. Please confirm that the options are specified and spelled correctly, and check https://docs.databricks.com/en/compute/access-mode-limitations.html#streaming-limitations-and-requirements-for-unity-catalog-shared-access-mode for limitations.

UNSUPPORTED_STREAMING_SINK_PERMISSION_ENFORCED​

SQLSTATE: 0A000

Data source <sink> is not supported as a streaming sink on a shared cluster.

UNSUPPORTED_STREAMING_SOURCE_PERMISSION_ENFORCED​

SQLSTATE: 0A000

Data source <source> is not supported as a streaming source on a shared cluster.

UNSUPPORTED_STREAMING_TABLE_VALUED_FUNCTION​

SQLSTATE: 42000

The function <funcName> does not support streaming. Please remove the STREAM keyword

UNSUPPORTED_STREAM_READ_LIMIT_FOR_KINESIS_SOURCE​

SQLSTATE: 0A000

<streamReadLimit> is not supported with the Kinesis source

UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY​

SQLSTATE: 0A000

Unsupported subquery expression:

For more details see UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY

UNSUPPORTED_TABLE_CHANGE_IN_JDBC_CATALOG​

SQLSTATE: 42000

The table change <change> is not supported for the JDBC catalog on table <tableName>. Supported changes include: AddColumn, RenameColumn, DeleteColumn, UpdateColumnType, UpdateColumnNullability.

UNSUPPORTED_TIMESERIES_COLUMNS​

SQLSTATE: 56038

Creating a primary key with timeseries columns is not supported.

UNSUPPORTED_TIMESERIES_WITH_MORE_THAN_ONE_COLUMN​

SQLSTATE: 0A000

Creating a primary key with more than one timeseries column <colSeq> is not supported.

UNSUPPORTED_TIME_PRECISION​

SQLSTATE: 0A001

The seconds precision <precision> of the TIME data type is out of the supported range [0, 6].

UNSUPPORTED_TIME_TYPE​

SQLSTATE: 0A000

The data type TIME is not supported.

UNSUPPORTED_TRIGGER_FOR_KINESIS_SOURCE​

SQLSTATE: 0A000

<trigger> is not supported with the Kinesis source

UNSUPPORTED_TYPED_LITERAL​

SQLSTATE: 0A000

Literals of the type <unsupportedType> are not supported. Supported types are <supportedTypes>.

UNSUPPORTED_UDF_FEATURE​

SQLSTATE: 0A000

The function <function> uses the following feature(s) that require a newer version of Databricks runtime: <features>. Please consult <docLink> for details.

UNSUPPORTED_UDF_TYPES_IN_SAME_PLACE​

SQLSTATE: 0A000

UDF types cannot be used together: <types>

UNTYPED_SCALA_UDF​

SQLSTATE: 42K0E

You're using an untyped Scala UDF, which does not have the input type information.

Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. for udf((x: Int) => x, IntegerType), the result is 0 for null input. To get rid of this error, you could:

  1. use typed Scala UDF APIs (without return type parameter), e.g. udf((x: Int) => x).

  2. use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if input types are all non-primitive.

  3. set "spark.sql.legacy.allowUntypedScalaUDF" to "true" and use this API with caution.

UPGRADE_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:

For more details see UPGRADE_NOT_SUPPORTED

USER_DEFINED_FUNCTIONS​

SQLSTATE: 42601

User defined function is invalid:

For more details see USER_DEFINED_FUNCTIONS

USER_RAISED_EXCEPTION​

SQLSTATE: P0001

<errorMessage>

USER_RAISED_EXCEPTION_PARAMETER_MISMATCH​

SQLSTATE: P0001

The raise_error() function was used to raise error class: <errorClass> which expects parameters: <expectedParms>.

The provided parameters <providedParms> do not match the expected parameters.

Please make sure to provide all expected parameters.

USER_RAISED_EXCEPTION_UNKNOWN_ERROR_CLASS​

SQLSTATE: P0001

The raise_error() function was used to raise an unknown error class: <errorClass>

VARIABLE_ALREADY_EXISTS​

SQLSTATE: 42723

Cannot create the variable <variableName> because it already exists.

Choose a different name, or drop or replace the existing variable.
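
For example, assuming a hypothetical session variable my_var, the OR REPLACE clause replaces an existing variable:

DECLARE OR REPLACE VARIABLE my_var INT DEFAULT 0;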

VARIABLE_NOT_FOUND​

SQLSTATE: 42883

The variable <variableName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VARIABLE IF EXISTS.

VARIANT_CONSTRUCTOR_SIZE_LIMIT​

SQLSTATE: 22023

Cannot construct a Variant larger than 16 MiB. The maximum allowed size of a Variant value is 16 MiB.

VARIANT_DUPLICATE_KEY​

SQLSTATE: 22023

Failed to build variant because of a duplicate object key <key>.

VARIANT_SIZE_LIMIT​

SQLSTATE: 22023

Cannot build variant bigger than <sizeLimit> in <functionName>.

Please avoid large input strings to this expression (for example, add function call(s) to check the expression size and convert it to NULL first if it is too big).

VERSIONED_CLONE_UNSUPPORTED_TABLE_FEATURE​

SQLSTATE: 56038

The source table history contains table feature(s) not supported by versioned clone in this DBR version: <unsupportedFeatureNames>.

Please upgrade to a newer DBR version.

VIEW_ALREADY_EXISTS​

SQLSTATE: 42P07

Cannot create view <relationName> because it already exists.

Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

VIEW_EXCEED_MAX_NESTED_DEPTH​

SQLSTATE: 54K00

The depth of view <viewName> exceeds the maximum view resolution depth (<maxNestedDepth>).

Analysis is aborted to avoid errors. If you want to work around this, please try to increase the value of "spark.sql.view.maxNestedViewDepth".

VIEW_NOT_FOUND​

SQLSTATE: 42P01

The view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VIEW IF EXISTS.

VOLUME_ALREADY_EXISTS​

SQLSTATE: 42000

Cannot create volume <relationName> because it already exists.

Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

WATERMARK_ADVANCEMENT_STRATEGY​

SQLSTATE: 0A000

Streaming watermark advancement strategy has the following limitation:

For more details see WATERMARK_ADVANCEMENT_STRATEGY

WINDOW_FUNCTION_AND_FRAME_MISMATCH​

SQLSTATE: 42K0E

<funcName> function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>.

WINDOW_FUNCTION_WITHOUT_OVER_CLAUSE​

SQLSTATE: 42601

Window function <funcName> requires an OVER clause.

WITH_CREDENTIAL​

SQLSTATE: 42601

WITH CREDENTIAL syntax is not supported for <type>.

WRITE_STREAM_NOT_ALLOWED​

SQLSTATE: 42601

writeStream can be called only on streaming Dataset/DataFrame.

WRONG_COLUMN_DEFAULTS_FOR_DELTA_ALTER_TABLE_ADD_COLUMN_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Failed to execute the command because DEFAULT values are not supported when adding new columns to previously existing Delta tables; please add the column without a default value first, then run a second ALTER TABLE ALTER COLUMN SET DEFAULT command to apply for future inserted rows instead.
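
The two-step remediation above, sketched on a hypothetical Delta table events with a new column status:

ALTER TABLE events ADD COLUMN status STRING;

ALTER TABLE events ALTER COLUMN status SET DEFAULT 'new';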

WRONG_COLUMN_DEFAULTS_FOR_DELTA_FEATURE_NOT_ENABLED​

SQLSTATE: 0AKDE

Failed to execute <commandType> command because it assigned a column DEFAULT value, but the corresponding table feature was not enabled. Please retry the command again after executing ALTER TABLE tableName SET TBLPROPERTIES('delta.feature.allowColumnDefaults' = 'supported').

WRONG_COMMAND_FOR_OBJECT_TYPE​

SQLSTATE: 42809

The operation <operation> requires a <requiredType>. But <objectName> is a <foundType>. Use <alternative> instead.

WRONG_NUM_ARGS​

SQLSTATE: 42605

The <functionName> requires <expectedNum> parameters but the actual number is <actualNum>.

For more details see WRONG_NUM_ARGS

XML_ROW_TAG_MISSING​

SQLSTATE: 42KDF

<rowTag> option is required for reading/writing files in XML format.
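
A sketch using the read_files table-valued function in Databricks SQL, assuming a hypothetical file path and a row tag named book:

SELECT * FROM read_files('/Volumes/my_catalog/my_schema/my_volume/books.xml', format => 'xml', rowTag => 'book');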

XML_UNSUPPORTED_NESTED_TYPES​

SQLSTATE: 0N000

XML doesn't support <innerDataType> as inner type of <dataType>. Please wrap the <innerDataType> within a StructType field when using it inside <dataType>.

XML_WILDCARD_RESCUED_DATA_CONFLICT_ERROR​

SQLSTATE: 22023

Rescued data and wildcard column cannot be simultaneously enabled. Remove the wildcardColumnName option.

ZORDERBY_COLUMN_DOES_NOT_EXIST​

SQLSTATE: 42703

ZOrderBy column <columnName> doesn't exist.

Delta Lake​

DELTA_ACTIVE_SPARK_SESSION_NOT_FOUND​

SQLSTATE: 08003

Could not find active SparkSession.

DELTA_ACTIVE_TRANSACTION_ALREADY_SET​

SQLSTATE: 0B000

Cannot set a new txn as active when one is already active.

DELTA_ADDING_COLUMN_WITH_INTERNAL_NAME_FAILED​

SQLSTATE: 42000

Failed to add column <colName> because the name is reserved.

DELTA_ADDING_DELETION_VECTORS_DISALLOWED​

SQLSTATE: 0A000

The current operation attempted to add a deletion vector to a table that does not permit the creation of new deletion vectors. Please file a bug report.

DELTA_ADDING_DELETION_VECTORS_WITH_TIGHT_BOUNDS_DISALLOWED​

SQLSTATE: 42000

All operations that add deletion vectors should set the tightBounds column in statistics to false. Please file a bug report.

DELTA_ADD_COLUMN_AT_INDEX_LESS_THAN_ZERO​

SQLSTATE: 42KD3

Index <columnIndex> to add column <columnName> is lower than 0.

DELTA_ADD_COLUMN_PARENT_NOT_STRUCT​

SQLSTATE: 42KD3

Cannot add <columnName> because its parent is not a StructType. Found <other>.

DELTA_ADD_COLUMN_STRUCT_NOT_FOUND​

SQLSTATE: 42KD3

Struct not found at position <position>.

DELTA_ADD_CONSTRAINTS​

SQLSTATE: 0A000

Please use ALTER TABLE ADD CONSTRAINT to add CHECK constraints.
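
For example, assuming a hypothetical table events with a numeric column id:

ALTER TABLE events ADD CONSTRAINT id_is_positive CHECK (id > 0);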

DELTA_AGGREGATE_IN_GENERATED_COLUMN​

SQLSTATE: 42621

Found <sqlExpr>. A generated column cannot use an aggregate expression.

DELTA_AGGREGATION_NOT_SUPPORTED​

SQLSTATE: 42903

Aggregate functions are not supported in the <operation> <predicate>.

DELTA_ALTER_COLLATION_NOT_SUPPORTED_BLOOM_FILTER​

SQLSTATE: 428FR

Failed to change the collation of column <column> because it has a bloom filter index. Please either retain the existing collation or else drop the bloom filter index and then retry the command again to change the collation.

DELTA_ALTER_COLLATION_NOT_SUPPORTED_CLUSTER_BY​

SQLSTATE: 428FR

Failed to change the collation of column <column> because it is a clustering column. Please either retain the existing collation or else change the column to a non-clustering column with an ALTER TABLE command and then retry the command again to change the collation.

DELTA_ALTER_TABLE_CHANGE_COL_NOT_SUPPORTED​

SQLSTATE: 42837

ALTER TABLE CHANGE COLUMN is not supported for changing column <currentType> to <newType>.

DELTA_ALTER_TABLE_CLUSTER_BY_NOT_ALLOWED​

SQLSTATE: 42000

ALTER TABLE CLUSTER BY is supported only for Delta tables with Liquid clustering.

DELTA_ALTER_TABLE_CLUSTER_BY_ON_PARTITIONED_TABLE_NOT_ALLOWED​

SQLSTATE: 42000

ALTER TABLE CLUSTER BY cannot be applied to a partitioned table.

DELTA_ALTER_TABLE_RENAME_NOT_ALLOWED​

SQLSTATE: 42000

Operation not allowed: ALTER TABLE RENAME TO is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName> before, you can enable this by setting <key> to be true.

DELTA_ALTER_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED​

SQLSTATE: 42000

Cannot enable <tableFeature> table feature using ALTER TABLE SET TBLPROPERTIES. Please use CREATE OR REPLACE TABLE CLUSTER BY to create a Delta table with clustering.

DELTA_ALTER_TABLE_SET_MANAGED_DOES_NOT_SUPPORT_UNIFORM_ICEBERG​

SQLSTATE: 0A000

ALTER TABLE ... SET MANAGED does not support Uniform Apache Iceberg tables. Disable Uniform and try again.

DELTA_ALTER_TABLE_SET_MANAGED_FAILED​

SQLSTATE: 42809

ALTER TABLE <table> SET MANAGED failed.

For more details see DELTA_ALTER_TABLE_SET_MANAGED_FAILED

DELTA_ALTER_TABLE_SET_MANAGED_NOT_ENABLED​

SQLSTATE: 0AKDC

ALTER TABLE ... SET MANAGED command is not enabled. Contact your Databricks support team for assistance.

DELTA_ALTER_TABLE_UNSET_MANAGED_DOES_NOT_SUPPORT_UNIFORM​

SQLSTATE: 0AKDC

ALTER TABLE ... UNSET MANAGED does not support Uniform. Disable Uniform and try again.

DELTA_ALTER_TABLE_UNSET_MANAGED_FAILED​

SQLSTATE: 42809

<table> cannot be rolled back from managed to external table.

For more details see DELTA_ALTER_TABLE_UNSET_MANAGED_FAILED

DELTA_ALTER_TABLE_UNSET_MANAGED_NOT_ENABLED​

SQLSTATE: 0AKDC

ALTER TABLE ... UNSET MANAGED command is not enabled. Contact your Databricks support team for assistance.

DELTA_AMBIGUOUS_DATA_TYPE_CHANGE​

SQLSTATE: 429BQ

Cannot change data type of <column> from <from> to <to>. This change contains column removals and additions, therefore they are ambiguous. Please make these changes individually using ALTER TABLE [ADD | DROP | RENAME] COLUMN.

DELTA_AMBIGUOUS_PARTITION_COLUMN​

SQLSTATE: 42702

Ambiguous partition column <column> can be <colMatches>.

DELTA_AMBIGUOUS_PATHS_IN_CREATE_TABLE​

SQLSTATE: 42613

CREATE TABLE contains two different locations: <identifier> and <location>.

You can remove the LOCATION clause from the CREATE TABLE statement, or set <config> to true to skip this check.

DELTA_ARCHIVED_FILES_IN_LIMIT​

SQLSTATE: 42KDC

Table <table> does not contain enough records in non-archived files to satisfy specified LIMIT of <limit> records.

DELTA_ARCHIVED_FILES_IN_SCAN​

SQLSTATE: 42KDC

Found <numArchivedFiles> potentially archived file(s) in table <table> that need to be scanned as part of this query.

Archived files cannot be accessed. The current time until archival is configured as <archivalTime>.

Please adjust your query filters to exclude any archived files.

DELTA_BLOCK_COLUMN_MAPPING_AND_CDC_OPERATION​

SQLSTATE: 42KD4

Operation "<opName>" is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN or RENAME COLUMN.

DELTA_BLOOM_FILTER_DROP_ON_NON_EXISTING_COLUMNS​

SQLSTATE: 42703

Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>.

DELTA_BLOOM_FILTER_OOM_ON_WRITE​

SQLSTATE: 82100

OutOfMemoryError occurred while writing bloom filter indices for the following column(s): <columnsWithBloomFilterIndices>.

You can reduce the memory footprint of bloom filter indices by choosing a smaller value for the 'numItems' option, a larger value for the 'fpp' option, or by indexing fewer columns.

DELTA_CANNOT_CHANGE_DATA_TYPE​

SQLSTATE: 429BQ

Cannot change data type: <dataType>.

DELTA_CANNOT_CHANGE_LOCATION​

SQLSTATE: 42601

Cannot change the 'location' of the Delta table using SET TBLPROPERTIES. Please use ALTER TABLE SET LOCATION instead.

DELTA_CANNOT_CHANGE_PROVIDER​

SQLSTATE: 42939

'provider' is a reserved table property, and cannot be altered.

DELTA_CANNOT_CREATE_BLOOM_FILTER_NON_EXISTING_COL​

SQLSTATE: 42703

Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>.

DELTA_CANNOT_CREATE_LOG_PATH​

SQLSTATE: 42KD5

Cannot create <path>.

DELTA_CANNOT_DESCRIBE_VIEW_HISTORY​

SQLSTATE: 42809

Cannot describe the history of a view.

DELTA_CANNOT_DROP_BLOOM_FILTER_ON_NON_INDEXED_COLUMN​

SQLSTATE: 42703

Cannot drop bloom filter index on a non-indexed column: <columnName>.

DELTA_CANNOT_DROP_CHECK_CONSTRAINT_FEATURE​

SQLSTATE: 0AKDE

Cannot drop the CHECK constraints table feature.

The following constraints must be dropped first: <constraints>.

DELTA_CANNOT_DROP_COLLATIONS_FEATURE​

SQLSTATE: 0AKDE

Cannot drop the collations table feature.

Columns with non-default collations must be altered to use UTF8_BINARY first: <colNames>.

DELTA_CANNOT_DROP_GEOSPATIAL_FEATURE​

SQLSTATE: 0AKDE

Cannot drop the geospatial table feature. Recreate the table or drop columns with geometry/geography types: <colNames> and try again.

DELTA_CANNOT_EVALUATE_EXPRESSION​

SQLSTATE: 0AKDC

Cannot evaluate expression: <expression>.

DELTA_CANNOT_FIND_BUCKET_SPEC​

SQLSTATE: 22000

Expecting a bucketing Delta table but cannot find the bucket spec in the table.

DELTA_CANNOT_GENERATE_CODE_FOR_EXPRESSION​

SQLSTATE: 0AKDC

Cannot generate code for expression: <expression>.

DELTA_CANNOT_MODIFY_APPEND_ONLY​

SQLSTATE: 42809

This table is configured to only allow appends. If you would like to permit updates or deletes, use 'ALTER TABLE <table_name> SET TBLPROPERTIES (<config> = false)'.
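
For example, assuming the relevant property is delta.appendOnly (the message's <config> placeholder) on a hypothetical table my_table:

ALTER TABLE my_table SET TBLPROPERTIES ('delta.appendOnly' = 'false');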

DELTA_CANNOT_MODIFY_CATALOG_OWNED_DEPENDENCIES​

SQLSTATE: 42616

Cannot override or unset in-commit timestamp table properties because this table is catalog-owned. Remove "delta.enableInCommitTimestamps", "delta.inCommitTimestampEnablementVersion", and "delta.inCommitTimestampEnablementTimestamp" from the TBLPROPERTIES clause and then retry the command.

DELTA_CANNOT_MODIFY_COORDINATED_COMMITS_DEPENDENCIES​

SQLSTATE: 42616

<Command> cannot override or unset in-commit timestamp table properties because coordinated commits is enabled in this table and depends on them. Please remove them ("delta.enableInCommitTimestamps", "delta.inCommitTimestampEnablementVersion", "delta.inCommitTimestampEnablementTimestamp") from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_MODIFY_TABLE_PROPERTY​

SQLSTATE: 42939

The Delta table configuration <prop> cannot be specified by the user.

DELTA_CANNOT_OVERRIDE_COORDINATED_COMMITS_CONFS​

SQLSTATE: 42616

<Command> cannot override coordinated commits configurations for an existing target table. Please remove them ("delta.coordinatedCommits.commitCoordinator-preview", "delta.coordinatedCommits.commitCoordinatorConf-preview", "delta.coordinatedCommits.tableConf-preview") from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_RECONSTRUCT_PATH_FROM_URI​

SQLSTATE: 22KD1

A URI (<uri>) which can't be turned into a relative path was found in the transaction log.

DELTA_CANNOT_RELATIVIZE_PATH​

SQLSTATE: 42000

A path (<path>) which can't be relativized with the current input was found in the transaction log. Please re-run this as:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog("<userPath>", true)

and then also run:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog("<path>")

DELTA_CANNOT_RENAME_PATH​

SQLSTATE: 22KD1

Cannot rename <currentPath> to <newPath>.

DELTA_CANNOT_REPLACE_MISSING_TABLE​

SQLSTATE: 42P01

Table <tableName> cannot be replaced as it does not exist. Use CREATE OR REPLACE TABLE to create the table.

DELTA_CANNOT_RESOLVE_CLUSTERING_COLUMN​

SQLSTATE: 42703

Cannot resolve the clustering column <columnName> in <schema> due to an unexpected error. Run ALTER TABLE ... CLUSTER BY ... to repair Delta clustering metadata.

DELTA_CANNOT_RESOLVE_COLUMN​

SQLSTATE: 42703

Can't resolve column <columnName> in <schema>

DELTA_CANNOT_RESTORE_TABLE_VERSION​

SQLSTATE: 22003

Cannot restore table to version <version>. Available versions: [<startVersion>, <endVersion>].

DELTA_CANNOT_RESTORE_TIMESTAMP_EARLIER​

SQLSTATE: 22003

Cannot restore table to timestamp (<requestedTimestamp>) as it is before the earliest version available. Please use a timestamp after (<earliestTimestamp>).

DELTA_CANNOT_RESTORE_TIMESTAMP_GREATER​

SQLSTATE: 22003

Cannot restore table to timestamp (<requestedTimestamp>) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>).

DELTA_CANNOT_SET_COORDINATED_COMMITS_DEPENDENCIES​

SQLSTATE: 42616

<Command> cannot set in-commit timestamp table properties together with coordinated commits, because the latter depends on the former and sets the former internally. Please remove them ("delta.enableInCommitTimestamps", "delta.inCommitTimestampEnablementVersion", "delta.inCommitTimestampEnablementTimestamp") from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_SET_LOCATION_ON_PATH_IDENTIFIER​

SQLSTATE: 42613

Cannot change the location of a path-based table.

DELTA_CANNOT_SET_MANAGED_STATS_COLUMNS_PROPERTY​

SQLSTATE: 42616

Cannot set delta.managedDataSkippingStatsColumns on non-Lakeflow Declarative Pipelines table.

DELTA_CANNOT_SET_UC_COMMIT_COORDINATOR_CONF_IN_COMMAND​

SQLSTATE: 42616

When enabling 'unity-catalog' as the commit coordinator, configuration "<configuration>" cannot be set from the command. Please remove it from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_SET_UC_COMMIT_COORDINATOR_CONF_IN_SESSION​

SQLSTATE: 42616

When enabling 'unity-catalog' as the commit coordinator, configuration "<configuration>" cannot be set from the SparkSession configurations. Please unset it by running spark.conf.unset("<configuration>") and then retry the command again.

DELTA_CANNOT_UNSET_COORDINATED_COMMITS_CONFS​

SQLSTATE: 42616

ALTER cannot unset coordinated commits configurations. To downgrade a table from coordinated commits, please try again using ALTER TABLE [table-name] DROP FEATURE 'coordinatedCommits-preview'.

DELTA_CANNOT_UPDATE_ARRAY_FIELD​

SQLSTATE: 429BQ

Cannot update <tableName> field <fieldName> type: update the element by updating <fieldName>.element.

DELTA_CANNOT_UPDATE_MAP_FIELD​

SQLSTATE: 429BQ

Cannot update <tableName> field <fieldName> type: update a map by updating <fieldName>.key or <fieldName>.value.

DELTA_CANNOT_UPDATE_OTHER_FIELD​

SQLSTATE: 429BQ

Cannot update <tableName> field of type <typeName>.

DELTA_CANNOT_UPDATE_STRUCT_FIELD​

SQLSTATE: 429BQ

Cannot update <tableName> field <fieldName> type: update struct by adding, deleting, or updating its fields.

DELTA_CANNOT_USE_ALL_COLUMNS_FOR_PARTITION​

SQLSTATE: 428FT

Cannot use all columns for partition columns.

DELTA_CANNOT_VACUUM_LITE​

SQLSTATE: 55000

VACUUM LITE cannot delete all eligible files as some files are not referenced by the Delta log. Please run VACUUM FULL.

DELTA_CANNOT_WRITE_INTO_VIEW​

SQLSTATE: 0A000

<table> is a view. Writes to a view are not supported.

DELTA_CAST_OVERFLOW_IN_TABLE_WRITE​

SQLSTATE: 22003

Failed to write a value of <sourceType> type into the <targetType> type column <columnName> due to an overflow.

Use try_cast on the input value to tolerate overflow and return NULL instead.

If necessary, set <storeAssignmentPolicyFlag> to "LEGACY" to bypass this error or set <updateAndMergeCastingFollowsAnsiEnabledFlag> to true to revert to the old behaviour and follow <ansiEnabledFlag> in UPDATE and MERGE.

DELTA_CDC_NOT_ALLOWED_IN_THIS_VERSION​

SQLSTATE: 0AKDC

Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.

DELTA_CDC_READ_NULL_RANGE_BOUNDARY​

SQLSTATE: 22004

CDC read start/end parameters cannot be null. Please provide a valid version or timestamp.

DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_DATA_SCHEMA​

SQLSTATE: 0AKDC

Retrieving table changes between version <start> and <end> failed because of an incompatible data schema.

Your read schema is <readSchema> at version <readVersion>, but we found an incompatible data schema at version <incompatibleVersion>.

If possible, please retrieve the table changes using the end version's schema by setting <config> to endVersion, or contact support.

DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_SCHEMA_CHANGE​

SQLSTATE: 0AKDC

Retrieving table changes between version <start> and <end> failed because of an incompatible schema change.

Your read schema is <readSchema> at version <readVersion>, but we found an incompatible schema change at version <incompatibleVersion>.

If possible, please query table changes separately from version <start> to <incompatibleVersion> - 1, and from version <incompatibleVersion> to <end>.

DELTA_CHANGE_DATA_FILE_NOT_FOUND​

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE statement. This request appears to be targeting Change Data Feed, if that is the case, this error can occur when the change data file is out of the retention period and has been deleted by the VACUUM statement. For more information, see <faqPath>

DELTA_CHANGE_TABLE_FEED_DISABLED​

SQLSTATE: 42807

Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.

DELTA_CHECKPOINT_NON_EXIST_TABLE​

SQLSTATE: 42K03

Cannot checkpoint a non-existing table <path>. Did you manually delete files in the _delta_log directory?

DELTA_CLONE_AMBIGUOUS_TARGET​

SQLSTATE: 42613

Two paths were provided as the CLONE target so it is ambiguous which to use. An external location for CLONE was provided at <externalLocation> at the same time as the path <targetIdentifier>.

DELTA_CLONE_INCOMPATIBLE_SOURCE​

SQLSTATE: 0AKDC

The clone source has a valid format, but has an unsupported feature with Delta.

For more details see DELTA_CLONE_INCOMPATIBLE_SOURCE

DELTA_CLONE_INCOMPLETE_FILE_COPY​

SQLSTATE: 42000

File (<fileName>) not copied completely. Expected file size: <expectedSize>, found: <actualSize>. To continue with the operation by ignoring the file size check set <config> to false.

DELTA_CLONE_UNSUPPORTED_SOURCE​

SQLSTATE: 0AKDC

Unsupported <mode> clone source '<name>', whose format is <format>.

The supported formats are 'delta', 'iceberg' and 'parquet'.

DELTA_CLONE_WITH_HISTORY_UNSUPPORTED_SOURCE​

SQLSTATE: 0AKDC

Unsupported source table:

For more details see DELTA_CLONE_WITH_HISTORY_UNSUPPORTED_SOURCE

DELTA_CLONE_WITH_HISTORY_UNSUPPORTED_TARGET​

SQLSTATE: 0AKDC

Unsupported target table:

For more details see DELTA_CLONE_WITH_HISTORY_UNSUPPORTED_TARGET

DELTA_CLONE_WITH_ROW_TRACKING_WITHOUT_STATS​

SQLSTATE: 22000

Cannot shallow clone a table without statistics and with row tracking enabled.

If you want to enable row tracking, you need to first collect statistics on the source table by running:

ANALYZE TABLE table_name COMPUTE DELTA STATISTICS

DELTA_CLUSTERING_CLONE_TABLE_NOT_SUPPORTED​

SQLSTATE: 0A000

CLONE is not supported for Delta tables with Liquid clustering for DBR version < 14.0.

DELTA_CLUSTERING_COLUMNS_DATATYPE_NOT_SUPPORTED​

SQLSTATE: 0A000

CLUSTER BY is not supported because the following column(s): <columnsWithDataTypes> don't support data skipping.

DELTA_CLUSTERING_COLUMNS_MISMATCH​

SQLSTATE: 42P10

The provided clustering columns do not match the existing table's.

DELTA_CLUSTERING_COLUMN_MISSING_STATS​

SQLSTATE: 22000

Liquid clustering requires clustering columns to have stats. Couldn't find clustering column(s) '<columns>' in stats schema:

<schema>

DELTA_CLUSTERING_CREATE_EXTERNAL_NON_LIQUID_TABLE_FROM_LIQUID_TABLE​

SQLSTATE: 22000

Creating an external table without liquid clustering from a table directory with liquid clustering is not allowed; path: <path>.

DELTA_CLUSTERING_NOT_SUPPORTED​

SQLSTATE: 42000

'<operation>' does not support clustering.

DELTA_CLUSTERING_PHASE_OUT_FAILED​

SQLSTATE: 0AKDE

Cannot finish the <phaseOutType> of the table with <tableFeatureToAdd> table feature (reason: <reason>). Please try the OPTIMIZE command again.

== Error ==

<error>

DELTA_CLUSTERING_REPLACE_TABLE_WITH_PARTITIONED_TABLE​

SQLSTATE: 42000

Replacing a Delta table with Liquid clustering with a partitioned table is not allowed.

DELTA_CLUSTERING_SHOW_CREATE_TABLE_WITHOUT_CLUSTERING_COLUMNS​

SQLSTATE: 0A000

SHOW CREATE TABLE is not supported for Delta tables with Liquid clustering without any clustering columns.

DELTA_CLUSTERING_TO_PARTITIONED_TABLE_WITH_NON_EMPTY_CLUSTERING_COLUMNS​

SQLSTATE: 42000

Transitioning a Delta table with Liquid clustering to a partitioned table is not allowed for operation: <operation>, when the existing table has non-empty clustering columns.

Please run ALTER TABLE CLUSTER BY NONE to remove the clustering columns first.
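
For example, on a hypothetical table my_table:

ALTER TABLE my_table CLUSTER BY NONE;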

DELTA_CLUSTERING_WITH_DYNAMIC_PARTITION_OVERWRITE​

SQLSTATE: 42000

Dynamic partition overwrite mode is not allowed for Delta tables with Liquid clustering.

DELTA_CLUSTERING_WITH_PARTITION_PREDICATE​

SQLSTATE: 0A000

The OPTIMIZE command for a Delta table with Liquid clustering doesn't support partition predicates. Please remove the predicates: <predicates>.

DELTA_CLUSTERING_WITH_ZORDER_BY​

SQLSTATE: 42613

The OPTIMIZE command for a Delta table with Liquid clustering cannot specify ZORDER BY. Please remove ZORDER BY (<zOrderBy>).

DELTA_CLUSTER_BY_AUTO_MISMATCH​

SQLSTATE: 42000

The provided clusterByAuto value does not match that of the existing table.

DELTA_CLUSTER_BY_INVALID_NUM_COLUMNS​

SQLSTATE: 54000

CLUSTER BY for Liquid clustering supports up to <numColumnsLimit> clustering columns, but the table has <actualNumColumns> clustering columns. Please remove the extra clustering columns.

DELTA_CLUSTER_BY_SCHEMA_NOT_PROVIDED​

SQLSTATE: 42908

It is not allowed to specify CLUSTER BY when the schema is not defined. Please define a schema for table <tableName>.

DELTA_CLUSTER_BY_WITH_BUCKETING​

SQLSTATE: 42613

Clustering and bucketing cannot both be specified. Please remove CLUSTERED BY INTO BUCKETS / bucketBy if you want to create a Delta table with clustering.

DELTA_CLUSTER_BY_WITH_PARTITIONED_BY​

SQLSTATE: 42613

Clustering and partitioning cannot both be specified. Please remove PARTITIONED BY / partitionBy / partitionedBy if you want to create a Delta table with clustering.

DELTA_COLLATIONS_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Collations are not supported in Delta Lake.

DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_PARTITIONED_COLUMN​

SQLSTATE: 0AKDC

Data skipping is not supported for partition column '<column>'.

DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_TYPE​

SQLSTATE: 0AKDC

Data skipping is not supported for column '<column>' of type <type>.

DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET​

SQLSTATE: 42703

The max column id property (<prop>) is not set on a column mapping enabled table.

DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET_CORRECTLY​

SQLSTATE: 42703

The max column id property (<prop>) on a column mapping enabled table is <tableMax>, which cannot be smaller than the max column id for all fields (<fieldMax>).

DELTA_COLUMN_MISSING_DATA_TYPE​

SQLSTATE: 42601

The data type of the column <colName> was not provided.

DELTA_COLUMN_NOT_FOUND​

SQLSTATE: 42703

Unable to find the column <columnName> given [<columnList>].

DELTA_COLUMN_NOT_FOUND_IN_MERGE​

SQLSTATE: 42703

Unable to find the column '<targetCol>' of the target table from the INSERT columns: <colNames>. The INSERT clause must specify values for all the columns of the target table.

DELTA_COLUMN_NOT_FOUND_IN_SCHEMA​

SQLSTATE: 42703

Couldn't find column <columnName> in:

<tableSchema>

DELTA_COLUMN_PATH_NOT_NESTED​

SQLSTATE: 42704

Expected <columnPath> to be a nested data type, but found <other>. Was looking for the index of <column> in a nested field.

Schema:

<schema>

DELTA_COLUMN_STRUCT_TYPE_MISMATCH​

SQLSTATE: 2200G

Struct column <source> cannot be inserted into a <targetType> field <targetField> in <targetTable>.

DELTA_COMMIT_INTERMEDIATE_REDIRECT_STATE​

SQLSTATE: 42P01

Cannot handle commit of table within redirect table state '<state>'.

DELTA_COMPACTION_VALIDATION_FAILED​

SQLSTATE: 22000

The validation of the compaction of path <compactedPath> to <newPath> failed. Please file a bug report.

DELTA_COMPLEX_TYPE_COLUMN_CONTAINS_NULL_TYPE​

SQLSTATE: 22005

Found nested NullType in column <columName> which is of <dataType>. Delta doesn't support writing NullType in complex types.

DELTA_CONCURRENT_APPEND​

SQLSTATE: 2D521

ConcurrentAppendException: Files were added to <partition> by a concurrent update. <retryMsg> <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_DELETE_DELETE​

SQLSTATE: 2D521

ConcurrentDeleteDeleteException: This transaction attempted to delete one or more files that were deleted (for example <file>) by a concurrent update. Please try the operation again.<conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_DELETE_READ​

SQLSTATE: 2D521

ConcurrentDeleteReadException: This transaction attempted to read one or more files that were deleted (for example <file>) by a concurrent update. Please try the operation again.<conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_TRANSACTION​

SQLSTATE: 2D521

ConcurrentTransactionException: This error occurs when multiple streaming queries are using the same checkpoint to write into this table. Did you run multiple instances of the same streaming query at the same time?<conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_WRITE​

SQLSTATE: 2D521

ConcurrentWriteException: A concurrent transaction has written new data since the current transaction read the table. Please try the operation again.<conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONFLICT_SET_COLUMN​

SQLSTATE: 42701

There is a conflict from these SET columns: <columnList>.

DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_COMMAND​

SQLSTATE: 42616

During <command>, configuration "<configuration>" cannot be set from the command. Please remove it from the TBLPROPERTIES clause and then retry the command again.

DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_SESSION​

SQLSTATE: 42616

During <command>, configuration "<configuration>" cannot be set from the SparkSession configurations. Please unset it by running spark.conf.unset("<configuration>") and then retry the command again.

DELTA_CONSTRAINT_ALREADY_EXISTS​

SQLSTATE: 42710

Constraint '<constraintName>' already exists. Please delete the old constraint first.

Old constraint:

<oldConstraint>

DELTA_CONSTRAINT_DATA_TYPE_MISMATCH​

SQLSTATE: 42K09

Column <columnName> has data type <columnType> and cannot be altered to data type <dataType> because this column is referenced by the following check constraint(s):

<constraints>

DELTA_CONSTRAINT_DEPENDENT_COLUMN_CHANGE​

SQLSTATE: 42K09

Cannot alter column <columnName> because this column is referenced by the following check constraint(s):

<constraints>

DELTA_CONSTRAINT_DOES_NOT_EXIST​

SQLSTATE: 42704

Cannot drop nonexistent constraint <constraintName> from table <tableName>. To avoid throwing an error, provide the parameter IF EXISTS or set the SQL session configuration <config> to <confValue>.
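
For example, a sketch with hypothetical table and constraint names:

-- Does not fail even if the constraint was already dropped
ALTER TABLE my_table DROP CONSTRAINT IF EXISTS my_constraint;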

DELTA_CONVERSION_MERGE_ON_READ_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Conversion of Merge-On-Read <format> table is not supported: <path>, <hint>

DELTA_CONVERSION_NO_PARTITION_FOUND​

SQLSTATE: 42KD6

Found no partition information in the catalog for table <tableName>. Have you run "MSCK REPAIR TABLE" on your table to discover partitions?
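
For example, a sketch assuming a hypothetical catalog table my_parquet_table:

MSCK REPAIR TABLE my_parquet_table; -- discover partitions into the catalog
CONVERT TO DELTA my_parquet_table;  -- then retry the conversion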

DELTA_CONVERSION_UNSUPPORTED_COLLATED_PARTITION_COLUMN​

SQLSTATE: 0AKDC

Cannot convert Parquet table with collated partition column <colName> to Delta.

DELTA_CONVERSION_UNSUPPORTED_COLUMN_MAPPING​

SQLSTATE: 0AKDC

The configuration '<config>' cannot be set to <mode> when using CONVERT TO DELTA.

DELTA_CONVERSION_UNSUPPORTED_SCHEMA_CHANGE​

SQLSTATE: 0AKDC

Unsupported schema changes found for <format> table: <path>, <hint>

DELTA_CONVERT_NON_PARQUET_TABLE​

SQLSTATE: 0AKDC

CONVERT TO DELTA only supports Parquet tables, but you are trying to convert a <sourceName> source: <tableId>.

DELTA_CONVERT_TO_DELTA_ROW_TRACKING_WITHOUT_STATS​

SQLSTATE: 22000

Cannot enable row tracking without collecting statistics.

If you want to enable row tracking, do the following:

  1. Enable statistics collection by running the command

SET <statisticsCollectionPropertyKey> = true

  2. Run CONVERT TO DELTA without the NO STATISTICS option.

If you do not want to collect statistics, disable row tracking:

  1. Deactivate enabling the table feature by default by running the command:

RESET <rowTrackingTableFeatureDefaultKey>

  2. Deactivate the table property by default by running:

SET <rowTrackingDefaultPropertyKey> = false

DELTA_COPY_INTO_TARGET_FORMAT​

SQLSTATE: 0AKDD

COPY INTO target must be a Delta table.

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_SCHEMA​

SQLSTATE: 42601

You are trying to create an external table <tableName>

from <path> using Delta, but the schema is not specified when the

input path is empty.

To learn more about Delta, see <docLink>

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG​

SQLSTATE: 42K03

You are trying to create an external table <tableName> from <path> using Delta, but there is no transaction log present at <logPath>. Check the upstream job to make sure that it is writing using format("delta") and that the path is the root of the table.

To learn more about Delta, see <docLink>

DELTA_CREATE_TABLE_IDENTIFIER_LOCATION_MISMATCH​

SQLSTATE: 0AKDC

Creating path-based Delta table with a different location isn't supported. Identifier: <identifier>, Location: <location>.

DELTA_CREATE_TABLE_MISSING_TABLE_NAME_OR_LOCATION​

SQLSTATE: 42601

Table name or location has to be specified.

DELTA_CREATE_TABLE_SCHEME_MISMATCH​

SQLSTATE: 42KD7

The specified schema does not match the existing schema at <path>.

== Specified ==

<specifiedSchema>

== Existing ==

<existingSchema>

== Differences ==

<schemaDifferences>

If your intention is to keep the existing schema, you can omit the

schema from the create table command. Otherwise please ensure that

the schema matches.

DELTA_CREATE_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED​

SQLSTATE: 42000

Cannot enable <tableFeature> table feature using TBLPROPERTIES. Please use CREATE OR REPLACE TABLE CLUSTER BY to create a Delta table with clustering.

DELTA_CREATE_TABLE_WITH_DIFFERENT_CLUSTERING​

SQLSTATE: 42KD7

The specified clustering columns do not match the existing clustering columns at <path>.

== Specified ==

<specifiedColumns>

== Existing ==

<existingColumns>

DELTA_CREATE_TABLE_WITH_DIFFERENT_PARTITIONING​

SQLSTATE: 42KD7

The specified partitioning does not match the existing partitioning at <path>.

== Specified ==

<specifiedColumns>

== Existing ==

<existingColumns>

DELTA_CREATE_TABLE_WITH_DIFFERENT_PROPERTY​

SQLSTATE: 42KD7

The specified properties do not match the existing properties at <path>.

== Specified ==

<specifiedProperties>

== Existing ==

<existingProperties>

DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION​

SQLSTATE: 42601

Cannot create table ('<tableId>'). The associated location ('<tableLocation>') is not empty and also not a Delta table.

DELTA_DATA_CHANGE_FALSE​

SQLSTATE: 0AKDE

Cannot change table metadata because the 'dataChange' option is set to false. Attempted operation: '<op>'.

DELTA_DELETED_PARQUET_FILE_NOT_FOUND​

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This Parquet file may have been deleted under Delta's data retention policy.

Default Delta data retention duration: <logRetentionPeriod>. Modification time of the parquet file: <modificationTime>. Deletion time of the parquet file: <deletionTime>. Deleted on Delta version: <deletionVersion>.

DELTA_DELETION_VECTOR_MISSING_NUM_RECORDS​

SQLSTATE: 2D521

It is invalid to commit files with deletion vectors that are missing the numRecords statistic.

DELTA_DISABLE_SOURCE_MATERIALIZATION_IN_INSERT_REPLACE_ON_OR_USING_NOT_ALLOWED​

SQLSTATE: 0AKDC

Disabling source materialization in INSERT REPLACE ON/USING by setting 'spark.databricks.delta.insertReplaceOnOrUsing.materializeSource' to 'none' is not allowed.

DELTA_DISABLE_SOURCE_MATERIALIZATION_IN_MERGE_NOT_ALLOWED​

SQLSTATE: 0AKDC

Disabling source materialization in MERGE by setting 'spark.databricks.delta.merge.materializeSource' to 'none' is not allowed.

DELTA_DOMAIN_METADATA_NOT_SUPPORTED​

SQLSTATE: 0A000

Detected DomainMetadata action(s) for domains <domainNames>, but DomainMetadataTableFeature is not enabled.

DELTA_DROP_COLUMN_AT_INDEX_LESS_THAN_ZERO​

SQLSTATE: 42KD8

Index <columnIndex> to drop column is lower than 0.

DELTA_DROP_COLUMN_ON_SINGLE_FIELD_SCHEMA​

SQLSTATE: 0AKDC

Cannot drop column from a schema with a single column. Schema:

<schema>

DELTA_DUPLICATE_ACTIONS_FOUND​

SQLSTATE: 2D521

File operation '<actionType>' for path <path> was specified several times.

It conflicts with <conflictingPath>.

It is not valid for multiple file operations with the same path to exist in a single commit.

DELTA_DUPLICATE_COLUMNS_FOUND​

SQLSTATE: 42711

Found duplicate column(s) <coltype>: <duplicateCols>.

DELTA_DUPLICATE_COLUMNS_ON_INSERT​

SQLSTATE: 42701

Duplicate column names in INSERT clause.

DELTA_DUPLICATE_COLUMNS_ON_UPDATE_TABLE​

SQLSTATE: 42701

<message>

Please remove duplicate columns before you update your table.

DELTA_DUPLICATE_DATA_SKIPPING_COLUMNS​

SQLSTATE: 42701

Duplicated data skipping columns found: <columns>.

DELTA_DUPLICATE_DOMAIN_METADATA_INTERNAL_ERROR​

SQLSTATE: 42601

Internal error: two DomainMetadata actions within the same transaction have the same domain <domainName>.

DELTA_DUPLICATE_LOG_ENTRIES_FOUND​

SQLSTATE: 55019

The Delta log is in an illegal state: <numDuplicates> paths have duplicate entries in version <version>. RESTORE to a version before the commit that introduced the duplication or contact support for assistance.

DELTA_DV_HISTOGRAM_DESERIALIZATON​

SQLSTATE: 22000

Could not deserialize the deleted record counts histogram during table integrity verification.

DELTA_DYNAMIC_PARTITION_OVERWRITE_DISABLED​

SQLSTATE: 0A000

Dynamic partition overwrite mode is specified by session config or write options, but it is disabled by spark.databricks.delta.dynamicPartitionOverwrite.enabled=false.

DELTA_EMPTY_DATA​

SQLSTATE: 428GU

Data used in creating the Delta table doesn't have any columns.

DELTA_EMPTY_DIRECTORY​

SQLSTATE: 42K03

No file found in the directory: <directory>.

DELTA_EXCEED_CHAR_VARCHAR_LIMIT​

SQLSTATE: 22001

Value "<value>" exceeds char/varchar type length limitation. Failed check: <expr>.

DELTA_FAILED_CAST_PARTITION_VALUE​

SQLSTATE: 22018

Failed to cast partition value <value> to <dataType>.

DELTA_FAILED_FIND_ATTRIBUTE_IN_OUTPUT_COLUMNS​

SQLSTATE: 42703

Could not find <newAttributeName> among the existing target output <targetOutputColumns>.

DELTA_FAILED_INFER_SCHEMA​

SQLSTATE: 42KD9

Failed to infer schema from the given list of files.

DELTA_FAILED_MERGE_SCHEMA_FILE​

SQLSTATE: 42KDA

Failed to merge schema of file <file>:

<schema>

DELTA_FAILED_OPERATION_ON_SHALLOW_CLONE​

SQLSTATE: 42893

Failed to run the operation on the source table <sourceTable> because the shallow clone <targetTable> still exists and the following error occurred in the shallow clone: <message>

DELTA_FAILED_READ_FILE_FOOTER

SQLSTATE: KD001

Could not read footer for file: <currentFile>.

DELTA_FAILED_RECOGNIZE_PREDICATE​

SQLSTATE: 42601

Cannot recognize the predicate '<predicate>'.

DELTA_FAILED_SCAN_WITH_HISTORICAL_VERSION​

SQLSTATE: KD002

Expect a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>.

DELTA_FAILED_TO_MERGE_FIELDS​

SQLSTATE: 22005

Failed to merge fields '<currentField>' and '<updateField>'.

DELTA_FEATURES_PROTOCOL_METADATA_MISMATCH​

SQLSTATE: KD004

Unable to operate on this table because the following table features are enabled in metadata but not listed in protocol: <features>.

DELTA_FEATURES_REQUIRE_MANUAL_ENABLEMENT​

SQLSTATE: 42000

Your table schema requires manual enablement of the following table feature(s): <unsupportedFeatures>.

To do this, run the following command for each of the features listed above:

ALTER TABLE table_name SET TBLPROPERTIES ('delta.feature.feature_name' = 'supported')

Replace "table_name" and "feature_name" with real values.

Current supported feature(s): <supportedFeatures>.
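
For instance, a sketch assuming a hypothetical table my_table whose schema requires the deletionVectors feature:

ALTER TABLE my_table SET TBLPROPERTIES ('delta.feature.deletionVectors' = 'supported');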

DELTA_FEATURE_CAN_ONLY_DROP_CHECKPOINT_PROTECTION_WITH_HISTORY_TRUNCATION​

SQLSTATE: 55000

Could not drop the Checkpoint Protection feature.

This feature can only be dropped by truncating history.

Please try again with the TRUNCATE HISTORY option:

ALTER TABLE table_name DROP FEATURE checkpointProtection TRUNCATE HISTORY

DELTA_FEATURE_DROP_CHECKPOINT_FAILED​

SQLSTATE: 22KD0

Dropping <featureName> failed due to a failure in checkpoint creation.

Please try again later. If the issue persists, contact Databricks support.

DELTA_FEATURE_DROP_CHECKPOINT_PROTECTION_WAIT_FOR_RETENTION_PERIOD​

SQLSTATE: 22KD0

The operation did not succeed because there are still traces of dropped features

in the table history. CheckpointProtection cannot be dropped until these historical

versions have expired.

To drop CheckpointProtection, please wait for the historical versions to

expire, and then repeat this command. The retention period for historical versions is

currently configured to <truncateHistoryLogRetentionPeriod>.

DELTA_FEATURE_DROP_CONFLICT_REVALIDATION_FAIL​

SQLSTATE: 40000

Cannot drop feature because a concurrent transaction modified the table.

Please try the operation again.

<concurrentCommit>

DELTA_FEATURE_DROP_DEPENDENT_FEATURE​

SQLSTATE: 55000

Cannot drop table feature <feature> because some other features (<dependentFeatures>) in this table depend on <feature>.

Consider dropping them first before dropping this feature.

DELTA_FEATURE_DROP_FEATURE_IS_DELTA_PROPERTY​

SQLSTATE: 42000

Cannot drop <property> from this table because it is a Delta table property and not a table feature.

DELTA_FEATURE_DROP_FEATURE_NOT_PRESENT​

SQLSTATE: 55000

Cannot drop <feature> from this table because it is not currently present in the table's protocol.

DELTA_FEATURE_DROP_HISTORICAL_VERSIONS_EXIST​

SQLSTATE: 22KD0

Cannot drop <feature> because the Delta log contains historical versions that use the feature.

Please wait until the history retention period (<logRetentionPeriodKey>=<logRetentionPeriod>)

has passed since the feature was last active.

Alternatively, please wait for the TRUNCATE HISTORY retention period to expire (<truncateHistoryLogRetentionPeriod>)

and then run:

ALTER TABLE table_name DROP FEATURE feature_name TRUNCATE HISTORY
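
For example, a sketch with a hypothetical table, using deletionVectors as the feature to drop:

ALTER TABLE my_table DROP FEATURE deletionVectors TRUNCATE HISTORY;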

DELTA_FEATURE_DROP_HISTORY_TRUNCATION_NOT_ALLOWED​

SQLSTATE: 42000

The particular feature does not require history truncation.

DELTA_FEATURE_DROP_NONREMOVABLE_FEATURE​

SQLSTATE: 0AKDC

Cannot drop <feature> because dropping this feature is not supported.

Please contact Databricks support.

DELTA_FEATURE_DROP_UNSUPPORTED_CLIENT_FEATURE​

SQLSTATE: 0AKDC

Cannot drop <feature> because it is not supported by this Databricks version.

Consider using Databricks with a higher version.

DELTA_FEATURE_DROP_WAIT_FOR_RETENTION_PERIOD​

SQLSTATE: 22KD0

Dropping <feature> was partially successful.

The feature is now no longer used in the current version of the table. However, the feature

is still present in historical versions of the table. The table feature cannot be dropped

from the table protocol until these historical versions have expired.

To drop the table feature from the protocol, please wait for the historical versions to

expire, and then repeat this command. The retention period for historical versions is

currently configured as <logRetentionPeriodKey>=<logRetentionPeriod>.

Alternatively, please wait for the TRUNCATE HISTORY retention period to expire (<truncateHistoryLogRetentionPeriod>)

and then run:

ALTER TABLE table_name DROP FEATURE feature_name TRUNCATE HISTORY

DELTA_FEATURE_REQUIRES_HIGHER_READER_VERSION​

SQLSTATE: 55000

Unable to enable table feature <feature> because it requires a higher reader protocol version (current <current>). Consider upgrading the table's reader protocol version to <required>, or to a version which supports reader table features. Refer to <docLink> for more information on table protocol versions.
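
For example, a sketch that upgrades a hypothetical table to the protocol versions that support table features:

ALTER TABLE my_table SET TBLPROPERTIES ('delta.minReaderVersion' = '3', 'delta.minWriterVersion' = '7');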

DELTA_FEATURE_REQUIRES_HIGHER_WRITER_VERSION​

SQLSTATE: 55000

Unable to enable table feature <feature> because it requires a higher writer protocol version (current <current>). Consider upgrading the table's writer protocol version to <required>, or to a version which supports writer table features. Refer to <docLink> for more information on table protocol versions.

DELTA_FILE_ALREADY_EXISTS​

SQLSTATE: 42K04

Existing file path <path>.

DELTA_FILE_LIST_AND_PATTERN_STRING_CONFLICT​

SQLSTATE: 42613

Cannot specify both file list and pattern string.

DELTA_FILE_NOT_FOUND​

SQLSTATE: 42K03

File path <path>.

DELTA_FILE_NOT_FOUND_DETAILED​

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table DELETE statement. For more information, see <faqPath>

DELTA_FILE_OR_DIR_NOT_FOUND​

SQLSTATE: 42K03

No such file or directory: <path>.

DELTA_FILE_TO_OVERWRITE_NOT_FOUND​

SQLSTATE: 42K03

File (<path>) to be rewritten not found among candidate files:

<pathList>

DELTA_FOUND_MAP_TYPE_COLUMN​

SQLSTATE: KD003

A MapType was found. In order to access the key or value of a MapType, specify one

of:

<key> or

<value>

followed by the name of the column (only if that column is a struct type).

e.g. mymap.key.mykey

If the column is a basic type, mymap.key or mymap.value is sufficient.

Schema:

<schema>

DELTA_GENERATED_COLUMNS_DATA_TYPE_MISMATCH​

SQLSTATE: 42K09

Column <columnName> has data type <columnType> and cannot be altered to data type <dataType> because this column is referenced by the following generated column(s):

<generatedColumns>.

DELTA_GENERATED_COLUMNS_DEPENDENT_COLUMN_CHANGE​

SQLSTATE: 42K09

Cannot alter column <columnName> because this column is referenced by the following generated column(s):

<generatedColumns>.

DELTA_GENERATED_COLUMNS_EXPR_TYPE_MISMATCH​

SQLSTATE: 42K09

The expression type of the generated column <columnName> is <expressionType>, but the column type is <columnType>.

DELTA_GENERATED_COLUMN_UPDATE_TYPE_MISMATCH​

SQLSTATE: 42K09

Column <currentName> is a generated column or a column used by a generated column. The data type is <currentDataType> and cannot be converted to data type <updateDataType>.

DELTA_GEOSPATIAL_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Geospatial types are not supported in this version of Delta Lake.

DELTA_GEOSPATIAL_SRID_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Geospatial type has an unsupported srid: <srid>. Delta tables only support non-negative srid values.

DELTA_ICEBERG_COMPAT_VIOLATION​

SQLSTATE: KD00E

The validation of IcebergCompatV<version> has failed.

For more details see DELTA_ICEBERG_COMPAT_VIOLATION

DELTA_ICEBERG_WRITER_COMPAT_VIOLATION​

SQLSTATE: KD00E

The validation of IcebergWriterCompatV<version> has failed.

For more details see DELTA_ICEBERG_WRITER_COMPAT_VIOLATION

DELTA_IDENTITY_COLUMNS_ALTER_COLUMN_NOT_SUPPORTED​

SQLSTATE: 429BQ

ALTER TABLE ALTER COLUMN is not supported for IDENTITY columns.

DELTA_IDENTITY_COLUMNS_ALTER_NON_DELTA_FORMAT​

SQLSTATE: 0AKDD

ALTER TABLE ALTER COLUMN SYNC IDENTITY is only supported by Delta.

DELTA_IDENTITY_COLUMNS_ALTER_NON_IDENTITY_COLUMN​

SQLSTATE: 429BQ

ALTER TABLE ALTER COLUMN SYNC IDENTITY cannot be called on non IDENTITY columns.

DELTA_IDENTITY_COLUMNS_EXPLICIT_INSERT_NOT_SUPPORTED​

SQLSTATE: 42808

Providing values for GENERATED ALWAYS AS IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_ILLEGAL_STEP​

SQLSTATE: 42611

IDENTITY column step cannot be 0.
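
For example, a valid definition with a non-zero step (hypothetical table):

CREATE TABLE events (id BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1), data STRING);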

DELTA_IDENTITY_COLUMNS_NON_DELTA_FORMAT​

SQLSTATE: 0AKDD

IDENTITY columns are only supported by Delta.

DELTA_IDENTITY_COLUMNS_PARTITION_NOT_SUPPORTED​

SQLSTATE: 42601

PARTITIONED BY IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_REPLACE_COLUMN_NOT_SUPPORTED​

SQLSTATE: 429BQ

ALTER TABLE REPLACE COLUMNS is not supported for table with IDENTITY columns.

DELTA_IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE​

SQLSTATE: 428H2

DataType <dataType> is not supported for IDENTITY columns.

DELTA_IDENTITY_COLUMNS_UPDATE_NOT_SUPPORTED​

SQLSTATE: 42808

UPDATE on IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_WITH_GENERATED_EXPRESSION​

SQLSTATE: 42613

IDENTITY column cannot be specified with a generated column expression.

DELTA_ILLEGAL_OPTION​

SQLSTATE: 42616

Invalid value '<input>' for option '<name>', <explain>

DELTA_ILLEGAL_USAGE​

SQLSTATE: 42601

The usage of <option> is not allowed when <operation> a Delta table.

DELTA_INCONSISTENT_BUCKET_SPEC​

SQLSTATE: 42000

BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>. Actual: <actual>.

DELTA_INCONSISTENT_LOGSTORE_CONFS​

SQLSTATE: F0000

(<setKeys>) cannot be set to different values. Please only set one of them, or set them to the same value.

DELTA_INCORRECT_ARRAY_ACCESS​

SQLSTATE: KD003

Incorrectly accessing an ArrayType. Use arrayname.element.elementname to

add to an array.

DELTA_INCORRECT_ARRAY_ACCESS_BY_NAME​

SQLSTATE: KD003

An ArrayType was found. In order to access elements of an ArrayType, specify

<rightName> instead of <wrongName>.

Schema:

<schema>

DELTA_INCORRECT_GET_CONF​

SQLSTATE: 42000

Use getConf() instead of conf.getConf().

DELTA_INCORRECT_LOG_STORE_IMPLEMENTATION​

SQLSTATE: 0AKDC

The error typically occurs when the default LogStore implementation, that

is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system.

In order to get the transactional ACID guarantees on table updates, you have to use the

correct implementation of LogStore that is appropriate for your storage system.

See <docLink> for details.

DELTA_INDEX_LARGER_OR_EQUAL_THAN_STRUCT​

SQLSTATE: 42KD8

Index <index> to drop column is equal to or larger than the struct length: <length>.

DELTA_INDEX_LARGER_THAN_STRUCT​

SQLSTATE: 42KD8

Index <index> to add column <columnName> is larger than struct length: <length>.

DELTA_INSERT_COLUMN_ARITY_MISMATCH​

SQLSTATE: 42802

Cannot write to '<tableName>', <columnName>; target table has <numColumns> column(s) but the inserted data has <insertColumns> column(s).

DELTA_INSERT_COLUMN_MISMATCH​

SQLSTATE: 42802

Column <columnName> is not specified in INSERT.

DELTA_INSERT_REPLACE_ON_AMBIGUOUS_COLUMNS_IN_CONDITION​

SQLSTATE: 42702

Column(s) <columnNames> are ambiguous in the condition of INSERT REPLACE ON. Consider specifying an alias for these columns.

DELTA_INSERT_REPLACE_ON_NOT_ENABLED​

SQLSTATE: 0A000

Please contact your Databricks representative to enable the INSERT INTO ... REPLACE ON ... SQL and DataFrame APIs.

DELTA_INSERT_REPLACE_ON_UNRESOLVED_COLUMNS_IN_CONDITION​

SQLSTATE: 42703

Column(s) <columnNames> cannot be resolved in the condition of INSERT REPLACE ON.

DELTA_INVALID_AUTO_COMPACT_TYPE​

SQLSTATE: 22023

Invalid auto-compact type: <value>. Allowed values are: <allowed>.

DELTA_INVALID_BUCKET_COUNT​

SQLSTATE: 22023

Invalid bucket count: <invalidBucketCount>. Bucket count should be a positive number that is a power of 2 and at least 8. You can use <validBucketCount> instead.

DELTA_INVALID_BUCKET_INDEX​

SQLSTATE: 22023

Cannot find the bucket column in the partition columns.

DELTA_INVALID_CALENDAR_INTERVAL_EMPTY​

SQLSTATE: 2200P

Interval cannot be null or blank.

DELTA_INVALID_CDC_RANGE​

SQLSTATE: 22003

CDC range from start <start> to end <end> was invalid. End cannot be before start.

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAME​

SQLSTATE: 42K05

Attribute name "<columnName>" contains invalid character(s) among " ,;{}()\\n\\t=". Please use alias to rename it.
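
For example, a sketch that renames a hypothetical offending attribute with an alias:

SELECT `my col` AS my_col FROM my_table; -- 'my col' contains a space, so alias it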

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES​

SQLSTATE: 42K05

Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.

Invalid column names: <invalidColumnNames>.

Please use other characters and try again.

Alternatively, enable Column Mapping to keep using these characters.
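
For example, a sketch enabling column mapping on a hypothetical table (this also requires the protocol versions shown, if the table is still on lower ones):

ALTER TABLE my_table SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
);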

DELTA_INVALID_CLONE_PATH​

SQLSTATE: 22KD1

The target location for CLONE needs to be an absolute path or table name. Use an

absolute path instead of <path>.

DELTA_INVALID_COLUMN_NAMES_WHEN_REMOVING_COLUMN_MAPPING​

SQLSTATE: 42K05

Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.

Invalid column names: <invalidColumnNames>.

Column mapping cannot be removed when there are invalid characters in the column names.

Please rename the columns to remove the invalid characters and execute this command again.

DELTA_INVALID_FORMAT​

SQLSTATE: 22000

Incompatible format detected.

A transaction log for Delta was found at <deltaRootPath>/_delta_log,

but you are trying to <operation> <path> using format("<format>"). You must use

'format("delta")' when reading and writing to a delta table.

To learn more about Delta, see <docLink>
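
For example, a Delta table at a hypothetical path can be read directly with the delta format qualifier:

SELECT * FROM delta.`/mnt/data/my_table`;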

DELTA_INVALID_GENERATED_COLUMN_REFERENCES​

SQLSTATE: 42621

A generated column cannot use a non-existent column or another generated column.

DELTA_INVALID_IDEMPOTENT_WRITES_OPTIONS​

SQLSTATE: 42616

Invalid options for idempotent Dataframe writes: <reason>

DELTA_INVALID_INTERVAL​

SQLSTATE: 22006

<interval> is not a valid INTERVAL.

DELTA_INVALID_INVENTORY_SCHEMA​

SQLSTATE: 42000

The schema for the specified INVENTORY does not contain all of the required fields. Required fields are: <expectedSchema>

DELTA_INVALID_ISOLATION_LEVEL​

SQLSTATE: 25000

Invalid isolation level '<isolationLevel>'.

DELTA_INVALID_LOGSTORE_CONF​

SQLSTATE: F0000

(<classConfig>) and (<schemeConfig>) cannot be set at the same time. Please set only one group of them.

DELTA_INVALID_MANAGED_TABLE_SYNTAX_NO_SCHEMA​

SQLSTATE: 42000

You are trying to create a managed table <tableName>

using Delta, but the schema is not specified.

To learn more about Delta, see <docLink>

DELTA_INVALID_PARTITION_COLUMN​

SQLSTATE: 42996

<columnName> is not a valid partition column in table <tableName>.

DELTA_INVALID_PARTITION_COLUMN_NAME​

SQLSTATE: 42996

Found partition columns having invalid character(s) among " ,;{}()\n\t=". Please change the names of your partition columns. This check can be turned off by setting spark.conf.set("spark.databricks.delta.partitionColumnValidity.enabled", false); however, this is not recommended, as other features of Delta may not work properly.

DELTA_INVALID_PARTITION_COLUMN_TYPE​

SQLSTATE: 42996

Using column <name> of type <dataType> as a partition column is not supported.

DELTA_INVALID_PARTITION_PATH​

SQLSTATE: 22KD1

A partition path fragment should be the form like part1=foo/part2=bar. The partition path: <path>.

DELTA_INVALID_PROTOCOL_DOWNGRADE​

SQLSTATE: KD004

Protocol version cannot be downgraded from (<oldProtocol>) to (<newProtocol>).

DELTA_INVALID_PROTOCOL_VERSION​

SQLSTATE: KD004

Unsupported Delta protocol version: table "<tableNameOrPath>" requires reader version <readerRequired> and writer version <writerRequired>, but this version of Databricks supports reader versions <supportedReaders> and writer versions <supportedWriters>. Please upgrade to a newer release.

DELTA_INVALID_TABLE_VALUE_FUNCTION​

SQLSTATE: 22000

Function <function> is an unsupported table valued function for CDC reads.

DELTA_INVALID_TIMESTAMP_FORMAT​

SQLSTATE: 22007

The provided timestamp <timestamp> does not match the expected syntax <format>.

DELTA_LOG_ALREADY_EXISTS​

SQLSTATE: 42K04

A Delta log already exists at <path>.

DELTA_LOG_FILE_NOT_FOUND​

SQLSTATE: 42K03

Unable to retrieve the delta log files to construct table version <version> starting from checkpoint version <checkpointVersion> at <logPath>.

DELTA_LOG_FILE_NOT_FOUND_FOR_STREAMING_SOURCE​

SQLSTATE: 42K03

If you never deleted the log file, it's likely your query is lagging behind. Please delete its checkpoint to restart from scratch. To avoid this happening again, you can update the retention policy of your Delta table.

DELTA_MATERIALIZED_ROW_TRACKING_COLUMN_NAME_MISSING​

SQLSTATE: 22000

Materialized <rowTrackingColumn> column name missing for <tableName>.

DELTA_MAX_ARRAY_SIZE_EXCEEDED​

SQLSTATE: 42000

Please use a limit less than Int.MaxValue - 8.

DELTA_MAX_COMMIT_RETRIES_EXCEEDED​

SQLSTATE: 40000

This commit has failed as it has been tried <numAttempts> times but did not succeed.

This can be caused by the Delta table being committed continuously by many concurrent

commits.

Commit started at version: <startVersion>

Commit failed at version: <failVersion>

Number of actions attempted to commit: <numActions>

Total time spent attempting this commit: <timeSpent> ms

DELTA_MAX_LIST_FILE_EXCEEDED​

SQLSTATE: 42000

File list must have at most <maxFileListSize> entries, had <numFiles>.

DELTA_MERGE_ADD_VOID_COLUMN​

SQLSTATE: 42K09

Cannot add column <newColumn> with type VOID. Please explicitly specify a non-void type.

DELTA_MERGE_INCOMPATIBLE_DATATYPE​

SQLSTATE: 42K09

Failed to merge incompatible data types <currentDataType> and <updateDataType>.

DELTA_MERGE_INCOMPATIBLE_DECIMAL_TYPE​

SQLSTATE: 42806

Failed to merge decimal types with incompatible <decimalRanges>.

DELTA_MERGE_MATERIALIZE_SOURCE_FAILED_REPEATEDLY​

SQLSTATE: 25000

Keeping the source of the MERGE statement materialized has failed repeatedly.

DELTA_MERGE_MISSING_WHEN​

SQLSTATE: 42601

There must be at least one WHEN clause in a MERGE statement.

DELTA_MERGE_RESOLVED_ATTRIBUTE_MISSING_FROM_INPUT​

SQLSTATE: 42601

Resolved attribute(s) <missingAttributes> missing from <input> in operator <merge>.

DELTA_MERGE_SOURCE_CACHED_DURING_EXECUTION​

SQLSTATE: 25000

The MERGE operation failed because (part of) the source plan was cached while the MERGE operation was running.

DELTA_MERGE_UNEXPECTED_ASSIGNMENT_KEY​

SQLSTATE: 22005

Unexpected assignment key: <unexpectedKeyClass> - <unexpectedKeyObject>.

DELTA_MERGE_UNRESOLVED_EXPRESSION​

SQLSTATE: 42601

Cannot resolve <sqlExpr> in <clause> given columns <cols>.

DELTA_METADATA_CHANGED​

SQLSTATE: 2D521

MetadataChangedException: The metadata of the Delta table has been changed by a concurrent update. Please try the operation again.<conflictingCommit>

Refer to <docLink> for more details.

DELTA_MISSING_CHANGE_DATA​

SQLSTATE: KD002

Error getting change data for range [<startVersion> , <endVersion>] as change data was not

recorded for version [<version>]. If you've enabled change data feed on this table,

use DESCRIBE HISTORY to see when it was first enabled.

Otherwise, to start recording change data, use ALTER TABLE table_name SET TBLPROPERTIES

(<key>=true).
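
Here <key> is the change data feed table property; for example, on a hypothetical table:

ALTER TABLE my_table SET TBLPROPERTIES (delta.enableChangeDataFeed = true);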

DELTA_MISSING_COLUMN​

SQLSTATE: 42703

Cannot find <columnName> in table columns: <columnList>.

DELTA_MISSING_COMMIT_INFO​

SQLSTATE: KD004

This table has the feature <featureName> enabled which requires the presence of the CommitInfo action in every commit. However, the CommitInfo action is missing from commit version <version>.

DELTA_MISSING_COMMIT_TIMESTAMP​

SQLSTATE: KD004

This table has the feature <featureName> enabled which requires the presence of commitTimestamp in the CommitInfo action. However, this field has not been set in commit version <version>.

DELTA_MISSING_DELTA_TABLE​

SQLSTATE: 42P01

<tableName> is not a Delta table.

DELTA_MISSING_DELTA_TABLE_COPY_INTO​

SQLSTATE: 42P01

Table doesn't exist. Create an empty Delta table first using CREATE TABLE <tableName>.
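
For example, a minimal sketch with hypothetical names and paths:

CREATE TABLE my_table; -- empty Delta table; the schema is filled in by COPY INTO
COPY INTO my_table FROM '/mnt/raw/data' FILEFORMAT = PARQUET COPY_OPTIONS ('mergeSchema' = 'true');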

DELTA_MISSING_ICEBERG_CLASS​

SQLSTATE: 56038

Apache Iceberg class was not found. Please ensure Delta Apache Iceberg support is installed.

Please refer to <docLink> for more details.

DELTA_MISSING_NOT_NULL_COLUMN_VALUE​

SQLSTATE: 23502

Column <columnName>, which has a NOT NULL constraint, is missing from the data being written into the table.

DELTA_MISSING_PARTITION_COLUMN​

SQLSTATE: 42KD6

Partition column <columnName> not found in schema <columnList>.

DELTA_MISSING_PART_FILES​

SQLSTATE: 42KD6

Couldn't find all part files of the checkpoint version: <version>.

DELTA_MISSING_PROVIDER_FOR_CONVERT​

SQLSTATE: 0AKDC

CONVERT TO DELTA only supports Parquet tables. Please rewrite your target as parquet.`<path>` if it's a Parquet directory.
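
For example (hypothetical path):

CONVERT TO DELTA parquet.`/mnt/raw/events`;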

DELTA_MISSING_SET_COLUMN​

SQLSTATE: 42703

SET column <columnName> not found given columns: <columnList>.

DELTA_MISSING_TRANSACTION_LOG​

SQLSTATE: 42000

Incompatible format detected.

You are trying to <operation> <path> using Delta, but there is no

transaction log present. Check the upstream job to make sure that it is writing

using format("delta") and that you are trying to %1$s the table base path.

To learn more about Delta, see <docLink>

DELTA_MODE_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Specified mode '<mode>' is not supported. Supported modes are: <supportedModes>.

DELTA_MULTIPLE_CDC_BOUNDARY​

SQLSTATE: 42614

Multiple <startingOrEnding> arguments provided for CDC read. Please provide one of either <startingOrEnding>Timestamp or <startingOrEnding>Version.
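
For example, a CDC read that supplies only one starting boundary, using the table_changes function on a hypothetical table:

SELECT * FROM table_changes('my_table', 5); -- startingVersion only, no startingTimestamp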

DELTA_MULTIPLE_CONF_FOR_SINGLE_COLUMN_IN_BLOOM_FILTER​

SQLSTATE: 42614

Multiple bloom filter index configurations passed to command for column: <columnName>.

DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE​

SQLSTATE: 21506

Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to

<usageReference>

DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_COMMAND​

SQLSTATE: 42616

During <command>, either both coordinated commits configurations ("delta.coordinatedCommits.commitCoordinator-preview", "delta.coordinatedCommits.commitCoordinatorConf-preview") are set in the command or neither of them. Missing: "<configuration>". Please specify this configuration in the TBLPROPERTIES clause or remove the other configuration, and then retry the command again.

DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_SESSION​

SQLSTATE: 42616

During <command>, either both coordinated commits configurations ("coordinatedCommits.commitCoordinator-preview", "coordinatedCommits.commitCoordinatorConf-preview") are set in the SparkSession configurations or neither of them. Missing: "<configuration>". Please set this configuration in the SparkSession or unset the other configuration, and then retry the command again.

DELTA_NAME_CONFLICT_IN_BUCKETED_TABLE​

SQLSTATE: 42000

The following column name(s) are reserved for Delta bucketed table internal usage only: <names>.

DELTA_NESTED_FIELDS_NEED_RENAME​

SQLSTATE: 42K05

The input schema contains nested fields that are capitalized differently than the target table.

They need to be renamed to avoid the loss of data in these fields while writing to Delta.

Fields:

<fields>.

Original schema:

<schema>

DELTA_NESTED_NOT_NULL_CONSTRAINT​

SQLSTATE: 0AKDC

The <nestType> type of the field <parent> contains a NOT NULL constraint. Delta does not support NOT NULL constraints nested within arrays or maps. To suppress this error and silently ignore the specified constraints, set <configKey> = true.

Parsed <nestType> type:

<nestedPrettyJson>

DELTA_NESTED_SUBQUERY_NOT_SUPPORTED​

SQLSTATE: 0A000

Nested subquery is not supported in the <operation> condition.

DELTA_NEW_CHECK_CONSTRAINT_VIOLATION​

SQLSTATE: 23512

<numRows> rows in <tableName> violate the new CHECK constraint (<checkConstraint>).

DELTA_NEW_NOT_NULL_VIOLATION​

SQLSTATE: 23512

<numRows> rows in <tableName> violate the new NOT NULL constraint on <colName>.

DELTA_NON_BOOLEAN_CHECK_CONSTRAINT​

SQLSTATE: 42621

CHECK constraint '<name>' (<expr>) should be a boolean expression.

DELTA_NON_DETERMINISTIC_EXPRESSION_IN_GENERATED_COLUMN​

SQLSTATE: 42621

Found <expr>. A generated column cannot use a non-deterministic expression.

DELTA_NON_DETERMINISTIC_FUNCTION_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Non-deterministic functions are not supported in the <operation> <expression>.

DELTA_NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42601

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.

DELTA_NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42601

When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.

DELTA_NON_LAST_NOT_MATCHED_CLAUSE_OMIT_CONDITION​

SQLSTATE: 42601

When there is more than one NOT MATCHED clause in a MERGE statement, only the last NOT MATCHED clause can omit the condition.

DELTA_NON_PARSABLE_TAG​

SQLSTATE: 42601

Could not parse tag <tag>.

File tags are: <tagList>.

DELTA_NON_PARTITION_COLUMN_ABSENT​

SQLSTATE: KD005

Data written into Delta needs to contain at least one non-partitioned column.<details>

DELTA_NON_PARTITION_COLUMN_REFERENCE​

SQLSTATE: 42P10

Predicate references non-partition column '<columnName>'. Only the partition columns may be referenced: [<columnList>].

DELTA_NON_PARTITION_COLUMN_SPECIFIED​

SQLSTATE: 42P10

Non-partitioning column(s) <columnList> are specified where only partitioning columns are expected: <fragment>.

DELTA_NON_SINGLE_PART_NAMESPACE_FOR_CATALOG​

SQLSTATE: 42K05

Delta catalog requires a single-part namespace, but <identifier> is multi-part.

DELTA_NON_UC_COMMIT_COORDINATOR_NOT_SUPPORTED_IN_COMMAND​

SQLSTATE: 42616

Setting commit coordinator to '<nonUcCoordinatorName>' from command is not supported, because UC-managed tables can only have 'unity-catalog' as the commit coordinator. Please either change it to 'unity-catalog' or remove all Coordinated Commits table properties from the TBLPROPERTIES clause, and then retry the command again.

DELTA_NON_UC_COMMIT_COORDINATOR_NOT_SUPPORTED_IN_SESSION​

SQLSTATE: 42616

Setting commit coordinator to '<nonUcCoordinatorName>' from SparkSession configurations is not supported, because UC-managed tables can only have 'unity-catalog' as the commit coordinator. Please either change it to 'unity-catalog' by running spark.conf.set("<coordinatorNameDefaultKey>", "unity-catalog"), or remove all Coordinated Commits table properties from the SparkSession configurations by running spark.conf.unset("<coordinatorNameDefaultKey>"), spark.conf.unset("<coordinatorConfDefaultKey>"), spark.conf.unset("<tableConfDefaultKey>"), and then retry the command again.

DELTA_NOT_A_DATABRICKS_DELTA_TABLE​

SQLSTATE: 42000

<table> is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.

DELTA_NOT_A_DELTA_TABLE​

SQLSTATE: 0AKDD

<tableName> is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.

DELTA_NOT_NULL_COLUMN_NOT_FOUND_IN_STRUCT​

SQLSTATE: 42K09

Non-nullable column not found in struct: <struct>.

DELTA_NOT_NULL_CONSTRAINT_VIOLATED​

SQLSTATE: 23502

NOT NULL constraint violated for column: <columnName>.

DELTA_NOT_NULL_NESTED_FIELD​

SQLSTATE: 0A000

A non-nullable nested field can't be added to a nullable parent. Please set the nullability of the parent column accordingly.

DELTA_NO_COMMITS_FOUND​

SQLSTATE: KD006

No commits found at <logPath>.

DELTA_NO_RECREATABLE_HISTORY_FOUND​

SQLSTATE: KD006

No recreatable commits found at <logPath>.

DELTA_NO_REDIRECT_RULES_VIOLATED​

SQLSTATE: 42P01

Operation not allowed: <operation> cannot be performed on a table with the redirect feature.

The no-redirect rules are not satisfied: <noRedirectRules>.

DELTA_NO_RELATION_TABLE​

SQLSTATE: 42P01

Table <tableIdent> not found.

DELTA_NO_START_FOR_CDC_READ​

SQLSTATE: 42601

No startingVersion or startingTimestamp provided for CDC read.

DELTA_NULL_SCHEMA_IN_STREAMING_WRITE​

SQLSTATE: 42P18

Delta doesn't accept NullTypes in the schema for streaming writes.

DELTA_ONEOF_IN_TIMETRAVEL​

SQLSTATE: 42601

Please either provide 'timestampAsOf' or 'versionAsOf' for time travel.
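
For example, the SQL equivalents on a hypothetical table each provide exactly one of the two:

SELECT * FROM my_table VERSION AS OF 5;
SELECT * FROM my_table TIMESTAMP AS OF '2024-01-01';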

DELTA_ONLY_OPERATION​

SQLSTATE: 0AKDD

<operation> is only supported for Delta tables.

DELTA_OPERATION_MISSING_PATH​

SQLSTATE: 42601

Please provide the path or table identifier for <operation>.

DELTA_OPERATION_NOT_ALLOWED​

SQLSTATE: 0AKDC

Operation not allowed: <operation> is not supported for Delta tables.

DELTA_OPERATION_NOT_ALLOWED_DETAIL​

SQLSTATE: 0AKDC

Operation not allowed: <operation> is not supported for Delta tables: <tableName>.

DELTA_OPERATION_NOT_SUPPORTED_FOR_COLUMN_WITH_COLLATION​

SQLSTATE: 0AKDC

<operation> is not supported for column <colName> with non-default collation <collation>.

DELTA_OPERATION_NOT_SUPPORTED_FOR_DATATYPES​

SQLSTATE: 0AKDC

<operation> is not supported for data types: <dataTypeList>.

DELTA_OPERATION_NOT_SUPPORTED_FOR_EXPRESSION_WITH_COLLATION​

SQLSTATE: 0AKDC

<operation> is not supported for expression <exprText> because it uses non-default collation.

DELTA_OPERATION_ON_TEMP_VIEW_WITH_GENERATED_COLS_NOT_SUPPORTED​

SQLSTATE: 0A000

<operation> command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation> command on the Delta table directly.

DELTA_OPERATION_ON_VIEW_NOT_ALLOWED​

SQLSTATE: 0AKDC

Operation not allowed: <operation> cannot be performed on a view.

DELTA_OPTIMIZE_FULL_NOT_SUPPORTED​

SQLSTATE: 42601

OPTIMIZE FULL is only supported for clustered tables with non-empty clustering columns.

DELTA_OVERWRITE_MUST_BE_TRUE​

SQLSTATE: 42000

Copy option overwriteSchema cannot be specified without setting OVERWRITE = 'true'.

DELTA_OVERWRITE_SCHEMA_WITH_DYNAMIC_PARTITION_OVERWRITE​

SQLSTATE: 42613

'overwriteSchema' cannot be used in dynamic partition overwrite mode.

DELTA_PARTITION_COLUMN_CAST_FAILED​

SQLSTATE: 22525

Failed to cast value <value> to <dataType> for partition column <columnName>.

DELTA_PARTITION_COLUMN_NOT_FOUND​

SQLSTATE: 42703

Partition column <columnName> not found in schema [<schemaMap>].

DELTA_PARTITION_SCHEMA_IN_ICEBERG_TABLES​

SQLSTATE: 42613

Partition schema cannot be specified when converting Apache Iceberg tables. It is automatically inferred.

DELTA_PATH_BASED_ACCESS_TO_TABLE_BLOCKED​

SQLSTATE: 42P01

The table at <path> has been migrated to a Unity Catalog managed table and can no longer be accessed by path. Update the client to access the table by name.

DELTA_PATH_DOES_NOT_EXIST​

SQLSTATE: 42K03

<path> doesn't exist, or is not a Delta table.

DELTA_PATH_EXISTS​

SQLSTATE: 42K04

Cannot write to already existent path <path> without setting OVERWRITE = 'true'.

DELTA_POST_COMMIT_HOOK_FAILED​

SQLSTATE: 2DKD0

Committing to the Delta table version <version> succeeded, but an error occurred while executing the post-commit hook <name>: <message>

DELTA_PROTOCOL_CHANGED​

SQLSTATE: 2D521

ProtocolChangedException: The protocol version of the Delta table has been changed by a concurrent update. <additionalInfo> <conflictingCommit>

Refer to <docLink> for more details.

DELTA_PROTOCOL_PROPERTY_NOT_INT​

SQLSTATE: 42K06

Protocol property <key> needs to be an integer. Found <value>.

DELTA_READ_FEATURE_PROTOCOL_REQUIRES_WRITE​

SQLSTATE: KD004

Unable to upgrade only the reader protocol version to use table features. Writer protocol version must be at least <writerVersion> to proceed. Refer to <docLink> for more information on table protocol versions.

DELTA_READ_TABLE_WITHOUT_COLUMNS​

SQLSTATE: 428GU

You are trying to read a Delta table <tableName> that does not have any columns.

Write some new data with the option mergeSchema = true to be able to read the table.

DELTA_REDIRECT_TARGET_ROW_FILTER_COLUMN_MASK_UNSUPPORTED​

SQLSTATE: 42000

Redirect to a table with row filter or column mask is not supported. Update your code to reference the target table <tableIdent> directly.

DELTA_REGEX_OPT_SYNTAX_ERROR​

SQLSTATE: 2201B

Please recheck your syntax for '<regExpOption>'.

DELTA_RELATION_PATH_MISMATCH​

SQLSTATE: 2201B

Relation path '<relation>' mismatches with <targetType>'s path '<targetPath>'.

DELTA_REPLACE_WHERE_IN_OVERWRITE​

SQLSTATE: 42613

You can't use replaceWhere in conjunction with an overwrite by filter.

DELTA_REPLACE_WHERE_MISMATCH​

SQLSTATE: 44000

Written data does not conform to partial table overwrite condition or constraint '<replaceWhere>'.

<message>

DELTA_REPLACE_WHERE_WITH_DYNAMIC_PARTITION_OVERWRITE​

SQLSTATE: 42613

A 'replaceWhere' expression and 'partitionOverwriteMode'='dynamic' cannot both be set in the DataFrameWriter options.

DELTA_REPLACE_WHERE_WITH_FILTER_DATA_CHANGE_UNSET​

SQLSTATE: 42613

'replaceWhere' cannot be used with data filters when 'dataChange' is set to false. Filters: <dataFilters>.

DELTA_ROW_ID_ASSIGNMENT_WITHOUT_STATS​

SQLSTATE: 22000

Cannot assign row IDs without row count statistics.

Collect statistics for the table by running the ANALYZE TABLE command:

ANALYZE TABLE tableName COMPUTE DELTA STATISTICS

DELTA_SCHEMA_CHANGED​

SQLSTATE: KD007

Detected schema change:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without

making progress, you have made an incompatible schema change and need to start your

query from scratch using a new checkpoint directory.

DELTA_SCHEMA_CHANGED_WITH_STARTING_OPTIONS​

SQLSTATE: KD007

Detected schema change in version <version>:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without

making progress, you have made an incompatible schema change and need to start your

query from scratch using a new checkpoint directory. If the issue persists after

changing to a new checkpoint directory, you may need to change the existing

'startingVersion' or 'startingTimestamp' option to start from a version newer than

<version> with a new checkpoint directory.

DELTA_SCHEMA_CHANGED_WITH_VERSION​

SQLSTATE: KD007

Detected schema change in version <version>:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without

making progress, you have made an incompatible schema change and need to start your

query from scratch using a new checkpoint directory.

DELTA_SCHEMA_CHANGE_SINCE_ANALYSIS​

SQLSTATE: KD007

The schema of your Delta table has changed in an incompatible way since your DataFrame

or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.

Changes:

<schemaDiff> <legacyFlagMessage>

DELTA_SCHEMA_NOT_PROVIDED​

SQLSTATE: 42908

Table schema is not provided. Please provide the schema (column definitions) of the table when using REPLACE TABLE if an AS SELECT query is not provided.

DELTA_SCHEMA_NOT_SET​

SQLSTATE: KD008

Table schema is not set. Write data into it or use CREATE TABLE to set the schema.

DELTA_SET_LOCATION_SCHEMA_MISMATCH​

SQLSTATE: 42KD7

The schema of the new Delta location is different than the current table schema.

original schema:

<original>

destination schema:

<destination>

If this is an intended change, you may turn this check off by running:

%%sql set <config> = true

DELTA_SHALLOW_CLONE_FILE_NOT_FOUND​

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE statement. This table appears to be a shallow clone; if that is the case, this error can occur when the original table from which this table was cloned has deleted a file that the clone is still using. If you want any clones to be independent of the original table, use a DEEP clone instead.

DELTA_SHARING_CANNOT_MODIFY_RESERVED_RECIPIENT_PROPERTY​

SQLSTATE: 42939

Pre-defined properties that start with <prefix> cannot be modified.

DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED​

SQLSTATE: 42704

The data is restricted by recipient property <property> that does not apply to the current recipient in the session.

For more details see DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED

DELTA_SHARING_INVALID_OP_IN_EXTERNAL_SHARED_VIEW

SQLSTATE: 42887

<operation> cannot be used in Delta Sharing views that are shared cross-account.

DELTA_SHARING_INVALID_PROVIDER_AUTH​

SQLSTATE: 28000

Illegal authentication type <authenticationType> for provider <provider>.

DELTA_SHARING_INVALID_RECIPIENT_AUTH​

SQLSTATE: 28000

Illegal authentication type <authenticationType> for recipient <recipient>.

DELTA_SHARING_INVALID_SHARED_DATA_OBJECT_NAME

SQLSTATE: 42K05

Invalid name to reference a <type> inside a Share. You can either use the <type>'s name inside the share following the format of [schema].[<type>], or you can use the table's original full name following the format of [catalog].[schema].[<type>].

If you are unsure about what name to use, you can run "SHOW ALL IN SHARE [share]" and find the name of the <type> to remove: column "name" is the <type>'s name inside the share and column "shared_object" is the <type>'s original full name.

DELTA_SHARING_MAXIMUM_RECIPIENT_TOKENS_EXCEEDED​

SQLSTATE: 54000

There are more than two tokens for recipient <recipient>.

DELTA_SHARING_RECIPIENT_PROPERTY_NOT_FOUND​

SQLSTATE: 42704

Recipient property <property> does not exist.

DELTA_SHARING_RECIPIENT_TOKENS_NOT_FOUND​

SQLSTATE: 42704

Recipient tokens are missing for recipient <recipient>.

DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_COLUMN​

SQLSTATE: 42P10

Non-partitioning column(s) <badCols> are specified for SHOW PARTITIONS.

DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_TABLE​

SQLSTATE: 42809

SHOW PARTITIONS is not allowed on a table that is not partitioned: <tableName>.

DELTA_SOURCE_IGNORE_DELETE​

SQLSTATE: 0A000

Detected deleted data (for example <removedFile>) from streaming source at version <version>. This is currently not supported. If you'd like to ignore deletes, set the option 'ignoreDeletes' to 'true'. The source table can be found at path <dataPath>.

DELTA_SOURCE_TABLE_IGNORE_CHANGES​

SQLSTATE: 0A000

Detected a data update (for example <file>) in the source table at version <version>. This is currently not supported. If this is going to happen regularly and you are okay with skipping changes, set the option 'skipChangeCommits' to 'true'. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using Lakeflow Declarative Pipelines. If you need to handle these changes, please switch to materialized views (MVs). The source table can be found at path <dataPath>.

DELTA_STARTING_VERSION_AND_TIMESTAMP_BOTH_SET​

SQLSTATE: 42613

Please either provide '<version>' or '<timestamp>'.

DELTA_STATS_COLLECTION_COLUMN_NOT_FOUND​

SQLSTATE: 42000

<statsType> stats not found for column in Parquet metadata: <columnPath>.

DELTA_STREAMING_CANNOT_CONTINUE_PROCESSING_POST_SCHEMA_EVOLUTION​

SQLSTATE: KD002

We've detected one or more non-additive schema change(s) (<opType>) between Delta version <previousSchemaChangeVersion> and <currentSchemaChangeVersion> in the Delta streaming source.

Changes:

<columnChangeDetails>

Please check if you want to manually propagate the schema change(s) to the sink table before we proceed with stream processing using the finalized schema at version <currentSchemaChangeVersion>.

Once you have fixed the schema of the sink table or have decided there is no need to fix, you can set the following configuration(s) to unblock the non-additive schema change(s) and continue stream processing.

Using dataframe reader option(s):

To unblock for this particular stream just for this series of schema change(s):

<unblockChangeOptions>

To unblock for this particular stream:

<unblockStreamOptions>

Using SQL configuration(s):

To unblock for this particular stream just for this series of schema change(s):

<unblockChangeConfs>

To unblock for this particular stream:

<unblockStreamConfs>

To unblock for all streams:

<unblockAllConfs>

DELTA_STREAMING_CHECK_COLUMN_MAPPING_NO_SNAPSHOT​

SQLSTATE: KD002

Failed to obtain Delta log snapshot for the start version when checking column mapping schema changes. Please choose a different start version, or force enable streaming read at your own risk by setting '<config>' to 'true'.

DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE​

SQLSTATE: 42KD4

Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).

For further information and possible next steps to resolve this issue, please review the documentation at <docLink>

Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.

DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_SCHEMA_LOG​

SQLSTATE: 42KD4

Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).

Please provide a 'schemaTrackingLocation' to enable non-additive schema evolution for Delta stream processing.

See <docLink> for more details.

Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.

DELTA_STREAMING_METADATA_EVOLUTION​

SQLSTATE: 22000

The schema, table configuration or protocol of your Delta table has changed during streaming.

The schema or metadata tracking log has been updated.

Please restart the stream to continue processing using the updated metadata.

Updated schema: <schema>.

Updated table configurations: <config>.

Updated table protocol: <protocol>

DELTA_STREAMING_SCHEMA_EVOLUTION_UNSUPPORTED_ROW_FILTER_COLUMN_MASKS​

SQLSTATE: 22000

Streaming from source table <tableId> with schema tracking does not support row filters or column masks.

Please drop the row filters or column masks, or disable schema tracking.

DELTA_STREAMING_SCHEMA_LOCATION_CONFLICT​

SQLSTATE: 22000

Detected conflicting schema location '<loc>' while streaming from the table located at '<table>'.

Another stream may be reusing the same schema location, which is not allowed.

Please provide a new unique schemaTrackingLocation path or streamingSourceTrackingId as a reader option for one of the streams from this table.

DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT​

SQLSTATE: 22000

Schema location '<schemaTrackingLocation>' must be placed under checkpoint location '<checkpointLocation>'.

DELTA_STREAMING_SCHEMA_LOG_DESERIALIZE_FAILED​

SQLSTATE: 22000

Incomplete log file in the Delta streaming source schema log at '<location>'.

The schema log may have been corrupted. Please pick a new schema location.

DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_DELTA_TABLE_ID​

SQLSTATE: 22000

Detected incompatible Delta table id when trying to read Delta stream.

Persisted table id: <persistedId>, Table id: <tableId>

The schema log might have been reused. Please pick a new schema location.

DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_PARTITION_SCHEMA​

SQLSTATE: 22000

Detected incompatible partition schema when trying to read Delta stream.

Persisted schema: <persistedSchema>, Delta partition schema: <partitionSchema>

Please pick a new schema location to reinitialize the schema log if you have manually changed the table's partition schema recently.

DELTA_STREAMING_SCHEMA_LOG_INIT_FAILED_INCOMPATIBLE_METADATA​

SQLSTATE: 22000

We could not initialize the Delta streaming source schema log because

we detected an incompatible schema or protocol change while serving a streaming batch from table version <a> to <b>.

DELTA_STREAMING_SCHEMA_LOG_PARSE_SCHEMA_FAILED​

SQLSTATE: 22000

Failed to parse the schema from the Delta streaming source schema log.

The schema log may have been corrupted. Please pick a new schema location.

DELTA_TABLE_ALREADY_CONTAINS_CDC_COLUMNS​

SQLSTATE: 42711

Unable to enable Change Data Capture on the table. The table already contains

reserved columns <columnList> that will

be used internally as metadata for the table's Change Data Feed. To enable

Change Data Feed on the table, rename or drop these columns.

DELTA_TABLE_ALREADY_EXISTS​

SQLSTATE: 42P07

Table <tableName> already exists.

DELTA_TABLE_FOR_PATH_UNSUPPORTED_HADOOP_CONF​

SQLSTATE: 0AKDC

Currently DeltaTable.forPath only supports Hadoop configuration keys starting with <allowedPrefixes>, but got <unsupportedOptions>.

DELTA_TABLE_ID_MISMATCH​

SQLSTATE: KD007

The Delta table at <tableLocation> has been replaced while this command was using the table.

Table id was <oldId> but is now <newId>.

Please retry the current command to ensure it reads a consistent view of the table.

DELTA_TABLE_INVALID_REDIRECT_STATE_TRANSITION​

SQLSTATE: 22023

Unable to update table redirection state: Invalid state transition attempted.

The Delta table '<table>' cannot change from '<oldState>' to '<newState>'.

DELTA_TABLE_INVALID_SET_UNSET_REDIRECT​

SQLSTATE: 22023

Unable to SET or UNSET redirect property on <table>: current property '<currentProperty>' mismatches with new property '<newProperty>'.

DELTA_TABLE_LOCATION_MISMATCH​

SQLSTATE: 42613

The location of the existing table <tableName> is <existingTableLocation>. It doesn't match the specified location <tableLocation>.

DELTA_TABLE_NOT_FOUND​

SQLSTATE: 42P01

Delta table <tableName> doesn't exist.

DELTA_TABLE_NOT_SUPPORTED_IN_OP​

SQLSTATE: 42809

Table is not supported in <operation>. Please use a path instead.

DELTA_TABLE_ONLY_OPERATION​

SQLSTATE: 0AKDD

<tableName> is not a Delta table. <operation> is only supported for Delta tables.

DELTA_TABLE_UNRECOGNIZED_REDIRECT_SPEC​

SQLSTATE: 42704

The Delta log contains unrecognized table redirect spec '<spec>'.

DELTA_TARGET_TABLE_FINAL_SCHEMA_EMPTY​

SQLSTATE: 428GU

Target table final schema is empty.

DELTA_TIMESTAMP_GREATER_THAN_COMMIT​

SQLSTATE: 42816

The provided timestamp (<providedTimestamp>) is after the latest version available to this

table (<tableName>). Please use a timestamp before or at <maximumTimestamp>.
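
Rewinding the time-travel timestamp to one at or before <maximumTimestamp> resolves this. A sketch, assuming a Spark session spark, a hypothetical table my_table, and an illustrative timestamp:

# Query the table as of a timestamp no later than the latest commit.
df = spark.sql("SELECT * FROM my_table TIMESTAMP AS OF '2024-01-01 00:00:00'")
# Equivalent DataFrame reader form for a path-based table (illustrative path):
df = spark.read.format("delta").option("timestampAsOf", "2024-01-01 00:00:00").load("/mnt/delta/my_table")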

DELTA_TIMESTAMP_INVALID​

SQLSTATE: 42816

The provided timestamp (<expr>) cannot be converted to a valid timestamp.

DELTA_TIME_TRAVEL_INVALID_BEGIN_VALUE​

SQLSTATE: 42604

<timeTravelKey> needs to be a valid begin value.

DELTA_TOO_MUCH_LISTING_MEMORY​

SQLSTATE: 53000

Failed to list files (<numFiles>) in the Delta table due to insufficient memory. Required memory: <estimatedMemory>, available memory: <maxMemory>.

DELTA_TRUNCATED_TRANSACTION_LOG​

SQLSTATE: 42K03

<path>: Unable to reconstruct state at version <version> as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>=<logRetention>) and checkpoint retention policy (<checkpointRetentionKey>=<checkpointRetention>).

DELTA_TRUNCATE_TABLE_PARTITION_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Operation not allowed: TRUNCATE TABLE on Delta tables does not support partition predicates; use DELETE to delete specific partitions or rows.

DELTA_UDF_IN_GENERATED_COLUMN​

SQLSTATE: 42621

Found <udfExpr>. A generated column cannot use a user-defined function.

DELTA_UNEXPECTED_ACTION_EXPRESSION​

SQLSTATE: 42601

Unexpected action expression <expression>.

DELTA_UNEXPECTED_NUM_PARTITION_COLUMNS_FROM_FILE_NAME​

SQLSTATE: KD009

Expecting <expectedColsSize> partition column(s): <expectedCols>, but found <parsedColsSize> partition column(s): <parsedCols> from parsing the file name: <path>.

DELTA_UNEXPECTED_PARTIAL_SCAN​

SQLSTATE: KD00A

Expect a full scan of Delta sources, but found a partial scan. Path: <path>.

DELTA_UNEXPECTED_PARTITION_COLUMN_FROM_FILE_NAME​

SQLSTATE: KD009

Expecting partition column <expectedCol>, but found partition column <parsedCol> from parsing the file name: <path>.

DELTA_UNEXPECTED_PARTITION_SCHEMA_FROM_USER​

SQLSTATE: KD009

CONVERT TO DELTA was called with a partition schema different from the partition schema inferred from the catalog. Please avoid providing the schema so that the partition schema can be chosen from the catalog.

catalog partition schema:

<catalogPartitionSchema>

provided partition schema:

<userPartitionSchema>

DELTA_UNIFORM_COMPATIBILITY_LOCATION_CANNOT_BE_CHANGED​

SQLSTATE: 0AKDC

delta.universalFormat.compatibility.location cannot be changed.

DELTA_UNIFORM_COMPATIBILITY_LOCATION_NOT_REGISTERED​

SQLSTATE: 42K0I

delta.universalFormat.compatibility.location is not registered in the catalog.

DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION​

SQLSTATE: 42601

Missing or invalid location for Uniform compatibility format. Please set an empty directory for delta.universalFormat.compatibility.location.

Failed reason:

For more details see DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION

DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION​

SQLSTATE: KD00E

Reading Apache Iceberg with Delta Uniform has failed.

For more details see DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION

DELTA_UNIFORM_INGRESS_AMBIGUOUS_FORMAT​

SQLSTATE: KD00E

Multiple Delta Uniform ingress formats (<formats>) are found, at most one can be set.

DELTA_UNIFORM_INGRESS_NOT_SUPPORTED​

SQLSTATE: 0A000

Creating or refreshing a Uniform ingress table is not supported.

DELTA_UNIFORM_INGRESS_NOT_SUPPORTED_FORMAT​

SQLSTATE: 0AKDC

Format <fileFormat> is not supported. Only iceberg and hudi are supported as the original file format.

DELTA_UNIFORM_INGRESS_VIOLATION​

SQLSTATE: KD00E

Reading Delta Uniform failed:

For more details see DELTA_UNIFORM_INGRESS_VIOLATION

DELTA_UNIFORM_NOT_SUPPORTED​

SQLSTATE: 0AKDC

Universal Format is only supported on Unity Catalog tables.

DELTA_UNIFORM_REFRESH_INVALID_ARGUMENT​

SQLSTATE: 42616

REFRESH TABLE with invalid argument:

For more details see DELTA_UNIFORM_REFRESH_INVALID_ARGUMENT

DELTA_UNIFORM_REFRESH_NOT_SUPPORTED​

SQLSTATE: 0AKDC

REFRESH identifier SYNC UNIFORM is not supported for reason:

For more details see DELTA_UNIFORM_REFRESH_NOT_SUPPORTED

DELTA_UNIFORM_REFRESH_NOT_SUPPORTED_FOR_MANAGED_ICEBERG_TABLE_WITH_METADATA_PATH​

SQLSTATE: 0AKDC

REFRESH TABLE with METADATA_PATH is not supported for managed Apache Iceberg tables.

DELTA_UNIVERSAL_FORMAT_CONVERSION_FAILED​

SQLSTATE: KD00E

Failed to convert the table version <version> to the universal format <format>. <message>

DELTA_UNIVERSAL_FORMAT_VIOLATION​

SQLSTATE: KD00E

The validation of Universal Format (<format>) has failed: <violation>.

DELTA_UNKNOWN_CONFIGURATION​

SQLSTATE: F0000

Unknown configuration was specified: <config>.

To disable this check, set <disableCheckConfig>=true in the Spark session configuration.

DELTA_UNKNOWN_PRIVILEGE​

SQLSTATE: 42601

Unknown privilege: <privilege>.

DELTA_UNKNOWN_READ_LIMIT​

SQLSTATE: 42601

Unknown ReadLimit: <limit>.

DELTA_UNRECOGNIZED_COLUMN_CHANGE​

SQLSTATE: 42601

Unrecognized column change <otherClass>. You may be running an out-of-date Delta Lake version.

DELTA_UNRECOGNIZED_INVARIANT​

SQLSTATE: 56038

Unrecognized invariant. Please upgrade your Spark version.

DELTA_UNRECOGNIZED_LOGFILE​

SQLSTATE: KD00B

Unrecognized log file <filename>.

DELTA_UNSET_NON_EXISTENT_PROPERTY​

SQLSTATE: 42616

Attempted to unset non-existent property '<property>' in table <tableName>.

DELTA_UNSUPPORTED_ABS_PATH_ADD_FILE​

SQLSTATE: 0AKDC

<path> does not support adding files with an absolute path.

DELTA_UNSUPPORTED_ALTER_TABLE_CHANGE_COL_OP​

SQLSTATE: 0AKDC

ALTER TABLE CHANGE COLUMN is not supported for changing column <fieldPath> from <oldField> to <newField>.

DELTA_UNSUPPORTED_ALTER_TABLE_REPLACE_COL_OP​

SQLSTATE: 0AKDC

Unsupported ALTER TABLE REPLACE COLUMNS operation. Reason: <details>

Failed to change schema from:

<oldSchema>

to:

<newSchema>

DELTA_UNSUPPORTED_CLONE_REPLACE_SAME_TABLE​

SQLSTATE: 0AKDC

You tried to REPLACE an existing table (<tableName>) with CLONE. This operation is

unsupported. Try a different target for CLONE or delete the table at the current target.

DELTA_UNSUPPORTED_COLUMN_MAPPING_MODE_CHANGE​

SQLSTATE: 0AKDC

Changing column mapping mode from '<oldMode>' to '<newMode>' is not supported.

DELTA_UNSUPPORTED_COLUMN_MAPPING_OPERATIONS_ON_COLUMNS_WITH_BLOOM_FILTER_INDEX​

SQLSTATE: 0AKDC

Failed to perform Column Mapping operation <opName> on column(s) <quotedColumnNames>

because these column(s) have Bloom Filter Index(es).

If you want to perform Column Mapping operation on column(s)

with Bloom Filter Index(es),

please remove the Bloom Filter Index(es) first:

DROP BLOOMFILTER INDEX ON TABLE tableName FOR COLUMNS(<columnNames>)

If you want instead to remove all Bloom Filter Indexes on the table, use:

DROP BLOOMFILTER INDEX ON TABLE tableName

DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL​

SQLSTATE: KD004

Your current table protocol version does not support changing column mapping modes

using <config>.

Required Delta protocol version for column mapping:

<requiredVersion>

Your table's current Delta protocol version:

<currentVersion>

<advice>

DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE​

SQLSTATE: 0AKDC

Schema change is detected:

old schema:

<oldTableSchema>

new schema:

<newTableSchema>

Schema changes are not allowed during the change of column mapping mode.

DELTA_UNSUPPORTED_COLUMN_MAPPING_WRITE​

SQLSTATE: 0AKDC

Writing data with column mapping mode is not supported.

DELTA_UNSUPPORTED_COLUMN_TYPE_IN_BLOOM_FILTER​

SQLSTATE: 0AKDC

Creating a bloom filter index on a column with type <dataType> is unsupported: <columnName>.

DELTA_UNSUPPORTED_COMMENT_MAP_ARRAY

SQLSTATE: 0AKDC

Can't add a comment to <fieldPath>. Adding a comment to a map key/value or array element is not supported.

DELTA_UNSUPPORTED_DATA_TYPES​

SQLSTATE: 0AKDC

Found columns using unsupported data types: <dataTypeList>. You can set '<config>' to 'false' to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.

DELTA_UNSUPPORTED_DATA_TYPE_IN_GENERATED_COLUMN​

SQLSTATE: 42621

<dataType> cannot be the result of a generated column.

DELTA_UNSUPPORTED_DEEP_CLONE​

SQLSTATE: 0A000

Deep clone is not supported by this Delta version.

DELTA_UNSUPPORTED_DESCRIBE_DETAIL_VIEW​

SQLSTATE: 42809

<view> is a view. DESCRIBE DETAIL is only supported for tables.

DELTA_UNSUPPORTED_DROP_CLUSTERING_COLUMN​

SQLSTATE: 0AKDC

Dropping clustering columns (<columnList>) is not allowed.

DELTA_UNSUPPORTED_DROP_COLUMN​

SQLSTATE: 0AKDC

DROP COLUMN is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_DROP_NESTED_COLUMN_FROM_NON_STRUCT_TYPE​

SQLSTATE: 0AKDC

Can only drop nested columns from StructType. Found <struct>.

DELTA_UNSUPPORTED_DROP_PARTITION_COLUMN​

SQLSTATE: 0AKDC

Dropping partition columns (<columnList>) is not allowed.

DELTA_UNSUPPORTED_EXPRESSION​

SQLSTATE: 0A000

Unsupported expression type (<expType>) for <causedBy>. The supported types are [<supportedTypes>].

DELTA_UNSUPPORTED_EXPRESSION_GENERATED_COLUMN​

SQLSTATE: 42621

<expression> cannot be used in a generated column.

DELTA_UNSUPPORTED_FEATURES_FOR_READ​

SQLSTATE: 56038

Unsupported Delta read feature: table "<tableNameOrPath>" requires reader table feature(s) that are unsupported by this version of Databricks: <unsupported>. Please refer to <link> for more information on Delta Lake feature compatibility.

DELTA_UNSUPPORTED_FEATURES_FOR_WRITE​

SQLSTATE: 56038

Unsupported Delta write feature: table "<tableNameOrPath>" requires writer table feature(s) that are unsupported by this version of Databricks: <unsupported>. Please refer to <link> for more information on Delta Lake feature compatibility.

DELTA_UNSUPPORTED_FEATURES_IN_CONFIG​

SQLSTATE: 56038

Table feature(s) configured in the following Spark configs or Delta table properties are not recognized by this version of Databricks: <configs>.

DELTA_UNSUPPORTED_FEATURE_STATUS​

SQLSTATE: 0AKDE

Expecting the status for table feature <feature> to be "supported", but got "<status>".

DELTA_UNSUPPORTED_FIELD_UPDATE_NON_STRUCT​

SQLSTATE: 0AKDC

Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>, which is of type: <dataType>.

DELTA_UNSUPPORTED_FSCK_WITH_DELETION_VECTORS​

SQLSTATE: 0A000

The 'FSCK REPAIR TABLE' command is not supported on table versions with missing deletion vector files.

Please contact support.

DELTA_UNSUPPORTED_GENERATE_WITH_DELETION_VECTORS​

SQLSTATE: 0A000

The 'GENERATE symlink_format_manifest' command is not supported on table versions with deletion vectors.

In order to produce a version of the table without deletion vectors, run 'REORG TABLE table APPLY (PURGE)'. Then re-run the 'GENERATE' command.

Make sure that no concurrent transactions are adding deletion vectors again between REORG and GENERATE.

If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using 'ALTER TABLE table SET TBLPROPERTIES (delta.enableDeletionVectors = false)'.
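
The remediation sequence described above, sketched end to end for a hypothetical table my_table (assuming a Spark session spark):

spark.sql("REORG TABLE my_table APPLY (PURGE)")  # rewrite files so deletion vectors are removed
spark.sql("GENERATE symlink_format_manifest FOR TABLE my_table")
# Optionally keep deletion vectors disabled so future writes do not reintroduce them:
spark.sql("ALTER TABLE my_table SET TBLPROPERTIES ('delta.enableDeletionVectors' = false)")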

DELTA_UNSUPPORTED_INVARIANT_NON_STRUCT​

SQLSTATE: 0AKDC

Invariants on nested fields other than StructTypes are not supported.

DELTA_UNSUPPORTED_IN_SUBQUERY​

SQLSTATE: 0AKDC

An IN subquery is not supported in the <operation> condition.

DELTA_UNSUPPORTED_LIST_KEYS_WITH_PREFIX​

SQLSTATE: 0A000

listKeywithPrefix not available.

DELTA_UNSUPPORTED_MANIFEST_GENERATION_WITH_COLUMN_MAPPING​

SQLSTATE: 0AKDC

Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.

DELTA_UNSUPPORTED_MERGE_SCHEMA_EVOLUTION_WITH_CDC​

SQLSTATE: 0A000

MERGE INTO operations with schema evolution do not currently support writing CDC output.

DELTA_UNSUPPORTED_MULTI_COL_IN_PREDICATE​

SQLSTATE: 0AKDC

Multi-column IN predicates are not supported in the <operation> condition.

DELTA_UNSUPPORTED_NESTED_COLUMN_IN_BLOOM_FILTER​

SQLSTATE: 0AKDC

Creating a bloom filter index on a nested column is currently unsupported: <columnName>.

DELTA_UNSUPPORTED_NESTED_FIELD_IN_OPERATION​

SQLSTATE: 0AKDC

Nested field is not supported in the <operation> (field = <fieldName>).

DELTA_UNSUPPORTED_NON_EMPTY_CLONE​

SQLSTATE: 0AKDC

The clone destination table is non-empty. Please TRUNCATE or DELETE FROM the table before running CLONE.

DELTA_UNSUPPORTED_OUTPUT_MODE​

SQLSTATE: 0AKDC

Data source <dataSource> does not support <mode> output mode.

DELTA_UNSUPPORTED_PARTITION_COLUMN_IN_BLOOM_FILTER​

SQLSTATE: 0AKDC

Creating a bloom filter index on a partitioning column is unsupported: <columnName>.

DELTA_UNSUPPORTED_RENAME_COLUMN​

SQLSTATE: 0AKDC

Column rename is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_SCHEMA_DURING_READ​

SQLSTATE: 0AKDC

Delta does not support specifying the schema at read time.

DELTA_UNSUPPORTED_SORT_ON_BUCKETED_TABLES​

SQLSTATE: 0A000

SORTED BY is not supported for Delta bucketed tables.

DELTA_UNSUPPORTED_SOURCE​

SQLSTATE: 0AKDD

<operation> destination only supports Delta sources.

<plan>

DELTA_UNSUPPORTED_STATIC_PARTITIONS​

SQLSTATE: 0AKDD

Specifying static partitions in the partition spec is currently not supported during inserts.

DELTA_UNSUPPORTED_STRATEGY_NAME​

SQLSTATE: 22023

Unsupported strategy name: <strategy>.

DELTA_UNSUPPORTED_SUBQUERY​

SQLSTATE: 0AKDC

Subqueries are not supported in the <operation> (condition = <cond>).

DELTA_UNSUPPORTED_SUBQUERY_IN_PARTITION_PREDICATES​

SQLSTATE: 0AKDC

Subquery is not supported in partition predicates.

DELTA_UNSUPPORTED_TIME_TRAVEL_MULTIPLE_FORMATS​

SQLSTATE: 42613

Cannot specify time travel in multiple formats.

DELTA_UNSUPPORTED_TIME_TRAVEL_VIEWS​

SQLSTATE: 0AKDC

Cannot time travel views, subqueries, streams or change data feed queries.

DELTA_UNSUPPORTED_TRUNCATE_SAMPLE_TABLES​

SQLSTATE: 0A000

Truncating sample tables is not supported.

DELTA_UNSUPPORTED_TYPE_CHANGE_IN_SCHEMA​

SQLSTATE: 0AKDC

Unable to operate on this table because an unsupported type change was applied. Field <fieldName> was changed from <fromType> to <toType>.

DELTA_UNSUPPORTED_TYPE_CHANGE_ON_COLUMNS_WITH_BLOOM_FILTER_INDEX​

SQLSTATE: 0AKDC

Failed to change data type of column(s) <quotedColumnNames>

because these columns have Bloom Filter Index(es).

If you want to change the data type of column(s) with Bloom Filter Index(es),

please remove the Bloom Filter Index(es) first:

DROP BLOOMFILTER INDEX ON TABLE tableName FOR COLUMNS(<columnNames>)

If you want instead to remove all Bloom Filter Indexes on the table, use:

DROP BLOOMFILTER INDEX ON TABLE tableName

DELTA_UNSUPPORTED_VACUUM_SPECIFIC_PARTITION​

SQLSTATE: 0AKDC

Please provide the base path (<baseDeltaPath>) when Vacuuming Delta tables. Vacuuming specific partitions is currently not supported.
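
VACUUM must target the table root rather than a partition directory. A sketch with an illustrative base path, assuming a Spark session spark:

# Correct: vacuum the whole table at its base path.
spark.sql("VACUUM delta.`/mnt/data/events`")
# Not allowed: VACUUM delta.`/mnt/data/events/date=2024-01-01`  (a partition subdirectory)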

DELTA_UNSUPPORTED_WRITES_STAGED_TABLE​

SQLSTATE: 42807

Table implementation does not support writes: <tableName>.

DELTA_UNSUPPORTED_WRITES_WITHOUT_COORDINATOR​

SQLSTATE: 0AKDC

You are trying to perform writes on a table which has been registered with the commit coordinator <coordinatorName>. However, no implementation of this coordinator is available in the current environment and writes without coordinators are not allowed.

DELTA_UNSUPPORTED_WRITE_SAMPLE_TABLES​

SQLSTATE: 0A000

Writing to sample tables is not supported.

DELTA_UPDATE_SCHEMA_MISMATCH_EXPRESSION​

SQLSTATE: 42846

Cannot cast <fromCatalog> to <toCatalog>. All nested columns must match.

DELTA_V2_CHECKPOINTS_REQUIRED_FOR_OPERATION​

SQLSTATE: 55019

The CHECKPOINT operation requires V2 Checkpoints to be enabled on the table.
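
A sketch for enabling V2 checkpoints on a hypothetical table, assuming a Spark session spark and a table whose protocol supports the v2Checkpoint feature (the property name is the documented checkpoint policy key):

spark.sql("ALTER TABLE my_table SET TBLPROPERTIES ('delta.checkpointPolicy' = 'v2')")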

DELTA_VACUUM_COPY_INTO_STATE_FAILED​

SQLSTATE: 22000

VACUUM on data files succeeded, but COPY INTO state garbage collection failed.

DELTA_VERSIONS_NOT_CONTIGUOUS​

SQLSTATE: KD00C

Versions (<versionList>) are not contiguous.

A gap in the delta log between versions <startVersion> and <endVersion> was detected while trying to load version <versionToLoad>.

For more details see DELTA_VERSIONS_NOT_CONTIGUOUS

DELTA_VERSION_INVALID​

SQLSTATE: 42815

The provided version (<version>) is not a valid version.

DELTA_VIOLATE_CONSTRAINT_WITH_VALUES​

SQLSTATE: 23001

CHECK constraint <constraintName> <expression> violated by row with values:

<values>.

DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED​

SQLSTATE: 0A000

The validation of the properties of table <table> has been violated:

For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED

DELTA_WRITE_INTO_VIEW_NOT_SUPPORTED​

SQLSTATE: 0A000

<viewIdentifier> is a view. You may not write data into a view.

DELTA_ZORDERING_COLUMN_DOES_NOT_EXIST​

SQLSTATE: 42703

Z-Ordering column <columnName> does not exist in data schema.

DELTA_ZORDERING_ON_COLUMN_WITHOUT_STATS​

SQLSTATE: KD00D

Z-Ordering on <cols> will be ineffective, because we currently do not collect stats for these columns. Please refer to <link> for more information on data skipping and z-ordering. You can disable this check by setting

SET <zorderColStatKey> = false
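
By default, Delta collects statistics only on the first 32 columns of a table. A sketch, assuming a Spark session spark, a hypothetical table my_table, and a column event_date that falls within the stats-collected columns:

# Z-Order on a column that has statistics collected.
spark.sql("OPTIMIZE my_table ZORDER BY (event_date)")
# If needed, widen stats collection before rewriting data (value illustrative):
spark.sql("ALTER TABLE my_table SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '40')")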

DELTA_ZORDERING_ON_PARTITION_COLUMN​

SQLSTATE: 42P10

<colName> is a partition column. Z-Ordering can only be performed on data columns.

Delta Sharing

DELTA_SHARING_ACTIVATION_NONCE_DOES_NOT_EXIST

SQLSTATE: none assigned

Activation nonce not found. The activation link you used is invalid or has expired. Regenerate the activation link and try again.

DELTA_SHARING_CROSS_REGION_SHARE_UNSUPPORTED

SQLSTATE: none assigned

Sharing between <regionHint> regions and regions outside of it is not supported.

DELTA_SHARING_GET_RECIPIENT_PROPERTIES_INVALID_DEPENDENT​

SQLSTATE: none assigned

The view defined with the current_recipient function is for sharing only and can only be queried from the data recipient side. The provided securable with id <securableId> is not a Delta Sharing View.

DELTA_SHARING_MUTABLE_SECURABLE_KIND_NOT_SUPPORTED​

SQLSTATE: none assigned

The provided securable kind <securableKind> does not support mutability in Delta Sharing.

DELTA_SHARING_ROTATE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE​

SQLSTATE: none assigned

The provided securable kind <securableKind> does not support rotate token action initiated by Marketplace service.

DS_AUTH_TYPE_NOT_AVAILABLE​

SQLSTATE: none assigned

<dsError>: Authentication type not available in provider entity <providerEntity>.

DS_CDF_NOT_ENABLED​

SQLSTATE: none assigned

<dsError>: Unable to access change data feed for <tableName>. CDF is not enabled on the original Delta table for version <version>. Please contact your data provider.

DS_CDF_NOT_SHARED

SQLSTATE: none assigned

<dsError>: Unable to access change data feed for <tableName>. CDF is not shared on the table. Please contact your data provider.

DS_CDF_RPC_INVALID_PARAMETER​

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_AUTH_ERROR_FOR_DB_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_ERROR_FOR_DB_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_OAUTH_TOKEN_EXCHANGE_FAILURE​

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_OAUTH_TOKEN_EXCHANGE_UNAUTHORIZED​

SQLSTATE: none assigned

<dsError>: <message>

DS_CLOUD_VENDOR_UNAVAILABLE​

SQLSTATE: none assigned

<dsError>: Cloud vendor is temporarily unavailable for <rpcName>, please retry.<traceId>

DS_DATA_MATERIALIZATION_COMMAND_FAILED​

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> failed at command <command>

DS_DATA_MATERIALIZATION_COMMAND_NOT_SUPPORTED​

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> does not support command <command>

DS_DATA_MATERIALIZATION_NOT_SUPPORTED_WITHOUT_SERVERLESS​

SQLSTATE: none assigned

<dsError>: <featureName> is not supported because serverless is not supported or enabled in the provider workspace. Please contact your data provider to enable serverless.

DS_DATA_MATERIALIZATION_NO_VALID_NAMESPACE​

SQLSTATE: none assigned

<dsError>: Could not find valid namespace to create materialization for <tableName>. Please contact your data provider to fix this.

DS_DATA_MATERIALIZATION_RUN_DOES_NOT_EXIST​

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> does not exist

DS_DELTA_ILLEGAL_STATE​

SQLSTATE: none assigned

<dsError>: <message>

DS_DELTA_MISSING_CHECKPOINT_FILES​

SQLSTATE: none assigned

<dsError>: Couldn't find all part files of the checkpoint at version: <version>. <suggestion>

DS_DELTA_NULL_POINTER​

SQLSTATE: none assigned

<dsError>: <message>

DS_DELTA_RUNTIME_EXCEPTION​

SQLSTATE: none assigned

<dsError>: <message>

DS_EXPIRE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE​

SQLSTATE: none assigned

<dsError>: The provided securable kind <securableKind> does not support expire token action initiated by Marketplace service.

DS_FAILED_REQUEST_TO_OPEN_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_FAILED_REQUEST_TO_SAP_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_FEATURE_ONLY_FOR_DATABRICKS_TO_DATABRICKS​

SQLSTATE: none assigned

<dsError>: <feature> is only enabled for Databricks to Databricks Delta sharing.

DS_FILE_LISTING_EXCEPTION​

SQLSTATE: none assigned

<dsError>: <storage>: <message>

DS_FILE_SIGNING_EXCEPTION​

SQLSTATE: none assigned

<dsError>: <message>

DS_FOREIGN_TABLE_METADATA_REFRESH_FAILED​

SQLSTATE: none assigned

<dsError>: <message>

DS_HADOOP_CONFIG_NOT_SET​

SQLSTATE: none assigned

<dsError>: <key> is not set by the caller.

DS_ILLEGAL_STATE​

SQLSTATE: none assigned

<dsError>: <message>

DS_INTERNAL_ERROR_FROM_DB_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_AZURE_PATH​

SQLSTATE: none assigned

<dsError>: Invalid Azure path: <path>.

DS_INVALID_DELTA_ACTION_OPERATION​

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_FIELD​

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_ITERATOR_OPERATION​

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_PARAMETER_VALUE​

SQLSTATE: none assigned

<dsError>: Invalid parameter for <rpcName> due to <cause>.

DS_INVALID_PARTITION_SPEC​

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_RESPONSE_FROM_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_MATERIALIZATION_QUERY_FAILED​

SQLSTATE: none assigned

<dsError>: Query failed for <schema>.<table> from Share <share>.

DS_MATERIALIZATION_QUERY_TIMEDOUT​

SQLSTATE: none assigned

<dsError>: Query timed out for <schema>.<table> from Share <share> after <timeoutInSec> seconds.

DS_MIGRATED_MANAGEMENT_API_CALLED​

SQLSTATE: none assigned

<dsError>: The UC RPC <rpcName> failed.

DS_MISSING_IDEMPOTENCY_KEY​

SQLSTATE: none assigned

<dsError>: An idempotency key is required when querying <schema>.<table> from Share <share> asynchronously.

DS_MORE_THAN_ONE_RPC_PARAMETER_SET​

SQLSTATE: none assigned

<dsError>: Please only provide one of: <parameters>.

DS_NETWORK_CONNECTION_CLOSED​

SQLSTATE: none assigned

<dsError>: Network connection closed for <rpcName> due to <errorCause>, please retry.<traceId>

DS_NETWORK_CONNECTION_TIMEOUT​

SQLSTATE: none assigned

<dsError>: Network connection timeout for <rpcName> due to <errorCause>, please retry.<traceId>

DS_NETWORK_ERROR​

SQLSTATE: none assigned

<dsError>: Network error for <rpcName> due to <errorCause>, please retry.<traceId>

DS_NO_METASTORE_ASSIGNED​

SQLSTATE: none assigned

<dsError>: No metastore assigned for the current workspace (workspaceId: <workspaceId>).

DS_O2D_OIDC_WORKLOAD_IDENTITY_TOKEN_GENERATION_FAILED​

SQLSTATE: none assigned

<dsError>: Generating workload identity token for O2D OIDC provider failed: <message>.

DS_PAGINATION_AND_QUERY_ARGS_MISMATCH​

SQLSTATE: none assigned

<dsError>: Pagination or query arguments mismatch.

DS_PARTITION_COLUMNS_RENAMED​

SQLSTATE: none assigned

<dsError>: Partition column [<renamedColumns>] renamed on the shared table. Please contact your data provider to fix this.

DS_QUERY_BEFORE_START_VERSION​

SQLSTATE: none assigned

<dsError>: You can only query table data since version <startVersion>.

DS_QUERY_END_VERSION_AFTER_LATEST_VERSION​

SQLSTATE: none assigned

<dsError>: Provided end version (<endVersion>) for reading data is invalid. Ending version cannot be greater than the latest version of the table (<latestVersion>).

DS_QUERY_START_VERSION_AFTER_LATEST_VERSION​

SQLSTATE: none assigned

<dsError>: Provided start version (<startVersion>) for reading data is invalid. Starting version cannot be greater than the latest version of the table (<latestVersion>).

DS_QUERY_TIMEOUT_ON_SERVER​

SQLSTATE: none assigned

<dsError>: A timeout occurred when processing <queryType> on <tableName> after <numActions> updates across <numIter> iterations.<progressUpdate> <suggestion> <traceId>

DS_RATE_LIMIT_ON_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_RECIPIENT_RPC_INVALID_PARAMETER​

SQLSTATE: none assigned

<dsError>: <message>

DS_RECON_FAILED_ON_UC_WRITE_RPC​

SQLSTATE: none assigned

<dsError>: UC RPC <rpcName> failed, converting to INTERNAL_ERROR.

DS_RESOURCE_ALREADY_EXIST_ON_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_RESOURCE_EXHAUSTED​

SQLSTATE: none assigned

<dsError>: The <resource> exceeded limit: [<limitSize>]<suggestion>.<traceId>

DS_RESOURCE_NOT_FOUND_ON_DS_SERVER​

SQLSTATE: none assigned

<dsError>: <message>

DS_SAP_UNSUPPORTED_DBR_VERSION​

SQLSTATE: none assigned

<dsError>: Delta Sharing SAP connector is not supported on DBR <dbrVersion>. The minimum supported versions are: <supportedVersions>.

DS_SCHEMA_NAME_CONFLICT_FOUND​

SQLSTATE: none assigned

<dsError>: Catalog <catalogName> already contains schema names that are found in share <shareName>. <description> of conflicting schema names: <schemaNamesInCommon>.

DS_SERVER_TIMEOUT​

SQLSTATE: none assigned

<dsError>: Server timeout for <rpcName> due to <errorCause>, please retry.<traceId>

DS_SERVICE_DENIED​

SQLSTATE: none assigned

<dsError>: The request was denied because the service is under too much load. Please try again after a while.

DS_SHARE_ALREADY_MOUNTED

SQLSTATE: none assigned

<dsError>: Share <shareName> from provider <providerName> is already mounted to catalog <catalogName>.

DS_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED​

SQLSTATE: none assigned

Cannot grant privileges on <securableType> to system generated group <principal>.

DS_TIME_TRAVEL_NOT_PERMITTED​

SQLSTATE: none assigned

<dsError>: Time travel query is not permitted unless history is shared on <tableName>. Please contact your data provider.

DS_UNAUTHORIZED​

SQLSTATE: none assigned

<dsError>: Unauthorized.

DS_UNAUTHORIZED_D2O_OIDC_RECIPIENT​

SQLSTATE: none assigned

<dsError>: Unauthorized D2O OIDC Recipient: <message>.

DS_UNKNOWN_EXCEPTION​

SQLSTATE: none assigned

<dsError>: <traceId>

DS_UNKNOWN_QUERY_ID​

SQLSTATE: none assigned

<dsError>: Unknown query id <queryID> for <schema>.<table> from Share <share>.

DS_UNKNOWN_QUERY_STATUS​

SQLSTATE: none assigned

<dsError>: Unknown query status for query id <queryID> for <schema>.<table> from Share <share>.

DS_UNKNOWN_RPC​

SQLSTATE: none assigned

<dsError>: Unknown rpc <rpcName>.

DS_UNSUPPORTED_DELTA_READER_VERSION​

SQLSTATE: none assigned

<dsError>: Delta protocol reader version <tableReaderVersion> is higher than <supportedReaderVersion> and cannot be supported in the Delta Sharing server.

DS_UNSUPPORTED_DELTA_TABLE_FEATURES​

SQLSTATE: none assigned

<dsError>: Table features <tableFeatures> are found in table<versionStr> <historySharingStatusStr> <optionStr>

DS_UNSUPPORTED_OPERATION​

SQLSTATE: none assigned

<dsError>: <message>

DS_UNSUPPORTED_STORAGE_SCHEME​

SQLSTATE: none assigned

<dsError>: Unsupported storage scheme: <scheme>.

DS_UNSUPPORTED_TABLE_TYPE​

SQLSTATE: none assigned

<dsError>: Could not retrieve <schema>.<table> from Share <share> because table with type [<tableType>] is currently unsupported in <queryType> queries.

DS_USER_CONTEXT_ERROR​

SQLSTATE: none assigned

<dsError>: <message>

DS_VIEW_SHARING_FUNCTIONS_NOT_ALLOWED​

SQLSTATE: none assigned

<dsError>: The following function(s): <functions> are not allowed in the view sharing query.

DS_WORKSPACE_DOMAIN_NOT_SET​

SQLSTATE: none assigned

<dsError>: Workspace <workspaceId> domain is not set.

DS_WORKSPACE_NOT_FOUND​

SQLSTATE: none assigned

<dsError>: Workspace <workspaceId> was not found.

Autoloader

CF_ADD_NEW_NOT_SUPPORTED

SQLSTATE: 0A000

Schema evolution mode <addNewColumnsMode> is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints instead.

CF_AMBIGUOUS_AUTH_OPTIONS_ERROR​

SQLSTATE: 42000

Found notification-setup authentication options for the (default) directory

listing mode:

<options>

If you wish to use the file notification mode, please explicitly set:

.option("cloudFiles.<useNotificationsKey>", "true")

Alternatively, if you want to skip the validation of your options and ignore these

authentication options, you can set:

.option("cloudFiles.<validateOptionsKey>", "false")
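
Put together, a minimal Auto Loader read in file notification mode might look like the following sketch (path and format are illustrative; assumes a Spark session spark):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.useNotifications", "true")  # opt in to file notification mode
      .load("s3://my-bucket/landing/"))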

CF_AMBIGUOUS_INCREMENTAL_LISTING_MODE_ERROR​

SQLSTATE: 42000

Incremental listing mode (cloudFiles.<useIncrementalListingKey>)

and file notification (cloudFiles.<useNotificationsKey>)

have been enabled at the same time.

Please make sure that you select only one.

CF_AZURE_AUTHENTICATION_MISSING_OPTIONS​

SQLSTATE: 42000

Please provide either a Databricks service credential or both a clientId and clientSecret for authenticating with Azure.

CF_AZURE_AUTHENTICATION_MULTIPLE_OPTIONS​

SQLSTATE: 42000

When a Databricks service credential is provided, no other credential options (e.g. clientId, clientSecret, or connectionString) should be provided.

CF_AZURE_STORAGE_SUFFIXES_REQUIRED​

SQLSTATE: 42000

adlsBlobSuffix and adlsDfsSuffix are required for Azure.

CF_BUCKET_MISMATCH​

SQLSTATE: 22000

The <storeType> in the file event <fileEvent> is different from expected by the source: <source>.

CF_CANNOT_EVOLVE_SCHEMA_LOG_EMPTY​

SQLSTATE: 22000

Cannot evolve schema when the schema log is empty. Schema log location: <logPath>

CF_CANNOT_PARSE_QUEUE_MESSAGE​

SQLSTATE: 22000

Cannot parse the following queue message: <message>

CF_CANNOT_RESOLVE_CONTAINER_NAME​

SQLSTATE: 22000

Cannot resolve container name from path: <path>, Resolved uri: <uri>

CF_CANNOT_RUN_DIRECTORY_LISTING​

SQLSTATE: 22000

Cannot run directory listing when there is an async backfill thread running

CF_CLEAN_SOURCE_ALLOW_OVERWRITES_BOTH_ON​

SQLSTATE: 42000

Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.

CF_CLEAN_SOURCE_CANNOT_MOVE_FILES_INSIDE_SOURCE_PATH​

SQLSTATE: 42000

Moving files to a directory under the path that is being ingested from is not supported.

CF_CLEAN_SOURCE_NOT_ENABLED​

SQLSTATE: 0A000

CleanSource has not been enabled for this workspace. Please contact Databricks support for assistance.

CF_CLEAN_SOURCE_UNAUTHORIZED_WRITE_PERMISSION​

SQLSTATE: 42501

Auto Loader cannot archive processed files because it does not have write permissions to the source directory or the move destination.

<reason>

To fix you can either:

  1. Grant write permissions to the source directory and move destination OR

  2. Set cleanSource to 'OFF'

You could also unblock your stream by setting the SQLConf spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors to 'true'.

CF_DUPLICATE_COLUMN_IN_DATA​

SQLSTATE: 22000

There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option("cloudFiles.<partitionColumnsKey>", "{comma-separated-list}")
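
For example, to ignore the partition values from the paths entirely, pass an empty partition column list (format and path are illustrative; assumes a Spark session spark):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "parquet")
      .option("cloudFiles.partitionColumns", "")  # empty string: ignore partition values in paths
      .load("/mnt/raw/events/"))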

CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE​

SQLSTATE: 42000

Cannot infer schema when the input path <path> is empty. Please try to start the stream when there are files in the input path, or specify the schema.

CF_EVENT_GRID_AUTH_ERROR​

SQLSTATE: 22000

Failed to create an Event Grid subscription. Please make sure that your service

principal has <permissionType> Event Grid Subscriptions. See more details at:

<docLink>

CF_EVENT_GRID_CREATION_FAILED​

SQLSTATE: 22000

Failed to create event grid subscription. Please ensure that Microsoft.EventGrid is

registered as a resource provider in your subscription. See more details at:

<docLink>

CF_EVENT_GRID_NOT_FOUND_ERROR​

SQLSTATE: 22000

Failed to create an Event Grid subscription. Please make sure that your storage

account (<storageAccount>) is under your resource group (<resourceGroup>) and that

the storage account is a "StorageV2 (general purpose v2)" account. See more details at:

<docLink>

CF_EVENT_NOTIFICATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Auto Loader event notification mode is not supported for <cloudStore>.

CF_FAILED_TO_CHECK_STREAM_NEW​

SQLSTATE: 22000

Failed to check if the stream is new

CF_FAILED_TO_CREATED_PUBSUB_SUBSCRIPTION​

SQLSTATE: 22000

Failed to create subscription: <subscriptionName>. A subscription with the same name already exists and is associated with another topic: <otherTopicName>. The desired topic is <proposedTopicName>. Either delete the existing subscription or create a subscription with a new resource suffix.

CF_FAILED_TO_CREATED_PUBSUB_TOPIC​

SQLSTATE: 22000

Failed to create topic: <topicName>. A topic with the same name already exists.<reason> Remove the existing topic or try again with another resource suffix

CF_FAILED_TO_DELETE_GCP_NOTIFICATION​

SQLSTATE: 22000

Failed to delete notification with id <notificationId> on bucket <bucketName> for topic <topicName>. Please retry or manually remove the notification through the GCP console.

CF_FAILED_TO_DESERIALIZE_PERSISTED_SCHEMA​

SQLSTATE: 22000

Failed to deserialize persisted schema from string: '<jsonSchema>'

CF_FAILED_TO_EVOLVE_SCHEMA​

SQLSTATE: 22000

Cannot evolve schema without a schema log.

CF_FAILED_TO_FIND_PROVIDER​

SQLSTATE: 42000

Failed to find provider for <fileFormatInput>

CF_FAILED_TO_INFER_SCHEMA​

SQLSTATE: 22000

Failed to infer schema for format <fileFormatInput> from existing files in input path <path>.

For more details see CF_FAILED_TO_INFER_SCHEMA

CF_FAILED_TO_WRITE_TO_SCHEMA_LOG​

SQLSTATE: 22000

Failed to write to the schema log at location <path>.

CF_FILE_FORMAT_REQUIRED​

SQLSTATE: 42000

Could not find required option: cloudFiles.format.

CF_FOUND_MULTIPLE_AUTOLOADER_PUBSUB_SUBSCRIPTIONS​

SQLSTATE: 22000

Found multiple (<num>) subscriptions with the Auto Loader prefix for topic <topicName>:

<subscriptionList>

There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.

CF_GCP_AUTHENTICATION​

SQLSTATE: 42000

Please either provide all of the following: <clientEmail>, <client>,

<privateKey>, and <privateKeyId> or provide <serviceCredential> to use your Databricks service credential.

Alternatively, provide none of them in order to use the default GCP credential provider chain for authenticating with GCP resources.

CF_GCP_LABELS_COUNT_EXCEEDED​

SQLSTATE: 22000

Received too many labels (<num>) for GCP resource. The maximum label count per resource is <maxNum>.

CF_GCP_RESOURCE_TAGS_COUNT_EXCEEDED​

SQLSTATE: 22000

Received too many resource tags (<num>) for GCP resource. The maximum resource tag count per resource is <maxNum>, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.

CF_INCOMPLETE_LOG_FILE_IN_SCHEMA_LOG​

SQLSTATE: 22000

Incomplete log file in the schema log at path <path>

CF_INCOMPLETE_METADATA_FILE_IN_CHECKPOINT​

SQLSTATE: 22000

Incomplete metadata file in the Auto Loader checkpoint

CF_INCORRECT_BATCH_USAGE​

SQLSTATE: 42887

CloudFiles is a streaming source. Please use spark.readStream instead of spark.read. To disable this check, set <cloudFilesFormatValidationEnabled> to false.

CF_INCORRECT_SQL_PARAMS​

SQLSTATE: 42000

The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map. e.g. cloud_files("path", "json", map("option1", "value1")). Received: <params>

CF_INCORRECT_STREAM_USAGE​

SQLSTATE: 42887

To use 'cloudFiles' as a streaming source, please provide the file format with the option 'cloudFiles.format', and use .load() to create your DataFrame. To disable this check, set <cloudFilesFormatValidationEnabled> to false.
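
A minimal correct invocation, sketched with an illustrative format, schema, and path (assumes a Spark session spark):

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([StructField("id", StringType())])
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")  # required: the underlying file format
      .schema(schema)
      .load("/mnt/raw/incoming/"))         # .load() creates the streaming DataFrame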

CF_INTERNAL_ERROR​

SQLSTATE: 42000

Internal error.

For more details see CF_INTERNAL_ERROR

CF_INVALID_ARN​

SQLSTATE: 42000

Invalid ARN: <arn>

CF_INVALID_AZURE_CERTIFICATE​

SQLSTATE: 42000

The private key provided with option cloudFiles.certificate cannot be parsed. Please provide a valid public key in PEM format.

CF_INVALID_AZURE_CERT_PRIVATE_KEY​

SQLSTATE: 42000

The private key provided with option cloudFiles.certificatePrivateKey cannot be parsed. Please provide a valid private key in PEM format.

CF_INVALID_CHECKPOINT​

SQLSTATE: 42000

This checkpoint is not a valid CloudFiles source

CF_INVALID_CLEAN_SOURCE_MODE​

SQLSTATE: 42000

Invalid mode for clean source option <value>.

CF_INVALID_GCP_RESOURCE_TAG_KEY​

SQLSTATE: 42000

Invalid resource tag key for GCP resource: <key>. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_GCP_RESOURCE_TAG_VALUE​

SQLSTATE: 42000

Invalid resource tag value for GCP resource: <value>. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_MANAGED_FILE_EVENTS_OPTION_KEYS​

SQLSTATE: 42000

Auto Loader does not support the following options when used with managed file events:

<optionList>

We recommend that you remove these options and then restart the stream.

CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE​

SQLSTATE: 22000

Invalid response from managed file events service. Please contact Databricks support for assistance.

For more details see CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE

CF_INVALID_SCHEMA_EVOLUTION_MODE​

SQLSTATE: 42000

cloudFiles.<schemaEvolutionModeKey> must be one of

CF_INVALID_SCHEMA_HINTS_OPTION​

SQLSTATE: 42000

Schema hints can only specify a particular column once.

In this case, redefining column: <columnName>

multiple times in schemaHints:

<schemaHints>

CF_INVALID_SCHEMA_HINT_COLUMN​

SQLSTATE: 42000

Schema hints cannot be used to override maps' and arrays' nested types.

Conflicted column: <columnName>

CF_LATEST_OFFSET_READ_LIMIT_REQUIRED​

SQLSTATE: 22000

latestOffset should be called with a ReadLimit on this source.

CF_LOG_FILE_MALFORMED​

SQLSTATE: 22000

Log file was malformed: failed to read correct log version from <fileName>.

CF_MANAGED_FILE_EVENTS_BACKFILL_IN_PROGRESS​

SQLSTATE: 22000

You have requested Auto Loader to ignore existing files in your external location by setting includeExistingFiles to false. However, the managed file events service is still discovering existing files in your external location. Please try again after managed file events has completed discovering all files in your external location.

CF_MANAGED_FILE_EVENTS_ENDPOINT_NOT_FOUND​

SQLSTATE: 42000

You are using Auto Loader with managed file events, but it appears that the external location for your input path '<path>' does not have file events enabled or the input path is invalid. Please request your Databricks Administrator to enable file events on the external location for your input path.

CF_MANAGED_FILE_EVENTS_ENDPOINT_PERMISSION_DENIED​

SQLSTATE: 42000

You are using Auto Loader with managed file events, but you do not have access to the external location or volume for input path '<path>' or the input path is invalid. Please request your Databricks Administrator to grant read permissions for the external location or volume or provide a valid input path within an existing external location or volume.

CF_MANAGED_FILE_EVENTS_IS_PREVIEW​

SQLSTATE: 56038

Auto Loader with managed file events is preview functionality. To continue, please contact Databricks Support or turn off the cloudFiles.useManagedFileEvents option.

CF_MAX_MUST_BE_POSITIVE​

SQLSTATE: 42000

max must be positive

CF_METADATA_FILE_CONCURRENTLY_USED​

SQLSTATE: 22000

Multiple streaming queries are concurrently using <metadataFile>

CF_MISSING_METADATA_FILE_ERROR​

SQLSTATE: 42000

The metadata file in the streaming source checkpoint directory is missing. This metadata

file contains important default options for the stream, so the stream cannot be restarted

right now. Please contact Databricks support for assistance.

CF_MISSING_PARTITION_COLUMN_ERROR​

SQLSTATE: 42000

Partition column <columnName> does not exist in the provided schema:

<schema>

CF_MISSING_SCHEMA_IN_PATHLESS_MODE​

SQLSTATE: 42000

Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().

CF_MULTIPLE_PUBSUB_NOTIFICATIONS_FOR_TOPIC​

SQLSTATE: 22000

Found existing notifications for topic <topicName> on bucket <bucketName>:

notification,id

<notificationList>

To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.

CF_NEW_PARTITION_ERROR​

SQLSTATE: 22000

New partition columns were inferred from your files: [<filesList>]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option("cloudFiles.partitionColumns", "{comma-separated-list|empty-string}")

CF_PARTITON_INFERENCE_ERROR​

SQLSTATE: 22000

There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option("cloudFiles.<partitionColumnOption>", "{comma-separated-list}")

CF_PATH_DOES_NOT_EXIST_FOR_READ_FILES​

SQLSTATE: 42000

Cannot read files when the input path <path> does not exist. Please make sure the input path exists and re-try.

CF_PERIODIC_BACKFILL_NOT_SUPPORTED​

SQLSTATE: 0A000

Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing to true

CF_PREFIX_MISMATCH​

SQLSTATE: 22000

Found mismatched event: key <key> doesn't have the prefix: <prefix>

CF_PROTOCOL_MISMATCH​

SQLSTATE: 22000

<message>

If you don't need to make any other changes to your code, then please set the SQL

configuration: '<sourceProtocolVersionKey> = <value>'

to resume your stream. Please refer to:

<docLink>

for more details.

CF_REGION_NOT_FOUND_ERROR​

SQLSTATE: 42000

Could not get default AWS Region. Please specify a region using the cloudFiles.region option.

CF_RESOURCE_SUFFIX_EMPTY​

SQLSTATE: 42000

Failed to create notification services: the resource suffix cannot be empty.

CF_RESOURCE_SUFFIX_INVALID_CHAR_AWS​

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).

CF_RESOURCE_SUFFIX_INVALID_CHAR_AZURE​

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).

CF_RESOURCE_SUFFIX_INVALID_CHAR_GCP​

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>).

CF_RESOURCE_SUFFIX_LIMIT​

SQLSTATE: 42000

Failed to create notification services: the resource suffix cannot have more than <limit> characters.

CF_RESOURCE_SUFFIX_LIMIT_GCP​

SQLSTATE: 42000

Failed to create notification services: the resource suffix must be between <lowerLimit> and <upperLimit> characters.

CF_RESTRICTED_GCP_RESOURCE_TAG_KEY​

SQLSTATE: 22000

Found restricted GCP resource tag key (<key>). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>]

CF_RETENTION_GREATER_THAN_MAX_FILE_AGE​

SQLSTATE: 42000

cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.

CF_SAME_PUB_SUB_TOPIC_NEW_KEY_PREFIX​

SQLSTATE: 22000

Failed to create notification for topic: <topic> with prefix: <prefix>. There is already a topic with the same name with another prefix: <oldPrefix>. Try using a different resource suffix for setup or delete the existing setup.

CF_SCHEMA_LOG_DEEP_CLONE_FAILED​

SQLSTATE: 42000

Failed to clone and migrate any schema log entries from the source schema log.

CF_SFTP_MISSING_PASSWORD_OR_KEY_FILE​

SQLSTATE: 42000

Either password or key file must be specified for SFTP.

Please specify the password in the source uri or via <passwordOption>, or specify the key file content via <keyFileOption>.

CF_SFTP_NOT_ENABLED​

SQLSTATE: 0A000

Accessing SFTP files is not enabled. Please contact Databricks support for assistance.

CF_SFTP_REQUIRE_FULL_PATH​

SQLSTATE: 0A000

Specify the full path components for an SFTP source in the form of sftp://$user@$host:$port/$path to ensure an accurate UC connection lookup.
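
For instance, a fully qualified SFTP source path (all components illustrative; assumes a Spark session spark):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .load("sftp://ingest_user@sftp.example.com:22/exports/daily/"))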

CF_SFTP_REQUIRE_UC_CLUSTER​

SQLSTATE: 0A000

A UC-enabled cluster is required to access SFTP files. Please contact Databricks support for assistance.

CF_SFTP_USERNAME_NOT_FOUND​

SQLSTATE: 42000

Username must be specified for SFTP.

Please provide the username in the source uri or via <option>.

CF_SOURCE_DIRECTORY_PATH_REQUIRED​

SQLSTATE: 42000

Please provide the source directory path with option path

CF_SOURCE_UNSUPPORTED​

SQLSTATE: 0A000

The cloud files source only supports S3, Azure Blob Storage (wasb/wasbs) and Azure Data Lake Gen1 (adl) and Gen2 (abfs/abfss) paths right now. path: '<path>', resolved uri: '<uri>'

CF_STATE_INCORRECT_SQL_PARAMS​

SQLSTATE: 42000

The cloud_files_state function accepts a string parameter representing the checkpoint directory of a cloudFiles stream or a multi-part tableName identifying a streaming table, and an optional second integer parameter representing the checkpoint version to load state for. The second parameter may also be 'latest' to read the latest checkpoint. Received: <params>
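
Both accepted forms, sketched with illustrative arguments (assumes a Spark session spark):

# By checkpoint path, optionally with a checkpoint version as the second argument:
spark.sql("SELECT * FROM cloud_files_state('/checkpoints/ingest_stream')")
# By streaming table name (hypothetical three-part name):
spark.sql("SELECT * FROM cloud_files_state(TABLE(my_catalog.my_schema.my_streaming_table))")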

CF_STATE_INVALID_CHECKPOINT_PATH​

SQLSTATE: 42000

The input checkpoint path <path> is invalid. Either the path does not exist or there are no cloud_files sources found.

CF_STATE_INVALID_VERSION​

SQLSTATE: 42000

The specified version <version> does not exist, or was removed during analysis.

CF_THREAD_IS_DEAD​

SQLSTATE: 22000

<threadName> thread is dead.

CF_UNABLE_TO_DERIVE_STREAM_CHECKPOINT_LOCATION​

SQLSTATE: 42000

Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>

CF_UNABLE_TO_DETECT_FILE_FORMAT​

SQLSTATE: 42000

Unable to detect the source file format from <fileSize> sampled file(s), found <formats>. Please specify the format.

CF_UNABLE_TO_EXTRACT_BUCKET_INFO

SQLSTATE: 42000

Unable to extract bucket information. Path: '<path>', resolved uri: '<uri>'.

CF_UNABLE_TO_EXTRACT_KEY_INFO

SQLSTATE: 42000

Unable to extract key information. Path: '<path>', resolved uri: '<uri>'.

CF_UNABLE_TO_EXTRACT_STORAGE_ACCOUNT_INFO

SQLSTATE: 42000

Unable to extract storage account information. Path: '<path>', resolved uri: '<uri>'.

CF_UNABLE_TO_LIST_EFFICIENTLY​

SQLSTATE: 22000

Received a directory rename event for the path <path>, but we are unable to list this directory efficiently. In order for the stream to continue, set the option 'cloudFiles.ignoreDirRenames' to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
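
The two options named in the message, sketched in a reader chain (values illustrative; assumes a Spark session spark):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.ignoreDirRenames", "true")   # skip directory rename events
      .option("cloudFiles.backfillInterval", "1 day")  # periodically list to pick up affected files
      .load("/mnt/raw/"))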

CF_UNEXPECTED_READ_LIMIT​

SQLSTATE: 22000

Unexpected ReadLimit: <readLimit>

CF_UNKNOWN_OPTION_KEYS_ERROR​

SQLSTATE: 42000

Found unknown option keys:

<optionList>

Please make sure that all provided option keys are correct. If you want to skip the

validation of your options and ignore these unknown options, you can set:

.option("cloudFiles.<validateOptions>", "false")

CF_UNKNOWN_READ_LIMIT​

SQLSTATE: 22000

Unknown ReadLimit: <readLimit>

CF_UNSUPPORTED_CLEAN_SOURCE_MOVE​

SQLSTATE: 0A000

The cleanSource 'move' mode configuration is unsupported.

For more details see CF_UNSUPPORTED_CLEAN_SOURCE_MOVE

CF_UNSUPPORTED_CLOUD_FILES_SQL_FUNCTION​

SQLSTATE: 0A000

The SQL function 'cloud_files' to create an Auto Loader streaming source is supported only in a DLT pipeline. See more details at:

<docLink>

CF_UNSUPPORTED_FORMAT_FOR_SCHEMA_INFERENCE​

SQLSTATE: 0A000

Schema inference is not supported for format: <format>. Please specify the schema.

CF_UNSUPPORTED_LOG_VERSION​

SQLSTATE: 0A000

UnsupportedLogVersion: maximum supported log version is v<maxVersion>, but encountered v<version>. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.

CF_UNSUPPORTED_SCHEMA_EVOLUTION_MODE​

SQLSTATE: 0A000

Schema evolution mode <mode> is not supported for format: <format>. Please set the schema evolution mode to 'none'.

CF_USE_DELTA_FORMAT​

SQLSTATE: 42000

Reading from a Delta table is not supported with this syntax. If you would like to consume data from Delta, please refer to the docs: read a Delta table (<deltaDocLink>), or read a Delta table as a stream source (<streamDeltaDocLink>). The streaming source from Delta is already optimized for incremental consumption of data.
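
Instead of pointing cloudFiles at a Delta table, read the table directly as a stream; a sketch with an illustrative path and name (assumes a Spark session spark):

df = spark.readStream.format("delta").load("/mnt/delta/events")
# Or by table name:
df = spark.readStream.table("my_catalog.my_schema.events")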

Geospatial

EWKB_PARSE_ERROR

SQLSTATE: 22023

Error parsing EWKB: <parseError> at position <pos>

GEOJSON_PARSE_ERROR​

SQLSTATE: 22023

Error parsing GeoJSON: <parseError> at position <pos>

For more details see GEOJSON_PARSE_ERROR

GEO_ENCODER_SRID_MISMATCH_ERROR​

SQLSTATE: 42K09

Failed to encode <type> value because the provided SRID <valueSrid> of the value to encode does not match the type SRID: <typeSrid>.

H3_INVALID_CELL_ID​

SQLSTATE: 22023

<h3Cell> is not a valid H3 cell ID

For more details see H3_INVALID_CELL_ID

H3_INVALID_GRID_DISTANCE_VALUE​

SQLSTATE: 22023

H3 grid distance <k> must be non-negative

For more details see H3_INVALID_GRID_DISTANCE_VALUE

H3_INVALID_RESOLUTION_VALUE​

SQLSTATE: 22023

H3 resolution <r> must be between <minR> and <maxR>, inclusive

For more details see H3_INVALID_RESOLUTION_VALUE
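
H3 resolutions range from 0 (coarsest) to 15 (finest). A sketch of a valid call, assuming a Spark session spark on a tier where H3 expressions are enabled:

# Index a longitude/latitude point at resolution 7 (must be within [0, 15]).
spark.sql("SELECT h3_longlatash3(-122.4194, 37.7749, 7)")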

H3_NOT_ENABLED​

SQLSTATE: 0A000

<h3Expression> is disabled or unsupported. Consider switching to a tier that supports H3 expressions

For more details see H3_NOT_ENABLED

H3_PENTAGON_ENCOUNTERED_ERROR​

SQLSTATE: 22023

A pentagon was encountered while computing the hex ring of <h3Cell> with grid distance <k>

H3_UNDEFINED_GRID_DISTANCE​

SQLSTATE: 22023

H3 grid distance between <h3Cell1> and <h3Cell2> is undefined

ST_DIFFERENT_SRID_VALUES​

SQLSTATE: 22023

Arguments to "<sqlFunction>" must have the same SRID value. SRID values found: <srid1>, <srid2>

ST_INVALID_ARGUMENT​

SQLSTATE: 22023

"<sqlFunction>": <reason>

ST_INVALID_ARGUMENT_TYPE​

SQLSTATE: 22023

Argument to "<sqlFunction>" must be of type <validTypes>

ST_INVALID_CRS_TRANSFORMATION_ERROR​

SQLSTATE: 22023

<sqlFunction>: Invalid or unsupported CRS transformation from SRID <srcSrid> to SRID <trgSrid>

ST_INVALID_ENDIANNESS_VALUE​

SQLSTATE: 22023

Endianness '<e>' must be either 'NDR' (little-endian) or 'XDR' (big-endian)

ST_INVALID_GEOHASH_VALUE​

SQLSTATE: 22023

<sqlFunction>: Invalid geohash value: '<geohash>'. Geohash values must be valid lowercase base32 strings as described in https://en.wikipedia.org/wiki/Geohash#Textual_representation

ST_INVALID_INDEX_VALUE​

SQLSTATE: 22023

<sqlFunction>: Invalid index <index> for the provided geospatial value.

ST_INVALID_PRECISION_VALUE​

SQLSTATE: 22023

Precision <p> must be between <minP> and <maxP>, inclusive

ST_INVALID_SRID_VALUE​

SQLSTATE: 22023

Invalid or unsupported SRID <srid>

ST_NOT_ENABLED​

SQLSTATE: 0A000

<stExpression> is disabled or unsupported. Consider switching to a tier that supports ST expressions

ST_UNSUPPORTED_RETURN_TYPE​

SQLSTATE: 0A000

The GEOGRAPHY and GEOMETRY data types cannot be returned in queries. Use one of the following SQL expressions to convert them to standard interchange formats: <projectionExprs>.

WKB_PARSE_ERROR​

SQLSTATE: 22023

Error parsing WKB: <parseError> at position <pos>

For more details see WKB_PARSE_ERROR

WKT_PARSE_ERROR​

SQLSTATE: 22023

Error parsing WKT: <parseError> at position <pos>

For more details see WKT_PARSE_ERROR

Unity Catalog

CONFLICTING_COLUMN_NAMES_ERROR

SQLSTATE: 42711

Column <columnName> conflicts with another column whose name differs only by trailing whitespace (for example, an existing column named <columnName>). Please rename the column to a different name.

CONNECTION_CREDENTIALS_NOT_SUPPORTED_FOR_ONLINE_TABLE_CONNECTION​

SQLSTATE: 0AKUC

Invalid request to get connection-level credentials for connection of type <connectionType>. Such credentials are only available for managed PostgreSQL connections.

CONNECTION_TYPE_NOT_ENABLED​

SQLSTATE: 56038

Connection type '<connectionType>' is not enabled. Please enable the connection to use it.

DELTA_SHARING_PROVISIONING_STATE_NOT_ACTIVE​

SQLSTATE: 55019

The shared entity '<securableFullName>' is currently not usable, because it has not been fully synchronized with the corresponding source entity yet. Caused by: <reason>.

DELTA_SHARING_READ_ONLY_RECIPIENT_EXISTS​

SQLSTATE: 42710

There is already a Recipient object '<existingRecipientName>' with the same sharing identifier '<existingMetastoreId>'.

DELTA_SHARING_READ_ONLY_SECURABLE_KIND​

SQLSTATE: 42501

Data of a Delta Sharing securable kind <securableKindName> are read-only and cannot be created, modified, or deleted.

EXTERNAL_ACCESS_DISABLED_ON_METASTORE​

SQLSTATE: 0A000

Credential vending is rejected for non-Databricks compute environments because External Data Access is disabled for metastore <metastoreName>. Please contact your metastore admin to enable the 'External Data Access' configuration on the metastore.

EXTERNAL_ACCESS_NOT_ALLOWED_FOR_TABLE​

SQLSTATE: 0AKUC

Table with id <tableId> cannot be accessed from outside of Databricks Compute Environment due to its kind being <securableKind>. Only 'TABLE_EXTERNAL', 'TABLE_DELTA_EXTERNAL' and 'TABLE_DELTA' table kinds can be accessed externally.

EXTERNAL_USE_SCHEMA_ASSIGNED_TO_INCORRECT_SECURABLE_TYPE​

SQLSTATE: 22023

Privilege EXTERNAL USE SCHEMA is not applicable to this entity <assignedSecurableType> and can only be assigned to a schema or catalog. Please remove the privilege from the <assignedSecurableType> object and assign it to a schema or catalog instead.

EXTERNAL_WRITE_NOT_ALLOWED_FOR_TABLE​

SQLSTATE: 0AKUC

Table with id <tableId> cannot be written from outside of Databricks Compute Environment due to its kind being <securableKind>. Only 'TABLE_EXTERNAL' and 'TABLE_DELTA_EXTERNAL' table kinds can be written externally.

FOREIGN_CATALOG_STORAGE_ROOT_MUST_SUPPORT_WRITES​

SQLSTATE: 42501

The storage location for a foreign catalog of type <catalogType> will be used for unloading data and cannot be read-only.

HMS_SECURABLE_OVERLAP_LIMIT_EXCEEDED​

SQLSTATE: 54000

The number of <resourceType>s for input path <url> exceeds the allowed limit (<overlapLimit>) for overlapping HMS <resourceType>s.

INVALID_RESOURCE_NAME_DELTA_SHARING​

SQLSTATE: 0A000

Delta Sharing requests are not supported using resource names

INVALID_RESOURCE_NAME_ENTITY_TYPE​

SQLSTATE: 42809

The provided resource name references entity type <provided> but expected <expected>

INVALID_RESOURCE_NAME_METASTORE_ID​

SQLSTATE: 42000

The provided resource name references a metastore that is not in scope for the current request

LOCATION_OVERLAP​

SQLSTATE: 22023

Input path url '<path>' overlaps with <overlappingLocation> within '<caller>' call. <conflictingSummary>. <permissionErrorSuggestion>

MONGO_DB_SRV_CONNECTION_STRING_DOES_NOT_ALLOW_PORT​

SQLSTATE: 42616

Creating or updating a MongoDB connection is not allowed because a MongoDB SRV connection string does not allow a port.

Please remove the port from the connection string.

REDSHIFT_FOREIGN_CATALOG_STORAGE_ROOT_MUST_BE_S3​

SQLSTATE: 22KD1

The storage root for Redshift foreign catalog has to be AWS S3.

SECURABLE_KIND_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION​

SQLSTATE: 0AKUC

Securable with kind <securableKind> does not support Lakehouse Federation.

SECURABLE_KIND_NOT_ENABLED​

SQLSTATE: 56038

Securable kind '<securableKind>' is not enabled. If this is a securable kind associated with a preview feature, please enable it in workspace settings.

SECURABLE_TYPE_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION​

SQLSTATE: 0AKUC

Securable with type <securableType> does not support Lakehouse Federation.

SOURCE_TABLE_COLUMN_COUNT_EXCEEDS_LIMIT​

SQLSTATE: 54000

The source table has more than <columnCount> columns. Please reduce the number of columns to <columnLimitation> or fewer.

UC_AAD_TOKEN_LIFETIME_TOO_SHORT​

SQLSTATE: 22003

Exchanged AAD token lifetime is <lifetime> which is configured too short. Please check your Azure AD setting to make sure temporary access token has at least an hour lifetime. https://learn.microsoft.com/azure/active-directory/develop/active-directory-configurable-token-lifetimes

UC_ABAC_DEPENDENCY_DIFFERING_RF_CM​

SQLSTATE: 55000

Dependency '<dependency>' is referenced multiple times and resulted in differing ABAC row filters or column masks.

UC_ABAC_EVALUATION_USER_ERROR​

SQLSTATE: 42000

Error evaluating ABAC policies on '<resource>'. Policy '<policyName>' failed with message: <message>.

UC_ACCESS_REQUIRES_WORKSPACE​

SQLSTATE: 42501

Unable to access securable because it resides in a workspace-bound catalog, and the user is not assigned to the associated workspace.

UC_ALTER_DLT_VIEW_OUTSIDE_DEFINING_PIPELINE​

SQLSTATE: 0A000

Altering the view '<viewFullName>' outside of the pipeline that defined it is not allowed. Instead, update the view definition from the pipeline that defined it (Pipeline ID: <owningPipelineId>).

UC_ATLAS_NOT_ENABLED_FOR_CALLING_SERVICE​

SQLSTATE: 0A000

Atlas is not enabled for the calling service <serviceName>.

UC_AUTHZ_ACTION_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Authorizing <actionName> is not supported; please check that the RPC invoked is implemented for this resource type

UC_BUILTIN_HMS_CONNECTION_CREATION_PERMISSION_DENIED​

SQLSTATE: 42501

Cannot create a connection for a built-in Hive metastore because user <userId> is not an admin of workspace <workspaceId>.

UC_BUILTIN_HMS_CONNECTION_MODIFY_RESTRICTED_FIELD​

SQLSTATE: 22023

Attempt to modify a restricted field in built-in HMS connection '<connectionName>'. Only 'warehouse_directory' can be updated.

UC_CANNOT_RENAME_PARTITION_FILTERING_COLUMN​

SQLSTATE: 428FR

Failed to rename table column <originalLogicalColumn> because it is used for partition filtering in <sharedTableName>. To proceed, you can remove the table from the share, rename the column, and then share it again with the desired partition filtering columns. Note, however, that this may break the streaming query for your recipient.
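
A sketch of that sequence with hypothetical share, table, and column names (renaming a column additionally requires Delta column mapping on the table):

```python
# Remove the table from the share, rename the column, then share it again
# with the desired partition filtering column.
spark.sql("ALTER SHARE my_share REMOVE TABLE main.sales.orders")
spark.sql("ALTER TABLE main.sales.orders RENAME COLUMN region TO sales_region")
spark.sql(
    "ALTER SHARE my_share ADD TABLE main.sales.orders "
    "PARTITION (sales_region = 'EMEA')"
)
```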

UC_CHILD_CREATION_FORBIDDEN_FOR_NON_UC_CLUSTER​

SQLSTATE: 0AKUC

Cannot create <securableType> '<securable>' under <parentSecurableType> '<parentSecurable>' because the request is not from a UC cluster.

UC_CONFLICTING_CONNECTION_OPTIONS​

SQLSTATE: 42616

Cannot create a connection with both username/password and OAuth authentication options. Please choose one.

UC_CONNECTION_CREDENTIALS_MAXIMUM_REACHED​

SQLSTATE: 53400

The maximum number of credentials for connection name '<connectionName>' has been reached. Please delete existing credentials before creating a new one.

UC_CONNECTION_CREDENTIALS_NOT_EXIST_FOR_USER_WITH_LOGIN​

SQLSTATE: 42517

Credential for user identity '<userIdentity>' was not found for the connection '<connectionName>'. Please log in to the connection first by visiting <connectionPage>

UC_CONNECTION_CREDENTIALS_TYPE_NOT_SUPPORTED​

SQLSTATE: 42809

Creating credentials for securable type '<securableType>' is not supported. Supported securable types: <allowedSecurableType>.

UC_CONNECTION_EXISTS_FOR_CREDENTIAL​

SQLSTATE: 42893

Credential '<credentialName>' has one or more dependent connections. You may use the force option to update or delete the credential anyway, but connections using this credential may stop working.

UC_CONNECTION_EXPIRED_ACCESS_TOKEN​

SQLSTATE: 42KDI

The access token associated with the connection is expired. Please update the connection to restart the OAuth flow to retrieve a token.

UC_CONNECTION_EXPIRED_REFRESH_TOKEN​

SQLSTATE: 42KDI

The refresh token associated with the connection is expired. Please update the connection to restart the OAuth flow to retrieve a fresh token.

UC_CONNECTION_IN_FAILED_STATE​

SQLSTATE: 55000

The connection is in the FAILED state. Please update the connection with valid credentials to reactivate it.

UC_CONNECTION_MISSING_OPTION​

SQLSTATE: 22023

Connections of securable type '<securableType>' must include the following option(s): <requiredOptions>.

UC_CONNECTION_MISSING_REFRESH_TOKEN​

SQLSTATE: 42KDI

There is no refresh token associated with the connection. Please update the OAuth client integration in your identity provider to return refresh tokens, and update or recreate the connection to restart the OAuth flow and retrieve the necessary tokens.

UC_CONNECTION_OAUTH_EXCHANGE_FAILED​

SQLSTATE: 42KDI

The OAuth token exchange failed with HTTP status code <httpStatus>. The returned server response or exception message is: <response>

UC_CONNECTION_OPTION_NOT_SUPPORTED​

SQLSTATE: 42616

Connections of securable type '<securableType>' do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.

UC_COORDINATED_COMMITS_NOT_ENABLED​

SQLSTATE: 56038

Support for coordinated commits is not enabled. Please contact Databricks support.

UC_CREATE_FORBIDDEN_UNDER_INACTIVE_SECURABLE​

SQLSTATE: 55019

Cannot create <securableType> '<securableName>' because its parent <parentSecurableType> '<parentSecurableName>' is not active. Please delete and recreate the parent securable.

UC_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED​

SQLSTATE: 22KD1

Failed to parse the provided access connector ID: <accessConnectorId>. Please verify its formatting and try again.

UC_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN​

SQLSTATE: 58000

Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.

UC_CREDENTIAL_INVALID_CLOUD_PERMISSIONS​

SQLSTATE: none assigned

Registering a credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>. Please contact your account admin.

UC_CREDENTIAL_INVALID_CREDENTIAL_TYPE_FOR_PURPOSE​

SQLSTATE: 22023

Credential type '<credentialType>' is not supported for purpose '<credentialPurpose>'

UC_CREDENTIAL_PERMISSION_DENIED​

SQLSTATE: none assigned

Only the account admin can create or update a credential with type <storageCredentialType>.

UC_CREDENTIAL_TRUST_POLICY_IS_OPEN​

SQLSTATE: 22023

The trust policy of the IAM role to allow Databricks Account to assume the role should require an external id. Please contact your account admin to add the external id condition. This behavior is to guard against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

UC_CREDENTIAL_WORKSPACE_API_PROHIBITED​

SQLSTATE: 0A000

Creating or updating a credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.

UC_DBR_TRUST_VERSION_TOO_OLD​

SQLSTATE: 0A000

The Databricks Runtime being used no longer supports this operation. Please use the latest version (you may just need to restart your cluster).

UC_DELTA_UNIVERSAL_FORMAT_CANNOT_PARSE_ICEBERG_VERSION​

SQLSTATE: 22P02

Unable to parse Apache Iceberg table version from metadata location <metadataLocation>.

UC_DELTA_UNIVERSAL_FORMAT_CONCURRENT_WRITE​

SQLSTATE: 40001

A concurrent update to the same Iceberg metadata version was detected.

UC_DELTA_UNIVERSAL_FORMAT_INVALID_METADATA_LOCATION​

SQLSTATE: 22KD1

The committed metadata location <metadataLocation> is invalid. It is not a subdirectory of the table's root directory <tableRoot>.

UC_DELTA_UNIVERSAL_FORMAT_MISSING_FIELD_CONSTRAINT​

SQLSTATE: 42601

The provided Delta Iceberg format conversion information is missing required fields.

UC_DELTA_UNIVERSAL_FORMAT_NON_CREATE_CONSTRAINT​

SQLSTATE: 0A000

Setting Delta Iceberg format information on create is unsupported.

UC_DELTA_UNIVERSAL_FORMAT_TOO_LARGE_CONSTRAINT​

SQLSTATE: 54000

The provided Delta Iceberg format conversion information is too large.

UC_DELTA_UNIVERSAL_FORMAT_UPDATE_INVALID​

SQLSTATE: 0A000

Uniform metadata can only be updated on Delta tables with Uniform enabled.

UC_DEPENDENCY_DEPTH_LIMIT_EXCEEDED​

SQLSTATE: 54000

<resourceType> '<ref>' depth exceeds limit (or has a circular reference).

UC_DEPENDENCY_DOES_NOT_EXIST​

SQLSTATE: 42704

<resourceType> '<ref>' is invalid because one of the underlying resources does not exist. <cause>

UC_DEPENDENCY_PERMISSION_DENIED​

SQLSTATE: 42501

<resourceType> '<ref>' does not have sufficient privilege to execute because the owner of one of the underlying resources failed an authorization check. <cause>

UC_DUPLICATE_CONNECTION​

SQLSTATE: 42710

A connection called '<connectionName>' with the same URL already exists. Please ask the owner for permission to use that connection instead of creating a duplicate.

UC_DUPLICATE_FABRIC_CATALOG_CREATION​

SQLSTATE: 42710

Attempted to create a Fabric catalog with url '<storageLocation>' that matches an existing catalog, which is not allowed.

UC_DUPLICATE_TAG_ASSIGNMENT_CREATION​

SQLSTATE: 42710

Tag assignment with tag key <tagKey> already exists

UC_ENTITY_DOES_NOT_HAVE_CORRESPONDING_ONLINE_CLUSTER​

SQLSTATE: 57000

Entity <securableType> <entityId> does not have a corresponding online cluster.

UC_EXCEEDS_MAX_FILE_LIMIT​

SQLSTATE: 22023

There are more than <maxFileResults> files. Please specify [max_results] to limit the number of files returned.

UC_EXTERNAL_LOCATION_OP_NOT_ALLOWED​

SQLSTATE: 22023

Cannot <opName> <extLoc> <reason>. <suggestion>.

UC_FEATURE_DISABLED​

SQLSTATE: 56038

<featureName> is currently disabled in UC.

UC_FOREIGN_CATALOG_FOR_CONNECTION_TYPE_NOT_SUPPORTED​

SQLSTATE: 42809

Creation of a foreign catalog for connection type '<connectionType>' is not supported. This connection type can only be used to create managed ingestion pipelines. Please refer to the Databricks documentation for more information.

UC_FOREIGN_CREDENTIAL_CHECK_ONLY_FOR_READ_OPERATIONS​

SQLSTATE: 0AKUC

Only READ credentials can be retrieved for foreign tables.

UC_FOREIGN_HMS_SHALLOW_CLONE_MISMATCH​

SQLSTATE: 22023

The base table and clone table must be in the same catalog for shallow clones created in foreign Hive Metastore catalogs. Base table '<baseTableName>' is in catalog '<baseCatalogName>' and clone table '<cloneTableName>' is in catalog '<cloneCatalogName>'.
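
For example, a minimal sketch that keeps a shallow clone in the same foreign HMS catalog as its base table (hypothetical names; run where `spark` is available):

```python
# Base and clone both live in hms_catalog, satisfying the same-catalog constraint.
spark.sql(
    "CREATE TABLE hms_catalog.sales.orders_clone "
    "SHALLOW CLONE hms_catalog.sales.orders"
)
```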

UC_FOREIGN_KEY_CHILD_COLUMN_LENGTH_MISMATCH​

SQLSTATE: 42830

Foreign key <constraintName> child columns and parent columns are of different sizes.

UC_FOREIGN_KEY_COLUMN_MISMATCH​

SQLSTATE: 42830

The foreign key parent columns do not match the referenced primary key child columns. Foreign key parent columns are (<parentColumns>) and primary key child columns are (<primaryKeyChildColumns>).

UC_FOREIGN_KEY_COLUMN_TYPE_MISMATCH​

SQLSTATE: 42830

The foreign key child column type does not match the parent column type. Foreign key child column <childColumnName> has type <childColumnType> and parent column <parentColumnName> has type <parentColumnType>.

UC_GCP_INVALID_PRIVATE_KEY​

SQLSTATE: 42501

Access denied. Cause: service account private key is invalid.

UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT​

SQLSTATE: 22032

The Google Service Account OAuth private key must be a valid JSON object with the required fields. Please make sure to provide the full JSON file generated from the 'KEYS' section of the service account details page.

UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT_MISSING_FIELDS​

SQLSTATE: 22032

The Google Service Account OAuth private key must be a valid JSON object with the required fields. Please make sure to provide the full JSON file generated from the 'KEYS' section of the service account details page. Missing fields are <missingFields>

UC_IAM_ROLE_NON_SELF_ASSUMING​

SQLSTATE: 22023

The IAM role for this storage credential was found to be non self-assuming. Please check your role's trust and IAM policies to ensure that your IAM role can assume itself according to the Unity Catalog storage credential documentation.

UC_ICEBERG_COMMIT_CONFLICT​

SQLSTATE: 40001

Cannot commit <tableName>: metadata location <baseMetadataLocation> has changed from <catalogMetadataLocation>.

UC_ICEBERG_COMMIT_INVALID_TABLE​

SQLSTATE: 42809

Cannot perform a Managed Apache Iceberg commit to a non-Managed Apache Iceberg table: <tableName>.

UC_ICEBERG_COMMIT_MISSING_FIELD_CONSTRAINT​

SQLSTATE: 42601

The provided Managed Apache Iceberg commit information is missing required fields.

UC_ID_MISMATCH​

SQLSTATE: 40001

The <type> <name> does not have ID <wrongId>. Please retry the operation.

UC_INVALID_ACCESS_BRICKSTORE_PG_CONNECTION​

SQLSTATE: 0A000

Invalid access to database instance. <reason>

UC_INVALID_ACCESS_DBFS_ENTITY​

SQLSTATE: 42501

Invalid access of <securableType> <securableName> in the federated catalog <catalogName>. <reason>

UC_INVALID_CLOUDFLARE_ACCOUNT_ID​

SQLSTATE: 22023

Invalid Cloudflare account ID.

UC_INVALID_CREDENTIAL_CLOUD​

SQLSTATE: 42616

Invalid credential cloud provider '<cloud>'. Allowed cloud provider '<allowedCloud>'.

UC_INVALID_CREDENTIAL_PURPOSE_VALUE​

SQLSTATE: 42616

Invalid value '<value>' for credential's 'purpose'. Allowed values '<allowedValues>'.

UC_INVALID_CREDENTIAL_TRANSITION​

SQLSTATE: 42616

Cannot update a connection from <startingCredentialType> to <endingCredentialType>. The only valid transition is from a username/password based connection to an OAuth token based connection.

UC_INVALID_CRON_STRING_FABRIC​

SQLSTATE: 22P02

Invalid cron string. Found: '<cronString>' with parse exception: '<message>'

UC_INVALID_DIRECT_ACCESS_MANAGED_TABLE​

SQLSTATE: 22023

Invalid direct access managed table <tableName>. Make sure the source table and pipeline definition are not defined.

UC_INVALID_EMPTY_STORAGE_LOCATION​

SQLSTATE: 55000

Unexpected empty storage location for <securableType> '<securableName>' in catalog '<catalogName>'. In order to fix this error, please run DESCRIBE SCHEMA <catalogName>.<securableName> and refresh this page.
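
For example (hypothetical catalog and schema names; run where `spark` is available):

```python
spark.sql("DESCRIBE SCHEMA my_catalog.my_schema").show(truncate=False)
```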

UC_INVALID_OPTIONS_UPDATE​

SQLSTATE: 42601

Invalid options provided for update. Invalid options: <invalidOptions>. Allowed options: <allowedOptions>.

UC_INVALID_OPTION_VALUE​

SQLSTATE: 42616

Invalid value '<value>' for '<option>'. Allowed values '<allowedValues>'.

UC_INVALID_OPTION_VALUE_EMPTY​

SQLSTATE: 42616

'<option>' cannot be empty. Please enter a non-empty value.

UC_INVALID_POLICY_CONDITION​

SQLSTATE: 42611

Invalid condition in policy '<policyName>'. Compilation error with message '<message>'.

UC_INVALID_R2_ACCESS_KEY_ID​

SQLSTATE: 22023

Invalid R2 access key ID.

UC_INVALID_R2_SECRET_ACCESS_KEY​

SQLSTATE: 22023

Invalid R2 secret access key.

UC_INVALID_UPDATE_ON_SYSTEM_WORKSPACE_ADMIN_GROUP_OWNED_SECURABLE​

SQLSTATE: 42501

Cannot update <securableType> '<securableName>' as it's owned by an internal group. Please contact Databricks support for additional details.

UC_INVALID_WASBS_EXTERNAL_LOCATION_STORAGE_CREDENTIAL​

SQLSTATE: 22023

The provided storage credential <storageCredentialName> is not associated with the DBFS root; creation of a wasbs external location is prohibited.

UC_LOCATION_INVALID_SCHEME​

SQLSTATE: 22KD1

Storage location has invalid URI scheme: <scheme>.

UC_MALFORMED_OAUTH_SERVER_RESPONSE​

SQLSTATE: 58000

The response from the token server was missing the field <missingField>. The returned server response is: <response>

UC_METASTORE_ASSIGNMENT_STATUS_INVALID​

SQLSTATE: 22023

'<metastoreAssignmentStatus>' cannot be assigned. Only MANUALLY_ASSIGNABLE and AUTO_ASSIGNMENT_ENABLED are supported.

UC_METASTORE_CERTIFICATION_NOT_ENABLED​

SQLSTATE: 0A000

Metastore certification is not enabled.

UC_METASTORE_HAS_ACTIVE_MANAGED_ONLINE_CATALOGS​

SQLSTATE: none assigned

The metastore <metastoreId> has <numberManagedOnlineCatalogs> managed online catalog(s). Please explicitly delete them, then retry the metastore deletion.

UC_METASTORE_STORAGE_ROOT_CREDENTIAL_UPDATE_INVALID​

SQLSTATE: 22023

Metastore root credential cannot be defined when updating the metastore root location. The credential will be fetched from the metastore parent external location.

UC_METASTORE_STORAGE_ROOT_DELETION_INVALID​

SQLSTATE: 42501

Deletion of metastore storage root location failed. <reason>

UC_METASTORE_STORAGE_ROOT_READ_ONLY_INVALID​

SQLSTATE: 22023

The root <securableType> for a metastore cannot be read-only.

UC_METASTORE_STORAGE_ROOT_UPDATE_INVALID​

SQLSTATE: 22023

Metastore storage root cannot be updated once it is set.

UC_MODEL_INVALID_STATE​

SQLSTATE: 55000

Cannot generate temporary '<opName>' credentials for model version <modelVersion> with status <modelVersionStatus>. '<opName>' credentials can only be generated for model versions with status <validStatus>

UC_NO_ORG_ID_IN_CONTEXT​

SQLSTATE: 58000

Attempted to access org ID (or workspace ID), but context has none.

UC_ONLINE_CATALOG_NOT_MUTABLE​

SQLSTATE: none assigned

The <rpcName> request updates <fieldName>. Use the online store compute tab to modify anything other than comment, owner and isolationMode of an online catalog.

UC_ONLINE_CATALOG_QUOTA_EXCEEDED​

SQLSTATE: 53400

Cannot create more than <quota> online stores in the metastore, and there are already <currentCount>. You may not have access to existing online stores. Contact your metastore admin to be granted access or for further instructions.

UC_ONLINE_INDEX_CATALOG_INVALID_CRUD​

SQLSTATE: none assigned

Online index catalogs must be <action> via the /vector-search API.

UC_ONLINE_INDEX_CATALOG_NOT_MUTABLE​

SQLSTATE: none assigned

The <rpcName> request updates <fieldName>. Use the /vector-search API to modify anything other than comment, owner and isolationMode of an online index catalog.

UC_ONLINE_INDEX_CATALOG_QUOTA_EXCEEDED​

SQLSTATE: 53400

Cannot create more than <quota> online index catalogs in the metastore, and there are already <currentCount>. You may not have access to existing online index catalogs. Contact your metastore admin to be granted access or for further instructions.

UC_ONLINE_INDEX_INVALID_CRUD​

SQLSTATE: 0A000

Online indexes must be <action> via the /vector-search API.

UC_ONLINE_STORE_INVALID_CRUD​

SQLSTATE: none assigned

Online stores must be <action> via the online store compute tab.

UC_ONLINE_TABLE_COLUMN_NAME_TOO_LONG​

SQLSTATE: 54000

The source table column name <columnName> is too long. The maximum length is <maxLength> characters.

UC_ONLINE_TABLE_PRIMARY_KEY_COLUMN_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT​

SQLSTATE: 42P16

Column <columnName> cannot be used as a primary key column of the online table because it is not part of the existing PRIMARY KEY constraint of the source table. For details, please see <docLink>

UC_ONLINE_TABLE_TIMESERIES_KEY_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT​

SQLSTATE: 42P16

Column <columnName> cannot be used as a timeseries key of the online table because it is not a timeseries column of the existing PRIMARY KEY constraint of the source table. For details, please see <docLink>

UC_ONLINE_VIEWS_PER_SOURCE_TABLE_QUOTA_EXCEEDED​

SQLSTATE: 53400

Cannot create more than <quota> online table(s) per source table.

UC_ONLINE_VIEW_ACCESS_DENIED​

SQLSTATE: 0AKUC

Accessing resource <resourceName> requires use of a Serverless SQL warehouse. Please ensure the warehouse being used to execute a query or view a database catalog in the UI is serverless. For details, please see <docLink>

UC_ONLINE_VIEW_CONTINUOUS_QUOTA_EXCEEDED​

SQLSTATE: 53400

Cannot create more than <quota> continuous online views in the online store, and there are already <currentCount>. You may not have access to existing online views. Contact your online store admin to be granted access or for further instructions.

UC_ONLINE_VIEW_DOES_NOT_SUPPORT_DMK​

SQLSTATE: none assigned

<tableKind> cannot be created under a storage location with Databricks Managed Keys. Please choose a different schema or catalog in a storage location without Databricks Managed Keys encryption.

UC_ONLINE_VIEW_INVALID_CATALOG​

SQLSTATE: 42809

Catalog <catalogName> of kind <catalogKind> is invalid for creating <tableKind>. <tableKind> can only be created under catalogs of the following kinds: <validCatalogKinds>.

UC_ONLINE_VIEW_INVALID_SCHEMA​

SQLSTATE: 42809

Schema <schemaName> of kind <schemaKind> is invalid for creating <tableKind>. <tableKind> can only be created under schemas of the following kinds: <validSchemaKinds>.

UC_ONLINE_VIEW_INVALID_TTL_TIME_COLUMN_TYPE​

SQLSTATE: 42809

Column <columnName> of type <columnType> cannot be used as a TTL time column. Allowed types are <supportedTypes>.

UC_OPERATION_NOT_SUPPORTED​

SQLSTATE: 0AKUC

Operation not supported by Unity Catalog. <detailedMessage>

UC_OUT_OF_AUTHORIZED_PATHS_SCOPE​

SQLSTATE: 22023

Authorized Path Error. The <securableType> location <location> is not defined within the authorized paths for catalog: <catalogName>. Please ask the catalog owner to add the path to the list of authorized paths defined on the catalog.

UC_OVERLAPPED_AUTHORIZED_PATHS​

SQLSTATE: 22023

The 'authorized_paths' option contains overlapping paths: <overlappingPaths>. Ensure each path is unique and does not intersect with others in the list.
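
A minimal client-side pre-check, assuming two paths overlap exactly when one equals or is a parent directory of the other (a hypothetical helper, not the service's implementation):

```python
def paths_overlap(a: str, b: str) -> bool:
    # Treat overlap as: one path equals, or is a parent directory of, the other.
    a = a.rstrip("/") + "/"
    b = b.rstrip("/") + "/"
    return a.startswith(b) or b.startswith(a)

# 's3://bucket/data' overlaps 's3://bucket/data/raw' but not 's3://bucket/data2'.
assert paths_overlap("s3://bucket/data", "s3://bucket/data/raw")
assert not paths_overlap("s3://bucket/data", "s3://bucket/data2")
```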

UC_PAGINATION_AND_QUERY_ARGS_MISMATCH​

SQLSTATE: 22023

The query argument '<arg>' is set to '<received>', which differs from the value used in the first pagination call ('<expected>')

UC_PATH_FILTER_ALLOWLIST_VIOLATION​

SQLSTATE: 42501

The credential '<credentialName>' is a workspace default credential that is only allowed to access data in the following paths: '<allowlist>'. Please ensure that any path accessed using this credential is under one of these paths.

UC_PATH_FILTER_DENYLIST_VIOLATION​

SQLSTATE: 42501

The credential '<credentialName>' is a workspace default credential that cannot access data in the following restricted path: '<targetPath>'.

UC_PATH_TOO_LONG​

SQLSTATE: 54000

The input path is too long. Allowed length: <maxLength>. Input length: <inputLength>. Input: <path>...

UC_PER_METASTORE_DATABASE_CONCURRENCY_LIMIT_EXCEEDED​

SQLSTATE: 54000

Concurrency limit exceeded for metastore <metastoreId>. Please try again later. If the problem persists, please reach out to support. Error code #UC-<errorCodeArbitraryFiveLettersUpperCase>

UC_POSTGRESQL_ONLINE_VIEWS_PER_SOURCE_TABLE_QUOTA_EXCEEDED​

SQLSTATE: 53400

Cannot create more than <quota> synced database table(s) per source table.

UC_PRIMARY_KEY_ON_NULLABLE_COLUMN​

SQLSTATE: 42831

Cannot create the primary key <constraintName> because its child column(s) <childColumnNames> are nullable. Please change the column nullability and retry.
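
For example, a sketch that clears the nullability first and then retries the constraint (hypothetical table, column, and constraint names):

```python
# Make the child column NOT NULL, then add the primary key.
spark.sql("ALTER TABLE main.sales.customers ALTER COLUMN customer_id SET NOT NULL")
spark.sql(
    "ALTER TABLE main.sales.customers "
    "ADD CONSTRAINT customers_pk PRIMARY KEY (customer_id)"
)
```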

UC_REQUEST_TIMEOUT​

SQLSTATE: 57KD0

This operation took too long.

UC_ROOT_STORAGE_S3_BUCKET_NAME_CONTAINS_DOT​

SQLSTATE: 22KD1

Root storage S3 bucket names containing dots are not supported by Unity Catalog: <uri>

UC_SCHEMA_EMPTY_STORAGE_LOCATION​

SQLSTATE: 22KD1

Unexpected empty storage location for schema '<schemaName>' in catalog '<catalogName>'. Please make sure the schema uses a path scheme of <validPathSchemesListStr>.

UC_SERVERLESS_UNTRUSTED_DOMAIN_STORAGE_TOKEN_MINTING​

SQLSTATE: 42501

Serverless notebooks cannot retrieve temporary storage credentials from Unity Catalog.

UC_STORAGE_CREDENTIALS_WITH_EXTERNAL_LOCATION_DELETION_DENIED​

SQLSTATE: 42893

Storage credential '<credentialName>' has <extTablesCount> directly dependent external table(s) and <extLocationsCount> dependent storage location(s). You may use the force option to delete it anyway, but managed storage data using this storage credential can no longer be purged by Unity Catalog.

UC_STORAGE_CREDENTIALS_WITH_EXTERNAL_LOCATION_UPDATE_DENIED​

SQLSTATE: 42893

Storage credential '<credentialName>' has <extTablesCount> directly dependent external table(s) and <extLocationsCount> dependent storage location(s); use the force option to update anyway.

UC_STORAGE_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED​

SQLSTATE: 22KD1

Failed to parse the provided access connector ID: <accessConnectorId>. Please verify its formatting and try again.

UC_STORAGE_CREDENTIAL_DBFS_ROOT_CREATION_PERMISSION_DENIED​

SQLSTATE: 42501

Cannot create a storage credential for the DBFS root because user <userId> is not an admin of workspace <workspaceId>.

UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_DBFS_ENABLED​

SQLSTATE: 0AKUC

DBFS root storage credential is not yet supported for workspaces with Firewall-enabled DBFS

UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_NOT_SUPPORTED​

SQLSTATE: 0A000

DBFS root storage credential for current workspace is not yet supported

UC_STORAGE_CREDENTIAL_DBFS_ROOT_WORKSPACE_DISABLED​

SQLSTATE: 0A000

DBFS root is not enabled for workspace <workspaceId>

UC_STORAGE_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN​

SQLSTATE: 58000

Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.

UC_STORAGE_CREDENTIAL_INVALID_CLOUD_PERMISSIONS​

SQLSTATE: 42501

Registering a storage credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>. Please contact your account admin.

UC_STORAGE_CREDENTIAL_METASTORE_ROOT_DELETION_DENIED​

SQLSTATE: 55000

Storage credential '<credentialName>' cannot be deleted because it is configured as this metastore's root credential. Please update the metastore's root credential before attempting deletion.

UC_STORAGE_CREDENTIAL_PERMISSION_DENIED​

SQLSTATE: 42501

Only the account admin can create or update a storage credential with type <storageCredentialType>.

UC_STORAGE_CREDENTIAL_SERVICE_PRINCIPAL_MISSING_VALIDATION_TOKEN​

SQLSTATE: 42KDI

Missing validation token for service principal. Please provide a valid ARM-scoped Entra ID token in the 'X-Databricks-Azure-SP-Management-Token' request header and retry. For details, check https://docs.databricks.com/api/workspace/storagecredentials

UC_STORAGE_CREDENTIAL_TRUST_POLICY_IS_OPEN​

SQLSTATE: 22023

The trust policy of the IAM role to allow Databricks Account to assume the role should require an external id. Please contact your account admin to add the external id condition. This behavior is to guard against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

UC_STORAGE_CREDENTIAL_WASBS_NOT_DBFS_ROOT​

SQLSTATE: 22KD1

Location <location> is not inside the DBFS root, so the storage credential <storageCredentialName> cannot be created.

UC_STORAGE_CREDENTIAL_WORKSPACE_API_PROHIBITED​

SQLSTATE: 0AKD1

Creating or updating a storage credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.

UC_STORAGE_CREDENTIAL_WS_BUCKET_INVALID_LOCATION​

SQLSTATE: 22KD1

Location <requestedLocation> is not inside the allowed context <allowedLocation>

UC_STORAGE_CREDENTIAL_WS_INTERNAL_CATALOG_NOT_SUPPORTED​

SQLSTATE: 56038

Workspace internal catalog storage credential for current workspace is not yet supported

UC_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED​

SQLSTATE: 22023

Cannot grant privileges on <securableType> to system generated group <principal>.

UC_TABLE_IS_NOT_CATALOG_OWNED​

SQLSTATE: 0A000

Request to perform commit/getCommits for table '<tableId>' requires enabling the Catalog-owned feature on the table.

UC_TAG_ASSIGNMENT_WITH_KEY_DOES_NOT_EXIST​

SQLSTATE: 42704

Tag assignment with tag key <tagKey> does not exist

UC_TEMPORARY_CREDENTIAL_OPERATION_NOT_SUPPORTED​

SQLSTATE: 0A000

Temporary credential operation is not supported.

UC_UNDROP_RESOURCE_ID_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot undrop '<resourceType>' because a '<resourceType>' with id '<resourceId>' already exists.

UC_UNDROP_RESOURCE_NAME_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot undrop '<resourceType>' because a '<resourceType>' with name '<resourceName>' already exists.

UC_UNDROP_RESOURCE_NOT_READY​

SQLSTATE: 55000

Cannot undrop '<resourceType>' because the '<resourceType>' with id '<resourceId>' is not ready to be restored, please retry later.

UC_UNDROP_RESOURCE_PAST_RECOVERABLE_WINDOW​

SQLSTATE: 55019

Cannot undrop '<resourceType>' because the '<resourceType>' with id '<resourceId>' is beyond the supported restoration period of '<maxRestorationPeriodDay>' days.

UC_UNIQUE_CONSTRAINT_CHILD_COLUMNS_ALREADY_EXIST​

SQLSTATE: 42891

Failed to create UNIQUE constraint <newConstraintName>: table <tableName> already has a UNIQUE constraint: <existingConstraintName>, that has the same set of child columns.

UC_UNIQUE_CONSTRAINT_NOT_SUPPORTED​

SQLSTATE: 56038

Failed to operate with UNIQUE constraint <constraintName>: UNIQUE constraint is not enabled.

UC_UNSUPPORTED_HTTP_CONNECTION_BASE_PATH​

SQLSTATE: 22KD1

Invalid base path provided, base path should be something like /api/resources/v1. Unsupported path: <path>

UC_UNSUPPORTED_HTTP_CONNECTION_HOST​

SQLSTATE: 22KD1

Invalid host name provided, host name should be something like https://www.databricks.com without path suffix. Unsupported host: <host>

UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH​

SQLSTATE: 22KD1

Only basic Latin/Latin-1 ASCII characters are supported in external location, volume, and table paths. Unsupported path: <path>

UC_UPDATE_FORBIDDEN_FOR_PROVISIONING_SECURABLE​

SQLSTATE: 55019

Cannot update <securableType> '<securableName>' because it is being provisioned.

UC_WRITE_CONFLICT​

SQLSTATE: 40001

The <type> <name> has been modified by another request. Please retry the operation.

UNITY_CATALOG_EXTERNAL_COORDINATED_COMMITS_REQUEST_DENIED​

SQLSTATE: 42517

Request to perform commit/getCommits for table '<tableId>' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_CREATE_STAGING_TABLE_REQUEST_DENIED​

SQLSTATE: 42517

Request to create staging table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_FAILED_VALIDATION_COLUMN_MASKS_DEFINED​

SQLSTATE: 42517

Request to create table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is denied. Column masks are not supported when creating tables externally.

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_FAILED_VALIDATION_DATA_SOURCE_FORMAT_NOT_DELTA​

SQLSTATE: 42517

Request to create table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is denied. Data source format must be DELTA to create table externally, instead it is '<dataSourceFormat>'.

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_FAILED_VALIDATION_TABLE_TYPE_NOT_EXTERNAL​

SQLSTATE: 42517

Request to create table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is denied. Table type must be EXTERNAL to create table externally, instead it is '<tableType>'.

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_INFO_CONTAINS_INVALID_FIELDS​

SQLSTATE: 22023

External table creation only allows these fields: [name, catalogName, schemaName, tableType, dataSourceFormat, columnInfos, storageLocation, propertiesKvpairs].

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_REQUEST_FOR_NON_EXTERNAL_TABLE_DENIED​

SQLSTATE: 42517

Request to create non-external table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_DELETE_TABLE_REQUEST_FOR_NON_EXTERNAL_TABLE_DENIED​

SQLSTATE: 42517

Request to delete non-external table '<tableFullName>' from outside of Databricks Unity Catalog enabled compute environment is not supported.

UNITY_CATALOG_EXTERNAL_GENERATE_PATH_CREDENTIALS_DENIED​

SQLSTATE: 42517

Request to generate access credential for path '<path>' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_GENERATE_TABLE_CREDENTIALS_DENIED​

SQLSTATE: 42517

Request to generate access credential for table '<tableId>' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_GENERATE_TEMP_PATH_CRED_FAILED_VALIDATION_CREDENTIAL_ID_DEFINED​

SQLSTATE: 42517

Request to generate temporary path credential from outside of Databricks Unity Catalog enabled compute environment is denied. Credential ID must not be defined when generating temporary path credentials externally.

UNITY_CATALOG_EXTERNAL_GENERATE_TEMP_PATH_CRED_FAILED_VALIDATION_MAKE_PATH_ONLY_PARENT_DEFINED​

SQLSTATE: 42517

Request to generate temporary path credential from outside of Databricks Unity Catalog enabled compute environment is denied. make_path_only_parent must not be defined when generating temporary path credentials externally.

UNITY_CATALOG_EXTERNAL_GET_FOREIGN_CREDENTIALS_DENIED​

SQLSTATE: 42517

Request to get foreign credentials for securables from outside of Databricks Unity Catalog enabled compute environment is denied for security.

UNITY_CATALOG_EXTERNAL_UPDATA_METADATA_SNAPSHOT_DENIED​

SQLSTATE: 42517

Request to update metadata snapshots from outside of Databricks Unity Catalog enabled compute environment is denied for security.

WRITE_CREDENTIALS_NOT_SUPPORTED_FOR_LEGACY_MANAGED_ONLINE_TABLE​

SQLSTATE: 0AKUC

Invalid request to get write credentials for managed online table in an online catalog.

Files API​

FILES_API_API_IS_NOT_ENABLED​

SQLSTATE: none assigned

<api_name> API is not enabled

FILES_API_API_IS_NOT_ENABLED_FOR_CLOUD_PATHS​

SQLSTATE: none assigned

Requested method of Files API is not supported for cloud paths

FILES_API_AWS_ACCESS_DENIED​

SQLSTATE: none assigned

Access to the storage bucket is denied by AWS.

FILES_API_AWS_ALL_ACCESS_DISABLED​

SQLSTATE: none assigned

All access to the storage bucket has been disabled in AWS.

FILES_API_AWS_BUCKET_DOES_NOT_EXIST​

SQLSTATE: none assigned

The storage bucket does not exist in AWS.

FILES_API_AWS_FORBIDDEN​

SQLSTATE: none assigned

Access to the storage bucket is forbidden by AWS.

SQLSTATE: none assigned

The workspace is misconfigured: it must be in the same region as the AWS workspace root storage bucket.

FILES_API_AWS_INVALID_BUCKET_NAME​

SQLSTATE: none assigned

The storage bucket name is invalid.

FILES_API_AWS_KMS_KEY_DISABLED​

SQLSTATE: none assigned

The configured KMS keys to access the storage bucket are disabled in AWS.

FILES_API_AWS_UNAUTHORIZED​

SQLSTATE: none assigned

Access to AWS resource is unauthorized.

FILES_API_AZURE_ACCOUNT_IS_DISABLED​

SQLSTATE: none assigned

The storage account is disabled in Azure.

FILES_API_AZURE_AUTHORIZATION_PERMISSION_MISMATCH​

SQLSTATE: none assigned

The authorization permission mismatch.

FILES_API_AZURE_CONTAINER_DOES_NOT_EXIST​

SQLSTATE: none assigned

The Azure container does not exist.

FILES_API_AZURE_FORBIDDEN​

SQLSTATE: none assigned

Access to the storage container is forbidden by Azure.

FILES_API_AZURE_HAS_A_LEASE​

SQLSTATE: none assigned

Azure responded that there is currently a lease on the resource. Try again later.

FILES_API_AZURE_INSUFFICIENT_ACCOUNT_PERMISSION​

SQLSTATE: none assigned

The account being accessed does not have sufficient permissions to execute this operation.

FILES_API_AZURE_INVALID_STORAGE_ACCOUNT_CONFIGURATION​

SQLSTATE: none assigned

The configuration of the account being accessed is not supported.

FILES_API_AZURE_INVALID_STORAGE_ACCOUNT_NAME​

SQLSTATE: none assigned

Cannot access storage account in Azure: invalid storage account name.

FILES_API_AZURE_KEY_BASED_AUTHENTICATION_NOT_PERMITTED​

SQLSTATE: none assigned

Key-based authentication is not permitted for the storage account. Check your storage account settings.

FILES_API_AZURE_KEY_VAULT_KEY_NOT_FOUND​

SQLSTATE: none assigned

The Azure key vault key is not found in Azure. Check your customer-managed keys settings.

FILES_API_AZURE_KEY_VAULT_VAULT_NOT_FOUND​

SQLSTATE: none assigned

The key vault vault is not found in Azure. Check your customer-managed keys settings.

FILES_API_AZURE_MI_ACCESS_CONNECTOR_NOT_FOUND​

SQLSTATE: none assigned

Azure Managed Identity Credential with Access Connector not found. This could be because IP access controls rejected your request.

FILES_API_AZURE_OPERATION_TIMEOUT​

SQLSTATE: none assigned

The operation could not be completed within the permitted time.

FILES_API_AZURE_PATH_INVALID​

SQLSTATE: none assigned

The requested path is not valid for Azure.

FILES_API_AZURE_PATH_IS_IMMUTABLE​

SQLSTATE: none assigned

The requested path is immutable.

SQLSTATE: none assigned

One of the headers specified in operation is not supported.

FILES_API_CANNOT_PARSE_URL_PARAMETER​

SQLSTATE: none assigned

The URL parameter cannot be parsed.

FILES_API_CATALOG_NOT_FOUND​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_CLOUD_RESOURCE_EXHAUSTED​

SQLSTATE: none assigned

<message>

FILES_API_CLOUD_STORAGE_PROVIDER_CONNECTION_ISSUE​

SQLSTATE: none assigned

There's a connection issue reaching <cloud_storage_provider>. Please try again later.

FILES_API_COLON_IS_NOT_SUPPORTED_IN_PATH​

SQLSTATE: none assigned

The ':' character is not supported in paths.

FILES_API_CONSUMER_NETWORK_ZONE_NOT_ALLOWED​

SQLSTATE: none assigned

Consumer network zone "<consumer_network_zone>" is not allowed from requester network zone "<requester_network_zone>".

FILES_API_CONTROL_PLANE_NETWORK_ZONE_NOT_ALLOWED​

SQLSTATE: none assigned

Databricks Control plane network zone not allowed.

FILES_API_CREDENTIAL_NOT_FOUND​

SQLSTATE: none assigned

The credential was not found.

FILES_API_DIRECTORIES_CANNOT_HAVE_BODIES​

SQLSTATE: none assigned

A body was provided, but directories cannot have a file body.

FILES_API_DIRECTORY_IS_NOT_EMPTY​

SQLSTATE: none assigned

The directory is not empty. This operation is not supported on non-empty directories.

FILES_API_DIRECTORY_IS_NOT_FOUND​

SQLSTATE: none assigned

The directory being accessed is not found.

FILES_API_DMK_ENCRYPTION_ROOT_KEY_DISABLED​

SQLSTATE: none assigned

The root key for the customer-managed encryption is disabled.

SQLSTATE: none assigned

The request contained multiple copies of a header that is only allowed once.

FILES_API_DUPLICATE_QUERY_PARAMETER​

SQLSTATE: none assigned

Query parameter '<parameter_name>' must be present exactly once but was provided multiple times.

FILES_API_EMPTY_BUCKET_NAME​

SQLSTATE: none assigned

The DBFS bucket name is empty.

FILES_API_ENCRYPTION_KEY_PERMISSION_DENIED​

SQLSTATE: none assigned

User does not have access to the encryption key.

FILES_API_ENCRYPTION_KEY_RETRIEVAL_OPERATION_TIMEOUT​

SQLSTATE: none assigned

The operation to retrieve the encryption key could not be completed within the permitted time.

FILES_API_ENTITY_TOO_LARGE​

SQLSTATE: none assigned

Your object exceeds the maximum allowed object size.

FILES_API_ERROR_EXPIRED_TTL​

SQLSTATE: none assigned

The TTL has expired.

FILES_API_ERROR_INVALID_TTL​

SQLSTATE: none assigned

The TTL is invalid.

FILES_API_ERROR_KEY_FOR_WORKSPACE_IS_NOT_FOUND​

SQLSTATE: none assigned

The key for the workspace is not found.

FILES_API_ERROR_MISSING_REQUIRED_PARAM​

SQLSTATE: none assigned

The URL is missing a required parameter.

FILES_API_ERROR_TTL_IN_THE_FUTURE​

SQLSTATE: none assigned

The TTL is in the future.

FILES_API_ERROR_URL_INVALID_ISSUER_SHARD_NAME​

SQLSTATE: none assigned

The issuer shard name is invalid.

FILES_API_EXPIRATION_TIME_MUST_BE_PRESENT​

SQLSTATE: none assigned

Expiration time must be present.

FILES_API_EXPIRED_TOKEN​

SQLSTATE: none assigned

The provided token has expired.

FILES_API_EXPIRE_TIME_MUST_BE_IN_THE_FUTURE​

SQLSTATE: none assigned

ExpireTime must be in the future

FILES_API_EXPIRE_TIME_TOO_FAR_IN_FUTURE​

SQLSTATE: none assigned

Requested TTL is longer than supported (1 hour)

FILES_API_EXTERNAL_LOCATION_PATH_OVERLAP_OTHER_UC_STORAGE_ENTITY​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_FILE_ALREADY_EXISTS​

SQLSTATE: none assigned

The file being created already exists.

FILES_API_FILE_NOT_FOUND​

SQLSTATE: none assigned

The file being accessed is not found.

FILES_API_FILE_OR_DIRECTORY_ENDS_IN_DOT​

SQLSTATE: none assigned

Files or directories ending in the '.' character are not supported.

FILES_API_FILE_SIZE_EXCEEDED​

SQLSTATE: none assigned

File size shouldn't exceed <max_download_size_in_bytes> bytes, but <size_in_bytes> bytes were found.

FILES_API_GCP_ACCOUNT_IS_DISABLED​

SQLSTATE: none assigned

Access to the storage bucket has been disabled in GCP.

FILES_API_GCP_BUCKET_DOES_NOT_EXIST​

SQLSTATE: none assigned

The storage bucket does not exist in GCP.

FILES_API_GCP_FORBIDDEN​

SQLSTATE: none assigned

Access to the bucket is forbidden by GCP.

FILES_API_GCP_KEY_DISABLED_OR_DESTROYED​

SQLSTATE: none assigned

The customer-managed encryption key configured for that location is either disabled or destroyed.

FILES_API_GCP_REQUEST_IS_PROHIBITED_BY_POLICY​

SQLSTATE: none assigned

Requests to the bucket are prohibited by GCP policy; check your VPC Service Controls.

FILES_API_HOST_TEMPORARILY_NOT_AVAILABLE​

SQLSTATE: none assigned

Cloud provider host is temporarily not available; please try again later.

FILES_API_INVALID_CONTENT_LENGTH​

SQLSTATE: none assigned

The value of the content-length header must be an integer greater than or equal to 0.

FILES_API_INVALID_CONTINUATION_TOKEN​

SQLSTATE: none assigned

The provided page token is not valid.

FILES_API_INVALID_HOSTNAME​

SQLSTATE: none assigned

The hostname is invalid.

FILES_API_INVALID_HTTP_METHOD​

SQLSTATE: none assigned

Invalid http method. Expected '<expected>' but got '<actual>'.

SQLSTATE: none assigned

The metastore ID header is invalid.

FILES_API_INVALID_PAGE_TOKEN​

SQLSTATE: none assigned

Invalid page token.

FILES_API_INVALID_PATH​

SQLSTATE: none assigned

Invalid path: <validation_error>

FILES_API_INVALID_RANGE​

SQLSTATE: none assigned

The range header is invalid.

FILES_API_INVALID_RESOURCE_FULL_NAME​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_INVALID_SESSION_TOKEN​

SQLSTATE: none assigned

Invalid session token

FILES_API_INVALID_SESSION_TOKEN_TYPE​

SQLSTATE: none assigned

Invalid session token type. Expected '<expected>' but got '<actual>'.

FILES_API_INVALID_TIMESTAMP​

SQLSTATE: none assigned

The timestamp is invalid.

FILES_API_INVALID_UPLOAD_TYPE​

SQLSTATE: none assigned

Invalid upload type. Expected '<expected>' but got '<actual>'.

FILES_API_INVALID_URL​

SQLSTATE: none assigned

Invalid URL

FILES_API_INVALID_URL_PARAMETER​

SQLSTATE: none assigned

The URL passed as parameter is invalid

FILES_API_INVALID_VALUE_FOR_OVERWRITE_QUERY_PARAMETER​

SQLSTATE: none assigned

Query parameter 'overwrite' must be one of: true,false but was: <got_values>

FILES_API_INVALID_VALUE_FOR_QUERY_PARAMETER​

SQLSTATE: none assigned

Query parameter '<parameter_name>' must be one of: <expected> but was: <actual>

FILES_API_MALFORMED_REQUEST_BODY​

SQLSTATE: none assigned

Malformed request body

FILES_API_MANAGED_CATALOG_FEATURE_DISABLED​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_METASTORE_NOT_FOUND​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_METHOD_IS_NOT_ENABLED_FOR_JOBS_BACKGROUND_COMPUTE_ARTIFACT_STORAGE​

SQLSTATE: none assigned

Requested method of Files API is not supported for Jobs Background Compute Artifact Storage.

FILES_API_MISSING_CONTENT_LENGTH​

SQLSTATE: none assigned

The content-length header is required in the request.

FILES_API_MISSING_QUERY_PARAMETER​

SQLSTATE: none assigned

Query parameter '<parameter_name>' is required but is missing from the request.

FILES_API_MISSING_REQUIRED_PARAMETER_IN_REQUEST​

SQLSTATE: none assigned

The request is missing a required parameter.

FILES_API_MLFLOW_PERMISSION_DENIED​

SQLSTATE: none assigned

<mlflow_error_message>

FILES_API_MODEL_VERSION_IS_NOT_READY​

SQLSTATE: none assigned

Model version is not ready yet

FILES_API_MULTIPART_UPLOAD_ABORT_PRESIGNED_URL_NOT_SUPPORTED​

SQLSTATE: none assigned

Presigned URLs for aborting multipart uploads are not supported for files stored on <cloud_storage_provider>.

FILES_API_MULTIPART_UPLOAD_EMPTY_PART_LIST​

SQLSTATE: none assigned

The list of parts must have at least one element but was empty.

FILES_API_MULTIPART_UPLOAD_INVALID_PART​

SQLSTATE: none assigned

One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.

FILES_API_MULTIPART_UPLOAD_INVALID_PART_NUMBER​

SQLSTATE: none assigned

Part numbers must be greater than or equal to <min> and smaller than or equal to <max>.

FILES_API_MULTIPART_UPLOAD_MISSING_ETAG​

SQLSTATE: none assigned

ETags must be provided for files stored on <cloud_storage_provider>. At least one ETag was missing or empty.

FILES_API_MULTIPART_UPLOAD_MODIFIED_STORAGE_ENTITY_STATE​

SQLSTATE: none assigned

The internal state of the storage entity has been modified since the upload was initiated, e.g. because the file path no longer points to the same underlying cloud storage location. Proceed by initiating a new upload session.

FILES_API_MULTIPART_UPLOAD_NON_TRAILING_PARTS_WITH_DIFFERENT_SIZES​

SQLSTATE: none assigned

The parts uploaded as part of a multipart upload session must be of the same size for files stored on <cloud_storage_provider>, except for the last part which can be smaller.

FILES_API_MULTIPART_UPLOAD_PART_SIZE_OUT_OF_RANGE​

SQLSTATE: none assigned

The size of the parts uploaded as part of a multipart upload session must be greater than or equal to <min> and smaller than or equal to <max>.

FILES_API_MULTIPART_UPLOAD_SESSION_NOT_FOUND​

SQLSTATE: none assigned

The upload session is not found. It may have been aborted or completed.

FILES_API_MULTIPART_UPLOAD_UNORDERED_PART_LIST​

SQLSTATE: none assigned

The list of parts must be ordered by part number but was unordered.
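
For instance, a client-side sketch that sorts collected part records before sending the completion request (the record shape here is hypothetical, standing in for whatever your upload loop tracks):

```python
# Hypothetical part records collected while uploading.
parts = [
    {"part_number": 2, "etag": "etag-2"},
    {"part_number": 1, "etag": "etag-1"},
]
# Sort ascending by part number before completing the upload.
parts.sort(key=lambda part: part["part_number"])
```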

FILES_API_NON_ZERO_CONTENT_LENGTH​

SQLSTATE: none assigned

The content-length header must be zero for this request.

FILES_API_NOT_ENABLED_FOR_PLACE​

SQLSTATE: none assigned

Files API for <place> is not enabled for this workspace/account

FILES_API_NOT_SUPPORTED_FOR_INTERNAL_WORKSPACE_STORAGE​

SQLSTATE: none assigned

Requested method of Files API is not supported for Internal Workspace Storage

FILES_API_OPERATION_MUST_BE_PRESENT​

SQLSTATE: none assigned

Operation must be present.

FILES_API_OPERATION_TIMEOUT​

SQLSTATE: none assigned

The operation timed out.

FILES_API_PAGE_SIZE_MUST_BE_GREATER_OR_EQUAL_TO_ZERO​

SQLSTATE: none assigned

page_size must be greater than or equal to 0

FILES_API_PATH_BASED_ACCESS_TO_TABLE_WITH_FILTER_NOT_ALLOWED​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_PATH_END_WITH_A_SLASH​

SQLSTATE: none assigned

Paths ending in the '/' character represent directories. This API does not support operations on directories.

FILES_API_PATH_IS_A_DIRECTORY​

SQLSTATE: none assigned

The given path points to an existing directory. This API does not support operations on directories.

FILES_API_PATH_IS_A_FILE​

SQLSTATE: none assigned

The given path points to an existing file. This API does not support operations on files.

FILES_API_PATH_IS_NOT_A_VALID_UTF8_ENCODED_URL​

SQLSTATE: none assigned

The given path is not a valid UTF-8 encoded URL.

FILES_API_PATH_IS_NOT_ENABLED_FOR_DATAPLANE_PROXY​

SQLSTATE: none assigned

The given path is not enabled for data plane proxy.

FILES_API_PATH_MUST_BE_PRESENT​

SQLSTATE: none assigned

Path must be present.

FILES_API_PATH_NOT_SUPPORTED​

SQLSTATE: none assigned

<rejection_message>

FILES_API_PATH_TOO_LONG​

SQLSTATE: none assigned

Provided file path is too long.

FILES_API_PRECONDITION_FAILED​

SQLSTATE: none assigned

The request failed due to a precondition.

FILES_API_PRESIGNED_URLS_FOR_MODELS_NOT_SUPPORTED​

SQLSTATE: none assigned

Presigned URLs for models are not supported by the Files API at the moment.

FILES_API_R2_CREDENTIALS_DISABLED​

SQLSTATE: none assigned

R2 is unsupported at the moment.

FILES_API_RANGE_NOT_SATISFIABLE​

SQLSTATE: none assigned

The range requested is not satisfiable.

FILES_API_RECURSIVE_LIST_IS_NOT_SUPPORTED​

SQLSTATE: none assigned

Recursively listing files is not supported.

FILES_API_REQUESTER_NETWORK_ZONE_UNKNOWN​

SQLSTATE: none assigned

Can't infer requester network zone.

FILES_API_REQUEST_GOT_ROUTED_INCORRECTLY​

SQLSTATE: none assigned

Request got routed incorrectly

FILES_API_REQUEST_MUST_INCLUDE_ACCOUNT_INFORMATION​

SQLSTATE: none assigned

Request must include account information

FILES_API_REQUEST_MUST_INCLUDE_RECIPIENT_INFORMATION​

SQLSTATE: none assigned

Request must include recipient information.

FILES_API_REQUEST_MUST_INCLUDE_USER_INFORMATION​

SQLSTATE: none assigned

Request must include user information

FILES_API_REQUEST_MUST_INCLUDE_WORKSPACE_INFORMATION​

SQLSTATE: none assigned

Request must include workspace information

FILES_API_RESOURCE_IS_READONLY​

SQLSTATE: none assigned

Resource is read-only.

FILES_API_RESOURCE_NOT_FOUND​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_SCHEMA_NOT_FOUND​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_SECURE_URL_CANT_BE_ACCESSED​

SQLSTATE: none assigned

The URL can't be accessed.

FILES_API_SIGNATURE_VERIFICATION_FAILED​

SQLSTATE: none assigned

The signature verification failed.

FILES_API_STORAGE_ACCESS_CONTEXT_INVALID​

SQLSTATE: none assigned

Storage access context is invalid.

FILES_API_STORAGE_CONTEXT_IS_NOT_SET​

SQLSTATE: none assigned

Storage configuration for this workspace is not accessible.

FILES_API_STORAGE_CREDENTIAL_NOT_FOUND​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_TABLE_TYPE_NOT_SUPPORTED​

SQLSTATE: none assigned

Files API is not supported for <table_type>

FILES_API_UC_AUTHENTICATION_FAILURE​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_IAM_ROLE_NON_SELF_ASSUMING​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_MODEL_INVALID_STATE​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_PERMISSION_DENIED​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_RESOURCE_EXHAUSTED​

SQLSTATE: none assigned

<message>

FILES_API_UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_VOLUME_NAME_CHANGED​

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UNABLE_TO_RENEW_AZURE_MANAGED_IDENTITIES​

SQLSTATE: none assigned

Unable to renew Azure managed identity.

FILES_API_UNEXPECTED_ERROR_WHILE_PARSING_URI​

SQLSTATE: none assigned

Unexpected error when parsing the URI

FILES_API_UNEXPECTED_QUERY_PARAMETERS​

SQLSTATE: none assigned

Unexpected query parameters: <unexpected_query_parameters>

FILES_API_UNKNOWN_METHOD​

SQLSTATE: none assigned

Unknown method <method>

FILES_API_UNKNOWN_SERVER_ERROR​

SQLSTATE: none assigned

Unknown server error.

FILES_API_UNKNOWN_URL_HOST​

SQLSTATE: none assigned

The URL host is unknown.

FILES_API_UNSUPPORTED_AUTHENTICATION_METHOD​

SQLSTATE: none assigned

The request was not authenticated correctly.

FILES_API_UNSUPPORTED_HTTP_METHOD​

SQLSTATE: none assigned

The httpMethod is not supported.

FILES_API_UNSUPPORTED_PARAMETERS_COMBINATION​

SQLSTATE: none assigned

The combination of parameters is not supported.

FILES_API_UNSUPPORTED_PATH​

SQLSTATE: none assigned

The provided path is not supported by the Files API. Make sure the provided path does not contain instances of '../' or './' sequences. Make sure the provided path does not use multiple consecutive slashes (e.g. '///').
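
A minimal client-side pre-check mirroring the rules in this message (a hypothetical helper; the server remains authoritative):

```python
def is_supported_path(path: str) -> bool:
    # Reject consecutive slashes and '.' or '..' segments, per the rules above.
    if "//" in path:
        return False
    return all(segment not in (".", "..") for segment in path.split("/"))

assert is_supported_path("/Volumes/main/default/vol/data.csv")
assert not is_supported_path("/Volumes/main/../default/vol/data.csv")
assert not is_supported_path("/Volumes//main/default/vol")
```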

FILES_API_UPLOAD_PART_URLS_COUNT_MUST_BE_GREATER_THAN_ZERO​

SQLSTATE: none assigned

Input parameter 'count' must be greater than 0.

FILES_API_URL_GENERATION_DISABLED​

SQLSTATE: none assigned

Presigned URL generation is not enabled for <cloud>.

FILES_API_VOLUME_TYPE_NOT_SUPPORTED​

SQLSTATE: none assigned

Files API is not supported for <volume_type>.

FILES_API_WORKSPACE_IS_CANCELED​

SQLSTATE: none assigned

The workspace has been canceled.

FILES_API_WORKSPACE_IS_NOT_FOUND​

SQLSTATE: none assigned

Storage configuration for this workspace is not accessible.

Lakeflow Declarative Pipelines​

ABAC_POLICIES_NOT_SUPPORTED​

SQLSTATE: 0A000

ABAC policies are not supported on tables defined within a pipeline.

Remove the policies or contact Databricks support.

ACTIVE_UPDATE_EXISTS_FOR_LINKED_PIPELINE​

SQLSTATE: 42000

An active update '<updateId>' already exists for linked pipeline '<linkedPipelineId>'.

ACTIVE_UPDATE_EXISTS_FOR_PIPELINE​

SQLSTATE: 42000

An active update '<updateId>' already exists for pipeline '<pipelineId>'.

ALTER_NOT_ALLOWED_FOR_PIPELINE_TABLE​

SQLSTATE: 0A000

ALTER is not allowed on tables managed by a pipeline.

For more details see ALTER_NOT_ALLOWED_FOR_PIPELINE_TABLE

ALTER_SCHEDULE_SCHEDULE_DOES_NOT_EXIST​

SQLSTATE: 42704

Cannot alter <type> on a table without an existing schedule or trigger. Please add a schedule or trigger to the table before attempting to alter it.

API_NOT_SUPPORTED_FOR_WORKSPACE_PIPELINE​

SQLSTATE: 42000

<apiName> is not supported for pipelines in Workspace.

API_QUOTA_EXCEEDED​

SQLSTATE: KD000

You have exceeded the API quota for the data source <sourceName>.

For more details see API_QUOTA_EXCEEDED

APPLY_CHANGES_ERROR​

SQLSTATE: 42000

An error occurred during the AUTO CDC operation.

For more details see APPLY_CHANGES_ERROR

APPLY_CHANGES_FROM_SNAPSHOT_ERROR​

SQLSTATE: 22000

An error occurred during the AUTO CDC FROM SNAPSHOT operation.

For more details see APPLY_CHANGES_FROM_SNAPSHOT_ERROR

APPLY_CHANGES_FROM_SNAPSHOT_EXPECTATIONS_NOT_SUPPORTED​

SQLSTATE: 0A000

Dataset <datasetName> has expectations defined, but expectations are not currently supported for datasets using AUTO CDC FROM SNAPSHOT.

Remove the expectations to resolve this error. As an alternative, consider using the following structure to apply expectations by combining AUTO CDC and AUTO CDC FROM SNAPSHOT (a sketch follows this list):

  1. Apply changes from the snapshot using SCD type 1 to an intermediate table without expectations.

  2. Read changes from the intermediate table using spark.readStream.option("readChangeFeed", "true").table.

  3. Apply changes from the intermediate table to the final target table using dlt.create_auto_cdc_flow, with the parameters currently used with dlt.create_auto_cdc_flow_from_snapshot, and include the current set of expectations on the final target table.
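
A minimal Python sketch of that structure, using hypothetical table names, a hypothetical key column, and a hypothetical expectation. The dlt entry points (create_auto_cdc_flow, create_auto_cdc_flow_from_snapshot) come from the message above; the remaining argument names are assumptions to check against your runtime's dlt documentation:

```python
import dlt

# Step 1: intermediate table fed by AUTO CDC FROM SNAPSHOT, SCD type 1, no expectations.
dlt.create_streaming_table(name="intermediate")  # hypothetical name

dlt.create_auto_cdc_flow_from_snapshot(
    target="intermediate",
    source="snapshot_source",  # hypothetical snapshot source
    keys=["id"],               # hypothetical primary key column
    stored_as_scd_type=1,
)

# Step 2: read the change feed of the intermediate table.
@dlt.view
def intermediate_changes():
    return (
        spark.readStream.option("readChangeFeed", "true")
        .table("my_catalog.my_schema.intermediate")  # hypothetical qualified name
    )

# Step 3: the final target carries the expectations; AUTO CDC applies the changes.
dlt.create_streaming_table(
    name="final_target",                                # hypothetical name
    expect_all_or_drop={"valid_id": "id IS NOT NULL"},  # hypothetical expectation
)

dlt.create_auto_cdc_flow(
    target="final_target",
    source="intermediate_changes",
    keys=["id"],
    sequence_by="_commit_version",  # ordering column exposed by the change feed
    stored_as_scd_type=1,
)
```
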
APPLY_CHANGES_PARTIAL_UPDATES_NOT_ENABLED​

SQLSTATE: 0A000

Feature COLUMNS TO UPDATE is in preview and not enabled for your workspace. Please reach out to your Databricks representative to learn more about this feature and access the private preview.

ARCHITECTURE_MIGRATION_FAILURE​

SQLSTATE: 55000

For more details see ARCHITECTURE_MIGRATION_FAILURE

ARCLIGHT_REQUIRES_SERVERLESS​

SQLSTATE: 42000

Pipelines targeting catalogs using Default Storage must use serverless compute. If you don't have access to serverless compute, please contact Databricks to enable this feature for your workspace.

BIGQUERY_DATA_CONNECTOR_SOURCE_CATALOG_MISSING​

SQLSTATE: KD000

An error occurred in the BigQuery data connector.

The 'source catalog' (GCP Project ID) is missing in your ingestion specification.

Please specify the source catalog (project ID) in your ingestion specification to resolve this issue.

CANNOT_ADD_COLUMN_TO_INCLUDE_COLUMNS​

SQLSTATE: 42000

New columns (<columnNames>) are added to include_columns field in pipeline definition for the table <qualifiedTableName>. Please full refresh the table to avoid potential data loss or remove these columns from include_columns.

CANNOT_ADD_STORAGE_LOCATION​

SQLSTATE: 42000

Cannot add a storage location to an existing pipeline with a defined catalog. If you want to set a storage location, create a new pipeline.

Existing catalog: '<catalog>'.

Requested storage location: <storageLocation>.

CANNOT_COMPOSE_DECORATOR​

SQLSTATE: none assigned

The @<decoratorName> decorator cannot be composed with <otherDecorators>.

CANNOT_EXCLUDE_NON_EXISTENT_COLUMN​

SQLSTATE: 42703

Pipeline definition excludes columns (<columnNames>) that do not exist in the table <qualifiedTableName> of source <sourceType>. Please remove these columns from the exclude_columns field.

CANNOT_FILTER_OUT_REQUIRED_COLUMN​

SQLSTATE: 42000

Pipeline definition does not include required columns (<columnNames>) in the table <qualifiedTableName> of source <sourceType> for ingestion. Please add them to include_columns or remove them from exclude_columns.

CANNOT_INCLUDE_NON_EXISTENT_COLUMN​

SQLSTATE: 42703

Pipeline definition includes columns (<columnNames>) that do not exist in the table <qualifiedTableName> of source <sourceType>. Please remove these columns from the include_columns field.

CANNOT_INGEST_TABLE_WITHOUT_PRIMARY_KEY​

SQLSTATE: 42000

The table <qualifiedTableName> in the <sourceType> source doesn't have a primary key.

If the table has a primary key, please specify it in the connector configuration to ingest the table.

CANNOT_MODIFY_PIPELINE_OWNER_FOR_UC_PIPELINES​

SQLSTATE: 42000

Changing owner for non-UC pipelines is not supported yet.

CANNOT_MODIFY_PIPELINE_OWNER_PERMISSION_DENIED​

SQLSTATE: 42000

Only workspace admins can change pipeline owner.

CANNOT_MODIFY_PIPELINE_OWNER_WHEN_MISSING​

SQLSTATE: 42000

New owner does not exist.

CANNOT_MODIFY_PIPELINE_TYPE​

SQLSTATE: 42000

pipeline_type cannot be updated.

Current pipeline_type: <currentPipelineType>.

Updated pipeline_type: <requestedPipelineType>.

CANNOT_MODIFY_STORAGE_LOCATION​

SQLSTATE: 42000

Cannot modify storage location of an existing pipeline.

Existing storage location: '<existingStorageLocation>'.

Requested storage location: <requestedStorageLocation>.

CANNOT_REMOVE_COLUMN_FROM_EXCLUDE_COLUMNS​

SQLSTATE: 42000

Columns (<columnNames>) are removed from exclude_columns field in pipeline definition for the table <qualifiedTableName>. Please full refresh the table to avoid potential data loss or add these columns back to exclude_columns.

CANNOT_SET_CATALOG_FOR_HMS_PIPELINE​

SQLSTATE: 42613

Cannot add a catalog to an existing pipeline with a defined storage location. If you want to use UC, create a new pipeline and set the catalog.

Existing storage location: '<storageLocation>'

Requested catalog: '<catalog>'

CANNOT_SET_LINKED_PIPELINE_ID​

SQLSTATE: 42000

Pipeline IDs are the same; setting the linked pipeline IDs will cause a deadlock.

CANNOT_SET_SCHEMA_FOR_EXISTING_PIPELINE​

SQLSTATE: 0AKD0

Specified 'schema' field in the pipeline settings for pipeline '<pipelineName>' is illegal. Reason:

For more details see CANNOT_SET_SCHEMA_FOR_EXISTING_PIPELINE

CANNOT_SET_TOGETHER​

SQLSTATE: none assigned

<argList> are mutually exclusive and can't be set together.

CANNOT_SPECIFY_BOTH_INCLUDE_EXCLUDE_COLUMNS​

SQLSTATE: 42000

Pipeline definition specifies both include_columns and exclude_columns for <identifier>. Please remove one of them.

CANNOT_UPDATE_CLUSTERING_COLUMNS​

SQLSTATE: 42000

Cannot update clustering columns for table <tableName> because it is using partition columns. A table can either use partition columns or clustering columns, but not both.

To switch between liquid clustering and partitioning, trigger a full refresh of this table.

CANNOT_UPDATE_PARTITION_COLUMNS​

SQLSTATE: 42000

Cannot update partition columns for streaming table <tableName>.

Current: <existingPartitionColumns>,

Requested: <requestedPartitionColumns>

To apply this partition change, trigger a full refresh of this table and any other streaming tables that have updated partition columns.

Alternatively revert this change to continue using the existing partition columns.

CANNOT_UPDATE_TABLE_SCHEMA​

SQLSTATE: 42KD9

Failed to merge the current and new schemas for table <tableName>.

To proceed with this schema change, you can trigger a full refresh of this table.

Depending on your use case and the schema changes, you may be able to avoid the schema change entirely by updating your queries so that the output schema is compatible with the existing schema (e.g., by explicitly casting columns to the correct data type).
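
For instance, a minimal sketch (with hypothetical table and column names) of explicitly casting a column so the output schema stays compatible:

```python
import dlt
from pyspark.sql import functions as F

# Hypothetical example: the upstream data now arrives with `amount` as a
# string; casting it back to the original type keeps the output schema
# compatible with the existing table schema, avoiding a full refresh.
@dlt.table
def orders():
    return (
        spark.read.table("raw_orders")  # hypothetical source
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    )
```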

CANNOT_WRITE_TO_INACTIVE_COLUMNS​

SQLSTATE: 55000

<details>

However, the destination table(s) already contain inactive column(s) with this name; the columns are inactive because they were previously deleted from the source tables.

To proceed with the update, run a FULL REFRESH on the tables or drop these inactive columns using the ALTER TABLE DROP COLUMN command.

CANNOT_WRITE_TO_INACTIVE_TABLES​

SQLSTATE: 55000

The following tables in the destination are inactive and conflict with the current source tables: <tables>.

These tables remain inactive because they were previously deleted from the source or unselected from the connector.

To proceed with the update, please perform a FULL REFRESH on the tables or drop these inactive tables from the destination in Catalog Explorer or via the DROP TABLE command, and retry the update.

CANNOT_WRITE_TO_TABLES_PENDING_RESET​

SQLSTATE: 55000

The following tables in the destination were not reset correctly during the previous full refresh: <tables>.

Please trigger a full refresh on them to recover.

CATALOG_MAPPING_NOT_AVAILABLE_IN_UC​

SQLSTATE: 3D000

The UC catalog doesn't have a mapping for the catalog '<sourceCatalog>'.

Please check that the catalog is registered in the UC catalog.

CATALOG_SCHEMA_MISMATCH_WITH_DESTINATION_PIPELINE​

SQLSTATE: 55000

The catalog <destPipelineCatalog> and schema <destPipelineSchema> of the destination pipeline with ID '<destPipelineId>' do not match the catalog and schema of the table <tableName>. The destination pipeline should have the same catalog and schema as the table or the destination pipeline must be using the Direct Publishing Mode.

CATALOG_SPEC_UNSUPPORTED​

SQLSTATE: 0A000

CatalogSpecs are not currently supported by the database connectors. Please remove the catalog spec.

CDC_APPLIER_COLUMN_UOID_NOT_SUPPORTED​

SQLSTATE: 0A000

Columns with UOID <columnNames> for table <tableName> are not supported in CDC managed ingestion pipelines.

Please check if the ingestion pipeline supports column UOID, or request a full refresh.

CDC_APPLIER_FATAL_FAILURE_FROM_GATEWAY​

SQLSTATE: 42000

The Gateway Pipeline encountered a fatal error:

<errorMessage>

Review the pipeline update failure here: <link>.

CDC_APPLIER_REPLICATED_TABLE_METADATA_NOT_READY​

SQLSTATE: 42KD4

Replicated table metadata for table <tableName> is not ready.

The existing job request timestamp is <existingRequestTs>, but we expect <expectedRequestTs> or later.

CDC_APPLIER_REQUIRES_ALL_DESTINATION_TABLES_FULL_REFRESH​

SQLSTATE: 42000

Full refresh of one destination table and normal update of another destination table from the same source is not supported.

Please full refresh both tables to continue if possible.

Full refresh tables: <fullRefreshTables>

Destination tables for source <sourceTable>: <destinationTables>

CDC_APPLIER_SCHEMA_CHANGED_DURING_STREAMING​

SQLSTATE: 42KD4

Schema version <dataSchemaVersion> is different from read schema version <readSchemaVersion>.

DLT will retry the update.

CDC_APPLIER_SEQUENCE_BY_COLUMN_NOT_FOUND​

SQLSTATE: 42704

The column <columnName> for table <tableName> specified in sequenceBy does not exist in <columns>.

CDC_APPLIER_SEQUENCE_BY_INVALID_TYPE​

SQLSTATE: 0A000

The column <columnName> for table <tableName> specified in sequenceBy is of type <typeName> that is not supported.

Supported types for sequenceBy columns are <allowedTypes>.

CDC_APPLIER_SEQUENCE_BY_MULTIPLE_COLUMNS_NOT_SUPPORTED​

SQLSTATE: 0A000

At most one column can be specified in sequenceBy of a CDC managed ingestion pipeline.

Specified columns: <columns> for table <tableName>.

CDC_INCOMPATIBLE_SCHEMA_CHANGES​

SQLSTATE: 42KD4

We encountered an incompatible schema change (<cause>) from schema version <previousSchemaVersion> to <currentSchemaVersion>.

Therefore we cannot proceed on applying changes for <tableName>. Please request a full refresh of the table.

Previous schema: <previousSchema>

Current schema: <currentSchema>

CDC_MULTIPLE_SOURCE_TABLES_MAPPED_TO_SAME_DESTINATION_TABLE​

SQLSTATE: 42000

Found multiple source tables: <source_tables> mapped to the same destination table <destination_table>.

Please map them to different destination table names or to different destination schemas.

CDC_POTENTIAL_DATA_GAPS​

SQLSTATE: 55000

The following tables [<needFullRefreshTableList>] have not had a successful update for <retention> days.

Please do a full refresh on these tables or the whole pipeline.

CDC_SAME_TABLE_FROM_MULTIPLE_SOURCES​

SQLSTATE: 42000

Found the same table name <table> from multiple sources: <sources>.

Please split them into different pipelines to avoid conflict.

CDC_TABLE_NOT_FOUND_IN_ALL_TABLES​

SQLSTATE: 42P01

Table <table> is not found in the all tables snapshot of the source database.

Table spec details:

<tableSpec>

CHANGES_HMS_PIPELINE_TO_UC_NOT_ALLOWED​

SQLSTATE: 0AKD0

Changing an HMS pipeline to a UC pipeline is not allowed.

CHANGES_UC_PIPELINE_TO_HMS_NOT_ALLOWED​

SQLSTATE: 0AKD0

Changing a UC pipeline to an HMS pipeline is not allowed.

CHANGING_CATALOG_NOT_ALLOWED​

SQLSTATE: 0AKD0

Cannot modify catalog of an existing pipeline. Existing catalog: '<existingCatalog>'. Requested catalog: '<requestedCatalog>'.

CHANGING_TARGET_SCHEMA_NOT_ALLOWED​

SQLSTATE: 0AKD0

Changing target schema is not allowed. Reason: <reason>.

CLUSTER_CREATION_BUDGET_POLICY_LIMIT_EXCEEDED​

SQLSTATE: 57000

Failed to create a cluster because the pipeline's budget policy has exceeded the limit. Please use a different policy or reach out to your billing admin.

CLUSTER_CREATION_CLIENT_ERROR​

SQLSTATE: KDL01

Failed to create a pipeline cluster: <errorMessage>

This error is likely due to a misconfiguration in the pipeline.

Check the pipeline cluster configuration and associated cluster policy.

CLUSTER_CREATION_CREDITS_EXHAUSTED​

SQLSTATE: 57000

Failed to create a cluster because you've exhausted your available credits. Please add a payment method to upgrade your account.

CLUSTER_CREATION_RESOURCE_EXHAUSTED​

SQLSTATE: 57000

Failed to create a cluster because you've exceeded resource limits: <errorMessage>

CLUSTER_LAUNCH_CLIENT_ERROR​

SQLSTATE: KDL01

Failed to launch pipeline cluster <clusterId>: <clusterStateMessage>

This error is likely due to a misconfiguration in the pipeline.

Check the pipeline cluster configuration and associated cluster policy.

CLUSTER_LAUNCH_CLOUD_FAILURE​

SQLSTATE: 58000

Failed to launch pipeline cluster <clusterId>: <clusterStateMessage>

This error could be transient - restart your pipeline and report if you still see the same issue.

CLUSTER_SETUP_CLIENT_ERROR​

SQLSTATE: KDL01

For more details see CLUSTER_SETUP_CLIENT_ERROR

CLUSTER_UNREACHABLE​

SQLSTATE: 08000

Communication lost with driver. Cluster <clusterId> was not reachable for <timeoutSeconds> seconds.

COLUMN_MASK_WITH_NO_COLUMN​

SQLSTATE: 42000

Column mask found for column '<columnName>', which doesn't exist in the MV/ST schema. If this is because of a change to the base table's schema,

please drop the old mask with ALTER TABLE [table_name] ALTER COLUMN [column where mask is applied] DROP MASK; or restore the column.
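
For example, a short sketch of dropping such a mask from a notebook, using the ALTER TABLE syntax above; the catalog, table, and column names are hypothetical:

```python
# Drop the stale mask that references a column no longer present in the
# MV/ST schema. `spark` is predefined in a Databricks notebook.
spark.sql(
    "ALTER TABLE my_catalog.my_schema.my_mv "
    "ALTER COLUMN masked_col DROP MASK"
)
```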

COLUMN_TO_UPDATE_NOT_FOUND​

SQLSTATE: 42703

Column <column> specified in COLUMNS TO UPDATE was not found in the source DataFrame.

CONCURRENT_UPGRADE_FAILED_TO_STOP_PREVIOUS_UPDATE​

SQLSTATE: 2D521

Started update '<upgradedUpdateId>' for an upgrade but failed to stop the previous update '<oldUpdateId>'.

This error is likely transient. The pipeline will be automatically retried and the issue should resolve itself.

Report this error to Databricks if you continue to see the same issue.

CREATE_APPEND_ONCE_FLOW_FROM_BATCH_QUERY_NOT_ALLOWED​

SQLSTATE: 42000

Cannot create a streaming table append once flow from a batch query.

This prevents incremental ingestion from the source and may result in incorrect behavior.

Offending table: '<table>'.

To fix this, use the STREAM() or readStream operator to declare the source as a streaming input.

Example: SELECT ... FROM STREAM(<source table name>) or spark.readStream.table(<source table name>)
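
A minimal Python sketch of one way to declare the source as a streaming input, with hypothetical table names:

```python
import dlt

dlt.create_streaming_table("events")

# Declaring the source with spark.readStream (or STREAM() in SQL) makes
# this a streaming flow, so ingestion stays incremental.
@dlt.append_flow(target="events")
def ingest_events():
    return spark.readStream.table("raw_events")  # hypothetical source
```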

DATASET_DECORATOR_APPLIED_TWICE​

SQLSTATE: none assigned

Dataset <datasetName> already exists. Ensure that the query function has only been marked as a view or table once.

DATASET_NOT_DEFINED​

SQLSTATE: 42P01

Failed to read dataset '<datasetName>'. This dataset is not defined in the pipeline.

If this table is managed by another pipeline, then do not use dlt.read / dlt.readStream to read the table, and do not prepend the name with the LIVE keyword.
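
A short sketch contrasting the two cases, with hypothetical dataset and table names:

```python
import dlt

# Dataset defined in this pipeline: read it with dlt.read / dlt.readStream.
@dlt.table
def downstream():
    return dlt.read("upstream_dataset")  # defined elsewhere in this pipeline

# Table managed by another pipeline: read it by its fully qualified name,
# without dlt.read / dlt.readStream and without the LIVE prefix.
@dlt.table
def external_consumer():
    return spark.read.table("main.other_schema.external_table")  # hypothetical
```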

DBFS_NOT_ENABLED​

SQLSTATE: 42000

DBFS is not enabled for this workspace; please publish to Unity Catalog or specify a different storage location for the pipeline.

DBSQL_PIPELINE_IS_MISSING​

SQLSTATE: 42000

A DBSQL pipeline is missing. Please refresh your materialized view or streaming table to create the DBSQL pipeline.

DBSQL_PIPELINE_SHOULD_NOT_HAVE_MULTIPLE_TABLES​

SQLSTATE: 42000

A DBSQL pipeline must have exactly one materialized view or streaming table, but found <tablesSize> tables: <tables>

DESTINATION_PIPELINE_NOT_FOUND​

SQLSTATE: 42K03

The destination pipeline with ID '<pipelineId>' cannot be found. Make sure you are in the same workspace as the pipeline, that you are the owner of the pipeline, and that the pipeline has run at least once.

DESTINATION_PIPELINE_NOT_IN_DIRECT_PUBLISHING_MODE​

SQLSTATE: 0AKLT

The destination pipeline with ID '<pipelineId>' is not using the Direct Publishing Mode.

DESTINATION_PIPELINE_PERMISSION_DENIED​

SQLSTATE: 42501

You are not allowed to perform this operation because you are not the owner of the destination pipeline with ID '<pipelineId>'. Only owners can change the pipeline of a table.

DESTINATION_PIPELINE_TYPE_NOT_WORKSPACE_PIPELINE_TYPE​

SQLSTATE: 0AKLT

The destination pipeline with ID '<pipelineId>' is not an ETL pipeline.

DO_CREATE_OR_EDIT_INVALID_USER_ACTION​

SQLSTATE: 42000

Error while handling '<action>' request.

DROP_SCHEDULE_SCHEDULE_DOES_NOT_EXIST​

SQLSTATE: 42000

Cannot drop SCHEDULE on a table without an existing schedule or trigger.

DUPLICATED_FROM_JSON_SCHEMA_LOCATION​

SQLSTATE: 42616

Duplicated from_json schema location key: <schemaLocationKey>.

Please pick unique schema location keys for each from_json query in the pipeline

DUPLICATED_INGESTION_CONFIG_TABLE_SPECS​

SQLSTATE: 42710

Ingestion pipeline configuration contains duplicate tables. Please ensure that each table is unique.

EMPTY_INGESTION_CONFIG_OBJECTS​

SQLSTATE: 42000

Ingestion config objects are empty.

ENHANCED_AUTOSCALING_REQUIRES_ADVANCED_EDITION​

SQLSTATE: 42000

Enhanced Autoscaling 'spare_capacity_fraction' setting is only supported in the Advanced product edition of Lakeflow Declarative Pipelines.

Please edit your pipeline settings to set "edition": "advanced" in order to use 'spare_capacity_fraction'.

ENVIRONMENT_PIP_INSTALL_ERROR​

SQLSTATE: 42000

Failed to install environment dependency: '<pipDependency>'. Please check the driver's stdout logs on the cluster for more details.

EVENT_LOG_PICKER_FEATURE_NOT_SUPPORTED​

SQLSTATE: 0A000

Publishing the event log to Unity Catalog is not supported for this pipeline. If this is unexpected, please contact Databricks support.

EXPECTATION_VIOLATION​

SQLSTATE: 22000

Flow '<flowName>' failed to meet the expectation.

For more details see EXPECTATION_VIOLATION

EXPLORE_ONLY_CANNOT_BE_SET_WITH_VALIDATE_ONLY​

SQLSTATE: 42000

explore_only and validate_only cannot both be set to true.

EXPLORE_ONLY_IS_NOT_ENABLED​

SQLSTATE: 0A000

explore_only update is not enabled. Please contact Databricks support.

FAILED_TO_CREATE_EVENT_LOG​

SQLSTATE: 58030

Failed to create pipeline (id=<pipelineId>) event log with identifier <eventLogIdentifier>. See exception below for more details.

FAILED_TO_PUBLISH_VIEW_TO_METASTORE​

SQLSTATE: 42000

Failed to publish view <viewName> to the metastore because <reason>.

FAILED_TO_UPDATE_EVENT_LOG​

SQLSTATE: 58030

Failed to update pipeline (id=<pipelineId>) event log identifier to <newEventLogIdentifier>. See exception below for more details.

FLOW_SCHEMA_CHANGED​

SQLSTATE: KD007

Flow <flowName> has terminated because it encountered a schema change during execution.

The schema change is compatible with existing target schema and the next run of the flow can resume with the new schema.

FOREACH_BATCH_SINK_ONLY_SUPPORTED_IN_PREVIEW_CHANNEL​

SQLSTATE: 0A000

DLT ForeachBatch Sink is not currently supported.

The private preview for DLT ForeachBatch sink requires the PREVIEW channel.

DLT sinks: <sinkNames>

GATEWAY_PIPELINE_INIT_SCRIPTS_NOT_ALLOWED​

SQLSTATE: 0A000

The Gateway pipeline does not allow cluster init scripts. Please remove it from <from>.

GATEWAY_PIPELINE_SPARK_CONF_NOT_ALLOWED​

SQLSTATE: 0A000

The Gateway pipeline does not allow spark config [<configs>]. Please remove them from <from>.

GET_ORG_UPDATE_CAPACITY_LIMIT_EXCEEDED​

SQLSTATE: 54000

Number of requested org IDs exceeds the maximum allowed limit of <limit>

GOOGLE_ANALYTICS_RAW_DATA_CONNECTOR_SOURCE_CATALOG_MISSING​

SQLSTATE: KD000

An error occurred in Google Analytics raw data connector.

The 'source catalog' (GCP Project ID) is missing in your ingestion specification.

Please specify the source catalog (project ID) in your ingestion specification to resolve this issue.

HMS_NOT_ENABLED​

SQLSTATE: 42000

Hive Metastore is not enabled for this workspace; please publish to Unity Catalog.

ILLEGAL_COLUMN_TO_UPDATE_DATA_TYPE​

SQLSTATE: 42000

The data type of the column specified in COLUMNS TO UPDATE must be a string array but found <illegalDataType>.

ILLEGAL_ID_PARAM_IN_PIPELINE_SETTINGS​

SQLSTATE: 42000

The settings must not contain '<fieldName>'.

ILLEGAL_SCHEMA_FIELD_IN_PIPELINE_SPEC​

SQLSTATE: 42000

Specified 'schema' field in the pipeline settings is illegal. Reason: <reason>.

INCORRECT_ROOT_PATH_TYPE​

SQLSTATE: 42000

Root path '<rootPath>' must be a directory but found <objectType>.

INGESTION_CONFIG_DUPLICATED_SCHEMA​

SQLSTATE: 42710

Ingestion pipeline configuration contains duplicate schemas. Please ensure that each schema is unique.

INGESTION_GATEWAY_AUTHENTICATION_FAILURE​

SQLSTATE: 42501

Authentication failure

For more details see INGESTION_GATEWAY_AUTHENTICATION_FAILURE

INGESTION_GATEWAY_BREAKING_SCHEMA_CHANGE_FAILURE​

SQLSTATE: 42KD4

A schema mismatch has been detected between the source and target tables. To resolve this issue, a full refresh of the table '<entityName>' is required on the Ingestion Pipeline.

INGESTION_GATEWAY_CDC_NOT_ENABLED​

SQLSTATE: 42000

CDC is not enabled on <entityType> '<entityName>'. Enable CDC and perform a full table refresh on the Ingestion Pipeline. Error message: '<errorMessage>'.

INGESTION_GATEWAY_DB_VERSION_NOT_SUPPORTED​

SQLSTATE: 42000

Database version is not supported.

For more details see INGESTION_GATEWAY_DB_VERSION_NOT_SUPPORTED

INGESTION_GATEWAY_DDL_OBJECTS_MISSING​

SQLSTATE: 42000

DDL objects missing on <entityType> '<entityName>'. Execute the DDL objects script and full refresh the table on the Ingestion Pipeline. Error message: '<errorMessage>'.

INGESTION_GATEWAY_MISSING_CONNECTION_REFERENCE​

SQLSTATE: 42000

Ingestion gateway configuration is missing a connection.

Please add a reference to the Unity Catalog connection containing your credentials.

Ingestion gateway pipeline definition details:

<definition>

INGESTION_GATEWAY_MISSING_INTERNAL_STORAGE_CATALOG​

SQLSTATE: 42000

Ingestion gateway configuration is missing the internal storage location catalog.

Please add the internal storage location catalog.

Ingestion gateway pipeline definition details:

<definition>

INGESTION_GATEWAY_MISSING_INTERNAL_STORAGE_NAME​

SQLSTATE: 42000

Ingestion gateway configuration is missing the internal storage location name.

Please add the internal storage location name.

Ingestion gateway pipeline definition details:

<definition>

INGESTION_GATEWAY_MISSING_INTERNAL_STORAGE_SCHEMA​

SQLSTATE: 42000

Ingestion gateway configuration is missing the internal storage location schema.

Please add the internal storage location schema.

Ingestion gateway pipeline definition details:

<definition>

INGESTION_GATEWAY_MISSING_TABLE_IN_SOURCE​

SQLSTATE: 42P01

Table '<entityName>' does not exist in the source database or has been dropped. Resolve the issue and full refresh the table on the Managed Ingestion pipeline. Error message: '<errorMessage>'.

INGESTION_GATEWAY_PG_PUBLICATION_ALTER_FAILED​

SQLSTATE: 42000

Failed to alter replication publication for <entityType> '<entityName>'

Error message: <errorMessage>

For more details see INGESTION_GATEWAY_PG_PUBLICATION_ALTER_FAILED

INGESTION_GATEWAY_PG_PUBLICATION_CREATION_FAILED​

SQLSTATE: 42000

Failed to create replication publication for <entityType> '<entityName>'

Error message: <errorMessage>

For more details see INGESTION_GATEWAY_PG_PUBLICATION_CREATION_FAILED

INGESTION_GATEWAY_PG_PUBLICATION_DROP_FAILED​

SQLSTATE: 42000

Failed to drop replication publication for <entityType> '<entityName>'

Error message: <errorMessage>

For more details see INGESTION_GATEWAY_PG_PUBLICATION_DROP_FAILED

INGESTION_GATEWAY_PG_SLOT_CONSUMED_BY_OTHER_PROCESS​

SQLSTATE: 42000

Failed to create a replication slot for <entityType> '<entityName>' because the replication slot is being used by another PID.

Error message: <errorMessage>

INGESTION_GATEWAY_PG_SLOT_CREATION_FAILED​

SQLSTATE: 42000

Failed to create replication slot for <entityType> '<entityName>'

Error message: <errorMessage>

For more details see INGESTION_GATEWAY_PG_SLOT_CREATION_FAILED

INGESTION_GATEWAY_SOURCE_INSUFFICIENT_PERMISSION_FAILURE​

SQLSTATE: 42501

The user does not have the required permissions to access this object or execute the stored procedure. Please ensure that all necessary privileges are granted. Refer to the following documentation: https://docs.databricks.com/aws/en/ingestion/lakeflow-connect/sql-server/database-user-requirements.

INGESTION_GATEWAY_SOURCE_SCHEMA_MISSING_ENTITY​

SQLSTATE: 42000

No tables are available for replication from the source.

Verify that the tables are correctly selected and defined in the ingestion pipeline,

and that the user has the necessary catalog and schema access.

INGESTION_GATEWAY_UNREACHABLE_HOST_OR_PORT_FAILURE​

SQLSTATE: 08000

Connection failed due to an incorrect hostname <host> and/or port <port> of the source database.

For more details see INGESTION_GATEWAY_UNREACHABLE_HOST_OR_PORT_FAILURE

INGESTION_GATEWAY_XA_TRANSACTION_NOT_SUPPORTED​

SQLSTATE: 42000

Failed to replicate <entityType> '<entityName>' because it is part of an XA transaction.

Error message: <errorMessage>

INVALID_APPLY_CHANGES_COMMAND​

SQLSTATE: 42000

AUTO CDC command is invalid. <reason>.

INVALID_ARGUMENT_TYPE​

SQLSTATE: none assigned

Value of invalid type passed to parameter '<paramName>'. Expected <expectedType>. <additionalMsg>.

INVALID_COMPATIBILITY_OPTIONS​

SQLSTATE: 42616

The table options specified for table <table> are invalid since

For more details see INVALID_COMPATIBILITY_OPTIONS

INVALID_DECORATOR_USAGE​

SQLSTATE: none assigned

The first positional argument passed to @<decoratorName> must be callable. Either add @<decoratorName> with no parameters to your function, or pass options to @<decoratorName> using keyword arguments (e.g. <exampleUsage>).
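
A short sketch of valid and invalid usage, taking @dlt.table as the decorator; table names are hypothetical:

```python
import dlt

# Valid: bare decorator with no parameters.
@dlt.table
def table_one():
    return spark.range(10)

# Valid: options passed as keyword arguments.
@dlt.table(name="table_two", comment="An example table.")
def table_two_source():
    return spark.range(10)

# Invalid: a non-callable positional argument raises this error.
# @dlt.table("table_three")
```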

INVALID_DESTINATION_PIPELINE_ID​

SQLSTATE: 42K03

Invalid destination pipeline ID: '<pipelineId>'. Pipeline IDs must be valid UUIDs.

Verify that you're using the correct pipeline ID, not the pipeline name.

INVALID_EVENT_LOG_CONFIGURATION​

SQLSTATE: F0000

Invalid event log configuration found in pipeline spec: <message>

INVALID_NAME_IN_USE_COMMAND​

SQLSTATE: 42000

Invalid name '<name>' in <command> command. Reason: <reason>

INVALID_PARAM_FOR_DBSQL_PIPELINE​

SQLSTATE: 42000

You can only specify 'pipeline_id' and 'pipeline_type' when calling 'dry_run' for a DBSQL pipeline.

INVALID_REFRESH_SELECTION​

SQLSTATE: 42000

For more details see INVALID_REFRESH_SELECTION

INVALID_REFRESH_SELECTION_REQUEST_FOR_CONTINUOUS_PIPELINE​

SQLSTATE: 42000

Refresh selection is not supported for the continuous mode.

INVALID_REFRESH_SELECTION_REQUEST_WITH_FULL_REFRESH​

SQLSTATE: 42000

full_refresh should not be set to true for a refresh selection request.

INVALID_ROOT_PATH​

SQLSTATE: 42000

Invalid root_path '<rootPath>': only absolute directory paths are currently supported. Directory paths must begin with '/' and not end with '/'.

INVALID_SCHEMA_NAME​

SQLSTATE: 3F000

Invalid schema '<schemaName>' specified in the pipeline setting. Reason: <reason>.

INVALID_SNAPSHOT_AND_VERSION_TYPE​

SQLSTATE: none assigned

snapshot_and_version for flow with target '<target>' returned an unsupported type. <additionalMsg>.
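
A minimal sketch, assuming the documented contract that the function receives the latest processed snapshot version and returns either a (DataFrame, version) tuple or None; all table and column names are hypothetical:

```python
import dlt

def next_snapshot_and_version(latest_snapshot_version):
    # Return a (DataFrame, version) tuple for the next snapshot to process,
    # or None when no newer snapshot is available.
    if latest_snapshot_version is None:
        return (spark.read.table("snapshots_day_0"), 0)  # hypothetical table
    return None

dlt.create_auto_cdc_flow_from_snapshot(
    target="target_table",  # hypothetical target
    snapshot_and_version=next_snapshot_and_version,
    keys=["id"],
    stored_as_scd_type=2,
)
```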

INVALID_TRIGGER_INTERVAL_FORMAT​

SQLSTATE: 42000

The trigger interval configuration specified in the <configurationType> is invalid
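
For example, a hedged sketch of setting a valid trigger interval on a single table via the pipelines.trigger.interval Spark configuration; the table and source names are hypothetical:

```python
import dlt

# A valid trigger interval is a positive duration string
# such as "10 seconds" or "5 minutes".
@dlt.table(
    spark_conf={"pipelines.trigger.interval": "10 seconds"}
)
def rates():
    return spark.readStream.table("raw_rates")  # hypothetical source
```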

JIRA_ADMIN_PERMISSION_MISSING​

SQLSTATE: KD000

Error encountered while calling Jira APIs. Please make sure the connecting user has Jira admin permissions for your Jira instance.

JIRA_AUTHORIZATION_FAILED​

SQLSTATE: KD000

Authorization failed while calling Jira APIs. Please re-authenticate the UC connection and try again.

JIRA_CONNECTOR_ENTITY_MAPPING_PARSING_FAILED​

SQLSTATE: KD000

An error occurred in Jira connector.

An issue occurred while trying to parse the entity mapping for the entity type: <entityType>. Check if your entity is part of the supported types.

If the error persists, file a ticket.

JIRA_SITE_MISSING​

SQLSTATE: KD000

Could not find cloudId corresponding to the provided Jira domain <domain>.

Please ensure that the connecting user can access this site.

JOB_DETAILS_DO_NOT_MATCH​

SQLSTATE: 42000

If both 'update_cause_details.job_details' and 'job_task' are given, they must match; but they are <details> and <jobTask>.

JOB_TASK_DETAILS_MISSING​

SQLSTATE: 42000

If 'cause' is JOB_TASK then either 'job_task' or 'update_cause_details.job_details' must be given.

LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_CLASSIC​

SQLSTATE: 42000

Referencing datasets using LIVE virtual schema outside the dataset query definition (i.e., @dlt.table annotation) is not supported.

LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_DPM​

SQLSTATE: 42000

Referencing datasets using LIVE virtual schema <identifier> outside the dataset query definition (i.e., @dlt.table annotation) is not supported.

LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_SPARK_SQL​

SQLSTATE: 0A000

Referencing datasets using the LIVE virtual schema in the spark.sql(...) API is not supported outside the dataset query definition (i.e., the @dlt.table annotation). It can only be done within the dataset query definition.

MATERIALIZED_VIEW_RETURNED_STREAMING_DATAFRAME​

SQLSTATE: none assigned

Query function for materialized view '<datasetName>' returned a streaming DataFrame.

It must return a non-streaming DataFrame.

MAX_RETRY_REACHED_BEFORE_ENZYME_RECOMPUTE​

SQLSTATE: 42000

Max retry count reached. Retry count: <flowRetryCount>. maxFlowFailureRetryCountThreshold: <maxFlowRetryAttempts>. <message>

MESA_PIPELINE_INVALID_DEFINITION_TYPE​

SQLSTATE: 42000

Starting a <definitionType> pipeline from UC definition is not allowed.

MESA_PIPELINE_MISMATCH_PIPELINE_TYPES​

SQLSTATE: 42000

The stored and updated definitions must be the same pipeline type, but got <stored> and <updated>.

MESA_PIPELINE_MISSING_DEFINITION​

SQLSTATE: 42000

The pipeline does not have a valid definition in UC, but a refresh is requested.

MESA_PIPELINE_MISSING_DEFINITION_UNEXPECTED​

SQLSTATE: 42000

The pipeline definition is unexpectedly missing from UC.

METASTORE_OPERATION_TIMED_OUT​

SQLSTATE: 58000

Operations involved in updating the metastore information for <tableName> took longer than <timeout>.

This issue may be transient or could indicate bugs in the configured metastore client. Try restarting your pipeline and report this issue if it persists.

MISSING_CREATE_SCHEMA_PRIVILEGE​

SQLSTATE: 42501

User '<userName>' does not have permission to create schema in catalog <catalogName>

MISSING_CREATE_TABLE_PRIVILEGE​

SQLSTATE: 42501

User '<userName>' does not have permission to create table in schema <schemaName>

MISSING_RUN_AS_USER​

SQLSTATE: 28000

No run as user was specified for the update.

MULTI_QUERY_SNAPSHOT_TARGET_NOT_SUPPORTED​

SQLSTATE: 0A000

'<tableName>' already contains an AUTO CDC FROM SNAPSHOT query '<flowName>'. Currently, this API only supports one flow per destination.

MUTUALLY_EXCLUSIVE_OPTIONS​

SQLSTATE: 42000

Mutually exclusive options <options>. Please remove one of these options.

NEGATIVE_VALUE​

SQLSTATE: none assigned

Value for <arg_name> must be greater than or equal to 0, got '<arg_value>'.

NON_UC_TABLE_ALREADY_MANAGED_BY_OTHER_PIPELINE​

SQLSTATE: 42P07

Table '<tableName>' is already managed by pipeline <otherPipelineId>.

If you want table '<tableName>' to be managed by this pipeline -

  1. Remove the table from pipeline '<otherPipelineId>'.

  2. Start a full refresh update for this pipeline.

If you want to continue managing the table from multiple pipelines, disable this check by setting the configuration pipelines.tableManagedByMultiplePipelinesCheck.enabled to false in your pipeline settings.

This is not recommended as concurrent operations on the table may conflict with each other and lead to unexpected results.

NOTEBOOK_NAME_LIMIT_REACHED​

SQLSTATE: 42000

Invalid notebook path: '<nameStart>...<nameEnd>'. It is longer than <maxNotebookPathLength> characters.

NOTEBOOK_NOT_FOUND​

SQLSTATE: 42000

Unable to access the notebook '<notebookPath>'. It either does not exist, or the identity used to run this pipeline, <identity>, lacks the required permissions.

NOTEBOOK_PIP_INSTALL_ERROR​

SQLSTATE: 42000

Failed to run '<pipInstallCommand>' from notebook: <notebookPath>. Please check the driver's stdout logs on the cluster for more details.

NOTIFICATIONS_DUPLICATE_ALERTS​

SQLSTATE: 42000

Duplicate alerts '<alertsDuplicates>' specified in [<alerts>]

NOTIFICATIONS_DUPLICATE_EMAIL_ADDRESSES​

SQLSTATE: 42000

Duplicate email addresses '<emailRecipientsDuplicates>' specified in [<emailRecipients>]

NOTIFICATIONS_INVALID_ALERTS​

SQLSTATE: 42000

Invalid alerts have been specified to get notifications on: <invalidAlerts>

NOTIFICATIONS_INVALID_EMAIL_ADDRESS​

SQLSTATE: 42000

Invalid email address specified to receive notifications: <invalidEmailAddresses>

NOTIFICATIONS_MISSING_PARAMS​

SQLSTATE: 42000

Please specify at least one recipient and one alert in <setting>

NO_SOURCE_OR_SNAPSHOT_AND_VERSION_ARGUMENT_PROVIDED​

SQLSTATE: none assigned

Either source or snapshot_version must be set for create_auto_cdc_flow_from_snapshot with target '<target>'.

NO_TABLES_IN_PIPELINE​

SQLSTATE: 42617

Pipelines are expected to have at least one table defined but no tables were found in your pipeline.

Please verify that you have included the expected source files, and that your source code includes table definitions (e.g., CREATE MATERIALIZED VIEW in SQL code, @dlt.table in Python code).

Note that only tables are counted towards this check. You may also encounter this error if you only include views or flows in your pipeline.

OWNER_IS_MISSING​

SQLSTATE: 42000

Owner does not exist.

PAGINATION_REQUEST_HAS_NAME_AND_PAGINATION​

SQLSTATE: 42000

You can provide a <name> or pagination, but not both.

PATCH_PIPELINE_DEFINITION_UNSUPPORTED_FIELD​

SQLSTATE: 42000

PatchPipelineDefinition only supports the schedule field, but the provided definition had other populated fields: '<updatedDefinition>'.

PERSISTED_VIEW_READS_FROM_STREAMING_SOURCE​

SQLSTATE: 42000

Persisted views do not support reading from streaming sources.

PERSISTED_VIEW_READS_FROM_TEMPORARY_VIEW​

SQLSTATE: 42K0F

Persisted view <persistedViewName> cannot reference temporary view <temporaryViewName> that will not be available outside the pipeline scope. Either make the persisted view temporary or persist the temporary view.

PIPELINE_CLONING_ALREADY_IN_PROGRESS​

SQLSTATE: 42000

Pipeline is already being cloned to pipeline with id '<pipelineId>'.

PIPELINE_CLONING_INVALID_DURING_ACTIVE_UPDATE​

SQLSTATE: 42000

The pipeline with ID '<pipelineId>' cannot be cloned during an active update.

PIPELINE_CLONING_INVALID_FIELDS​

SQLSTATE: 42000

Request included pipeline spec with invalid fields for clone. Allowed fields include: name, catalog, target, configuration.

PIPELINE_CLONING_INVALID_FOR_MISSING_TARGET​

SQLSTATE: 42000

The pipeline with ID '<pipelineId>' does not publish to a target schema. The source pipeline needs to publish to a target schema to be cloned. Please retry after specifying the 'target' field in the pipeline spec, and running a new update to publish to a target schema.

PIPELINE_CLONING_INVALID_FOR_UC_PIPELINE​

SQLSTATE: 42000

The pipeline with ID '<pipelineId>' is already a UC pipeline. UC pipelines cannot be cloned.

PIPELINE_CLONING_NO_MODE_SPECIFIED​

SQLSTATE: 42000

No clone mode was specified.

PIPELINE_CLONING_NO_TARGET_SPECIFIED​

SQLSTATE: 42000

No target catalog was specified for cloning.

PIPELINE_CREATION_NOT_ENABLED_FOR_WORKSPACE​

SQLSTATE: 42000

Pipeline creation is not enabled for this workspace.

PIPELINE_DOES_NOT_EXIST​

SQLSTATE: 42000

The pipeline with ID '<pipelineId>' does not exist.

For more details see PIPELINE_DOES_NOT_EXIST

PIPELINE_ENVIRONMENT_NOT_ENABLED​

SQLSTATE: 0A000

Using environment in DLT is not enabled.

PIPELINE_ENVIRONMENT_VERSION_NOT_ALLOWED​

SQLSTATE: 54000

Pipeline environments do not currently support environment versions.

PIPELINE_FAILED_TO_UPDATE_UC_TABLE_DUE_TO_CONCURRENT_UPDATE​

SQLSTATE: 55000

Pipeline failed to update the UC table (<tableName>) due to concurrent alters after <attempts> attempts.

Please verify there aren't external processes modifying the table, retry the update, and contact Databricks support if this issue persists.

PIPELINE_FOR_TABLE_NEEDS_REFRESH​

SQLSTATE: 55000

The table <tableName> is not in the required state because it has not been updated recently. Run the pipeline with ID '<pipelineId>' once more, then retry the operation.

PIPELINE_FOR_TABLE_NOT_FOUND​

SQLSTATE: 42K03

The pipeline with ID '<pipelineId>', managing the table <tableName>, cannot be found. Make sure you are in the same workspace as the pipeline, that you are the owner of the pipeline, and that the pipeline has run at least once.

PIPELINE_GLOB_INCLUDES_CONFLICTS​

SQLSTATE: F0000

Either the glob field or the notebook/file field under libraries in the pipeline settings should be set, but not both. Please change the pipeline settings.

PIPELINE_GLOB_INCLUDES_NOT_SUPPORTED​

SQLSTATE: 0A000

Using the glob field to include source files is a preview feature and is disabled.

Re-select each source file to include for the pipeline to fix this error.

Reach out to Databricks support to learn more about this feature and enroll in the preview.

PIPELINE_GLOB_UNSUPPORTED_SPECIAL_CHARACTER​

SQLSTATE: 42000

Special characters <specialChar> are reserved and must not be used in the included path '<path>' in pipeline settings. Remove these characters to fix the error.

PIPELINE_NAME_LIMIT_REACHED​

SQLSTATE: 42000

Name cannot be longer than <maxPipelineNameLength> characters.

PIPELINE_NON_RETRYABLE_ANALYSIS_CLIENT_ERROR​

SQLSTATE: 42000

Pipeline failed to analyze the source tables (<tables>) due to non-retryable errors after partial execution.

A new pipeline update will not be created.

<flowErrors>

Check the event log and fix the issues accordingly.

PIPELINE_NOT_ELIGIBLE_FOR_RESTORATION​

SQLSTATE: 42000

Pipeline '<pipelineId>' is past the restoration window.

PIPELINE_NOT_IN_DIRECT_PUBLISHING_MODE​

SQLSTATE: 0AKLT

The pipeline with ID '<pipelineId>', managing the table <tableName>, is not using the Direct Publishing Mode.

PIPELINE_NOT_READY_FOR_SCHEDULED_UPDATE​

SQLSTATE: 55000

The table is not ready for refresh yet

For more details see PIPELINE_NOT_READY_FOR_SCHEDULED_UPDATE

PIPELINE_PERMISSION_DENIED_NOT_OWNER​

SQLSTATE: 42501

You are not allowed to perform this operation. You are not the owner of the pipeline with ID '<pipelineId>', managing the table <tableName>.

PIPELINE_RETRYABLE_ANALYSIS​

SQLSTATE: 42000

Pipeline failed to analyze the source tables (<tables>) due to retryable errors after partial execution.

A new pipeline update will be created to retry processing. If the error persists, please check the event log and address the issues accordingly.

PIPELINE_SETTINGS_FIELD_CANNOT_BE_EDITED​

SQLSTATE: 42000

'<uneditableFieldName>' cannot be modified by users. To add or modify the <settingName>, please use the '<editableFieldName>' field instead.

PIPELINE_SETTINGS_MODIFIED_CONCURRENTLY​

SQLSTATE: 42000

Pipeline settings were modified concurrently.

PIPELINE_SETTINGS_UNSUPPORTED_CONFIGURATIONS​

SQLSTATE: 42000

The configurations <configurations> are not supported by Lakeflow Declarative Pipelines. Please remove these configurations.

PIPELINE_SETTING_SHOULD_NOT_SPECIFY_DEVELOPMENT​

SQLSTATE: 42000

Starting an update with the 'development' setting is not supported.

PIPELINE_SHOULD_NOT_HAVE_MULTIPLE_TABLES​

SQLSTATE: 42000

The pipeline must have exactly one table, but found <tablesSize> tables: <tables>

PIPELINE_SOURCE_FILE_NUMBER_EXCEEDED​

SQLSTATE: 54000

The number of source files, including files declared in folders, exceeds the limit of <limit>.

Remove or merge excess files and change the corresponding pipeline spec if needed,

or contact Databricks support to request a limit increase.

PIPELINE_SOURCE_FOLDER_DEPTH_EXCEEDED​

SQLSTATE: 54000

The folder '<folder_path>' exceeds the maximum allowed directory nesting level of <limit>. Reduce the folder nesting level or contact Databricks support to request a limit increase.

PIPELINE_SPEC_PARAM_CANNOT_BE_CHANGED​

SQLSTATE: 42000

Modifying the following parameter <param> in pipeline settings is not allowed.

PIPELINE_TYPE_NOT_SUPPORTED​

SQLSTATE: 42000

Pipeline type '<pipelineType>' is not supported.

PIPELINE_TYPE_NOT_WORKSPACE_PIPELINE_TYPE​

SQLSTATE: 0AKLT

The pipeline with ID '<pipelineId>', managing the table <tableName>, is not an ETL pipeline.

PIPELINE_TYPE_QUOTA_EXCEEDED​

SQLSTATE: 54000

Cannot start update '<updateId>' because the limit for active pipelines of type '<pipelineType>' has been reached.

PIPELINE_UPDATE_FOR_TABLE_IS_RUNNING​

SQLSTATE: 55000

The pipeline with ID '<pipelineId>', managing the table <tableName>, is running. Please stop the pipeline before running the operation.

PIPELINE_WORKSPACE_LIMIT_REACHED​

SQLSTATE: 42000

Pipeline creation of type '<pipelineType>' is blocked because the workspace '<orgId>' already has '<countLimit>' pipelines. Please contact Databricks support to adjust this limit.

PIP_INSTALL_NOT_AT_TOP_OF_NOTEBOOK​

SQLSTATE: 42000

Found cells containing %pip install that are not at the top of the notebook for '<notebookPath>'

Move all %pip install cells to the start of the notebook.

PY4J_BLOCKED_API​

SQLSTATE: none assigned

You are using a Python API that is not supported in the current environment.

Please check Databricks documentation for alternatives.

<additionalInfo>

QUERY_BASED_INGESTION_CONNECTOR_ERROR​

SQLSTATE: 0A000

An error occurred in the query-based ingestion connector for <sourceName>.

For more details see QUERY_BASED_INGESTION_CONNECTOR_ERROR

REFERENCE_DLT_DATASET_OUTSIDE_QUERY_DEFINITION​

SQLSTATE: 0A000

Referencing DLT dataset <identifier> outside the dataset query definition (i.e., @dlt.table annotation) is not supported. Please read it instead inside the dataset query definition.

REFRESH_INITIATED_FROM_INVALID_WORKSPACE​

SQLSTATE: 42000

The refresh must be initiated in workspace <homeWorkspaceId>, where the resource was created.

The refresh was attempted in workspace <userWorkspaceId>.

REFRESH_MODE_ALREADY_EXISTS​

SQLSTATE: 42710

Cannot add <type> to a table that already has <existingType>. Please drop the existing schedule or use ALTER TABLE ... ALTER <type> ... to alter it.

REQUIRED_PARAM_NOT_FOUND​

SQLSTATE: 42000

Required parameter <param> is not found.

RESERVED_KEYWORD_IN_USE_CATALOG​

SQLSTATE: 42000

USE CATALOG '<reservedKeyword>' is illegal because '<reservedKeyword>' is a reserved keyword in DLT.

RESERVED_KEYWORD_IN_USE_SCHEMA​

SQLSTATE: 42000

USE SCHEMA '<reservedKeyword>' is illegal because '<reservedKeyword>' is a reserved keyword in DLT.

RESOURCES_ARE_BEING_PROVISIONED​

SQLSTATE: 42000

Pipeline resources are being provisioned for pipeline '<pipelineId>'.

RESTORE_NON_DELETED_PIPELINE​

SQLSTATE: 42000

Pipeline '<pipelineId>' is not deleted. Restoration is only applicable for deleted pipelines.

ROOT_PATH_NOT_FOUND​

SQLSTATE: 42000

Unable to access root path '<rootPath>'. Please ensure you have the required access privileges.

RUN_AS_USER_NOT_FOUND​

SQLSTATE: 28000

The specified run as user '<runAsUserId>' for the update does not exist in the workspace.

SAAS_CONNECTION_ERROR​

SQLSTATE: KD000

Failed to make a connection to the <sourceName> source. Error code: <saasConnectionErrorCode>.

For more details see SAAS_CONNECTION_ERROR

SAAS_CONNECTOR_REFRESH_TOKEN_EXPIRED​

SQLSTATE: KD000

The refresh token for connection <connectionName> has expired. Edit the connection, re-authenticate, and re-run your pipeline.

SAAS_CONNECTOR_SCHEMA_CHANGE_ERROR​

SQLSTATE: 42KD4

A schema change has occurred in table <tableName> of the <sourceName> source.

For more details see SAAS_CONNECTOR_SCHEMA_CHANGE_ERROR

SAAS_CONNECTOR_SOURCE_API_ERROR​

SQLSTATE: KD000

An error occurred in the <sourceName> API call. Source API type: <saasSourceApiType>. Error code: <saasSourceApiErrorCode>.

Try refreshing the destination table. If the issue persists, please file a ticket.

SAAS_CONNECTOR_UNSUPPORTED_ERROR​

SQLSTATE: 0A000

An unsupported error occurred in data source <sourceName>.

For more details see SAAS_CONNECTOR_UNSUPPORTED_ERROR

SAAS_INCOMPATIBLE_SCHEMA_CHANGES_DURING_INIT​

SQLSTATE: 42KD4

We have detected incompatible schema changes when initializing the pipeline:

<details>

Please perform a full refresh on the impacted tables.

SAAS_PARTIAL_ANALYSIS_INPUT_CREATION_ERROR​

SQLSTATE: 42KD4

Error encountered while creating input for partial analysis. A new pipeline update will not be created.

Please check the event log and fix the issues accordingly.

SAAS_SCHEMA_DIVERGED_DURING_ANALYSIS​

SQLSTATE: 42KD4

The analyzed schema of the source table (<table>) has diverged from its expected schema.

Please retry the pipeline update to see if the issue is resolved.

If this issue persists, please perform a full refresh on the tables mentioned above.

Expected schema:

<expectedSchema>

Actual schema:

<actualSchema>

SAAS_UC_CONNECTION_INACCESSIBLE​

SQLSTATE: 08000

The provided Connection <connectionName> is inaccessible. Please check the connection and try again.

For more details see SAAS_UC_CONNECTION_INACCESSIBLE

SCHEMA_SPEC_EMPTY_CATALOG​

SQLSTATE: 3D000

SchemaSpec has an empty string in the catalog field.

Please remove the empty string or add the catalog name. (If this schema doesn't belong to a catalog in the source, do not set the field.)

Schema spec details:

<schemaSpec>

SCHEMA_SPEC_EMPTY_SCHEMA​

SQLSTATE: 3F000

SchemaSpec has an empty string in the schema field.

Please remove the empty string or add the schema name. (If this table doesn't belong to a schema in the source, do not set the field.)

Schema spec details:

<schemaSpec>

SCHEMA_SPEC_REQUIRE_ONE_OF_CATALOG_SCHEMA​

SQLSTATE: 42000

At least one of source catalog and source schema must be present, but both are empty.

Schema spec details:

<schemaSpec>

SERVERLESS_BUDGET_POLICY_BAD_REQUEST​

SQLSTATE: 42000

The provided budget policy with id '<budgetPolicyId>' cannot be used in this workspace due to policy workspace binding constraints.

SERVERLESS_BUDGET_POLICY_IS_INVALID​

SQLSTATE: 42000

Serverless budget policy with id '<budgetPolicyId>' is invalid.

SERVERLESS_BUDGET_POLICY_MISSING​

SQLSTATE: 42000

Serverless budget policy with id '<budgetPolicyId>' does not exist.

SERVERLESS_BUDGET_POLICY_NOT_ENABLED​

SQLSTATE: 0A000

Serverless budget policy is not enabled, please contact Databricks support.

SERVERLESS_BUDGET_POLICY_NOT_ENABLED_FOR_ACCOUNT​

SQLSTATE: 0A000

Serverless budget policy is not enabled for this account. The user cannot specify a budget policy for this pipeline. Account admin should try to enroll through the feature preview portal. If the problem persists, please contact Databricks support.

SERVERLESS_BUDGET_POLICY_NOT_SUPPORTED_FOR_NON_SERVERLESS_PIPELINE​

SQLSTATE: 42000

Serverless budget policy cannot be assigned to a non-serverless pipeline.

SERVERLESS_BUDGET_POLICY_NOT_SUPPORTED_FOR_PIPELINE_TYPE​

SQLSTATE: 42000

Serverless budget policy is not supported for pipeline type <pipelineType>.

SERVERLESS_BUDGET_POLICY_PERMISSION_DENIED​

SQLSTATE: 42000

User does not have permission to use serverless budget policy with id '<budgetPolicyId>'.

SERVERLESS_NOT_AVAILABLE​

SQLSTATE: 0A000

Serverless compute is not available. Please contact Databricks for more information.

SERVERLESS_NOT_ENABLED​

SQLSTATE: 0A000

You can't use serverless compute with Lakeflow Declarative Pipelines. Please contact Databricks to enable this feature for your workspace.

SERVERLESS_NOT_ENABLED_FOR_USER​

SQLSTATE: 0A000

Serverless compute is not enabled for the caller. Please contact your workspace admin to enable this feature.

SERVERLESS_NOT_ENABLED_FOR_WORKSPACE​

SQLSTATE: 0A000

Serverless compute is not available for this workspace and/or region. Please contact Databricks for more information.

SERVERLESS_REQUIRED​

SQLSTATE: 42000

You must use serverless compute in this workspace.

SERVICENOW_CONNECTION_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

For more details see SERVICENOW_CONNECTION_ERROR

SERVICENOW_CONNECTOR_EMPTY_CURSOR_KEY_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

At this time, the ingestion pipeline cannot ingest the table '<tableName>' because the cursor key in a row contains an empty field.

To continue running your pipeline, remove this table. If the error persists, file a ticket.

SERVICENOW_CONNECTOR_INSTANCE_HIBERNATION_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

Failed to connect to the ServiceNow instance. The instance appears to be hibernating or inactive.

Log into your ServiceNow admin portal and wait for some time until the instance fully wakes up.

If the error persists, file a ticket.

SERVICENOW_CONNECTOR_INSTANCE_OFFLINE_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

Failed to connect to the ServiceNow instance. The instance is offline.

Log into your ServiceNow admin portal and wait for some time until the instance is restored.

If the error persists, file a ticket.

SERVICENOW_CONNECTOR_INVALID_TABLE_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

Failed to query the schema of the table '<tableName>'. The table does not exist in the ServiceNow account for this user.

Verify the table name for any typos and ensure that the user has the necessary permissions to access the table.

If the error persists, file a ticket.

SERVICENOW_CONNECTOR_IP_ADDRESS_RESTRICTED_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

Failed to connect to the ServiceNow instance. The instance has IP address access control restrictions.

To resolve this, either disable the IP address restrictions by navigating to ServiceNow >> All >> System Security >> IP Address Access Control, or use serverless stable IPs.

If the error persists, file a ticket.

SERVICENOW_CONNECTOR_MALFORMED_ENDPOINT_URL_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

Invalid ServiceNow API endpoint URL detected. The URL structure does not match the expected ServiceNow format.

Check the ServiceNow instance configuration in the UC connection credentials.

For more details see SERVICENOW_CONNECTOR_MALFORMED_ENDPOINT_URL_ERROR

SERVICENOW_CONNECTOR_MAX_FAILED_ATTEMPTS_REACHED​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

This often happens when the data fetch for a table takes too long. As a first step, work with your ServiceNow administrator to enable indexing on the cursor column.

The cursor column is selected from the following list, in order of availability and preference: sys_updated_on (first choice), sys_created_on (second choice), sys_archived (third choice)

For instructions on enabling indexing in ServiceNow, see here: https://docs.databricks.com/ingestion/lakeflow-connect/servicenow-overview#why-is-my-servicenow-ingestion-performance-slow.

We also recommend increasing the REST API query timeout to more than 60 seconds to allow more time to fetch records.

Then, retry your ingestion pipeline. If the issue persists, file a ticket.

SERVICENOW_CONNECTOR_SCHEMA_FIELD_TYPE_MISMATCH​

SQLSTATE: KD000

An error occurred in ServiceNow while fetching the table schema.

Found two conflicting data types for field '<fieldName>': '<firstDataType>' and '<secondDataType>'.

To continue running your pipeline, remove this table. If the error persists, file a ticket.

SERVICENOW_CONNECTOR_UNAUTHORIZED_ACCESS_ERROR​

SQLSTATE: KD000

An error occurred in ServiceNow. Source API type: <saasSourceApiType>.

For more details see SERVICENOW_CONNECTOR_UNAUTHORIZED_ACCESS_ERROR

SET_TBLPROPERTIES_NOT_ALLOWED_FOR_PIPELINE_TABLE​

SQLSTATE: 0AKLT

ALTER <commandTableType> ... SET TBLPROPERTIES is not supported. To modify table properties, please change the original definition and run an update.
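
For example, a hedged sketch of setting a table property in the definition instead; the property value, table, and source names are hypothetical:

```python
import dlt

# Change properties in the table definition rather than via
# ALTER ... SET TBLPROPERTIES; the next pipeline update applies them.
@dlt.table(
    table_properties={"pipelines.reset.allowed": "false"}
)
def my_table():
    return spark.read.table("source_table")  # hypothetical source
```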

SFDC_CONNECTOR_BULK_QUERY_JOB_INCOMPLETE​

SQLSTATE: KD000

Ingestion for object <objName> is incomplete because the Salesforce API query job took too long, failed, or was manually cancelled.

For more details see SFDC_CONNECTOR_BULK_QUERY_JOB_INCOMPLETE

SFDC_CONNECTOR_BULK_QUERY_NOT_FOUND​

SQLSTATE: KD000

Ingestion for object <objName> failed because the Salesforce bulk API query job was not found.

The bulk query job referenced for ingesting this object was deleted either in the UI or by Salesforce 7 days after its creation.

To trigger a new bulk job, please perform a FULL REFRESH on the specific destination table.

SFDC_CONNECTOR_CREATE_BULK_QUERY_API_LIMIT_EXCEEDED​

SQLSTATE: KD000

An error occurred in the Salesforce API call: API limit exceeded

Please wait for your API limits to reset. Then try refreshing the destination table.

If the error persists, file a ticket.

SFDC_CONNECTOR_CREATE_BULK_QUERY_JOB_FAILED​

SQLSTATE: KD000

Ingestion for object <objName> is incomplete because creation of Salesforce bulk API query job failed. Error code: <saasSourceApiErrorCode>.

<actionText>.

SINKS_NOT_SUPPORTED_IN_SEG​

SQLSTATE: 0A000

DLT sinks in the pipeline are not supported in workspaces with serverless egress control enabled. Supported DLT sinks are Kafka and Delta.

The following unsupported sinks were found: <sinkNames>; their corresponding formats are: <sinkFormats>.

SOURCE_TABLE_NOT_MATERIALIZED​

SQLSTATE: 42704

Failed to read dependent dataset '<sourceTableName>' because it is not materialized. Run the entire pipeline to materialize all dependent datasets.

STANDALONE_PRIVATE_MVST_NOT_SUPPORTED​

SQLSTATE: 0A000

Creating a standalone PRIVATE MV/ST is not supported. Please remove the PRIVATE modifier

STREAMING_TARGET_NOT_DEFINED​

SQLSTATE: 42P01

Cannot find target table <target> for the <command> command. Target table <target> is not defined in the pipeline.
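
A minimal sketch of defining the target streaming table before the flow that writes to it, with hypothetical names:

```python
import dlt

# The target must be defined in the pipeline before an AUTO CDC flow
# can write to it.
dlt.create_streaming_table("customers")

dlt.create_auto_cdc_flow(
    target="customers",
    source="customers_cdc_feed",  # hypothetical CDC source
    keys=["customer_id"],
    sequence_by="event_ts",
)
```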

SYNCED_TABLE_USER_ERROR​

SQLSTATE: 42000

Synced table pipeline user error.

For more details see SYNCED_TABLE_USER_ERROR

TABLE_CHANGED_DURING_OPERATION​

SQLSTATE: 55019

The table <tableName> was modified outside of this transaction, and this transaction has been rolled back. Retry the operation.

TABLE_MATERIALIZATION_CYCLIC_FOREIGN_KEY_DEPENDENCY​

SQLSTATE: 42887

Pipeline update for pipeline: <pipelineId> detected a cyclic chain of foreign key constraints: <tables>.

TABLE_SPEC_BOTH_CATALOG_SCHEMA_REQUIRED​

SQLSTATE: 42000

TableSpec is missing the source catalog and/or the source schema.

Table spec details:

<tableSpec>

TABLE_SPEC_EMPTY_CATALOG​

SQLSTATE: 3D000

TableSpec has an empty string in the catalog field.

Please remove the empty string or add the catalog name. (If this table doesn't belong to a catalog in the source, do not set the field.)

Table spec details:

<tableSpec>

TABLE_SPEC_EMPTY_SCHEMA​

SQLSTATE: 3F000

TableSpec has an empty string in the schema field.

Please remove the empty string or add the schema name. (If this table doesn't belong to a schema in the source, do not set the field.)

Table spec details:

<tableSpec>

TABLE_SPEC_EMPTY_TABLE​

SQLSTATE: 42601

Table name is empty. Please provide a table name.

Table spec details:

<tableSpec>

TABLE_TOKEN_NOT_EXIST_FOR_SCHEMA_EVOLUTION_LEGACY_TABLE​

SQLSTATE: 42KD4

Schema evolution cannot be enabled due to missing metadata. Please trigger a full refresh.

Reason: Snapshot table token must be defined when schema evolution is enabled.

TRIGGER_INTERVAL_VALUE_INVALID​

SQLSTATE: 22003

The trigger interval must be a positive duration; the maximum acceptable value is 2,147,483,647 seconds. Received: <actual> seconds.

TRIGGER_ON_VIEW_READ_FROM_FILE_NOT_SUPPORTED​

SQLSTATE: 0A000

The source <source> is a view that reads from a file location, which is not currently supported by trigger.

TRIGGER_SOURCE_TYPE_NOT_SUPPORTED​

SQLSTATE: 0A000

The source <source> with type <type> is currently not supported by trigger.

UC_CLEARING_TARGET_SCHEMA_NOT_ALLOWED​

SQLSTATE: 0AKD0

Clearing the target schema field is not allowed for UC pipelines. Reason: <reason>.

UC_NOT_ENABLED​

SQLSTATE: 0A000

Using UC catalog in DLT is not enabled.

UC_PIPELINE_CANNOT_PUBLISH_TO_HMS​

SQLSTATE: 42000

UC enabled pipelines cannot publish to Hive Metastore. Please choose a different target catalog.

UC_TARGET_SCHEMA_REQUIRED​

SQLSTATE: 0AKD0

The target schema field is required for UC pipelines. Reason: <reason>.

UNABLE_TO_INFER_TABLE_SCHEMA​

SQLSTATE: 42KD9

Failed to infer the schema for table <tableName> from its upstream flows.

Please modify the flows that write to this table to make their schemas compatible.

Inferred schema so far:

<inferredDataSchema>

Incompatible schema:

<incompatibleDataSchema>

UNEXPECTED_PIPELINE_SCHEMA_PERMISSION_ERROR​

SQLSTATE: 42501

Unexpected error while checking schema permissions for pipeline <pipelineId>. Please contact Databricks support.

UNIFORM_COMPATIBILITY_CANNOT_SET_WITH_ROW_FILTERS_OR_COLUMN_MASKS​

SQLSTATE: 42000

Uniform compatibility cannot be set on materialized views or streaming tables that have a row filter or column masks applied.

UNITY_CATALOG_INITIALIZATION_FAILED​

SQLSTATE: 56000

Encountered an error with Unity Catalog while setting up the pipeline on cluster <clusterId>.

Ensure that your Unity Catalog configuration is correct, and that required resources (e.g., catalog, schema) exist and are accessible.

Also verify that the cluster has appropriate permissions to access Unity Catalog.

Details: <ucErrorMessage>

UNRESOLVED_SINK_PATH​

SQLSTATE: 22KD1

Storage path for sink <identifier> cannot be resolved. Please contact Databricks support.

UNRESOLVED_TABLES_FOR_MAINTENANCE​

SQLSTATE: 55000

The following tables were found in the pipeline definition but could not be resolved during maintenance. Please run a pipeline update execution with the latest pipeline definition to materialize all tables in the pipeline definition and unblock maintenance, or contact Databricks support if the problem persists.

<unresolvedTableIdentifiersSet>

UNRESOLVED_TABLE_PATH​

SQLSTATE: 22KD1

Storage path for table <identifier> cannot be resolved. Please contact Databricks support.

UNSUPPORTED_ALTER_COMMAND​

SQLSTATE: 0A000

ALTER <commandTableType> ... <command> is not supported.

UNSUPPORTED_CHANNEL_FOR_DPM​

SQLSTATE: 0A000

Unsupported channel for Direct Publishing Mode. Expected either the 'CURRENT' or 'PREVIEW' channel, but got 'PREVIOUS'.

UNSUPPORTED_COMMAND_IN_NON_DPM_PIPELINE​

SQLSTATE: 0A000

<command> is only supported in Lakeflow Declarative Pipelines with direct publishing mode enabled.

UNSUPPORTED_COMMAND_IN_QUERY_DEFINITION​

SQLSTATE: 0A000

'<command>' is not supported in query definition. Please move the command outside of the query definition. If it is a pipeline in Python, move the '<command>' outside of @dlt.table()/@dlt.view() decorator. If it is a pipeline in Scala, move the '<command>' outside of 'query' method.

UNSUPPORTED_CUSTOM_DBR_VERSION​

SQLSTATE: 42000

Custom DBR version '<v>' is not supported in SHIELD and HIPAA workspaces. Expected one of [<supportedDbrVersions>]

UNSUPPORTED_CUSTOM_SCHEMA_PREVIEW​

SQLSTATE: 0A000

The custom schema private preview is disabled.

Please create a new pipeline with default publishing mode, using the 'schema' field in the pipeline specification, and move the datasets from this pipeline to the new pipeline.

UNSUPPORTED_CUSTOM_SCHEMA_PREVIEW_ENABLEMENT​

SQLSTATE: 0A000

The custom schema private preview is disabled, and you cannot create new pipelines with it enabled.

Please remove <sparkConfKeys> from the pipeline configuration.

UNSUPPORTED_DBR_VERSION​

SQLSTATE: 42000

DBR version '<v>' is not supported. Expected one of [<supportedDbrVersions>]

UNSUPPORTED_FEATURE_FOR_WORKSPACE​

SQLSTATE: 0A000

<featureName> is not supported in your workspace. Please contact Databricks support to enable this feature for your workspace.

UNSUPPORTED_LANGUAGE​

SQLSTATE: 0A000

Failed to load <language> notebook '<notebookPath>'. Only <supportedLanguages> notebooks are supported currently.

UNSUPPORTED_LIBRARY_FILE_TYPE​

SQLSTATE: 0A000

The file at <path> does not have .py or .sql suffix. Only Python and SQL files are supported in pipelines.

UNSUPPORTED_LIBRARY_NOTEBOOK_LANGUAGE​

SQLSTATE: 0A000

Unsupported language <language> for notebook <path>. Only Python and SQL are supported in pipelines.

UNSUPPORTED_LIBRARY_OBJECT_TYPE​

SQLSTATE: 0A000

The object included at path <path> is of type <objectType>, which is not supported. Currently, only notebooks and files can be used as libraries. To resolve this issue, remove the unsupported object or update the libraries configured for this pipeline so only supported object types are included.

UNSUPPORTED_MANAGED_INGESTION_SOURCE_TYPE​

SQLSTATE: 0A000

Invalid managed ingestion pipeline definition, unsupported source type: <sourceType>.

UNSUPPORTED_SAAS_INGESTION_TYPE​

SQLSTATE: 0A000

The provided ingestion type <ingestionType> is not supported.

Please contact Databricks support if this issue persists.

UNSUPPORTED_SPARK_SQL_COMMAND​

SQLSTATE: 0A000

'<command>' is not supported in the spark.sql("...") API in DLT Python. Supported commands: <supportedCommands>.
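
As an illustration, query statements are accepted while DDL-style commands are not; the exact list is reported in <supportedCommands>. A sketch with a placeholder table name:

```python
import dlt

@dlt.table()
def daily_counts():
    # Supported: a query statement.
    return spark.sql("SELECT date, count(*) AS n FROM raw_events GROUP BY date")

# Not supported through spark.sql() in DLT Python, for example:
# spark.sql("CREATE TABLE t (id INT)")  # raises UNSUPPORTED_SPARK_SQL_COMMAND
```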

UNSUPPORTED_SQL_STATEMENT​

SQLSTATE: 0A000

Unsupported SQL statement for <datasetType> '<datasetName>': <operation> is not supported.

UPDATED_DEFINITION_SET_FOR_NON_DBSQL_PIPELINE​

SQLSTATE: 42000

Only DBSQL pipelines can have updated_definition.

USE_CATALOG_IN_HMS​

SQLSTATE: 0A000

USE CATALOG is only supported in UC-enabled Lakeflow Declarative Pipelines.

VIEW_TEXT_NOT_SAVED_IN_UC​

SQLSTATE: 42000

Cannot refresh table <tableName> as it does not have a query saved in Unity Catalog. Please contact Databricks support.

WORKDAY_REPORTS_CONNECTOR_REPORT_NOT_FOUND_ERROR​

SQLSTATE: KD000

An error occurred in Workday Reports. Source API type: <saasSourceApiType>.

The report URL '<reportUrl>' is incorrect. Please check for any typos in either the base URL or the report name to resolve the issue.

If the issue persists, file a ticket.

WORKDAY_REPORTS_CONNECTOR_REPORT_SIZE_EXCEEDED_ERROR​

SQLSTATE: KD000

An error occurred in Workday Reports. Source API type: <saasSourceApiType>.

The size of the report with URL '<reportUrl>' is greater than 2GB. Please ensure that the report size does not exceed this limit.

If the issue persists, file a ticket.

WORKDAY_REPORTS_CONNECTOR_UNAUTHORIZED_ACCESS_ERROR​

SQLSTATE: KD000

An error occurred in Workday Reports. Source API type: <saasSourceApiType>.

For more details see WORKDAY_REPORTS_CONNECTOR_UNAUTHORIZED_ACCESS_ERROR

WORKDAY_REPORT_URL_EMPTY​

SQLSTATE: 42000

Workday report URL is empty. At least one report must be provided.

WORKSPACE_QUOTA_EXCEEDED​

SQLSTATE: 54000

Cannot start update '<updateId>' because there are already '<maxActiveUpdates>' active pipelines running in this workspace.

Installation​

ERROR_BASE_ENVIRONMENT_FILE_NOT_FOUND​

SQLSTATE: none assigned

Base environment YAML file is missing. Please ensure the file exists in the specified path.

Description: Occurs when the required base environment YAML file cannot be found at the expected location.

Suggested Action: Ensure the base environment YAML file exists in the specified path and retry the installation.

ERROR_BASE_ENVIRONMENT_FILE_READ​

SQLSTATE: none assigned

Failed to read base environment file due to incorrect syntax or format of YAML file. Please review the file contents.

Description: Occurs when the base environment YAML file contains syntax or format errors.

Suggested Action: Validate and fix the YAML syntax in the environment file, then retry the installation.
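
One way to validate the file before retrying is to parse it with PyYAML; a minimal sketch, with a placeholder path:

```python
import yaml

path = "/Volumes/main/default/envs/base_env.yml"  # placeholder location

try:
    with open(path) as f:
        yaml.safe_load(f)
    print("YAML parsed OK")
except yaml.YAMLError as e:
    # Fix the reported syntax error, then retry the installation.
    print(f"Invalid YAML: {e}")
```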

ERROR_CONNECTION_REFUSED​

SQLSTATE: none assigned

Cannot connect to package repository. Check network connectivity, firewall settings, or repository availability.

Description: Occurs when pip cannot establish a connection to the remote package repository due to network issues, firewall restrictions, or repository downtime.

Suggested Action: Verify network connectivity, check firewall or proxy settings, confirm the repository URL is accessible, and try again later if the repository may be temporarily unavailable.

ERROR_CORE_PACKAGE_VERSION_CHANGE​

SQLSTATE: none assigned

The installed package is incompatible with Databricks core packages. Please align the package versions with the preinstalled library versions and retry the installation.

Description: Occurs when a Databricks core dependency (e.g., pyspark) version is changed.

Suggested Action: Align the installed package versions with the preinstalled libraries and retry the installation.

ERROR_CRAN_PACKAGE_NOT_AVAILABLE​

SQLSTATE: none assigned

The CRAN package is not available for the preinstalled R version on this compute.

Description: Occurs when an R package is not published for the installed R version on the compute.

Suggested Action: Switch to a compute with a compatible R version, or choose a different package version.

ERROR_DBFS_DISABLED​

SQLSTATE: none assigned

Public DBFS root access is disabled. Please use alternative storage.

Description: Occurs when attempting to install libraries from DBFS, but the public DBFS root is disabled in the workspace.

Suggested Action: Use alternative storage locations like UC volumes, workspace files or remote storage.

ERROR_DIRECTORY_NOT_INSTALLABLE​

SQLSTATE: none assigned

Directory cannot be installed due to an invalid Python package structure. Please check that the directory is properly set up.

Description: Occurs when pip install is run against a directory without a valid Python package structure.

Suggested Action: Ensure the directory contains a setup.py or pyproject.toml and retry, or package it as a wheel.
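
For reference, a minimal hypothetical setup.py that makes a directory pip-installable (all names are placeholders):

```python
# setup.py at the root of the directory passed to `pip install`.
from setuptools import setup, find_packages

setup(
    name="my-pkg",             # placeholder distribution name
    version="0.1.0",
    packages=find_packages(),  # discovers packages containing __init__.py
)
```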

ERROR_DUPLICATE_INSTALLATION​

SQLSTATE: none assigned

Duplicate package installation detected. Try removing the duplicate package entry for the package and restart the cluster.

Description: Occurs when the same package is already installed on the cluster and a duplicate installation is attempted.

Suggested Action: Remove the duplicate package entries and restart the cluster.

ERROR_FEATURE_DISABLED​

SQLSTATE: none assigned

Unity Catalog volumes are disabled in the workspace. Please contact the workspace admin to enable this feature.

Description: Occurs when Unity Catalog volumes are disabled in the workspace, preventing installation from UC volumes.

Suggested Action: Contact your workspace administrator to enable Unity Catalog volumes, or use alternative storage.

ERROR_INVALID_FILE​

SQLSTATE: none assigned

The specified file cannot be installed due to an incorrect file type. Please verify you are using a valid and supported file type.

Description: Occurs when the specified library file cannot be installed due to incorrect file type or format.

Suggested Action: Use a supported library file type (e.g., wheel, jar) and verify the path validity.

ERROR_INVALID_REQUIREMENT​

SQLSTATE: none assigned

Incorrect syntax or malformed entries in the requirements file or package dependencies. Check and correct the contents of the requirements file and the package dependencies.

Description: Invalid or malformed requirement format detected in either your requirements file or in a package's dependency specifications.

Suggested Action: Use the correct format (e.g., 'library-name==version') in requirements files, verify package dependency formats are valid, and check for typos or unsupported version specifiers.
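
A malformed entry can be spotted before installation by parsing each line with the packaging library; a sketch with illustrative requirement strings:

```python
from packaging.requirements import Requirement, InvalidRequirement

for line in ["pandas==2.1.0", "numpy>=1.24", "pandas = 2.1.0"]:
    try:
        Requirement(line)
        print(f"OK:      {line}")
    except InvalidRequirement as e:
        # A single '=' is not a valid version specifier; use '=='.
        print(f"Invalid: {line} ({e})")
```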

ERROR_INVALID_SCRIPT_ENTRY_POINT​

SQLSTATE: none assigned

Invalid script entry point. Please check the package entry points or the setup.py file.

Description: Occurs when the specified console script entry point does not exist in the package metadata.

Suggested Action: Verify the entry point name in setup.py or pyproject.toml, or contact the package maintainer.
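
As an illustration, a console-script entry point must name a function that actually exists in the package. A hypothetical setup.py fragment:

```python
from setuptools import setup

setup(
    name="my-pkg",            # placeholder
    version="0.1.0",
    py_modules=["my_pkg"],
    entry_points={
        "console_scripts": [
            # Installs a 'my-tool' command; my_pkg.main must exist,
            # otherwise the entry point is invalid.
            "my-tool = my_pkg:main",
        ],
    },
)
```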

ERROR_INVALID_STORAGE_CONFIGURATION​

SQLSTATE: none assigned

Invalid storage configuration value detected for the cloud storage account. Please check the storage configuration, account settings and credentials.

Description: Occurs when the cloud storage account configuration is malformed or incorrect.

Suggested Action: Correct the storage configuration for the account and retry installation.

ERROR_INVALID_USER_INPUT​

SQLSTATE: none assigned

Invalid package syntax or arguments provided. Please verify that the input and options for library installation are valid.

Description: Occurs when pip is invoked with invalid options or arguments.

Suggested Action: Verify pip options and command syntax, correct the input, and retry installation.

ERROR_INVALID_WHEEL​

SQLSTATE: none assigned

Corrupted, malformed, or invalid wheel file detected. Please check the wheel file or retry the installation.

Description: Occurs when pip encounters a corrupted, incomplete, or malformed wheel file during installation.

Suggested Action: Clear the pip cache, re-download or rebuild the wheel, reinstall, and verify its integrity before retrying.

ERROR_JAR_EVICTED​

SQLSTATE: none assigned

The JAR package was evicted by the resolver due to version conflicts. Please resolve dependency version conflicts.

Description: Occurs when Maven dependency resolution evicts a jar because of version conflicts with other dependencies.

Suggested Action: Resolve conflicts in your dependency configuration, or use explicit version overrides.

ERROR_MAVEN_LIBRARY_RESOLUTION​

SQLSTATE: none assigned

Cannot resolve Maven library coordinates. Verify library details, repository access, or Maven repository availability.

Description: Occurs when Maven cannot find or resolve the specified library due to incorrect coordinates, network issues, or repository downtime.

Suggested Action: Verify the groupId:artifactId:version format, check repository URLs and credentials, try alternative repositories, or try again later if the repository may be temporarily unavailable.

ERROR_NO_MATCHING_DISTRIBUTION​

SQLSTATE: none assigned

Unable to download or access the specified cloud storage assets, typically due to misconfigurations, missing dependencies, or connectivity issues. Please review the cloud storage setup.

Description: Occurs when cluster cannot download or install library files from cloud storage, typically due to misconfigurations, missing dependencies, or network issues.

Suggested Action: Ensure the cloud storage URIs are correct, credentials are valid, and any required network proxies or libraries are properly configured, then retry the installation.

ERROR_NO_SUCH_FILE_OR_DIRECTORY​

SQLSTATE: none assigned

The library file does not exist or the user does not have permission to read the library file. Please check if the library file exists and the user has the right permissions to access the file.

Description: Occurs when the specified file is missing or inaccessible at the given path during library installation.

Suggested Action: Verify that the file exists at the specified path, correct the path or upload the missing file, and ensure proper permissions.

ERROR_OPERATION_NOT_SUPPORTED​

SQLSTATE: none assigned

Library installation is unsupported for this file type from the requested filesystem. Please check the library type and refer to the user guide about the supported libraries on the current compute.

Description: Occurs when the target filesystem, file type, or compute type does not support the requested operation during installation.

Suggested Action: Use a supported filesystem, file type, or compute type, or adjust installation target to a supported location.

ERROR_PERMISSION_DENIED​

SQLSTATE: none assigned

The user does not have sufficient permissions to install the package. Please check the file and directory access rights for the user.

Description: Occurs when the installing user lacks permission to read or write a file or directory during installation.

Suggested Action: Verify and grant proper permissions on the target directory, or contact your system administrator.

ERROR_PIP_CONFIG​

SQLSTATE: none assigned

A Python library installation failed because the cluster-level or workspace-level pip configuration file has syntax errors or is malformed. Please check and fix the pip config file.

Description: Occurs when pip's configuration file has syntax errors or is malformed.

Suggested Action: Fix the syntax errors in the pip configuration file, remove the config file to use default settings, restart the cluster, and retry the installation.

ERROR_REQUIREMENTS_FILE_INSTALLATION​

SQLSTATE: none assigned

requirements.txt files that contain a Unity Catalog volume or workspace file reference are not supported on non-UC enabled clusters. Please use a UC enabled cluster to install requirements.txt that refers to a UC volume or workspace file.

Description: Occurs when requirements.txt includes UC volumes or workspace file references on a non-UC enabled cluster.

Suggested Action: Use a UC-enabled cluster for requirements.txt that references workspace or UC files, or remove those references.

ERROR_REQUIRE_HASHES​

SQLSTATE: none assigned

Pip was run in --require-hashes mode and a requirement lacks a hash. Please add the required hashes or disable hash checking.

Description: Occurs when pip is run in --require-hashes mode and a requirement lacks a hash.

Suggested Action: Add hashes for all packages in requirements.txt or remove --require-hashes flag.
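
In --require-hashes mode, every entry in requirements.txt needs at least one --hash option, e.g. `requests==2.31.0 --hash=sha256:<digest>`. A sketch of computing the digest for a locally downloaded wheel (the filename is a placeholder):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so large wheels do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("requests-2.31.0-py3-none-any.whl"))  # placeholder wheel
```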

ERROR_RESTART_PYTHON​

SQLSTATE: none assigned

Failed to restart Python process. This may be due to updating the version of a package that conflicts with the preinstalled runtime libraries. Please check and align the package dependencies and their versions.

Description: Occurs when the Python environment cannot be restarted after library installation, often due to conflicts between installed packages and preinstalled Databricks Runtime libraries.

Suggested Action: Align installed package versions with the preinstalled Databricks Runtime libraries to avoid conflicts and Python restart errors.

ERROR_RESTART_SCALA​

SQLSTATE: none assigned

Failed to restart Scala process. This may be due to a Scala version mismatch in the Scala JARs (e.g., running a 2.12 JAR on a 2.13 kernel). Please check and align the Scala versions.

Description: Occurs when the Scala environment cannot be restarted, often due to a Scala version mismatch in the Scala JARs.

Suggested Action: Align the JARs' Scala version with the kernel's Scala version to avoid conflicts and Scala restart errors.

ERROR_S3_FORBIDDEN​

SQLSTATE: none assigned

Access denied to S3 resource. Check the IAM permissions and bucket policies.

Description: The cluster's AWS credentials do not have sufficient permissions to access the specified S3 resource.

Suggested Action: Verify and update the S3 bucket policies or IAM roles to grant the necessary read access to the cluster.

ERROR_SETUP_PY_FAILURE​

SQLSTATE: none assigned

The Python package's setup.py did not run successfully due to compatibility issues, missing dependencies, or configuration errors. Please check the setup file of your dependencies.

Description: Occurs when the package's setup.py script fails due to compatibility issues, missing dependencies, or configuration errors.

Suggested Action: Update package versions, install missing dependencies, replace deprecated packages, and verify the setup.py script.

ERROR_SSL_VIOLATION​

SQLSTATE: none assigned

Pip encountered SSL handshake or certificate verification issues. Please review the SSL configurations and certificates on your compute or workspace.

Description: Occurs when pip encounters SSL handshake or certificate verification issues when connecting to package repositories.

Suggested Action: Verify SSL certificates are valid, configure trusted hosts in pip, or check network SSL settings.

ERROR_UC_ASSET_NOT_FOUND​

SQLSTATE: none assigned

Unity Catalog object not found. Verify catalog, schema, and volume exist.

Description: Occurs when the specified Unity Catalog volume, catalog, or schema does not exist or is inaccessible.

Suggested Action: Verify the Unity Catalog object path is correct and the object exists in your account.

ERROR_UNSUPPORTED_LIBRARY_TYPE​

SQLSTATE: none assigned

The library type is unsupported on this compute. Please check the supported libraries for the compute type.

Description: Occurs when attempting to install a library type that is not compatible with the selected compute.

Suggested Action: Use a supported library type for this compute or switch to a compute that supports this library type.

ERROR_UNSUPPORTED_PYTHON_VERSION​

SQLSTATE: none assigned

The Python library is incompatible with the Python version on this compute. Please use a compute with a compatible Python version.

Description: Occurs when a package's python_requires constraint does not match the Python version running on the compute.

Suggested Action: Install a package version that supports the current Python version, or change the compute version.

ERROR_UNSUPPORTED_SSL_ENABLED​

SQLSTATE: none assigned

Installation fails when the spark.ssl.enabled configuration is turned on, which is unsupported for library installation. Disable the SSL configuration and restart the cluster.

Description: Occurs when the spark.ssl.enabled configuration is turned on, which is unsupported for library installation.

Suggested Action: Disable SSL configuration (e.g. set spark.ssl.enabled=false or set spark.databricks.libraries.ignoreSSL=true) and restart the cluster.

ERROR_USER_NOT_FOUND_IN_WORKSPACE​

SQLSTATE: none assigned

Library installation failed because the user was not found in the workspace. This typically happens when a user has been removed from the workspace but their token is still being used.

Description: Occurs when a user's access token is in use but the user no longer exists in the specified workspace.

Suggested Action: Ensure the user has access to the workspace, or update the cluster configuration to use a valid user's credentials.

ERROR_VOLUME_PERMISSION_DENIED​

SQLSTATE: none assigned

Insufficient permissions on Unity Catalog volume. Please check the UC volume access rights or request access from the UC volume owner.

Description: Occurs when the user lacks permissions on the specified UC volume.

Suggested Action: Request READ permissions on the Unity Catalog volume from the volume owner or administrator.

ERROR_WHEEL_BUILD​

SQLSTATE: none assigned

Pip could not successfully build the wheel due to missing build dependencies or errors. Please check the wheel package contents and dependencies.

Description: Occurs when pip fails to build a wheel for the package due to missing build dependencies or errors.

Suggested Action: Ensure build tools and headers are installed, or install a prebuilt wheel with --no-binary.

ERROR_WHEEL_INSTALLATION​

SQLSTATE: none assigned

The wheel is incompatible with the current compute due to platform tag mismatch or an invalid wheel file. Please check the wheel package contents, dependencies, and their compatibility with the compute.

Description: Occurs when the wheel file is invalid or the platform tags do not match.

Suggested Action: Use a wheel built for the current platform or rebuild the wheel with appropriate tags.
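
One way to diagnose a tag mismatch is to compare the wheel's filename tags against the tags the current interpreter supports; a sketch using the packaging library, with a placeholder filename:

```python
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

name, version, build, tags = parse_wheel_filename(
    "my_pkg-0.1.0-cp311-cp311-manylinux_2_17_x86_64.whl"  # placeholder
)
supported = set(sys_tags())
print("compatible with this interpreter:", any(t in supported for t in tags))
```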

FAULT_CLOUD_STORAGE_INSTALLATION​

SQLSTATE: none assigned

Unable to download or access the specified cloud storage assets, typically due to misconfigurations or connectivity issues. Please review the cloud storage setup.

Description: Occurs when the compute cannot download or install library files from cloud storage, typically due to misconfigurations or network issues.

Suggested Action: Ensure the cloud storage URIs are correct, credentials are valid, and any required network proxies or libraries are properly configured, then retry the installation.

FAULT_DBR_VERSION_EOS​

SQLSTATE: none assigned

The Databricks Runtime version on the compute has reached its end-of-support and is no longer supported. Please use a supported Databricks runtime version.

Description: Occurs when the library is installed on a Databricks Runtime version that no longer receives updates or support.

Suggested Action: Upgrade the cluster to a supported Databricks Runtime version and retry the installation.

FAULT_POLLER_ALLOWLIST_VALIDATION​

SQLSTATE: none assigned

Library installation is blocked because the library is missing from the allowlist. This can happen if a library is removed from the allowlist after it has been added to a cluster. Check the library allowlist, request the administrator to add libraries to the allowlist, or remove unauthorized packages from the cluster.

Description: Occurs when one or more requested libraries are not approved in the metastore allowlist and are blocked from installation. This can also happen if a library was previously allowed but later removed from the allowlist.

Suggested Action: Check the metastore allowlist, request your administrator to add the missing libraries to the allowlist, or remove unauthorized libraries from the cluster.

FAULT_POLLER_DBR_UNSUPPORTED​

SQLSTATE: none assigned

The Databricks Runtime version on the cluster has reached its end-of-support and is no longer supported. Please use a supported Databricks runtime version.

Description: Occurs when the library is installed on a Databricks Runtime version that no longer receives updates or support.

Suggested Action: Change the cluster to use a supported Databricks Runtime version and retry the installation.

FAULT_POLLER_LIBRARY_STORAGE_FORMAT_UNSUPPORTED​

SQLSTATE: none assigned

The selected Databricks Runtime version may not support certain library storage formats, such as gs:// or abfss:// paths. Please upgrade Databricks Runtime or refer to the user guide regarding the capabilities of different Databricks Runtime versions.

Description: Occurs when the Databricks Runtime version does not support the specified library storage format or protocol.

Suggested Action: Use a supported storage scheme or upgrade to a Databricks Runtime version that supports the desired storage format.

FAULT_POLLER_UNITY_CATALOG_NOT_AVAILABLE_ERROR​

SQLSTATE: none assigned

Library installation requires Unity Catalog but Unity Catalog is unavailable in the cluster or workspace. Please contact the workspace admin to enable this feature.

Description: Occurs when a library installation requires Unity Catalog but Unity Catalog is unavailable in the workspace.

Suggested Action: Request your workspace administrator to enable Unity Catalog in your workspace, or use a cluster that supports Unity Catalog.

FAULT_STORAGE_ACCESS_FAILURE​

SQLSTATE: none assigned

Unable to access cloud storage resources due to credential, network, or permission issues. Please check the cloud storage access configuration.

Description: Occurs when the compute cannot access the cloud storage resources due to credential, network, or permission issues.

Suggested Action: Verify storage credentials, network connectivity, and access permissions, then retry the installation.

Miscellaneous​

ABAC_POLICIES_NOT_SUPPORTED_FOR_RUNTIME_VERSION​

SQLSTATE: none assigned

DBR version <abacDBRMajorVersion>.<abacDBRMinorVersion> or higher is required to query table <tableFullName> because it is protected by an ABAC policy.

AZURE_ENTRA_CREDENTIALS_MISSING​

SQLSTATE: none assigned

Azure Entra (aka Azure Active Directory) credentials missing.

Ensure you are either logged in with your Entra account

or have set up an Azure DevOps personal access token (PAT) in User Settings > Git Integration.

If you are not using a PAT and are using Azure DevOps with the Repos API,

you must use an Azure Entra access token.

See https://docs.microsoft.com/azure/databricks/dev-tools/api/latest/aad/app-aad-token for steps to acquire an Azure Entra access token.

AZURE_ENTRA_CREDENTIALS_PARSE_FAILURE​

SQLSTATE: none assigned

Encountered an error with your Azure Entra (Azure Active Directory) credentials. Please try logging out of

Entra (https://portal.azure.com) and logging back in.

Alternatively, you may also visit User Settings > Git Integration to set

up an Azure DevOps personal access token.

AZURE_ENTRA_LOGIN_ERROR​

SQLSTATE: none assigned

Encountered an error with your Azure Active Directory credentials. Please try logging out of

Azure Active Directory (https://portal.azure.com) and logging back in.

AZURE_ENTRA_WORKLOAD_IDENTITY_ERROR​

SQLSTATE: none assigned

Encountered an error with Azure Workload Identity with Azure Exception: <azureWorkloadIdentityExceptionMessage>

CLEAN_ROOM_DELTA_SHARING_ENTITY_NOT_AUTHORIZED​

SQLSTATE: none assigned

Credential generation cannot be requested for a clean room Delta Sharing securable.

CLEAN_ROOM_HIDDEN_SECURABLE_PERMISSION_DENIED​

SQLSTATE: none assigned

Securable <securableName> with type <securableType> and kind <securableKind> is managed by the clean room system; the user does not have access.

CONSTRAINT_ALREADY_EXISTS​

SQLSTATE: none assigned

Constraint with name <constraintName> already exists, choose a different name.

CONSTRAINT_DOES_NOT_EXIST​

SQLSTATE: none assigned

Constraint <constraintName> does not exist.

COULD_NOT_READ_REMOTE_REPOSITORY​

SQLSTATE: none assigned

Could not read remote repository (<repoUrl>).

Please go to your remote Git provider to ensure that:

  1. Your remote Git repo URL is valid.

  2. Your personal access token or app password has the correct repo access.

COULD_NOT_RESOLVE_REPOSITORY_HOST​

SQLSTATE: none assigned

Could not resolve host for <repoUrl>.

CSMS_BEGINNING_OF_TIME_NOT_SUPPORTED​

SQLSTATE: none assigned

Parameter beginning_of_time cannot be true.

CSMS_CONTINUATION_TOKEN_EXPIRED​

SQLSTATE: none assigned

Requested objects could not be found for the continuation token.

CSMS_INVALID_CONTINUATION_TOKEN​

SQLSTATE: none assigned

Continuation token invalid. Cause: <msg>

CSMS_INVALID_MAX_OBJECTS​

SQLSTATE: none assigned

Invalid value <value> for parameter max_objects, expected value in [<minValue>, <maxValue>]

CSMS_INVALID_SUBSCRIPTION_ID​

SQLSTATE: none assigned

Subscription ID invalid. Cause: <msg>

CSMS_INVALID_URI_FORMAT​

SQLSTATE: none assigned

Invalid URI format. Expected a volume (e.g. "/Volumes/catalog/schema/volume") or cloud storage path (e.g. "s3://some-uri")

CSMS_KAFKA_TOPIC_MISSING​

SQLSTATE: none assigned

Must provide a Kafka topic

CSMS_LOCATION_ERROR​

SQLSTATE: none assigned

Failed to list objects. There are problems on the location that need to be resolved. Details: <msg>

CSMS_LOCATION_NOT_KNOWN​

SQLSTATE: none assigned

No location found for uri <path>

CSMS_METASTORE_ID_MISSING​

SQLSTATE: none assigned

Must provide a metastore uuid

CSMS_METASTORE_RESOLUTION_FAILED​

SQLSTATE: none assigned

Unable to determine a metastore for the request.

CSMS_RESOLVE_LOCAL_SHARD_NAME_FAILED​

SQLSTATE: none assigned

CSMS failed to resolve the local shard name

CSMS_SERVICE_DISABLED​

SQLSTATE: none assigned

Service is disabled

CSMS_SHARD_NAME_MISSING_IN_REQUEST​

SQLSTATE: none assigned

Shard name is missing from an RPC request to CSMS.

CSMS_SUBSCRIPTION_ID_MISSING_IN_REQUEST​

SQLSTATE: none assigned

Subscription ID is missing from the request.

CSMS_SUBSCRIPTION_NOT_FOUND​

SQLSTATE: none assigned

Subscription with id <id> not found.

CSMS_UNITY_CATALOG_DISABLED​

SQLSTATE: none assigned

Unity Catalog is disabled for this workspace

CSMS_UNITY_CATALOG_ENTITY_NOT_FOUND​

SQLSTATE: none assigned

Unity Catalog entity not found. Ensure that the catalog, schema, volume and/or external location exists.

CSMS_UNITY_CATALOG_EXTERNAL_LOCATION_DOES_NOT_EXIST​

SQLSTATE: none assigned

Unity Catalog external location does not exist.

CSMS_UNITY_CATALOG_EXTERNAL_STORAGE_OVERLAP​

SQLSTATE: none assigned

URI overlaps with other volumes

CSMS_UNITY_CATALOG_METASTORE_DOES_NOT_EXIST​

SQLSTATE: none assigned

Unable to determine a metastore for the request. Metastore does not exist.

CSMS_UNITY_CATALOG_PATH_BASED_ACCESS_TO_TABLE_WITH_FILTER_NOT_ALLOWED​

SQLSTATE: none assigned

URI points to a table with a row-level filter or column mask. Path-based access to this table is not allowed.

CSMS_UNITY_CATALOG_PERMISSION_DENIED​

SQLSTATE: none assigned

Permission denied

CSMS_UNITY_CATALOG_TABLE_DOES_NOT_EXIST​

SQLSTATE: none assigned

Unity Catalog table does not exist.

CSMS_UNITY_CATALOG_VOLUME_DOES_NOT_EXIST​

SQLSTATE: none assigned

Unity Catalog volume does not exist.

CSMS_UNSUPPORTED_SECURABLE​

SQLSTATE: none assigned

Unsupported securable

CSMS_URI_MISSING​

SQLSTATE: none assigned

Must provide uri

CSMS_URI_TOO_LONG​

SQLSTATE: none assigned

The provided URI is too long. Maximum permitted length is <maxLength>.

DMK_CATALOGS_DISALLOWED_ON_CLASSIC_COMPUTE​

SQLSTATE: none assigned

Databricks Default Storage cannot be accessed using Classic Compute. Please use Serverless compute to access data in Default Storage.

GITHUB_APP_COULD_NOT_REFRESH_CREDENTIALS​

SQLSTATE: none assigned

Operation failed because linked GitHub app credentials could not be refreshed.

Please try again or go to User Settings > Git Integration and try relinking your Git provider account.

If the problem persists, please file a support ticket.

GITHUB_APP_CREDENTIALS_NO_ACCESS​

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. An admin of the repository must go to https://github.com/apps/databricks/installations/new and install the Databricks GitHub app on the repository.

Alternatively, a GitHub account owner can install the app on the account to give access to the account's repositories.

  2. If the app is already installed, have an admin ensure that if they are using scoped access with the 'Only select repositories' option, they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_EXPIRED_CREDENTIALS​

SQLSTATE: none assigned

Linked GitHub app credentials expired after 6 months of inactivity.

Go to User Settings > Git Integration and try relinking your credentials.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_DIFFERENT_USER_ACCOUNT​

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. GitHub user <gitCredentialUsername> should go to https://github.com/apps/databricks/installations/new and install the app on the account <gitCredentialUsername> to allow access.

  2. If user <gitCredentialUsername> already installed the app and they are using scoped access with the 'Only select repositories' option, they should ensure they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_ORGANIZATION​

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. An owner of the GitHub organization <organizationName> should go to https://github.com/apps/databricks/installations/new and install the app on the organization <organizationName> to allow access.

  2. If the app is already installed on GitHub organization <organizationName>, have an owner of this organization ensure that if using scoped access with the 'Only select repositories' option, they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_YOUR_ACCOUNT​

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. Go to https://github.com/apps/databricks/installations/new and install the app on your account <gitCredentialUsername> to allow access.

  2. If the app is already installed, and you are using scoped access with the 'Only select repositories' option, ensure that you have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GIT_CLUSTER_NOT_READY​

SQLSTATE: none assigned

Git cluster is not ready.

GIT_CREDENTIAL_GENERIC_INVALID​

SQLSTATE: none assigned

Invalid Git provider credentials for repository URL <repoUrl>.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Go to User Settings > Git Integration to view your credential.

Please go to your remote Git provider to ensure that:

  1. You have entered the correct Git user email or username with your Git provider credentials.

  2. Your token or app password has the correct repo access.

  3. Your token has not expired.

  4. If you have SSO enabled with your Git provider, be sure to authorize your token.

GIT_CREDENTIAL_INVALID_PAT​

SQLSTATE: none assigned

Invalid Git provider Personal Access Token credentials for repository URL <repoUrl>.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Go to User Settings > Git Integration to view your credential.

Please go to your remote Git provider to ensure that:

  1. You have entered the correct Git user email or username with your Git provider credentials.

  2. Your token or app password has the correct repo access.

  3. Your token has not expired.

  4. If you have SSO enabled with your Git provider, be sure to authorize your token.

GIT_CREDENTIAL_MISSING​

SQLSTATE: none assigned

No Git credential configured, but credential required for this repository (<repoUrl>).

Go to User Settings > Git Integration to set up your Git credentials.

GIT_CREDENTIAL_NO_WRITE_PERMISSION​

SQLSTATE: none assigned

Write access to <gitCredentialProvider> repository (<repoUrl>) not granted.

Make sure you (<gitCredentialUsername>) have write access to this remote repository.

GIT_CREDENTIAL_PROVIDER_MISMATCHED​

SQLSTATE: none assigned

Incorrect Git credential provider for repository.

Your current Git credential's provider (<gitCredentialProvider>) does not match the Git provider of the repository (<repoUrl>).

Try a different repository or go to User Settings > Git Integration to update your Git credentials.

GIT_FILE_NAME_TOO_LONG​

SQLSTATE: none assigned

File or folder name(s) in path <path> exceed the <maxComponentBytes>-byte maximum per component.

Unix-based systems only support up to <maxComponentBytes> bytes per file or folder name.

Violations: <violations>

Please shorten the offending component(s) to proceed.

GIT_PROXY_CLUSTER_NOT_READY​

SQLSTATE: none assigned

Git proxy cluster is not ready.

GIT_PROXY_CONNECTION_FAILED​

SQLSTATE: none assigned

Failed to connect to Git Proxy, please check if Git Proxy is up and running.

Error: <error>

GIT_SECRET_IN_CODE​

SQLSTATE: none assigned

Secrets found in the commit. Detail: <secretDetail>. To fix this error:

Remove the secret and try committing again.

If the problem persists, please file a support ticket.

HIERARCHICAL_NAMESPACE_NOT_ENABLED​

SQLSTATE: none assigned

The Azure storage account does not have hierarchical namespace enabled.

INVALID_FIELD_LENGTH​

SQLSTATE: none assigned

<rpcName> <fieldName> too long. Maximum length is <maxLength> characters.

INVALID_PARAMETER_VALUE​

SQLSTATE: none assigned

<msg>

For more details see INVALID_PARAMETER_VALUE

JOBS_TASK_FRAMEWORK_TASK_RUN_OUTPUT_NOT_FOUND​

SQLSTATE: none assigned

Task Framework: Task Run Output for Task with runId <runId> and orgId <orgId> could not be found.

JOBS_TASK_FRAMEWORK_TASK_RUN_STATE_NOT_FOUND​

SQLSTATE: none assigned

Task Framework: Task Run State for Task with runId <runId> and orgId <orgId> could not be found.

JOBS_TASK_REGISTRY_TASK_CLIENT_CONFIG_DOES_NOT_EXIST​

SQLSTATE: none assigned

RPC ClientConfig for Task with ID <taskId> does not exist.

JOBS_TASK_REGISTRY_TASK_DOES_NOT_EXIST​

SQLSTATE: none assigned

Task with ID <taskId> does not exist.

JOBS_TASK_REGISTRY_UNSUPPORTED_JOB_TASK​

SQLSTATE: none assigned

Task Registry: Unsupported or unknown JobTask with class <taskClassName>.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_EXTERNAL_SHALLOW_CLONE​

SQLSTATE: none assigned

Path-based access to external shallow clone table <tableFullName> is not supported. Please use table names to access the shallow clone instead.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_FABRIC​

SQLSTATE: none assigned

Fabric table located at URL '<url>' was not found. Please use the REFRESH FOREIGN CATALOG command to populate Fabric tables.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_TABLES_WITH_ROW_COLUMN_ACCESS_POLICIES​

SQLSTATE: none assigned

Path-based access to table <tableFullName> with a row filter or column mask is not supported.

PERMISSION_DENIED​

SQLSTATE: none assigned

User does not have <msg> on <resourceType> '<resourceName>'.

REDASH_DELETE_ASSET_HANDLER_INVALID_INPUT​

SQLSTATE: none assigned

Unable to parse delete object request: <invalidInputMsg>

REDASH_DELETE_OBJECT_NOT_IN_TRASH​

SQLSTATE: none assigned

Unable to delete object <resourceName> that is not in trash

REDASH_PERMISSION_DENIED​

SQLSTATE: none assigned

Could not find resource <resourceId>, or the user does not have permission to access it.

REDASH_QUERY_NOT_FOUND​

SQLSTATE: none assigned

Unable to find the resource from query id <queryId>

REDASH_QUERY_SNIPPET_CREATION_FAILED​

SQLSTATE: none assigned

Unable to create new query snippet

REDASH_QUERY_SNIPPET_QUOTA_EXCEEDED​

SQLSTATE: none assigned

The quota for the number of query snippets has been reached. The current quota is <quota>.

REDASH_QUERY_SNIPPET_TRIGGER_ALREADY_IN_USE​

SQLSTATE: none assigned

The specified trigger <trigger> is already in use by another query snippet in this workspace.

REDASH_RESOURCE_NOT_FOUND​

SQLSTATE: none assigned

The requested resource <resourceName> does not exist.

REDASH_RESTORE_ASSET_HANDLER_INVALID_INPUT​

SQLSTATE: none assigned

Unable to parse restore object request: <invalidInputMsg>

REDASH_RESTORE_OBJECT_NOT_IN_TRASH​

SQLSTATE: none assigned

Unable to restore object <resourceName> that is not in trash

REDASH_TRASH_OBJECT_ALREADY_IN_TRASH​

SQLSTATE: none assigned

Unable to trash already-trashed object <resourceName>

REDASH_UNABLE_TO_GENERATE_RESOURCE_NAME​

SQLSTATE: none assigned

Could not generate resource name from id <id>

REDASH_VISUALIZATION_CREATION_FAILED​

SQLSTATE: none assigned

Unable to create new visualization

REDASH_VISUALIZATION_NOT_FOUND​

SQLSTATE: none assigned

Could not find visualization <visualizationId>

REDASH_VISUALIZATION_QUOTA_EXCEEDED​

SQLSTATE: none assigned

The quota for the number of visualizations on query <query_id> has been reached. The current quota is <quota>.

REPOSITORY_URL_NOT_FOUND​

SQLSTATE: none assigned

Remote repository (<repoUrl>) not found.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Please go to your remote Git provider to ensure that:

  1. Your remote Git repo URL is valid.

  2. Your personal access token or app password has the correct repo access.

RESOURCE_ALREADY_EXISTS​

SQLSTATE: none assigned

<resourceType> '<resourceIdentifier>' already exists

RESOURCE_DOES_NOT_EXIST​

SQLSTATE: none assigned

<resourceType> '<resourceIdentifier>' does not exist.

ROW_COLUMN_ACCESS_POLICIES_NOT_SUPPORTED_ON_ASSIGNED_CLUSTERS​

SQLSTATE: none assigned

Queries on table <tableFullName> with a row filter or column mask are not supported on assigned clusters.

ROW_COLUMN_SECURITY_NOT_SUPPORTED_WITH_TABLE_IN_DELTA_SHARING​

SQLSTATE: none assigned

Table <tableFullName> is being shared with Delta Sharing, and cannot use row/column security.

SERVICE_TEMPORARILY_UNAVAILABLE​

SQLSTATE: none assigned

The <serviceName> service is temporarily under maintenance. Please try again later.

TABLE_WITH_ROW_COLUMN_SECURITY_NOT_SUPPORTED_IN_ONLINE_MODE​

SQLSTATE: none assigned

Table <tableFullName> cannot have both row/column security and online materialized views.

TOO_MANY_ROWS_TO_UPDATE​

SQLSTATE: none assigned

Too many rows to update, aborting update.

