Applies to: Databricks SQL Databricks Runtime
Constructs a virtual table that has no physical data, based either on the result set of a SQL query or, for a metric view, on a YAML specification. ALTER VIEW and DROP VIEW only change metadata.
To execute this statement, you must be a metastore administrator or have USE CATALOG and USE SCHEMA privileges on the catalog and schema, along with CREATE TABLE privileges in the target schema. The user executing this command becomes the owner of the view.
Syntax
CREATE [ OR REPLACE ] [ TEMPORARY ] VIEW [ IF NOT EXISTS ] view_name
[ column_list ]
[ with_clause |
COMMENT view_comment |
DEFAULT COLLATION collation_name |
TBLPROPERTIES clause |
LANGUAGE YAML ] [...]
AS { query | $$ yaml_string $$ }
with_clause
WITH { { schema_binding | METRICS } |
       ( { schema_binding | METRICS } [, ...] ) }
schema_binding
WITH SCHEMA { BINDING | COMPENSATION | [ TYPE ] EVOLUTION }
column_list
( { column_alias [ COMMENT column_comment ] } [, ...] )
Parameters
OR REPLACE
If a view of the same name already exists, it is replaced. To replace an existing view you must be its owner.
Replacing an existing view does not preserve privileges granted on the original view or its table_id. Use ALTER VIEW to preserve privileges.
CREATE OR REPLACE VIEW view_name is equivalent to DROP VIEW IF EXISTS view_name followed by CREATE VIEW view_name.
TEMPORARY
TEMPORARY
views are visible only to the session that created them and are dropped when the session ends.
GLOBAL TEMPORARY
Applies to: Databricks Runtime
GLOBAL TEMPORARY views are tied to the system-preserved temporary schema global_temp.
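For example, a global temporary view is created in and resolved through the global_temp schema (the view and base table names here are illustrative):

```
> CREATE GLOBAL TEMPORARY VIEW high_earners
    AS SELECT name, income FROM emp WHERE income > 100000;

-- The view must be referenced through the global_temp schema
> SELECT * FROM global_temp.high_earners;
```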
IF NOT EXISTS
Creates the view only if it does not exist. If a view by this name already exists, the CREATE VIEW statement is ignored.
You may specify at most one of IF NOT EXISTS or OR REPLACE.
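A minimal sketch of this behavior (the view and table names are illustrative):

```
> CREATE VIEW IF NOT EXISTS sales_v AS SELECT * FROM sales;

-- A second run is silently ignored because sales_v already exists
> CREATE VIEW IF NOT EXISTS sales_v AS SELECT * FROM sales;

-- Combining both modifiers is an error
> CREATE OR REPLACE VIEW IF NOT EXISTS sales_v AS SELECT * FROM sales;
Error
```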
view_name
The name of the newly created view. A temporary view's name must not be qualified. The fully qualified view name must be unique.
View names created in hive_metastore can only contain alphanumeric ASCII characters and underscores (INVALID_SCHEMA_OR_RELATION_NAME).
METRICS
Applies to: Databricks SQL Databricks Runtime 16.4 and above Unity Catalog only
Identifies the view as a metric view. The view must be defined with LANGUAGE YAML and the body of the view must be a valid YAML specification.
This clause is not supported for temporary views.
A metric view does not support the DEFAULT COLLATION and schema_binding clauses.
The YAML specification of the metric view defines dimensions and measures. The dimensions are the columns of the view by which the invoker can aggregate the measures, while the measures define the aggregations of the view.
The invoker of a metric view uses the MEASURE() function to access the view's defined measures instead of specifying aggregation functions.
schema_binding
Applies to: Databricks Runtime 15.3 and above
Optionally specifies how the view adapts to changes to the schema of the query due to changes in the underlying object definitions.
This clause is not supported for temporary views, metric views, or materialized views.
SCHEMA BINDING
The view becomes invalid if the column list of the query changes, unless the new column types can still be safely cast (upcast) to the view's declared column types.
This is the default behavior.
SCHEMA COMPENSATION
The view becomes invalid if the query column list changes, unless the changed column types can be compensated for with implicit casts. Added or dropped columns still invalidate the view.
SCHEMA TYPE EVOLUTION
The view adopts any changes to types in the query column list into its own definition when the SQL compiler detects such a change in response to a reference to the view.
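A minimal sketch, analogous to the examples below (the table and view names are illustrative):

```
> CREATE TABLE t(c1 SMALLINT);
> CREATE VIEW t_v WITH SCHEMA TYPE EVOLUTION AS SELECT * FROM t;

-- Widening the underlying type is adopted into the view definition
> CREATE OR REPLACE TABLE t(c1 BIGINT);
> SELECT typeof(c1) FROM t_v;
```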
SCHEMA EVOLUTION
Behaves like SCHEMA TYPE EVOLUTION, and also adopts changes in column names and added or dropped columns, provided the view does not include an explicit column_list. A view with an explicit column_list becomes invalid if the number of aliases in the column_list no longer matches the number of expressions in the query select-list.
column_list
Optionally labels the columns in the query result of the view. If you provide a column list, the number of column aliases must match the number of expressions in the query or, for metric views, in the YAML specification. If no column list is specified, aliases are derived from the body of the view.
The column aliases must be unique.
column_comment
An optional STRING
literal describing the column alias.
view_comment
An optional STRING literal providing a view-level comment.
DEFAULT COLLATION collation_name
Applies to: Databricks SQL Databricks Runtime 16.3 and above
Defines the default collation to use within query
. If not specified, the default collation is derived from the schema in which the view is created.
This clause is not supported for metric views.
TBLPROPERTIES
Optionally sets one or more user-defined properties.
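For example (the view name and property keys are illustrative):

```
> CREATE VIEW finance_v
    TBLPROPERTIES ('created.by.team' = 'finance', 'quality' = 'gold')
    AS SELECT * FROM ledger;
```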
AS query
A query that constructs the view from base tables or other views.
This clause is not supported for metric views.
AS $$ yaml_string $$
A YAML specification that defines a metric view.
Examples
> CREATE OR REPLACE VIEW experienced_employee
(id COMMENT 'Unique identification number', Name)
COMMENT 'View for experienced employees'
AS SELECT id, name
FROM all_employee
WHERE working_years > 5;
> CREATE TEMPORARY VIEW subscribed_movies
AS SELECT mo.member_id, mb.full_name, mo.movie_title
FROM movies AS mo
INNER JOIN members AS mb
ON mo.member_id = mb.id;
> CREATE TABLE emp(name STRING, income INT);
> CREATE VIEW emp_v WITH SCHEMA BINDING AS SELECT * FROM emp;
-- The view ignores adding a column to the base table
> ALTER TABLE emp ADD COLUMN bonus SMALLINT;
> SELECT * FROM emp_v;
name income
> CREATE OR REPLACE TABLE emp(name STRING, income SMALLINT, bonus SMALLINT);
> SELECT typeof(income) FROM emp_v;
INTEGER
-- The view does not tolerate widening the underlying type
> CREATE OR REPLACE TABLE emp(name STRING, income BIGINT, bonus SMALLINT);
> SELECT typeof(income) FROM emp_v;
Error
-- Create a view with SCHEMA COMPENSATION
> CREATE TABLE emp(name STRING, income SMALLINT, bonus SMALLINT);
> CREATE VIEW emp_v WITH SCHEMA COMPENSATION AS SELECT * FROM emp;
> CREATE OR REPLACE TABLE emp(name STRING, income INTEGER, bonus INTEGER);
> SELECT typeof(income) FROM emp_v;
INTEGER
> ALTER TABLE emp DROP COLUMN bonus;
> SELECT * FROM emp_v;
Error
-- Create a view with SCHEMA EVOLUTION
> CREATE TABLE emp(name STRING, income SMALLINT);
> CREATE VIEW emp_v WITH SCHEMA EVOLUTION AS SELECT * FROM emp;
> ALTER TABLE emp ADD COLUMN bonus SMALLINT;
> SELECT * FROM emp_v;
name income bonus
> ALTER TABLE emp RENAME COLUMN income TO salary;
> SELECT * FROM emp_v;
name salary bonus
> CREATE OR REPLACE TABLE emp(name STRING, salary BIGINT);
> SELECT *, typeof(salary) AS salary_type FROM emp_v;
name salary salary_type
> CREATE VIEW v DEFAULT COLLATION UTF8_BINARY
AS SELECT 5::STRING AS text;
> CREATE OR REPLACE VIEW region_sales_metrics
(month COMMENT 'Month order was made',
status,
order_priority,
count_orders COMMENT 'Count of orders',
total_revenue,
total_revenue_per_customer,
total_revenue_for_open_orders)
WITH METRICS
LANGUAGE YAML
COMMENT 'A Metric View for regional sales metrics.'
AS $$
version: 0.1
source: samples.tpch.orders
filter: o_orderdate > '1990-01-01'
dimensions:
- name: month
expr: date_trunc('MONTH', o_orderdate)
- name: status
expr: case
when o_orderstatus = 'O' then 'Open'
when o_orderstatus = 'P' then 'Processing'
when o_orderstatus = 'F' then 'Fulfilled'
end
- name: order_priority
expr: split(o_orderpriority, '-')[1]
measures:
- name: count_orders
expr: count(1)
- name: total_revenue
expr: SUM(o_totalprice)
- name: total_revenue_per_customer
expr: SUM(o_totalprice) / count(distinct o_custkey)
- name: total_revenue_for_open_orders
expr: SUM(o_totalprice) filter (where o_orderstatus='O')
$$;
> DESCRIBE EXTENDED region_sales_metrics;
col_name data_type
month timestamp
status string
order_priority string
count_orders bigint measure
total_revenue decimal(28,2) measure
total_revenue_per_customer decimal(38,12) measure
total_revenue_for_open_orders decimal(28,2) measure
Catalog main
Database default
Table region_sales_metrics
Owner alf@melmak.et
Created Time Thu May 15 13:03:01 UTC 2025
Last Access UNKNOWN
Created By Spark
Type METRIC_VIEW
Comment A Metric View for regional sales metrics.
Use Remote Filtering false
View Text "
version: 0.1
source: samples.tpch.orders
filter: o_orderdate > '1990-01-01'
dimensions:
- name: month
expr: date_trunc('MONTH', o_orderdate)
- name: status
expr: case
when o_orderstatus = 'O' then 'Open'
when o_orderstatus = 'P' then 'Processing'
when o_orderstatus = 'F' then 'Fulfilled'
end
- name: order_priority
expr: split(o_orderpriority, '-')[1]
measures:
- name: count_orders
expr: count(1)
- name: total_revenue
expr: SUM(o_totalprice)
- name: total_revenue_per_customer
expr: SUM(o_totalprice) / count(distinct o_custkey)
- name: total_revenue_for_open_orders
expr: SUM(o_totalprice) filter (where o_orderstatus='O')
"
Language YAML
Table Properties [metric_view.from.name=samples.tpch.orders, metric_view.from.type=ASSET, metric_view.where=o_orderdate > '1990-01-01']
> SELECT extract(month from month) as month,
measure(total_revenue_per_customer)::bigint AS total_revenue_per_customer
FROM region_sales_metrics
WHERE extract(year FROM month) = 1995
GROUP BY ALL
ORDER BY ALL;
month total_revenue_per_customer
1 167727
2 166237
3 167349
4 167604
5 166483
6 167402
7 167272
8 167435
9 166633
10 167441
11 167286
12 167542
> SELECT extract(month from month) as month,
status,
measure(total_revenue_per_customer)::bigint AS total_revenue_per_customer
FROM region_sales_metrics
WHERE extract(year FROM month) = 1995
GROUP BY ALL
ORDER BY ALL;
month status total_revenue_per_customer
1 Fulfilled 167727
2 Fulfilled 161720
2 Open 40203
2 Processing 193412
3 Fulfilled 121816
3 Open 52424
3 Processing 196304
4 Fulfilled 80405
4 Open 75630
4 Processing 196136
5 Fulfilled 53460
5 Open 115344
5 Processing 196147
6 Fulfilled 42479
6 Open 160390
6 Processing 193461
7 Open 167272
8 Open 167435
9 Open 166633
10 Open 167441
11 Open 167286
12 Open 167542