Optimize data file layout

Predictive optimization automatically runs OPTIMIZE on Unity Catalog managed tables. Databricks recommends enabling predictive optimization for all Unity Catalog managed tables to simplify data maintenance and reduce storage costs. See Predictive optimization for Unity Catalog managed tables.
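
For example, assuming the ALTER TABLE syntax for predictive optimization described in the linked page, you can enable it on a single Unity Catalog managed table (table_name is a placeholder):

SQL

ALTER TABLE table_name ENABLE PREDICTIVE OPTIMIZATION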

The OPTIMIZE command rewrites data files to improve data layout for Delta tables. For tables with liquid clustering enabled, OPTIMIZE rewrites data files to group data by liquid clustering keys. For tables with partitions defined, file compaction and data layout are performed within partitions.

Tables without liquid clustering can optionally include a ZORDER BY clause to improve data clustering on rewrite. Databricks recommends using liquid clustering instead of partitions, ZORDER, or other data layout approaches.
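
For example, a minimal sketch of Z-Ordering during a rewrite, where event_date stands in for a column you frequently filter on (ZORDER BY applies only to tables without liquid clustering):

SQL

OPTIMIZE table_name ZORDER BY (event_date)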

See OPTIMIZE.

important

In Databricks Runtime 16.0 and above, you can use OPTIMIZE FULL to force reclustering for tables with liquid clustering enabled. See Force reclustering for all records.
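
A minimal sketch of the syntax (table_name is a placeholder for a table with liquid clustering enabled):

SQL

OPTIMIZE table_name FULL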

Syntax examples

You trigger compaction by running the OPTIMIZE command:
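
In SQL (the same command the WHERE example below extends):

SQL

OPTIMIZE table_name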

Python

from delta.tables import DeltaTable

# Load the Delta table by name and compact its small files
deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().executeCompaction()

Scala

import io.delta.tables._

// Load the Delta table by name and compact its small files
val deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().executeCompaction()

If you have a large amount of data and only want to optimize a subset of it, you can specify an optional partition predicate using WHERE:

SQL

OPTIMIZE table_name WHERE date >= '2022-11-18'

Python

from delta.tables import DeltaTable

# Compact only files in partitions matching the predicate
deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().where("date >= '2022-11-18'").executeCompaction()

Scala

import io.delta.tables._

// Compact only files in partitions matching the predicate
val deltaTable = DeltaTable.forName(spark, "table_name")
deltaTable.optimize().where("date >= '2022-11-18'").executeCompaction()

note

Readers of Delta tables use snapshot isolation, which means that they are not interrupted when OPTIMIZE removes unnecessary files from the transaction log. OPTIMIZE makes no data-related changes to the table, so reads before and after an OPTIMIZE return the same results. Performing OPTIMIZE on a table that is a streaming source does not affect any current or future streams that treat this table as a source. OPTIMIZE returns the file statistics (min, max, total, and so on) for the files removed and the files added by the operation. The OPTIMIZE statistics also include the Z-Ordering statistics, the number of batches, and the number of partitions optimized.

You can also compact small files automatically using auto compaction. See Auto compaction for Delta Lake on Databricks.
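
As a sketch, assuming the delta.autoOptimize.autoCompact table property described on that page, you could enable auto compaction per table like this:

SQL

ALTER TABLE table_name SET TBLPROPERTIES ('delta.autoOptimize.autoCompact' = 'true')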

How often should I run OPTIMIZE?

Enable predictive optimization for Unity Catalog managed tables to ensure that OPTIMIZE runs automatically when it is cost effective.

When you choose how often to run OPTIMIZE, there is a trade-off between performance and cost. For better end-user query performance, run OPTIMIZE more often; this incurs a higher cost because of the increased resource usage. To optimize cost, run it less often.

Databricks recommends that you start by running OPTIMIZE on a daily basis (preferably at night when spot prices are low), and then adjust the frequency to balance cost and performance trade-offs.

What's the best instance type to run OPTIMIZE (bin-packing and Z-Ordering) on?

Both are CPU-intensive operations that perform large amounts of Parquet decoding and encoding.

Databricks recommends compute-optimized instance types. OPTIMIZE also benefits from attached SSDs.

