Best practices for serverless compute

This article presents best practice recommendations for using serverless compute in your notebooks and jobs.

Following these recommendations will improve the productivity, cost efficiency, and reliability of your workloads on Databricks.

Migrating workloads to serverless compute

To protect the isolation of user code, serverless compute utilizes Databricks secure standard access mode (formerly shared access mode). Because of this, some workloads will require code changes to continue working on serverless compute. For a list of unsupported features, see Serverless compute limitations.

Some workloads are easier to migrate than others. The easiest to migrate are workloads that meet the following requirements:

To test if a workload will work on serverless compute, run it on a non-serverless compute resource with Standard access mode and a Databricks Runtime of 14.3 or above. If the run is successful, the workload is ready for migration.
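
If you use the Databricks SDK for Python, a minimal sketch of creating such a test cluster could look like the following. The cluster name, node type, and worker count are placeholders, and USER_ISOLATION is the Clusters API value that has historically corresponded to Standard (formerly shared) access mode.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import DataSecurityMode

w = WorkspaceClient()

# Create a classic cluster with Standard access mode and DBR 14.3 for a trial run.
# Placeholder values: adjust the node type and size to match your workload.
cluster = w.clusters.create_and_wait(
    cluster_name="serverless-migration-test",
    spark_version="14.3.x-scala2.12",
    node_type_id="i3.xlarge",
    num_workers=1,
    data_security_mode=DataSecurityMode.USER_ISOLATION,  # Standard access mode
)
print(f"Run your workload on cluster {cluster.cluster_id} to check compatibility.")
```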

Because of the significance of this change and the current list of limitations, many workloads will not migrate seamlessly. Instead of recoding everything, Databricks recommends prioritizing serverless compute compatibility as you create new workloads.

Ingesting data from external systems

Because serverless compute does not support JAR file installation, you cannot use a JDBC or ODBC driver to ingest data from an external data source.

Alternative strategies you can use for ingestion include:
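
For example, one commonly used approach is to land source data as files in cloud object storage and pick them up with Auto Loader. The following is a minimal sketch; the volume paths and target table name are placeholders, and whether this pattern fits depends on your source system.

```python
# Auto Loader: incrementally ingest files landed in cloud object storage.
# The paths and table name below are placeholders, not values from this article.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/landing/_schema")
    .load("/Volumes/main/default/landing/events/")
)

query = (
    stream.writeStream
    .option("checkpointLocation", "/Volumes/main/default/landing/_checkpoints/events")
    .trigger(availableNow=True)  # process all available files, then stop
    .toTable("main.default.events_bronze")
)
```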

Ingestion alternatives

When using serverless compute, you can also use the following features to query your data without moving it.

Try one or both of these features and see whether they satisfy your query performance requirements.
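
Lakehouse Federation, for example, lets you register an external database as a foreign catalog in Unity Catalog and query it in place from serverless compute. A minimal sketch follows; the three-level table name is a placeholder for a federated catalog you have configured.

```python
# Query an external database in place through a Unity Catalog foreign catalog.
# "my_postgres_catalog" is a placeholder for a federated catalog you have set up.
orders = spark.sql("SELECT * FROM my_postgres_catalog.sales.orders LIMIT 100")
display(orders)
```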

Supported Spark configurations

To automate the configuration of Spark on serverless compute, Databricks has removed support for manually setting most Spark configurations. To view a list of supported Spark configuration parameters, see Configure Spark properties for serverless notebooks and jobs.

Job runs on serverless compute will fail if you set an unsupported Spark configuration.
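
For example, a supported property can be set at the session level just as on classic compute. The sketch below uses spark.sql.session.timeZone, which the linked page lists as configurable at the time of writing; check that page for the current list.

```python
# Set a supported Spark property for the current session on serverless compute.
spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

# Reading a property back works as usual.
print(spark.conf.get("spark.sql.session.timeZone"))

# Attempting to set a property that is not on the supported list fails on
# serverless compute, and a job run that does so will fail.
```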

Monitor the cost of serverless compute

There are multiple features you can use to help you monitor the cost of serverless compute:
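
For example, billable usage is recorded in system tables that you can query directly from a notebook. The sketch below summarizes serverless usage from system.billing.usage; treat the SKU filter as an assumption to adapt to the SKU names in your account.

```python
# Summarize serverless DBU consumption by day from the billable usage system table.
# The LIKE filter is an assumption; adjust it to the SKU names in your account.
usage = spark.sql("""
    SELECT usage_date,
           sku_name,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE sku_name LIKE '%SERVERLESS%'
    GROUP BY usage_date, sku_name
    ORDER BY usage_date
""")
display(usage)
```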

