This article explains the multiple serverless offerings available on Databricks. Serverless compute allows you to quickly connect to on-demand computing resources.
The articles in this section focus on serverless compute for notebooks, jobs, and Lakeflow Declarative Pipelines. For information on serverless SQL warehouses, see What are Serverless SQL warehouses?. For information on Model Serving, see Deploy models using Mosaic AI Model Serving.
For information on serverless compute plane architecture, see Serverless compute plane.
What is serverless compute?

Serverless compute allows you to run workloads without provisioning a cluster. Instead, Databricks automatically allocates and manages the necessary compute resources. This enables you to focus on writing code and analyzing data, without worrying about cluster management or resource utilization.
Serverless compute offers the following benefits:
Databricks currently offers the following types of serverless compute: serverless compute for notebooks, serverless compute for jobs, serverless compute for Lakeflow Declarative Pipelines, serverless SQL warehouses, and Mosaic AI Model Serving.
To access serverless compute for notebooks, jobs, and Lakeflow Declarative Pipelines, an account admin might need to enable the feature. See Enable serverless compute.
To access serverless SQL warehouses, see Enable serverless SQL warehouses.
Serverless compute limitations

For a list of limitations, see Serverless compute limitations.
Frequently asked questions (FAQ)

Serverless compute is a versionless product, which means that Databricks automatically upgrades the serverless compute runtime to support enhancements and upgrades to the platform. All users get the same updates, rolled out over a short period of time.
How do I determine which serverless version I am running?

Serverless workloads always run on the latest runtime version. See Release notes for the most recent version.
How do I estimate costs for serverless?

Databricks recommends running and benchmarking a representative or specific workload and then analyzing the billing system table. See Billable usage system table reference.
How do I analyze DBU usage for a specific workload?

To see the cost of a specific workload, query the system.billing.usage system table. See Monitor the cost of serverless compute for sample queries and to download our cost observability dashboard.
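As a rough sketch of this kind of query, the following Python snippet (run in a Databricks notebook or a Databricks Connect session where a spark session is available) aggregates DBUs for a single job over the past week. The job ID is a placeholder and the column names follow the documented billable usage schema; verify both against your workspace.

```python
# Minimal sketch: aggregate DBUs from the billable usage system table for one job.
# Assumes an active `spark` session; the job_id value is a placeholder, and the
# column names follow the documented system.billing.usage schema.
usage_df = spark.sql("""
    SELECT
        usage_date,
        sku_name,
        SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_metadata.job_id = '123456789'          -- placeholder job ID
      AND usage_date >= date_sub(current_date(), 7)
    GROUP BY usage_date, sku_name
    ORDER BY usage_date
""")
usage_df.show(truncate=False)
```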
Is there a delay before usage appears in the billable usage system table?

Yes, there can be up to a 24-hour delay between when you run a workload and when its usage is reflected in the billable usage system table.
I haven't enabled serverless compute for jobs and notebooks. Why do I see billing records for serverless jobs?

Lakehouse Monitoring and predictive optimization are also billed under the serverless jobs SKU. Serverless compute does not have to be enabled to use these two features.
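To see which product generated those charges, you can break usage down by its originating product. The sketch below assumes the documented billable usage schema (billing_origin_product, sku_name, usage_quantity) and uses an assumed SKU name pattern; check the actual values returned in your account.

```python
# Minimal sketch: break serverless jobs charges down by originating product, to
# separate Lakehouse Monitoring and predictive optimization usage from regular
# job runs. Column names follow the documented billable usage schema; the SKU
# name filter is an assumption and may need adjusting for your account.
breakdown_df = spark.sql("""
    SELECT
        billing_origin_product,
        sku_name,
        SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE sku_name LIKE '%SERVERLESS%JOB%'              -- assumed SKU name pattern
      AND usage_date >= date_sub(current_date(), 30)
    GROUP BY billing_origin_product, sku_name
    ORDER BY dbus DESC
""")
breakdown_df.show(truncate=False)
```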
Does serverless compute support private repos?

Repositories can be private or require authentication. For security reasons, a pre-signed URL is required when accessing authenticated repositories.
How do I install libraries for my job tasks?

Databricks recommends using environments to install and manage libraries for your jobs. See Configure environment for non-notebook job tasks.
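As an illustration of what an environment attached to a non-notebook task can look like, here is a minimal sketch of a Jobs API payload. The field names (environments, environment_key, spec, client, dependencies) reflect one version of the documented environments payload and should be checked against the current Jobs API reference; the package version and file path are placeholders.

```python
# Minimal sketch of a Jobs API payload that attaches a serverless environment
# to a non-notebook task. Field names should be verified against the current
# Jobs API reference; package versions and paths are placeholders.
job_payload = {
    "name": "serverless-job-with-libraries",
    "environments": [
        {
            "environment_key": "default",
            "spec": {
                "client": "1",                      # serverless environment client version
                "dependencies": ["pandas==2.2.2"],  # pip-style requirements to install
            },
        }
    ],
    "tasks": [
        {
            "task_key": "etl",
            "environment_key": "default",           # task picks up the environment above
            "spark_python_task": {"python_file": "/Workspace/Users/me/etl.py"},
        }
    ],
}
# This dictionary would be sent to the Jobs API create endpoint
# (for example, POST /api/2.1/jobs/create) with your usual authentication.
```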
Can I connect to custom data sources?

No, only sources that use Lakehouse Federation are supported. See Supported data sources.
How does the serverless compute plane networking work?

Serverless compute resources run in the serverless compute plane, which is managed by Databricks. For more details on the network and architecture, see Serverless compute plane networking.
Can I configure serverless compute for jobs with Databricks Asset Bundles?

Yes, Databricks Asset Bundles can be used to configure jobs that use serverless compute. See Job that uses serverless compute.
How do I run my serverless workload from my local development machine or from my data application?

Databricks Connect allows you to connect to Databricks from your local machine and run workloads on serverless. See What is Databricks Connect?.
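A minimal sketch of this, assuming a recent databricks-connect package and a configured authentication profile (for example via ~/.databrickscfg); the serverless() builder option may vary across databricks-connect releases, so check the Databricks Connect documentation for the version you install:

```python
# Minimal sketch: open a Databricks Connect session that runs on serverless
# compute. Assumes databricks-connect is installed locally and workspace
# authentication is already configured; the serverless() builder option may
# differ between databricks-connect versions.
from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder
    .serverless(True)   # route the session to serverless compute
    .getOrCreate()
)

# The returned session behaves like a regular SparkSession, so local code
# executes remotely on serverless compute.
spark.range(5).show()
```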