This article explains how to use serverless compute for notebooks. For information on using serverless compute for jobs, see Run your Lakeflow Jobs with serverless compute for workflows.
For pricing information, see Databricks pricing.
Requirements

If your workspace is enabled for serverless interactive compute, all users in the workspace have access to serverless compute for notebooks. No additional permissions are required.
To attach a notebook to serverless compute, click the Connect drop-down menu in the notebook and select Serverless. For new notebooks, the attached compute automatically defaults to serverless when you run code, if no other compute resource has been selected.
View query insights

Serverless compute for notebooks and jobs uses query insights to assess Spark execution performance. After running a cell in a notebook, you can view insights related to SQL and Python queries by clicking the See performance link.
You can click any of the Spark statements to view its query metrics. From there, click See query profile to see a visualization of the query execution. For more information on query profiles, see Query profile.
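For example, running a cell like the following triggers Spark execution whose statements then appear under the See performance link. This is a minimal sketch using synthetic data; the column names here are illustrative:

```python
# Minimal example: a cell that triggers Spark execution on serverless compute.
# After it finishes, the "See performance" link lists the Spark statements it ran.
from pyspark.sql import functions as F

df = (
    spark.range(10_000_000)                    # generate a synthetic dataset
    .withColumn("bucket", F.col("id") % 100)   # derive an illustrative grouping key
    .groupBy("bucket")
    .agg(F.count("*").alias("rows"), F.avg("id").alias("avg_id"))
)
df.display()  # materializes the query; insights become available once it completes
```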
Query history

All queries run on serverless compute are also recorded on your workspace's query history page. For information on query history, see Query history.
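As a sketch, you can also inspect recent query activity programmatically, assuming your workspace has the system.query.history system table enabled; the column names below come from that table and may differ in your environment:

```python
# Hypothetical sketch: review recent query history via the system.query.history
# system table (availability and exact column names may vary by workspace).
recent = spark.sql("""
    SELECT start_time, executed_by, total_duration_ms, statement_text
    FROM system.query.history
    ORDER BY start_time DESC
    LIMIT 10
""")
recent.display()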
Query insight limitations

To control long-running queries, serverless notebooks have a default execution timeout of 2.5 hours. You can manually set the timeout length by configuring spark.databricks.execution.timeout in the notebook. See Configure Spark properties for serverless notebooks and jobs.
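For example, a cell like the following adjusts the timeout for the current session. This sketch assumes the property value is interpreted in seconds; check the Spark properties documentation linked above for the exact semantics:

```python
# Sketch: extend the serverless execution timeout for this notebook session.
# Assumes spark.databricks.execution.timeout takes a value in seconds.
spark.conf.set("spark.databricks.execution.timeout", "14400")  # 4 hours

# Verify the current setting
print(spark.conf.get("spark.databricks.execution.timeout"))
```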