Databricks Asset Bundles, also known simply as bundles, enable you to programmatically validate, deploy, and run Databricks resources such as Lakeflow Declarative Pipelines. See What are Databricks Asset Bundles?.
This article describes how to create a bundle to programmatically manage a pipeline. See Lakeflow Declarative Pipelines. The bundle is created using the Databricks Asset Bundles default bundle template for Python, which consists of a notebook paired with the definition of a pipeline and job to run it. You then validate, deploy, and run the deployed pipeline in your Databricks workspace.
tip
If you have existing pipelines that were created using the Databricks user interface or API that you want to move to bundles, you must define them in a bundle's configuration files. Databricks recommends that you first create a bundle using the steps below and then validate whether the bundle works. You can then add additional definitions, notebooks, and other sources to the bundle. See Retrieve an existing pipeline definition using the UI.
Requirements
The Databricks CLI. To confirm that the Databricks CLI is installed, and to check the installed version, run databricks -v. To install the Databricks CLI, see Install or update the Databricks CLI.
Databricks also provides a Python module to assist your local development of Lakeflow Declarative Pipelines code by providing syntax checking, autocomplete, and data type checking as you write code in your IDE. The Python module for local development is available on PyPI. To install the module, see Python stub for Lakeflow Declarative Pipelines.
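For example, with pip the module can be installed directly from PyPI. The package name used below is an assumption based on the PyPI listing for the stub; confirm the exact name on the page linked above before installing.
Bash
# Assumed package name for the Lakeflow Declarative Pipelines stub; verify on PyPI first
pip install databricks-dlt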
Create a bundle using a project template
Create the bundle using the Databricks default bundle template for Python. This template consists of a notebook that defines an ETL pipeline (using Lakeflow Declarative Pipelines), which filters data from the original dataset. For more information about bundle templates, see Databricks Asset Bundle project templates.
If you want to create a bundle from scratch, see Create a bundle manually.
Step 1: Set up authentication
In this step, you set up authentication between the Databricks CLI on your development machine and your Databricks workspace. This article assumes that you want to use OAuth user-to-machine (U2M) authentication and a corresponding Databricks configuration profile named DEFAULT for authentication.
Use the Databricks CLI to initiate OAuth token management locally by running the following command for each target workspace.
In the following command, replace <workspace-url> with your Databricks workspace instance URL, for example https://dbc-a1b2345c-d6e7.cloud.databricks.com.
Bash
databricks auth login --host <workspace-url>
The Databricks CLI prompts you to save the information that you entered as a Databricks configuration profile. Press Enter to accept the suggested profile name, or enter the name of a new or existing profile. Any existing profile with the same name is overwritten with the information that you entered. You can use profiles to quickly switch your authentication context across multiple workspaces.
To get a list of any existing profiles, in a separate terminal or command prompt, use the Databricks CLI to run the command databricks auth profiles. To view a specific profile's existing settings, run the command databricks auth env --profile <profile-name>.
In your web browser, complete the on-screen instructions to log in to your Databricks workspace.
To view a profile's current OAuth token value and the token's upcoming expiration timestamp, run one of the following commands:
databricks auth token --host <workspace-url>
databricks auth token -p <profile-name>
databricks auth token --host <workspace-url> -p <profile-name>
If you have multiple profiles with the same --host value, you might need to specify the --host and -p options together to help the Databricks CLI find the correct matching OAuth token information.
Step 2: Initialize the bundle
In this step, you initialize a bundle using the default Python bundle project template.
note
The default-python template requires that uv is installed. See Installing uv.
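uv can be installed in several ways; one option is to install it from PyPI with pip (see the uv documentation for standalone installers):
Bash
# One way to install uv; see the uv documentation for other installers
pip install uv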
Use your terminal or command prompt to switch to a directory on your local development machine that will contain the template's generated bundle.
Use the Databricks CLI to run the bundle init command:
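Bash
databricks bundle init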
For Template to use, leave the default value of default-python by pressing Enter.
For Unique name for this project, leave the default value of my_project, or type a different value, and then press Enter. This determines the name of the root directory for this bundle. This root directory is created within your current working directory.
For Include a stub (sample) notebook, select no and press Enter. This instructs the Databricks CLI to not add a sample notebook at this point, because the sample notebook that is associated with this option has no Lakeflow Declarative Pipelines code in it.
For Include a stub (sample) Delta Live Tables pipeline, leave the default value of yes by pressing Enter. This instructs the Databricks CLI to add a sample notebook that has Lakeflow Declarative Pipelines code in it.
For Include a stub (sample) Python package, select no and press Enter. This instructs the Databricks CLI to not add sample Python wheel package files or related build instructions to your bundle.
For Use serverless, select yes and press Enter. This instructs the Databricks CLI to configure your bundle to run on serverless compute.
Step 3: Explore the bundle
To view the files that the template generated, switch to the root directory of your newly created bundle. Files of particular interest include the following:
databricks.yml: This file specifies the bundle's programmatic name, includes a reference to the pipeline definition, and specifies settings about the target workspace.
resources/<project-name>_job.yml and resources/<project-name>_pipeline.yml: These files define the job that contains a pipeline refresh task, and the pipeline's settings.
src/dlt_pipeline.ipynb: This file is a notebook that, when run, executes the pipeline.
For customizing pipelines, the mappings within a pipeline declaration correspond to the create pipeline operation's request payload as defined in POST /api/2.0/pipelines in the REST API reference, expressed in YAML format.
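For example, a generated pipeline definition typically looks similar to the following sketch. This is an approximation only; the exact keys and values in resources/<project-name>_pipeline.yml depend on the template version and the answers you gave during initialization.
YAML
# Hypothetical sketch of a generated resources/my_project_pipeline.yml (your file may differ)
resources:
  pipelines:
    my_project_pipeline:
      name: my_project_pipeline
      serverless: true
      catalog: main
      schema: my_project_${bundle.target}
      libraries:
        - notebook:
            path: ../src/dlt_pipeline.ipynb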
Step 4: Validate the project's bundle configuration file
In this step, you check whether the bundle configuration is valid.
From the root directory, use the Databricks CLI to run the bundle validate command, as follows:
Bash
databricks bundle validate
If a summary of the bundle configuration is returned, then the validation succeeded. If any errors are returned, fix the errors, and then repeat this step.
If you make any changes to your bundle after this step, you should repeat this step to check whether your bundle configuration is still valid.
Step 5: Deploy the local project to the remote workspace
In this step, you deploy the local notebook to your remote Databricks workspace and create the pipeline within your workspace.
From the bundle root, use the Databricks CLI to run the bundle deploy command, as follows:
Bash
databricks bundle deploy -t dev
Check whether the local notebook was deployed: In your Databricks workspace's sidebar, click Workspace.
Click into the Users > <your-username> > .bundle > <project-name> > dev > files > src folder. The notebook should be in this folder.
Check whether your pipeline was created: In your Databricks workspace, open the list of pipelines and click [dev <your-username>] <project-name>_pipeline.
If you make any changes to your bundle after this step, you should repeat steps 4-5 to check whether your bundle configuration is still valid and then redeploy the project.
Step 6: Run the deployed project
In this step, you trigger a run of the pipeline in your workspace from the command line.
From the root directory, use the Databricks CLI to run the bundle run command, as follows, replacing <project-name> with the name of your project from Step 2:
Bash
databricks bundle run -t dev <project-name>_pipeline
Copy the value of Update URL that appears in your terminal and paste this value into your web browser to open your Databricks workspace.
In your Databricks workspace, after the pipeline completes successfully, click the taxi_raw view and the filtered_taxis materialized view to see the details.
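For context, the generated sample notebook defines logic along these lines: a view over raw taxi trip data and a materialized view that filters it. The following sketch is illustrative only; the dataset path, columns, and filter condition are assumptions and may differ from the generated notebook's actual code.
Python
# Illustrative sketch of the kind of code in src/dlt_pipeline.ipynb (details are assumptions)
import dlt
from pyspark.sql.functions import col

@dlt.view
def taxi_raw():
    # Load a raw taxi-trip sample dataset (path assumed for illustration)
    return spark.read.format("json").load("/databricks-datasets/nyctaxi/sample/json/")

@dlt.table
def filtered_taxis():
    # Materialized view that keeps only trips with a positive fare (example filter)
    return dlt.read("taxi_raw").filter(col("fare_amount") > 0)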
If you make any changes to your bundle after this step, you should repeat steps 4-6 to check whether your bundle configuration is still valid, redeploy the project, and run the redeployed project.
Step 7: Clean up
In this step, you delete the deployed notebook and the pipeline from your workspace.
From the root directory, use the Databricks CLI to run the bundle destroy command, as follows:
Bash
databricks bundle destroy -t dev
Confirm the pipeline deletion request: When prompted to permanently destroy resources, type y and press Enter.
Confirm the notebook deletion request: When prompted to permanently destroy the previously deployed folder and all of its files, type y and press Enter.
If you also want to delete the bundle from your development machine, you can now delete the local directory from Step 2.