Databricks Container Services lets you specify a Docker image when you create compute. Example use cases include library customization, golden container environments that never change, and integration with Docker CI/CD pipelines.
You can also use Docker images to create custom deep learning environments on compute with GPU devices. For additional information about using GPU compute with Databricks Container Services, see Databricks Container Services on GPU compute.
For tasks to be executed each time the container starts, use an init script.
Requirements
- The docker command must be available on your PATH.
- To access Unity Catalog volumes from the compute, add the following to the compute's Spark config: spark.databricks.unityCatalog.volumes.enabled true
- To use Hive metastore federation on the compute, add the following to the compute's Spark config: spark.databricks.unityCatalog.hms.federation.enabled true
note
172.17.0.0/16 is the default IP range used by Docker. To prevent connectivity issues due to an IP conflict, avoid setting up resources in this subnet.
Step 1: Build your base
Databricks recommends that you build your Docker base from a base that Databricks has built and tested. It is also possible to build your Docker base from scratch. This section describes the two options.
Option 1. Use a base built by Databricks
This example uses the 17.x tag for an image that targets compute with Databricks Runtime 17.0 LTS and above:
Dockerfile
FROM databricksruntime/standard:17.x
...
To specify additional Python libraries, such as the latest version of pandas and urllib, use the container-specific version of pip. For the databricksruntime/standard:17.x container, include the following:
Dockerfile
RUN /databricks/python3/bin/pip install pandas
RUN /databricks/python3/bin/pip install urllib3
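After your Dockerfile is complete, you can build and verify the image locally before pushing it. The following is a minimal sketch; the image name my-org/custom-databricks is a placeholder, and it assumes the Dockerfile above is in the current directory:
Bash
# Build the image from the Dockerfile in the current directory.
# The image name and tag are placeholders; use your own.
docker build -t my-org/custom-databricks:17.x .

# Optionally verify that the expected Python packages were installed
# into the container-specific Python environment.
docker run --rm my-org/custom-databricks:17.x /databricks/python3/bin/pip show pandas urllib3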
Base images are hosted on Docker Hub at https://hub.docker.com/u/databricksruntime. The Dockerfiles used to generate these bases are at https://github.com/databricks/containers.
note
Images hosted on Docker Hub with tags that have an "-LTS" suffix will be patched. All other images are examples and are not patched regularly.
note
The base images databricksruntime/standard and databricksruntime/minimal are not to be confused with the unrelated databricks-standard and databricks-minimal environments included in the no-longer-available Databricks Runtime with Conda (Beta).
Option 2. Build your own Docker base
You can also build your Docker base from scratch. The Docker image must meet these requirements:
- Java available on the system PATH
To build your own image from scratch, you must create the virtual environment. You must also include packages that are built into Databricks compute, such as Python and R. To get started, you can use the appropriate base image:
- databricksruntime/rbase for R
- databricksruntime/python for Python
- databricksruntime/minimal for a minimal image
You can also refer to the example Dockerfiles in GitHub.
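To illustrate the from-scratch approach, here is a minimal sketch of a base Dockerfile. It is modeled loosely on the databricksruntime/minimal example in the GitHub repository; the Ubuntu version and package set are assumptions, so check the published Dockerfiles for the authoritative list. The /databricks/python3 virtualenv path matches the pip path used earlier in this article:
Dockerfile
# Minimal sketch of a from-scratch base image (assumptions noted above).
FROM ubuntu:22.04

# Utilities that Databricks compute expects to find in the container.
RUN apt-get update -q && apt-get install -qy \
    bash sudo coreutils procps iproute2 \
 && rm -rf /var/lib/apt/lists/*

# Create the virtual environment Databricks uses for Python.
RUN apt-get update -q && apt-get install -qy python3 python3-venv \
 && python3 -m venv /databricks/python3 \
 && rm -rf /var/lib/apt/lists/*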
warning
Test your custom container image thoroughly on Databricks compute. Your container may work on a local or build machine, but when it is launched on Databricks, the compute launch may fail, certain features may be disabled, or the container may stop working, possibly silently. In worst-case scenarios, it could corrupt your data or accidentally expose your data to external parties.
Step 2: Push your base image
Push your custom base image to a Docker registry. This process is supported with Docker Hub (no auth or basic auth) and Amazon ECR. Other Docker registries that support no auth or basic auth are also expected to work.
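For example, here is a hedged sketch of pushing the image built above; the registry host, repository name, account ID, and region are placeholders:
Bash
# Push to Docker Hub (assumes you are already logged in with `docker login`).
docker push my-org/custom-databricks:17.x

# Push to Amazon ECR: authenticate, then tag and push.
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker tag my-org/custom-databricks:17.x <aws-account-id>.dkr.ecr.<region>.amazonaws.com/custom-databricks:17.x
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/custom-databricks:17.x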
note
If you use Docker Hub for your Docker registry, be sure to check that rate limits accommodate the amount of compute that you expect to launch in a six-hour period. These rate limits are different for anonymous users, authenticated users without a paid subscription, and paid subscriptions. See the Docker documentation for details. If this limit is exceeded, you will get a "429 Too Many Requests" response.
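If you want to check your remaining Docker Hub pull allowance before launching a large amount of compute, Docker documents a token-based check against a special ratelimitpreview repository. The sketch below assumes anonymous access and that jq is installed:
Bash
# Fetch an anonymous token for the rate-limit preview repository.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# A HEAD request returns the current limits in the ratelimit-* headers.
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit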
Step 3: Launch your compute
You can launch your compute using the UI or the API.
Launch your compute using the UI
1. On the Create compute page, specify a Databricks Runtime version that supports Databricks Container Services.
2. Under Advanced, select the Docker tab.
3. Select Use your own Docker container.
4. In the Docker Image URL field, enter your custom Docker image. Docker image URL examples:
   - Docker Hub: <organization>/<repository>:<tag>, for example: databricksruntime/standard:latest
   - Amazon ECR: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>
5. Select the authentication type. You can use secrets to store username and password authentication values. See Docker image authentication.
Launch your compute using the API
Use the Databricks CLI to launch compute with your custom Docker base.
Bash
databricks clusters create \
  --cluster-name <cluster-name> \
  --node-type-id i3.xlarge \
  --json '{
    "num_workers": 0,
    "docker_image": {
      "url": "databricksruntime/standard:latest",
      "basic_auth": {
        "username": "<docker-registry-username>",
        "password": "<docker-registry-password>"
      }
    },
    "spark_version": "16.4.x-scala2.12",
    "aws_attributes": {
      "availability": "ON_DEMAND",
      "instance_profile_arn": "arn:aws:iam::<aws-account-number>:instance-profile/<iam-role-name>"
    }
  }'
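Once the create call returns a cluster ID, you can poll the compute's state to confirm that the container-based compute came up; the cluster ID below is a placeholder:
Bash
# Check the state of the newly created compute.
databricks clusters get <cluster-id>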
Docker image authentication
Authentication requirements depend on your Docker image type. You can also use secrets to store authentication usernames and passwords. See Use secrets for authentication.
- For public Docker images, you do not need to include authentication information.
- For private Docker images, you must include a username and password using the basic_auth fields.
- For Amazon ECR images, do not include authentication information. Instead, launch your compute with an instance profile that includes permissions to pull Docker images from the Docker repository where the image resides. To do this, follow steps 3 and 4 of the process for setting up secure access to S3 buckets using instance profiles.
Here is an example of an IAM policy with permission to pull any image from the repository specified by <arn-of-repository>.
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": ["<arn-of-repository>"]
    }
  ]
}
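As a sketch, you could attach a policy like this to the compute's IAM role with the AWS CLI; the role name and policy name are placeholders, and the policy document is assumed to be saved locally as ecr-pull-policy.json:
Bash
# Attach the ECR pull policy (saved locally) as an inline policy on the role.
aws iam put-role-policy \
  --role-name <iam-role-name> \
  --policy-name ecr-pull \
  --policy-document file://ecr-pull-policy.json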
If the Amazon ECR image resides in a different AWS account than the Databricks compute, use an ECR repository policy in addition to the compute instance profile to grant the compute access. Here is an example of an ECR repository policy. The IAM role assumed by the compute's instance profile is specified by <arn-of-IAM-role>.
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPush",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<arn-of-IAM-role>"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:ListImages"
      ]
    }
  ]
}
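Here is a hedged sketch of applying such a repository policy with the AWS CLI, assuming the policy JSON is saved locally as ecr-repo-policy.json; the repository name is a placeholder:
Bash
# Apply the cross-account repository policy to the ECR repository.
aws ecr set-repository-policy \
  --repository-name <repository-name> \
  --policy-text file://ecr-repo-policy.json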
Use secrets for authentication
Databricks Container Services supports using secrets for authentication. When creating your compute resource in the UI, use the Authentication field to select Username and password. Then, instead of entering your plain-text username or password, enter your secrets in the {{secrets/<scope-name>/<dcs-secret>}} format. If you use the API, enter the secrets in the basic_auth fields.
For information on creating secrets, see Secret management.
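For reference, here is a sketch of creating a scope and storing registry credentials with the Databricks CLI; the scope and key names are placeholders:
Bash
# Create a secret scope and store the registry credentials in it.
databricks secrets create-scope <scope-name>
databricks secrets put-secret <scope-name> docker-username --string-value <docker-registry-username>
databricks secrets put-secret <scope-name> docker-password --string-value <docker-registry-password>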
Use an init script
Databricks Container Services enables customers to include init scripts in the Docker container. In most cases, you should avoid init scripts and instead make customizations through Docker directly (using the Dockerfile). However, certain tasks must be executed when the container starts, instead of when the container is built. Use an init script for these tasks.
For example, suppose you want to run a security daemon inside a custom container. Install and build the daemon in the Docker image through your image-building pipeline. Then, add an init script that starts the daemon. In this example, the init script would include a line like systemctl start my-daemon.
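As a sketch, such an init script could look like the following; my-daemon is the hypothetical daemon from the example above:
Bash
#!/bin/bash
# Init script that starts a daemon baked into the custom image.
# Runs each time the container starts, not when the image is built.
set -e
systemctl start my-daemon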
In the API, you can specify init scripts as part of the compute spec as follows. For more information, see the Clusters API.
JSON
"init_scripts": [
{
"file": {
"destination": "file:/my/local/file.sh"
}
}
]
For Databricks Container Services images, you can also store init scripts in cloud storage.
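For instance, here is a hedged fragment pointing at an init script in S3; the bucket path and region are placeholders, and the shape follows the Clusters API init_scripts field shown above:
JSON
"init_scripts": [
  {
    "s3": {
      "destination": "s3://<bucket>/<path>/init.sh",
      "region": "<region>"
    }
  }
]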
The following steps take place when you launch compute that uses Databricks Container Services:
1. VMs are acquired from the cloud provider.
2. The custom Docker image is downloaded from your repo.
3. Databricks creates a Docker container from the image.
4. Databricks Runtime code is copied into the Docker container.
5. The init scripts are executed.
Databricks ignores the Docker CMD and ENTRYPOINT primitives.
Enable Container Services
To use custom containers on your compute, a workspace admin must enable Databricks Container Services.
Workspace admins can enable Databricks Container Services using the Databricks CLI. In a JSON request body, set enableDcs to true, as in the following example:
Bash
databricks workspace-conf set-status \
--json '{"enableDcs": "true"}'
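To confirm the setting took effect, you can read the key back; this sketch assumes the workspace-conf get-status command available in recent versions of the Databricks CLI:
Bash
# Read back the workspace configuration key to verify it is enabled.
databricks workspace-conf get-status enableDcs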