note
Some GPU-enabled instance types are in Beta and are marked as such in the drop-down list when you select the driver and worker types during compute creation.
Overview

Databricks supports compute accelerated with graphics processing units (GPUs). This article describes how to create compute with GPU-enabled instances and describes the GPU drivers and libraries installed on those instances.
To learn more about deep learning on GPU-enabled compute, see Deep learning.
Create a GPU compute

Creating a GPU compute is similar to creating any compute. Keep in mind the following:
The process for configuring GPU instances using the Clusters API varies depending on whether the kind field is set. The kind field determines whether your request uses the simple form specification:

If your request sets kind = CLASSIC_PREVIEW, set "use_ml_runtime": true.

If your request does not set the kind field, set spark_version to a GPU-enabled version, such as 15.4.x-gpu-ml-scala2.12.

warning
Databricks is deprecating and will no longer support spinning up compute using Amazon EC2 P3 instances as AWS is deprecating these instances.
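The two Clusters API request forms described above can be sketched as JSON payloads. This is an illustrative sketch only: cluster_name, node_type_id, and num_workers are assumed values, not prescriptions.

```python
# Sketch of the two Clusters API request forms for GPU compute.
# All field values besides kind, use_ml_runtime, and spark_version
# are illustrative assumptions.

# Simple form: the kind field is set, so request a GPU ML runtime
# via use_ml_runtime instead of naming a -gpu-ml Spark version.
simple_form = {
    "cluster_name": "gpu-cluster",
    "kind": "CLASSIC_PREVIEW",
    "use_ml_runtime": True,
    "node_type_id": "g5.xlarge",
    "num_workers": 2,
}

# Legacy form: no kind field, so spark_version must name a
# GPU-enabled runtime explicitly.
legacy_form = {
    "cluster_name": "gpu-cluster",
    "spark_version": "15.4.x-gpu-ml-scala2.12",
    "node_type_id": "g5.xlarge",
    "num_workers": 2,
}
```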
Databricks supports the following GPU-accelerated instance types:
For all GPU-accelerated instance types, keep the following in mind:
See Supported Instance Types for a list of supported GPU instance types and their attributes.
GPU scheduling

GPU scheduling distributes Spark tasks efficiently across a large number of GPUs.
Databricks Runtime supports GPU-aware scheduling from Apache Spark 3.0. Databricks preconfigures it on GPU compute.
note
GPU scheduling is not enabled on single-node compute.
GPU scheduling for AI and ML

spark.task.resource.gpu.amount is the only Spark config related to GPU-aware scheduling that you may need to configure. The default configuration uses one GPU per task, which is a good baseline for distributed inference workloads and for distributed training if you use all GPU nodes.
To reduce communication overhead during distributed training, Databricks recommends setting spark.task.resource.gpu.amount to the number of GPUs per worker node in the compute Spark configuration. This creates only one Spark task for each Spark worker and assigns all GPUs in that worker node to the same task.
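For example, if each worker node has 4 GPUs (an illustrative assumption), the compute's Spark configuration would contain:

```
spark.task.resource.gpu.amount 4
```

With this setting, each worker runs a single task that owns all four of its GPUs.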
To increase parallelization for distributed deep learning inference, you can set spark.task.resource.gpu.amount to fractional values such as 1/2, 1/3, 1/4, ..., 1/N. This creates more Spark tasks than there are GPUs, allowing more concurrent tasks to handle inference requests in parallel. For example, if you set spark.task.resource.gpu.amount to 0.5, 0.33, or 0.25, then the available GPUs are split among double, triple, or quadruple the number of tasks.
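The arithmetic behind fractional GPU amounts can be made explicit with a small helper (a sketch for illustration; the worker GPU count of 4 is an assumption):

```python
def concurrent_gpu_tasks(gpus_per_worker: int, gpu_amount_per_task: float) -> int:
    """Number of Spark tasks that can run concurrently on one worker
    when each task is assigned gpu_amount_per_task of a GPU."""
    return int(gpus_per_worker / gpu_amount_per_task)

# On a worker with 4 GPUs:
concurrent_gpu_tasks(4, 1.0)   # default, one GPU per task: 4 tasks
concurrent_gpu_tasks(4, 0.5)   # double the tasks: 8
concurrent_gpu_tasks(4, 0.25)  # quadruple the tasks: 16
```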
For PySpark tasks, Databricks automatically remaps assigned GPU(s) to zero-based indices. For the default configuration that uses one GPU per task, you can use the default GPU without checking which GPU is assigned to the task. If you set multiple GPUs per task, for example, 4, the indices of the assigned GPUs are always 0, 1, 2, and 3. If you do need the physical indices of the assigned GPUs, you can get them from the CUDA_VISIBLE_DEVICES environment variable.
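Reading the physical indices from CUDA_VISIBLE_DEVICES can be sketched as follows (the value "2,3" is a hypothetical assignment for illustration; inside a real task the environment variable is already set):

```python
import os

# Hypothetical value for illustration -- in a real Spark task,
# CUDA_VISIBLE_DEVICES is set for you.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# Physical GPU indices assigned to this task, as a comma-separated string.
physical_gpus = os.environ["CUDA_VISIBLE_DEVICES"].split(",")

# Frameworks running inside the task see these GPUs remapped to
# zero-based logical indices 0..N-1.
logical_indices = list(range(len(physical_gpus)))
```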
If you use Scala, you can get the indices of the GPUs assigned to the task from TaskContext.resources().get("gpu").
Databricks installs the NVIDIA driver and libraries required to use GPUs on Spark driver and worker instances. The CUDA Toolkit is installed under /usr/local/cuda. The included NVIDIA driver version is 535.54.03, which supports CUDA 11.0.
For the versions of the libraries included, see the release notes for the specific Databricks Runtime version you are using.
note
This software contains source code provided by NVIDIA Corporation. Specifically, to support GPUs, Databricks includes code from CUDA Samples.
NVIDIA End User License Agreement (EULA)

When you select a GPU-enabled "Databricks Runtime Version" in Databricks, you implicitly agree to the terms and conditions outlined in the NVIDIA EULA with respect to the CUDA, cuDNN, and Tesla libraries, and to the NVIDIA End User License Agreement (with NCCL Supplement) for the NCCL library.
Databricks Container Services on GPU compute

You can use Databricks Container Services on compute with GPUs to create portable deep learning environments with customized libraries. See Customize containers with Databricks Container Service for instructions.
To create custom images for GPU compute, you must select a standard runtime version instead of Databricks Runtime ML for GPU. When you select Use your own Docker container, you can choose GPU compute with a standard runtime version. The custom images for GPU are based on the official CUDA containers, which is different from Databricks Runtime ML for GPU.
When you create custom images for GPU compute, you cannot change the NVIDIA driver version because it must match the driver version on the host machine.
The databricksruntime Docker Hub organization contains example base images with GPU capability. The Dockerfiles used to generate these images are located in the example containers GitHub repository, which also has details on what the example images provide and how to customize them.
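As a sketch, a custom GPU image might extend one of the databricksruntime example bases. The base image tag and the installed packages below are assumptions for illustration; consult the databricksruntime Docker Hub repository for tags that actually exist.

```dockerfile
# Hypothetical base tag -- check the databricksruntime Docker Hub
# repository for currently published GPU base images.
FROM databricksruntime/gpu-base:cuda11

# Add your own deep learning libraries on top of the base image.
RUN pip install --no-cache-dir torch torchvision
```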
The following error indicates that the AWS cloud provider does not have enough capacity for the requested compute resource:

Error: Cluster terminated. Reason: AWS Insufficient Instance Capacity Failure
To resolve this error, you can try creating compute in a different availability zone. The availability zone is in the compute configuration under Advanced > Access mode. You can also review AWS reserved instances pricing to purchase an additional quota.
If your compute uses P4d or G5 instance types with Databricks Runtime 7.3 LTS ML, the CUDA package version included in 7.3 is incompatible with these newer GPU instances. In those cases, ML packages such as TensorFlow Keras and PyTorch produce errors such as:
InternalError: CUDA runtime implicit initialization on GPU:x failed. Status: device kernel image is invalid
UserWarning: NVIDIA A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
You can resolve these errors by upgrading to Databricks Runtime 10.4 LTS ML or above.