This article gives a brief introduction to using PyTorch, TensorFlow, and distributed training for developing and fine-tuning deep learning models on Databricks. It also includes links to pages with example notebooks illustrating how to use those tools.
PyTorch is included in Databricks Runtime ML and provides GPU-accelerated tensor computation and high-level functionality for building deep learning networks. You can perform single-node training or distributed training with PyTorch on Databricks. See PyTorch. For an end-to-end tutorial notebook using PyTorch and MLflow, see Tutorial: End-to-end deep learning models on Databricks.
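As a minimal illustration of the single-node case, the sketch below fits a linear model with plain PyTorch; the data and hyperparameters are arbitrary choices for the example, not Databricks-specific settings.

```python
# Minimal single-node PyTorch training sketch: fit y = 2x + 1 with SGD.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.linspace(-1, 1, 64).unsqueeze(1)          # 64 training inputs
y = 2 * X + 1 + 0.05 * torch.randn_like(X)          # noisy targets

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

weight = model.weight.item()
bias = model.bias.item()
print(f"learned weight={weight:.2f}, bias={bias:.2f}")
```

The same training loop runs unchanged on a GPU-enabled cluster by moving the model and tensors to a CUDA device.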
TensorFlow: Databricks Runtime ML includes TensorFlow and TensorBoard, so you can use these libraries without installing any packages. TensorFlow supports deep learning and general numerical computations on CPUs, GPUs, and clusters of GPUs. TensorBoard provides visualization tools to help you debug and optimize machine learning and deep learning workflows. See TensorFlow for single node and distributed training examples.
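A minimal sketch of the combination described above: a small Keras model trained with a TensorBoard callback so that training curves can be inspected afterward. The dataset and the `./tb_logs` directory name are arbitrary choices for illustration.

```python
# Single-node Keras training with TensorBoard logging.
import numpy as np
import tensorflow as tf

np.random.seed(0)
X = np.random.rand(128, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")          # simple binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The callback writes event files that `tensorboard --logdir ./tb_logs` can display.
tb = tf.keras.callbacks.TensorBoard(log_dir="./tb_logs")
history = model.fit(X, y, epochs=3, batch_size=32, callbacks=[tb], verbose=0)
```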
Distributed training: Because deep learning models are data- and computation-intensive, distributed training can be important. For examples of distributed deep learning using integrations with Ray, TorchDistributor, and DeepSpeed, see Distributed training.
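To make the TorchDistributor integration concrete, the sketch below defines an ordinary PyTorch training function and shows (commented out) how it would be handed to TorchDistributor on a cluster. The training function itself is plain PyTorch and runs anywhere; the distributor call requires Databricks Runtime ML or Spark 3.4+, and the hyperparameters are arbitrary.

```python
# Training function usable both locally and with TorchDistributor.
import torch
import torch.nn as nn

def train_fn(lr=0.05, steps=200):
    # Each worker process would run this function; here it trains a tiny
    # linear model on synthetic data.
    torch.manual_seed(0)
    X = torch.randn(64, 3)
    y = X @ torch.tensor([[1.0], [-2.0], [0.5]])
    model = nn.Linear(3, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

final_loss = train_fn()

# On a Databricks cluster, the same function is distributed like this:
# from pyspark.ml.torch.distributor import TorchDistributor
# TorchDistributor(num_processes=2, local_mode=True, use_gpu=False).run(train_fn)
```

Keeping the training logic in a self-contained function is what lets the same code scale from a notebook cell to multiple workers.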
Track deep learning model development: Tracking remains a cornerstone of the MLflow ecosystem and is especially vital for the iterative nature of deep learning. Databricks uses MLflow to track deep learning training runs and model development. See Track model development using MLflow.