torchtitan is currently in a pre-release state and under extensive development. We showcase training Llama 3.1 LLMs at scale, and are working on other types of generative AI models, including LLMs with MoE architectures, multimodal LLMs, and diffusion models, in the experiments folder. To use the latest features of torchtitan, we recommend using the most recent PyTorch nightly.
Recent updates:
- We released torchtitan v0.1.0, and also set up nightly builds.
- How to get started with torchtitan in under 4 minutes.
torchtitan is a PyTorch native platform designed for rapid experimentation and large-scale training of generative AI models. As a minimal clean-room implementation of PyTorch native scaling techniques, torchtitan provides a flexible foundation for developers to build upon. With torchtitan extension points, one can easily create custom extensions tailored to specific needs.
Our mission is to accelerate innovation in the field of generative AI by empowering researchers and developers to explore new modeling architectures and infrastructure techniques.
torchtitan was built around a small set of guiding principles, and has been showcasing PyTorch's latest distributed training features via pretraining Llama 3.1 LLMs of various sizes. To accelerate contributions to and innovations around torchtitan, we are hosting a new experiments folder. We look forward to your contributions!
Key features available:
- Interoperability with torchtune for fine-tuning
- torch.compile support
- Gradient accumulation via the --training.global_batch_size argument in configuration (see the sketch below)

We report performance on up to 512 GPUs, and verify correct loss convergence for various techniques.
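As a quick illustration of the gradient accumulation knob, config options from the TOML file can be overridden on the command line at launch time. A minimal sketch, assuming the stock Llama 3 8B config and the run_train.sh launcher described below:

# Sketch: request a global batch size larger than local batch size x data-parallel degree;
# torchtitan then accumulates gradients over multiple microbatches to make up the difference.
CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh --training.global_batch_size 64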
You may want to see how the model is defined or how parallelism techniques are applied. For a guided tour, start with the model definition and the module that applies parallelisms and torch.compile to the model.

One can choose to install torchtitan from a stable release, a nightly build, or directly run the source code. Please install PyTorch before proceeding.
One can install the latest stable release of torchtitan via pip or conda.
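With pip (a sketch, assuming the stable release is published on PyPI under the name torchtitan):

pip install torchtitan

Or with conda: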
conda install conda-forge::torchtitan
Note that each stable release pins the nightly versions of torch and torchao. Please see release.md for more details.
This method requires the nightly build of PyTorch. You can replace cu126 with another CUDA version (e.g. cu128) or with a ROCm version for AMD GPUs (e.g. rocm6.3).
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu126 --force-reinstall
pip install --pre torchtitan --index-url https://download.pytorch.org/whl/nightly/cu126
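For example, the AMD variant of the two commands above would look like this (a sketch, assuming the nightly index follows the same URL pattern for ROCm):

pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/rocm6.3 --force-reinstall
pip install --pre torchtitan --index-url https://download.pytorch.org/whl/nightly/rocm6.3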
This method requires the nightly build of PyTorch or the latest PyTorch built from source.
git clone https://github.com/pytorch/torchtitan
cd torchtitan
pip install -r requirements.txt
torchtitan currently supports training Llama 3.1 (8B, 70B, 405B) out of the box. To get started training these models, we need to download the tokenizer. Follow the instructions on the official meta-llama repository to ensure you have access to the Llama model weights.
Once you have confirmed access, you can run the following command to download the Llama 3.1 tokenizer to your local machine.
# Get your HF token from https://huggingface.co/settings/tokens
# Llama 3.1 tokenizer
python scripts/download_hf_assets.py --repo_id meta-llama/Llama-3.1-8B --assets tokenizer --hf_token=...
To train the Llama 3 8B model locally on 8 GPUs, run:
CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh
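The same launcher works with other training configs: point CONFIG_FILE at a different TOML file. For a quick smoke test on a small model, a sketch (the debug_model.toml path is an assumption based on the repository layout):

CONFIG_FILE="./torchtitan/models/llama3/train_configs/debug_model.toml" ./run_train.sh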
For training on ParallelCluster/Slurm type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job. To get started, adjust the number of nodes and GPUs:
#SBATCH --ntasks=2
#SBATCH --nodes=2
Then start a run where nnodes is your total node count, matching the sbatch node count above.
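The launch the slurm script performs looks roughly like the following. This is a sketch rather than the exact contents of multinode_trainer.slurm; the rendezvous flags and the HEAD_NODE_IP variable are assumptions you would adapt to your cluster:

# Sketch: torchrun rendezvous across 2 nodes with 8 GPUs each.
# --rdzv_endpoint must point at a host:port reachable from every node.
srun torchrun --nnodes 2 --nproc_per_node 8 \
    --rdzv_backend c10d --rdzv_endpoint "${HEAD_NODE_IP}:29500" \
    -m torchtitan.train --job.config_file "${CONFIG_FILE}"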
If your GPU count per node is not 8, adjust --nproc_per_node in the torchrun command and #SBATCH --gpus-per-task in the SBATCH command section.
We provide a detailed look into the parallelisms and optimizations available in torchtitan, along with summary advice on when to use various techniques, in our paper:

TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training
@inproceedings{
liang2025torchtitan,
title={TorchTitan: One-stop PyTorch native solution for production ready {LLM} pretraining},
author={Wanchao Liang and Tianyu Liu and Less Wright and Will Constable and Andrew Gu and Chien-Chin Huang and Iris Zhang and Wei Feng and Howard Huang and Junjie Wang and Sanket Purandare and Gokul Nadathur and Stratos Idreos},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=SFN6Wm7YBI}
}
Source code is made available under a BSD 3 license; however, you may have other legal obligations that govern your use of other content linked in this repository, such as the license or terms of service for third-party data and models.