This project is a port of Andrej Karpathy's llm.c to Mojo, currently in beta. Visit llm.c for a detailed explanation of the original project.
Note: This project is based on the stable Mojo 25.5 release.
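If a Mojo toolchain is already available on your machine (for example from an earlier setup), you can confirm which release it is; for this project it should report a 25.5 version:

```bash
# Print the installed Mojo compiler version.
mojo --version
```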
Before using llm.🔥 for the first time, please run the following preparatory commands:
pip install -r requirements.txt
python prepro_tinyshakespeare.py
python train_gpt2.py
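For orientation, here is the same sequence with brief comments. The descriptions of what each script does follow the original llm.c workflow, so consult that repository if anything differs here:

```bash
# Install the Python packages the preparation scripts need.
pip install -r requirements.txt

# Download the tiny Shakespeare dataset and tokenize it into .bin token files.
python prepro_tinyshakespeare.py

# Download the GPT-2 (124M) weights and export them as .bin files that the
# training code can load.
python train_gpt2.py
```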
Step 1: Install pixi

If you don't have it, install pixi:

curl -fsSL https://pixi.sh/install.sh | sh

Step 2: Run the training program
Start the virtual environment and execute the training program:
pixi shell
mojo train_gpt2.mojo
Note: The first time you run pixi shell, it will automatically install all necessary dependencies defined in pixi.toml.
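If you prefer not to enter an interactive shell, pixi can also run a single command inside the managed environment:

```bash
# Resolve/install the pixi.toml dependencies (on first use) and run the trainer.
pixi run mojo train_gpt2.mojo
```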
For a step-by-step guide with additional setup details and options, please refer to our detailed usage instructions.
Basic benchmark results (M2 MacBook Pro)

Below are the average training-loop times observed across the various implementations. Please note that these results are intended to provide a general comparison rather than precise, repeatable metrics.
We run the OpenMP-enabled train_gpt2.c with 64 threads (OMP_NUM_THREADS=64 ./train_gpt2).
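For reference, the C baseline can be built and run roughly as follows, assuming the Makefile and train_gpt2.c from the original llm.c repository and a compiler with OpenMP support:

```bash
# Build the C reference trainer from the original llm.c repository.
make train_gpt2

# Run it with 64 OpenMP threads, matching the setting used for the comparison.
OMP_NUM_THREADS=64 ./train_gpt2
```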
We ported test_gpt2.c from the original repository to Mojo to validate our port's functionality. For instructions on how to run this test and insights into the results it yields, please see our guide here.
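A minimal sketch of how the ported test might be invoked from within the pixi environment; the file name test_gpt2.mojo is an assumption here, so check the repository and the linked guide for the exact name and any arguments:

```bash
# Enter the environment, then run the ported test (file name assumed).
pixi shell
mojo test_gpt2.mojo
```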
At this stage, there are no plans for further development of this app. It primarily serves as a proof of concept, showcasing Mojo's ability to implement C-like applications in terms of speed and low-level programming. That said, I’m always open to new ideas or collaboration opportunities, so feel free to reach out.
train_gpt2.c
Update 2024.04.14: attention_backward
License: MIT