A Large Language Model (LLM) example in Java: GPT-2. This is a port of the llm.c code written by @karpathy.
Before Running GPT-2 in Java

Before attempting to run this, some prep work needs to happen. If you check the llm.c repository, these steps are very similar. The reason the same code is in this repository is that llm.c is still a moving target.
I highly recommend running the original llm.c to see it work. It's wonderful.
python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt
python prepro_tinyshakespeare.py
python train_gpt2.py
I used GraalVM running Java 21 for this. If you're using SDKMAN:
sdk default java 21.0.2-graalce
I tested the following JVM versions and they all seem to work. I have not investigated why some are slower than others.
sdk install java 21-tem
sdk use java 21-tem

sdk install java 21.0.3-amzn
sdk use java 21.0.3-amzn
Note the arguments passed to the JVM. Of particular note is "-Djava.util.concurrent.ForkJoinPool.common.parallelism=10"; adjust this based on how many cores you have. The matrix multiplication methods are entirely CPU bound, so adding more threads than cores will just slow things down.
mvn clean install
java -ea --add-modules jdk.incubator.vector --enable-preview -Xmx8g -Djava.util.concurrent.ForkJoinPool.common.parallelism=10 -jar target/gpt2-1.0-SNAPSHOT.jar
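The right parallelism value depends on your machine, so before committing to a number it can help to see what the JVM reports. Below is a small hypothetical helper (not part of this repository; the class name is illustrative) that prints the core count, the common ForkJoinPool's effective parallelism, and the preferred SIMD width from the incubating Vector API that the --add-modules flag enables:

```java
import java.util.concurrent.ForkJoinPool;
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

// Illustrative helper, run with: java --add-modules jdk.incubator.vector LaunchCheck
public class LaunchCheck {
    public static void main(String[] args) {
        // Cores visible to the JVM; a sensible upper bound for pool parallelism.
        int cores = Runtime.getRuntime().availableProcessors();
        // What the common ForkJoinPool will actually use, after any
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism override.
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        // Preferred float vector shape on this CPU (e.g. 8 lanes on AVX2).
        VectorSpecies<Float> species = FloatVector.SPECIES_PREFERRED;
        System.out.println("cores=" + cores
                + " parallelism=" + parallelism
                + " vectorLanes=" + species.length());
    }
}
```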
I've made no attempt to tune this for performance; the C version is still much faster. There is some low-hanging fruit, like parallelizing more of the loops. I made matmul_forward and matmul_backward parallel because it was painfully slow without it; a sketch of the idea is below.
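To make the parallelization concrete, here is a minimal sketch, not the repository's exact code, of how a matmul forward pass can be spread across the common ForkJoinPool with a parallel stream. The shapes follow llm.c conventions: out is (B*T, OC), inp is (B*T, C), weight is (OC, C), bias is (OC); the method and parameter names here are illustrative.

```java
import java.util.stream.IntStream;

public class MatMul {
    // out[bt, o] = bias[o] + sum_i inp[bt, i] * weight[o, i]
    static void matmulForward(float[] out, float[] inp, float[] weight,
                              float[] bias, int BT, int C, int OC) {
        // Each output row is independent, so rows can be computed in
        // parallel. The worker count is capped by
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism.
        IntStream.range(0, BT).parallel().forEach(bt -> {
            for (int o = 0; o < OC; o++) {
                float val = (bias != null) ? bias[o] : 0.0f;
                for (int i = 0; i < C; i++) {
                    val += inp[bt * C + i] * weight[o * C + i];
                }
                out[bt * OC + o] = val;
            }
        });
    }
}
```

Because the loop body is pure arithmetic with no shared mutable state across rows, this parallelizes cleanly, which is also why adding more threads than physical cores only adds scheduling overhead.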