Efficient and accurate low-bit weight quantization (INT3/4) for LLMs, supporting instruction-tuned models and multi-modal LMs.
The current release supports:
Thanks to AWQ, TinyChat can deliver more efficient responses with LLM/VLM chatbots through 4-bit inference.
TinyChat also supports inference with vision language models (e.g., VILA, LLaVA). In the following examples, W4A16 quantized models from the VILA family are launched with TinyChat.
Prompt: What might be the next step according to the video?
Answer: The next step in the video could be to place the shaped dough onto a baking sheet and let it rise before baking.
Online demo: https://vila.hanlab.ai
Check out TinyChat, which offers a turn-key solution for on-device inference of LLMs and VLMs on resource-constrained edge platforms. With TinyChat, it is now possible to efficiently run large models on small, low-power devices even without an Internet connection!
Quantized checkpoints can be loaded directly through from_pretrained. You can either load quantized models from the Hub or your own HF quantized models (a minimal loading sketch is shown after the installation steps below).

To install from source, clone this repository and set up the environment:
git clone https://github.com/mit-han-lab/llm-awq
cd llm-awq
conda create -n awq python=3.10 -y
conda activate awq
pip install --upgrade pip # enable PEP 660 support
pip install -e .
For edge devices like the NVIDIA Orin, before running the commands above, please use Python 3.8 instead (conda create -n awq python=3.8 -y) and manually install a pre-built PyTorch wheel from NVIDIA that matches your JetPack version (e.g., for JetPack 5).

Next, build and install the efficient W4A16 CUDA kernels:
cd awq/kernels
python setup.py install
Then install FlashAttention-2:
pip install flash-attn --no-build-isolation
We recommend starting an interactive Python session and running import flash_attn to check whether FlashAttention-2 is installed successfully. If not, we recommend downloading pre-built wheels from here. Please notice:

the Python version in the .whl name must match your environment's;
try both the cxx11abiTRUE and cxx11abiFALSE wheels if one of them does not work;
the CUDA version should ideally match the one in the .whl filename, but minor mismatches (e.g. 12.1 vs 12.2, or even 11.8 vs 12.2) usually do not matter.

To run AWQ and TinyChat with the VILA model family, please also install VILA:
git clone https://github.com/NVlabs/VILA.git
cd VILA
pip install -e .
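As mentioned above, AWQ-quantized checkpoints in Hugging Face format can be loaded directly through transformers' from_pretrained. The snippet below is a minimal sketch, assuming transformers with AWQ support (and the corresponding kernels) is installed; the model path is a placeholder, not a specific released checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "/PATH/OR/HUB/ID/OF/AWQ-MODEL"  # placeholder: a Hub repo or local folder with an AWQ-quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map="auto" requires accelerate
)

inputs = tokenizer("Tell me about activation-aware weight quantization.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))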
We provide pre-computed AWQ search results for multiple model families, including LLaMA, OPT, Vicuna, and LLaVA. To get the pre-computed AWQ search results, run:
# git lfs install  # install git lfs if not already
git clone https://huggingface.co/datasets/mit-han-lab/awq-model-zoo awq_cache
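The downloaded search results are ordinary PyTorch checkpoint files. A minimal inspection sketch (the filename is illustrative, and the exact contents are defined by this repo, so printing the top-level keys is the safest way to see what was stored):

import torch

awq_results = torch.load("awq_cache/llama-7b-w4-g128.pt", map_location="cpu")
print(type(awq_results))
if isinstance(awq_results, dict):
    print(list(awq_results.keys()))  # e.g. per-layer scaling / clipping entries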
The detailed support list:
Note: We only list models for which we have prepared AWQ search results in the table above. AWQ also supports models such as LLaVA-v1.5 7B, and you may need to run the AWQ search yourself to quantize those models. For our latest VLM, NVILA, quantized weights are available here.
AWQ can be easily applied to various LMs thanks to its good generalization, including instruction-tuned models and multi-modal LMs. It provides an easy-to-use tool to reduce the serving cost of LLMs.
Here we provide two examples of applying AWQ: Vicuna-7B (chatbot) and LLaVA-13B (visual reasoning) under the ./examples directory. AWQ can easily reduce the GPU memory needed for model serving and speed up token generation, while its accurate quantization preserves the models' reasoning outputs. You should be able to observe memory savings when running the models with 4-bit weights (a minimal measurement sketch follows this paragraph).
Note that we perform AWQ using only textual calibration data, despite running on multi-modal inputs. Please refer to ./examples for details.
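One generic way to observe the memory savings is to compare peak GPU memory around generation for an FP16 model and its 4-bit counterpart. This is a sketch, not a script from this repo; the model path is a placeholder and should point once at the FP16 checkpoint and once at an HF-format AWQ checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "/PATH/TO/MODEL"  # placeholder: run once for FP16, once for the 4-bit AWQ version
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

torch.cuda.reset_peak_memory_stats()
inputs = tokenizer("Summarize the benefits of 4-bit weight quantization.", return_tensors="pt").to("cuda")
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=64)
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")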
We provide several sample scripts to run AWQ (please refer to ./scripts). We use Llama3-8B as an example.
Perform the AWQ search and save the search results:
python -m awq.entry --model_path /PATH/TO/LLAMA3/llama3-8b \
    --w_bit 4 --q_group_size 128 \
    --run_awq --dump_awq awq_cache/llama3-8b-w4-g128.pt
Evaluate the AWQ-quantized model on WikiText-2 (simulated pseudo quantization):
python -m awq.entry --model_path /PATH/TO/LLAMA3/llama3-8b \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_awq awq_cache/llama3-8b-w4-g128.pt \
    --q_backend fake
Generate the real quantized weights (INT4):
mkdir quant_cache
python -m awq.entry --model_path /PATH/TO/LLAMA3/llama3-8b \
    --w_bit 4 --q_group_size 128 \
    --load_awq awq_cache/llama3-8b-w4-g128.pt \
    --q_backend real --dump_quant quant_cache/llama3-8b-w4-g128-awq.pt
Load and evaluate the real quantized model (you should observe lower GPU memory usage):
python -m awq.entry --model_path /PATH/TO/LLAMA3/llama3-8b \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_quant quant_cache/llama3-8b-w4-g128-awq.pt
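For intuition about what --w_bit 4 --q_group_size 128 means, the sketch below pseudo-quantizes a weight matrix group-wise to 4 bits and dequantizes it back, which is the kind of simulation the --q_backend fake path performs. It is an illustrative re-implementation under those assumptions, not the repo's code:

import torch

def pseudo_quantize(w: torch.Tensor, n_bit: int = 4, group_size: int = 128) -> torch.Tensor:
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    w = w.reshape(out_features, in_features // group_size, group_size)
    # Asymmetric per-group quantization: scale and zero point from the group min/max.
    w_max = w.amax(dim=-1, keepdim=True)
    w_min = w.amin(dim=-1, keepdim=True)
    q_max = 2 ** n_bit - 1
    scale = (w_max - w_min).clamp(min=1e-5) / q_max
    zero = (-w_min / scale).round()
    # Round to integers in [0, 2^n_bit - 1], then dequantize back to floating point.
    w_q = torch.clamp((w / scale).round() + zero, 0, q_max)
    w_dq = (w_q - zero) * scale
    return w_dq.reshape(out_features, in_features)

w = torch.randn(4096, 4096)
print((w - pseudo_quantize(w)).abs().mean())  # average quantization error per weight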
Results on Visual Language Models

AWQ also seamlessly supports large multi-modal models (LMMs). Please refer to TinyChat for more details.
If you find AWQ useful or relevant to your research, please kindly cite our paper:
@inproceedings{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
booktitle={MLSys},
year={2024}
}
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers
LLaVA: Large Language and Vision Assistant
VILA: On Pre-training for Visual Language Models