🤗 Hugging Face | 🖥️ Official Website | 🕖 HunyuanAPI | 🕹️ Demo | 🤖 ModelScope
Technical Report | GITHUB | cnb.cool | LICENSE
Welcome to the official repository of Hunyuan-A13B, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
Model Introduction

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
Key Features and Advantages

As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or innovative application exploration, this model provides a robust foundation for advancement.
Benchmark

Note: The following benchmarks are evaluated by TRT-LLM-backend on several base models.
| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|---|---|---|---|---|
| MMLU | 88.40 | 86.10 | 87.81 | 88.17 |
| MMLU-Pro | 60.20 | 58.10 | 68.18 | 67.23 |
| MMLU-Redux | 87.47 | 83.90 | 87.40 | 87.67 |
| BBH | 86.30 | 85.80 | 88.87 | 87.56 |
| SuperGPQA | 38.90 | 36.20 | 44.06 | 41.32 |
| EvalPlus | 75.69 | 65.93 | 77.60 | 78.64 |
| MultiPL-E | 59.13 | 60.50 | 65.94 | 69.33 |
| MBPP | 72.60 | 76.00 | 81.40 | 83.86 |
| CRUX-I | 57.00 | 57.63 | - | 70.13 |
| CRUX-O | 60.63 | 66.20 | 79.00 | 77.00 |
| MATH | 69.80 | 62.12 | 71.84 | 72.35 |
| CMATH | 91.30 | 84.80 | - | 91.17 |
| GSM8k | 92.80 | 91.50 | 94.39 | 91.83 |
| GPQA | 25.18 | 45.90 | 47.47 | 49.12 |

Hunyuan-A13B-Instruct has achieved highly competitive performance across multiple benchmarks, particularly in mathematics, science, agent domains, and more. We compared it with several powerful models, and the results are shown below.
| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|---|---|---|---|---|---|
| Mathematics | AIME 2024 | | | | |

Use with transformers

Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
The following code snippet shows how to use the transformers library to load the model and run inference. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re

# Path to the local model weights (or a Hugging Face model id)
model_name_or_path = os.environ['MODEL_PATH']

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]

# Build the chat prompt; enable_thinking=True turns on slow-thinking (CoT) mode
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
model_inputs.pop("token_type_ids", None)

outputs = model.generate(**model_inputs, max_new_tokens=4096)
output_text = tokenizer.decode(outputs[0])

# The model wraps its reasoning in <think>...</think> and the final reply in <answer>...</answer>
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)

answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)

think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]

print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
Fast and slow thinking switch
This model supports two modes of operation:
Slow Thinking (default): the model produces an explicit reasoning trace inside <think>...</think> tags before giving the final answer.
Fast Thinking: the model skips the chain-of-thought and returns the answer directly.
Switching to Fast Thinking Mode: to disable the reasoning process, set enable_thinking=False in the apply_chat_template call:
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
enable_thinking=False
)
Deployment
For deployment, you can use frameworks such as TensorRT-LLM, vLLM, or SGLang to serve the model and create an OpenAI-compatible API endpoint.
Pre-built images are available on Docker Hub: https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags
TensorRT-LLM Docker Image

We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
From Docker Hub:
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
From China Mirror (thanks to CNB):
First, pull the image from CNB:
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-trtllm
Then, rename the image to better align with the following scripts:
docker tag docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-trtllm hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
Start the container:
docker run --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
Create a configuration file with the extra TensorRT-LLM API options (here enabling CUDA graphs for common batch sizes):
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
Then launch the OpenAI-compatible API server with trtllm-serve:
trtllm-serve \
/path/to/HunYuan-moe-A13B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
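Once the server is running, it exposes an OpenAI-compatible API on port 8000. The request below is a minimal sketch; the model field is assumed to match the checkpoint path passed to trtllm-serve, so adjust it to the name your server actually reports.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/path/to/HunYuan-moe-A13B",
        "messages": [{"role": "user", "content": "Write a short summary of the benefits of regular exercise"}],
        "max_tokens": 512
      }'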
vLLM Inference from Docker Image
We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. Official vLLM support is still under development. Note: CUDA 12.4 is required for this Docker image.
From Docker Hub:
docker pull hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1
From China Mirror (thanks to CNB):
First, pull the image from CNB:
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b/hunyuan-infer-vllm-cuda12.4:v1
Then, rename the image to better align with the following scripts:
docker tag docker.cnb.cool/tencent/hunyuan/hunyuan-a13b/hunyuan-infer-vllm-cuda12.4:v1 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1
Download the model files (if using ModelScope):
modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct
Start the API server:
If the model was downloaded from Hugging Face:
docker run --rm --ipc=host \
-v ~/.cache:/root/.cache/ \
--security-opt seccomp=unconfined \
--net=host \
--gpus=all \
-it \
--entrypoint python3 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1 \
-m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--tensor-parallel-size 4 \
--port 8000 \
--model tencent/Hunyuan-A13B-Instruct \
--trust_remote_code
If the model was downloaded from ModelScope:
docker run --rm --ipc=host \
-v ~/.cache/modelscope:/root/.cache/modelscope \
--security-opt seccomp=unconfined \
--net=host \
--gpus=all \
-it \
--entrypoint python3 hunyuaninfer/hunyuan-infer-vllm-cuda12.4:v1 \
-m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--tensor-parallel-size 4 \
--port 8000 \
--model /root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct/ \
--trust_remote_code
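Either launch starts an OpenAI-compatible server on port 8000. The example below is a sketch assuming the Hugging Face launch, where the served model name is tencent/Hunyuan-A13B-Instruct; for the ModelScope launch, use the local model path instead (GET /v1/models lists the exact name the server registered).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "tencent/Hunyuan-A13B-Instruct",
        "messages": [{"role": "user", "content": "Explain the Mixture-of-Experts architecture in two sentences."}],
        "max_tokens": 512
      }'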
Source Code
Support for this model was added via PR #20114 in the vLLM project; the patch was merged by the community on July 1, 2025. You can build and run vLLM from source using any commit after ecad85.
Model Context Length Support

The Hunyuan A13B model supports a maximum context length of 256K tokens (262,144 tokens). However, due to GPU memory constraints on most hardware setups, the default configuration in config.json limits the context length to 32K tokens to prevent out-of-memory (OOM) errors.

To enable full 256K context support, you can manually modify the max_position_embeddings field in the model's config.json file as follows:
{
...
"max_position_embeddings": 262144,
...
}
When serving the model using vLLM, you can also explicitly set the maximum model length by adding the following flag to your server launch command:
--max-model-len 262144
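For example, extending the vLLM launch shown earlier (model name, port, and tensor-parallel size are carried over from that example and may need to be adapted to your setup), a 256K-context server could be started with:
python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --tensor-parallel-size 4 \
  --model tencent/Hunyuan-A13B-Instruct \
  --trust-remote-code \
  --max-model-len 262144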
Recommended Configuration for 256K Context Length
The following configuration is recommended for deploying the model with 256K context length support on systems equipped with NVIDIA H20 GPUs (96GB VRAM):
| Model DType | KV-Cache Dtype | Number of Devices | Model Length |
|---|---|---|---|
| bfloat16 | bfloat16 | 4 | 262,144 |
⚠️ Note: Using FP8 quantization for the KV-cache may impact generation quality. The above settings are suggested configurations for stable 256K-length service deployment.

Tool Calling with vLLM
To support agent-based workflows and function calling capabilities, this model includes specialized parsing mechanisms for handling tool calls and internal reasoning steps.
For a complete working example of how to implement and use these features in an agent setting, please refer to our full agent implementation on GitHub:
🔗 Hunyuan A13B Agent Example
When deploying the model using vLLM, the following parameters can be used to configure the tool parsing behavior:
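The concrete flag values were not preserved here; as a sketch built on vLLM's standard tool-calling options, a launch could look like the following, where the parser name hunyuan and the plugin path are assumptions rather than confirmed values:
python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 8000 \
  --tensor-parallel-size 4 \
  --model tencent/Hunyuan-A13B-Instruct \
  --trust-remote-code \
  --enable-auto-tool-choice \
  --tool-parser-plugin /path/to/hunyuan_tool_parser.py \
  --tool-call-parser hunyuan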
These settings enable vLLM to correctly interpret and route tool calls generated by the model according to the expected format.
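As an illustration (not taken from the repository), a client-side request that exercises tool calling through the OpenAI-compatible API might look like this; the get_weather function is a made-up example:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "tencent/Hunyuan-A13B-Instruct",
        "messages": [{"role": "user", "content": "What is the weather in Shenzhen today?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }]
      }'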
Reasoning parser

vLLM reasoning parser support for the Hunyuan A13B model is under development.
SGLang Docker Image

We also provide a pre-built Docker image based on the latest version of SGLang.
To get started:
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang
or
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang
Then start the server:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ipc=host \
docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang \
-m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).