

Menlo/Jan-nano-128k · Hugging Face

Jan-Nano-128k: Empowering deeper research through extended context understanding.

Note: Jan-Nano is a non-thinking model.

Authors: Alan Dao, Bach Vu Dinh

Overview

Jan-Nano-128k represents a significant advancement in compact language models for research applications. Building upon the success of Jan-Nano, this enhanced version features a native 128k context window that enables deeper, more comprehensive research capabilities without the performance degradation typically associated with context extension methods.

Key Improvements:

This model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.

Evaluation

Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor.

Why Jan-Nano-128k?

Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often result in performance degradation as context length increases. Jan-Nano-128k avoids this trade-off, maintaining performance across its full 128k window.

This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.

๐Ÿ–ฅ๏ธ How to Run Locally

Support in the Jan desktop app is a work in progress. In the meantime, you can use the deployment options below, which we have tested.

For additional tutorials and community guidance, visit our Discussion Forums.

Deployment

Deploy using vLLM:

vllm serve Menlo/Jan-nano-128k \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
    --max-model-len 131072
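As a quick sanity check (not part of the model card itself), the YaRN scaling factor in the command above is simply the target context length divided by the model's original context window:

```python
# The vllm serve command passes factor=3.2 and
# original_max_position_embeddings=40960; their product should equal the
# requested --max-model-len.
original_max_position_embeddings = 40960  # native pre-extension context window
factor = 3.2                              # YaRN scaling factor from --rope-scaling

max_model_len = int(original_max_position_embeddings * factor)
print(max_model_len)  # 131072, i.e. the 128k context window
```

The same arithmetic explains the `--rope-scale 3.2` and `--yarn-orig-ctx 40960` flags in the llama.cpp invocation below.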

Or llama-server from llama.cpp:

llama-server ... --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960

Note: The chat template is included in the tokenizer. For troubleshooting, download the Non-think chat template.

Recommended Sampling Parameters
Temperature: 0.7
Top-p: 0.8
Top-k: 20
Min-p: 0.0
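As one hedged sketch of applying these parameters (the endpoint path and port assume the vLLM command shown earlier; `top_k` and `min_p` are vLLM extensions to the core OpenAI chat-completions schema):

```python
import json

# Build a chat-completion request body carrying the recommended sampling
# parameters for the vLLM server started above on port 1234.
payload = {
    "model": "Menlo/Jan-nano-128k",
    "messages": [{"role": "user", "content": "Summarize this document."}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,   # vLLM extension, not in the base OpenAI schema
    "min_p": 0.0,  # vLLM extension, not in the base OpenAI schema
}
body = json.dumps(payload)

# To actually send it (requires the server to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(body)
```

If you use the official `openai` Python client instead, the non-standard `top_k` and `min_p` fields would go through its `extra_body` argument.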
FAQ: ๐Ÿค Community & Support ๐Ÿ“„ Citation
@misc{dao2025jannanotechnicalreport,
      title={Jan-nano Technical Report}, 
      author={Alan Dao and Dinh Bach Vu},
      year={2025},
      eprint={2506.22760},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.22760}, 
}


