The Qwen2.5-Omni model is a unified multimodal model proposed in the Qwen2.5-Omni Technical Report by the Qwen team at Alibaba Group.
The abstract from the technical report is the following:
We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model.
Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture.
In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench.
Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperforms most existing streaming and non-streaming alternatives in robustness and naturalness.
SAM-HQ (High-Quality Segment Anything Model) was proposed in Segment Anything in High Quality by Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu.
The model is an enhancement to the original SAM model that produces significantly higher quality segmentation masks while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability.
SAM-HQ introduces several key improvements over the original SAM model, including a learnable High-Quality Output Token injected into SAM's mask decoder, fusion of mask-decoder features with early and final ViT features for finer mask details, and training on an additional dataset of 44K fine-grained masks.
The abstract from the paper is the following:
The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced dataset of 44k masks, which takes only 4 hours on 8 GPUs.
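A minimal point-prompt sketch is shown below. It assumes the SamHQModel and SamHQProcessor classes mirror the original SAM API, and the checkpoint id is an assumption for illustration.

```python
# A minimal sketch (not from the paper): point-prompted segmentation with SAM-HQ.
# Assumes SamHQModel / SamHQProcessor follow the original SAM API; the repo id is illustrative.
import requests
import torch
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

checkpoint = "syscv-community/sam-hq-vit-base"  # assumed repo id
processor = SamHQProcessor.from_pretrained(checkpoint)
model = SamHQModel.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) point prompt

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Upscale the predicted low-resolution masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
print(masks[0].shape, outputs.iou_scores)
```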
The GraniteMoeHybrid model builds on top of GraniteMoeSharedModel and Bamba. Its decoding layers consist of state space layers or MoE attention layers with shared experts. By default, the attention layers do not use positional encoding.
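As a quick sketch, the model should load through the standard causal-LM Auto classes; the checkpoint id below is an assumption for illustration.

```python
# A minimal sketch: loading a GraniteMoeHybrid checkpoint via the Auto classes.
# The repo id is illustrative; substitute a real GraniteMoeHybrid checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-tiny-preview"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```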
The D-FINE model was proposed in D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu.
The abstract from the paper is the following:
We introduce D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).
FDR transforms the regression process from predicting fixed coordinates to iteratively refining probability distributions, providing a fine-grained intermediate representation that significantly enhances localization accuracy. GO-LSD is a bidirectional optimization strategy that transfers localization knowledge from refined distributions to shallower layers through self-distillation, while also simplifying the residual prediction tasks for deeper layers. Additionally, D-FINE incorporates lightweight optimizations in computationally intensive modules and operations, achieving a better balance between speed and accuracy. Specifically, D-FINE-L / X achieves 54.0% / 55.8% AP on the COCO dataset at 124 / 78 FPS on an NVIDIA T4 GPU. When pretrained on Objects365, D-FINE-L / X attains 57.1% / 59.3% AP, surpassing all existing real-time detectors. Furthermore, our method significantly enhances the performance of a wide range of DETR models by up to 5.3% AP with negligible extra parameters and training costs. Our code and pretrained models: this https URL.
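A minimal inference sketch using the generic object-detection Auto classes follows; the checkpoint id is an assumption for illustration.

```python
# A minimal sketch: running D-FINE through the object-detection Auto classes.
# The checkpoint id is illustrative and may differ from the released repos.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "ustc-community/dfine-x-coco"  # assumed repo id
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) triplets at the original resolution.
results = image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```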
The Conversational Speech Model (CSM) is the first open-source contextual text-to-speech model released by Sesame. It is designed to generate natural-sounding speech with or without conversational context. This context typically consists of multi-turn dialogue between speakers, represented as sequences of text and corresponding spoken audio.
Model Architecture:
CSM is composed of two LLaMA-style auto-regressive transformer decoders: a backbone decoder that predicts the first codebook token and a depth decoder that generates the remaining tokens. It uses the pretrained codec model Mimi, introduced by Kyutai, to encode speech into discrete codebook tokens and decode them back into audio.
The original csm-1b checkpoint is available under the Sesame organization on Hugging Face.
BitNet
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
LlamaGuard
Llama Guard 4 is a new multimodal model designed to detect inappropriate content in images and text, whether used as input or generated as output by the model. It’s a dense 12B model pruned from the Llama 4 Scout model, and it can run on a single GPU (24 GB of VRAM). It can evaluate both text-only and image+text inputs, making it suitable for filtering both inputs and outputs of large language models. This enables flexible moderation pipelines where prompts are analyzed before reaching the model, and generated responses are reviewed afterwards for safety. It can also understand multiple languages.
TimesFM
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model proposed in A decoder-only foundation model for time-series forecasting by Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. It is a decoder-only model that takes non-overlapping patches of time-series data as input and outputs predictions over an output patch length in an autoregressive fashion.
The abstract from the paper is the following:
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
MLCD
The MLCD models were released by the DeepGlint-AI team in unicom, which focuses on building foundational visual models for large multimodal language models using large-scale datasets such as LAION400M and COYO700M, and employs sample-to-cluster contrastive learning to optimize performance. MLCD models are primarily used for multimodal visual large language models, such as LLaVA.
Janus
The Janus Model was originally proposed in Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation by the DeepSeek AI team and later refined in Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling. Janus is a vision-language model that can generate both image and text output; it can also take both images and text as input.
Note
The model doesn't generate both images and text in an interleaved format. The user has to pass a parameter indicating whether to generate text or image.
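For instance, a text-generation call might look like the sketch below. The class names, checkpoint id, and the generation_mode argument are assumptions based on the description above; check the model documentation for the exact API.

```python
# A minimal sketch: asking Janus a question about an image and requesting *text* output.
# Class names, the repo id, and the `generation_mode` argument are assumptions.
import requests
from PIL import Image
from transformers import JanusForConditionalGeneration, JanusProcessor

model_id = "deepseek-community/Janus-Pro-1B"  # assumed repo id
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(model_id, device_map="auto")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is in the image?"}]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

# `generation_mode` is the parameter indicating whether to generate text or an image (assumed name).
output = model.generate(**inputs, generation_mode="text", max_new_tokens=40)
print(processor.decode(output[0], skip_special_tokens=True))
```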
The abstract from the original paper is the following:
In this paper, we introduce Janus, an autoregressive framework that unifies multimodal understanding and generation. Prior research often relies on a single visual encoder for both tasks, such as Chameleon. However, due to the differing levels of information granularity required by multimodal understanding and generation, this approach can lead to suboptimal performance, particularly in multimodal understanding. To address this issue, we decouple visual encoding into separate pathways, while still leveraging a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility. For instance, both the multimodal understanding and generation components can independently select their most suitable encoding methods. Experiments show that Janus surpasses previous unified model and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.
The abstract from the aforementioned Janus-Pro paper, released afterwards, is the following:
In this work, we introduce Janus-Pro, an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strategy, (2) expanded training data, and (3) scaling to a larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal understanding and text-to-image instruction-following capabilities, while also enhancing the stability of text-to-image generation. We hope this work will inspire further exploration in the field. Code and models are publicly available.
InternVL
The InternVL3 family of Visual Language Models was introduced in InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models.
The abstract from the paper is the following:
We introduce InternVL3, a significant advancement in the InternVL series featuring a native multimodal pre-training paradigm. Rather than adapting a text-only large language model (LLM) into a multimodal large language model (MLLM) that supports visual inputs, InternVL3 jointly acquires multimodal and linguistic capabilities from both diverse multimodal data and pure-text corpora during a single pre-training stage. This unified training paradigm effectively addresses the complexities and alignment challenges commonly encountered in conventional post-hoc training pipelines for MLLMs. To further improve performance and scalability, InternVL3 incorporates variable visual position encoding (V2PE) to support extended multimodal contexts, employs advanced post-training techniques such as supervised fine-tuning (SFT) and mixed preference optimization (MPO), and adopts test-time scaling strategies alongside an optimized training infrastructure. Extensive empirical evaluations demonstrate that InternVL3 delivers superior performance across a wide range of multi-modal tasks. In particular, InternVL3-78B achieves a score of 72.2 on the MMMU benchmark, setting a new state-of-the-art among open-source MLLMs. Its capabilities remain highly competitive with leading proprietary models, including ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 2.5 Pro, while also maintaining strong pure-language proficiency. In pursuit of open-science principles, we will publicly release both the training data and model weights to foster further research and development in next-generation MLLMs.
Overview of the InternVL3 model architecture, which is the same as InternVL2.5. Taken from the original checkpoint.
Comparison of InternVL3 performance on OpenCompass against other SOTA VLLMs. Taken from the original checkpoint.
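Below is a minimal sketch of running an InternVL3 checkpoint through the generic image-text-to-text Auto classes; the repo id is an assumption for illustration.

```python
# A minimal sketch: single-turn visual question answering with an InternVL3 checkpoint
# via the generic image-text-to-text Auto classes. The repo id is illustrative.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "OpenGVLab/InternVL3-1B-hf"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```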
Kernel integration
We integrate some kernels in the transformers library via the kernels package: https://github.com/huggingface/kernels. We start with some kernels in the Llama model and will iterate to identify the best performance optimizations.
In the previous release, we added TP support in order to run distributed inference. However, this is not supported for all quantization methods, and we are progressively adding support for it. Right now, only compressed-tensors, fp8 and fp8-fbgemm support it.
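For reference, a minimal sketch of tensor-parallel inference is shown below; it assumes a multi-GPU node, a torchrun launch, and an illustrative (non-quantized) checkpoint.

```python
# A minimal sketch: tensor-parallel inference, launched with e.g.
#   torchrun --nproc-per-node 4 run_tp.py
# `tp_plan="auto"` shards supported layers across the visible GPUs. Whether this also
# works for a quantized checkpoint depends on the quantization method (see the note above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, tp_plan="auto")

inputs = tokenizer("Tensor parallelism lets us", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```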
From the AutoRound contributors:
AutoRound is an advanced quantization algorithm that delivers strong accuracy, even at 2-bit precision. It leverages sign gradient descent to fine-tune both rounding values and min-max clipping thresholds in just 200 steps ... More details here: https://github.com/intel/auto-round
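As a hedged illustration, loading an AutoRound-quantized checkpoint should go through the usual Auto classes once auto-round is installed; the repo id below is an assumption.

```python
# A minimal sketch: loading a checkpoint quantized with AutoRound. Assumes the
# `auto-round` package is installed and that the repo below (an illustrative name)
# ships AutoRound quantization metadata in its config.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPEA/Qwen2.5-0.5B-Instruct-int4-sym-inc"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("AutoRound keeps accuracy at low bit-widths by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```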
We have added two new sections to the documentation to help users better understand and get started with quantization.
We've added GGUF support to the gemma3 family of models.
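A minimal sketch of loading a GGUF file is shown below; the repo id and filename are assumptions, so substitute a real gemma3 GGUF checkpoint.

```python
# A minimal sketch: loading a GGUF-quantized Gemma 3 checkpoint. The repo id and
# GGUF filename below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "google/gemma-3-1b-it-qat-q4_0-gguf"   # assumed repo id
gguf_file = "gemma-3-1b-it-q4_0.gguf"            # assumed filename inside the repo

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("GGUF checkpoints can now be loaded", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```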
Most Vision Models and VLMs in Transformers can now benefit from fast image processors. By utilizing torch/torchvision functional transforms, these processors offer a substantial speedup when processing images compared to PIL/numpy functions, and support processing on both CPU and CUDA.
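A minimal sketch of opting into a fast image processor follows; it assumes a checkpoint that ships a fast processor class, that torchvision is installed, and that the device argument is supported for GPU processing.

```python
# A minimal sketch: opting into a fast (torchvision-based) image processor.
# `use_fast=True` selects the *Fast processor class when one exists for the checkpoint.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Fast processors can run resizing/normalization on an accelerator via the
# `device` argument (assumed supported here).
device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = processor(images=image, return_tensors="pt", device=device)
print(inputs["pixel_values"].shape, inputs["pixel_values"].device)
```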
The new @auto_docstring decorator makes it easier to add proper documentation when contributing a model without bloating the modeling code.
generate
We now support custom generate methods that can be loaded and used through model.generate. These custom generate methods can be stored on the Hub, enabling quick distribution of experiments regarding new caches, decoding methods, heuristics, ...
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# `generate` with `custom_generate` -> `generate` uses custom code
# note: calling the custom method prints "✨ using a custom generation method ✨"
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")

inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_example",
    trust_remote_code=True,
)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```
You can find the docs here, and all custom generation methods by searching for the custom_generate tag.
The transformers-cli command is updated to be simpler and cleaner, specifically for its chat variant.
The following is now possible and recommended:
transformers chat Qwen/Qwen2.5-3B-Instruct
Additionally, almost any generate flag, present and future, can now be passed as a positional argument, as opposed to being limited to a set of hardcoded flags. For example:
transformers chat Qwen/Qwen2.5-0.5B-Instruct do_sample=False max_new_tokens=10
- [chat] generate parameterization powered by GenerationConfig and UX-related changes by @gante in #38047
The agents folder is finally removed from transformers in favour of using smolagents.
We are moving away from torch 2.0, as it was released more than two years ago.
General bugfixes and improvements
- init empty weights without accelerate by @Cyrilvallez in #37337
- _init_weights by @Cyrilvallez in #37341
- GenerationMixin inheritance by default in PreTrainedModel by @gante in #37173
- _pytree._register_pytree_node and torch.cpu.amp.autocast by @bzhong-solink in #37372
- kernels to 0.4.3 by @ArthurZucker in #37419
- rms_norm_eps for the L2Norm for Llama4 by @ArthurZucker in #37418
- tests/models/ by @ydshieh in #37415
- fsspec dependency which isn't directly used by transformers by @cyyever in #37318
- _init_weights() issues - make it work for composite models by @Cyrilvallez in #37070
- num_logits_to_keep by @Cyrilvallez in #37149
- from_pretrained by @Cyrilvallez in #37216
- attn_temperature_tuning by @gmlwns2000 in #37501
- test_offloaded_cache_implementation on XPU by @yao-matrix in #37514
- (as_tensor) by @ydshieh in #37551
- test_can_load_with_global_device_set using a subprocess by @ydshieh in #37553
- xxx_token_id for multimodal tokens by @zucchini-nlp in #37573
- test_past_key_values_format by @gante in #37614
- /scripts 🧹 🧹 by @gante in #37676
- en docs in push CI by @gante in #37677
- siglip.md to Korean by @devxaitist in #37145
- Qwen2_5OmniConfig.get_text_config by @shahruk10 in #37690
- /model_cards 🧹 🧹 by @gante in #37685
- sacrebleu (and document why) by @gante in #37700
- [qwen2_5_omni] fix flaky tests by @gante in #37721
- test_nemotron_8b_generation_sdpa by @faaany in #37665
- embeds_to_talker device in Qwen2.5-Omni by @BakerBunker in #37739
- AriaForConditionalGenerationIntegrationTest on T4 by @ydshieh in #37746
- MllamaForConditionalGenerationIntegrationTest by @ydshieh in #37750
- HybridCache init when device is passed by @gante in #37718
- GPT2Model StaticCache support by @poedator in #35761
- torch version by @gante in #37760
- roberta.md to Korean by @garongkim in #37069
- keypoint_detection.md to Korean by @rlaalsrl0922 in #36649
- hub.py by @srai9 in #37796
- test_generate_continue_from_past_key_values by @gante in #37724
- electra.md to Korean by @Kim-Ju-won in #36763
- torch.compile test by @gante in #37894
- AOPerModuleConfig and include_embedding by @jerryzh168 in #37802
- load_state_dict by @woct0rdho in #37902
- gpu_selection.md to Korean by @nsbg in #36757
- vocab_size access for multimodal models by @kurzdev in #37937
- max_memory argument when factoring in unused reserved memory by @gante in #37982
- pad image transform for batched inputs by @sebasv in #37544
- Optional typing by @qubvel in #38018
- test_push_to_hub_with_saves_each_epoch for now by @ydshieh in #38022
- torchscript.md by @Madghostek in #38004
- test_speculative_decoding_non_distil device-agnostic by @faaany in #38010
- [AutoDocstring] Based on inspect parsing of the signature by @ArthurZucker and @yonigozlan in #33771
- ready for review by @ydshieh in #37885
- Trigger CircleCI via GitHub Actions when `ready for review` by @ydshieh in #38038 (Trigger CircleCI via GitHub Actions when "ready for review" by @ydshieh in #37885)
- kernels from docker images by @ydshieh in #38083
- require_read_token by @ydshieh in #38093
- librispeech_asr dataset by @faaany in #38073
- lr_scheduler_kwargs options to create LR Scheduler when LayerWiseDummyOptimizer is used by @BlackNoodle in #34559
- past_key_values type hint in model output types by @ChengLyu in #37953
- check_bad_commit.py gives wrong results by @ydshieh in #38107
- manueldeprada to run_slow whitelist by @manueldeprada in #38126
- include_embedding flag by @jerryzh168 in #37935
- SinusoidsPositionEmbedding precision by @BakerBunker in #38151
- Trigger CircleCI by ready for review by @ydshieh in #38171
- convert to draft workflow by @ydshieh in #38177
- fetch_tests CircleCI job by @ydshieh in #38176
- test_sdpa_equivalence (redundant) by @gante in #37911
The following contributors have made significant changes to the library over the last release:
- fsspec dependency which isn't directly used by transformers (#37318)
- test_offloaded_cache_implementation on XPU (#37514)