
Mamba 2

Overview

The Mamba2 model was proposed in Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality by Tri Dao and Albert Gu. It is a State Space Model similar to Mamba 1, offering better performance with a simplified architecture.

The abstract from the paper is the following:

While Transformers have been the main architecture behind deep learning’s success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba’s selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.

Tips:

- This version should support all implementations of Mamba 2, and in particular Mamba-2 Codestral from Mistral AI. Mamba-2 Codestral was released with the number of groups set to 8, which can intuitively be thought of as similar to the number of kv heads in an attention-based model.
- The model has two different forward implementations, torch_forward and cuda_kernels_forward. The latter uses the original CUDA kernels if they are found in your environment, but it is slower on the prefill, i.e. it requires a “warmup run” due to high CPU overhead, see here and also here. Without compilation, the torch_forward implementation is faster by a factor of 3 to 4.
- There are no positional embeddings in this model, but there is an attention_mask and specific logic to mask out hidden states in two places in the case of batched generation, see here as well. Because of this, in addition to the reimplementation of the Mamba2 kernels, batched generation and cached generation are expected to show slight discrepancies.
- The results given by the CUDA kernels and the torch forward are also expected to differ slightly. The SSM algorithm relies heavily on tensor contractions, which have matmul equivalents, but the order of operations is slightly different, making the difference greater at lower precisions.
- Hidden states corresponding to padding tokens are zeroed out in two places, and this has mostly been tested with left padding. Right padding will propagate noise down the line and is not guaranteed to yield satisfactory results. Set tokenizer.padding_side = "left" to make sure you are using the correct padding side (a batched example is sketched in the Usage section below).

This model was contributed by Molbap, with tremendous help from Anton Vlasjuk. The original code can be found here.

Usage

A simple generation example:
from transformers import Mamba2Config, Mamba2ForCausalLM, AutoTokenizer
import torch
model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
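
As noted in the tips above, batched generation requires left padding. A minimal sketch of what that could look like with the same checkpoint and revision (the prompts and generation length are illustrative, not part of the original example):

from transformers import Mamba2ForCausalLM, AutoTokenizer

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token  # no pad token is defined by default
tokenizer.padding_side = "left"            # right padding is not supported for generation

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')

prompts = ["Hey how are you doing?", "Write a haiku about state space models."]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# the attention_mask lets the model zero out hidden states at the padded positions
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True))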

Here’s a draft script for finetuning:

from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, Mamba2ForCausalLM, TrainingArguments

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
dataset = load_dataset("Abirate/english_quotes", split="train")

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3
)
lora_config = LoraConfig(
        r=8,
        target_modules=["embeddings", "in_proj", "out_proj"],
        task_type="CAUSAL_LM",
        bias="none"
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
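
As a hedged follow-up, the resulting LoRA adapter can be saved with the standard Trainer and tokenizer APIs; the output path below is illustrative:

trainer.save_model("./mamba2-codestral-lora")         # saves the LoRA adapter weights and config
tokenizer.save_pretrained("./mamba2-codestral-lora")  # keep the tokenizer alongside the adapter

# to reuse it later, attach the adapter to a freshly loaded base model
# from peft import PeftModel
# base = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
# model = PeftModel.from_pretrained(base, "./mamba2-codestral-lora")
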
Mamba2Config

class transformers.Mamba2Config

( num_heads = 128 head_dim = 64 vocab_size = 32768 hidden_size = 4096 state_size = 128 num_hidden_layers = 64 layer_norm_epsilon = 1e-05 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 expand = 2 conv_kernel = 4 n_groups = 8 use_bias = False use_conv_bias = True hidden_act = 'silu' initializer_range = 0.1 residual_in_fp32 = True time_step_rank = 'auto' time_step_min = 0.001 time_step_max = 0.1 time_step_floor = 0.0001 time_step_limit = (0.0, inf) rescale_prenorm_residual = False use_cache = True rms_norm = True chunk_size = 256 tie_word_embeddings = False **kwargs )

This is the configuration class to store the configuration of a Mamba2Model. It is used to instantiate a MAMBA2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MAMBA2 state-spaces/mamba2-2.8b architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import Mamba2Config, Mamba2Model

>>> # Initializing a Mamba2 configuration
>>> configuration = Mamba2Config()

>>> # Initializing a model (with random weights) from the configuration
>>> model = Mamba2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
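
For quick local experiments, a deliberately tiny configuration can be instantiated as well. The sizes below are an illustrative sketch, not values from any released checkpoint, chosen to keep the expected relation num_heads * head_dim == expand * hidden_size:

>>> tiny_config = Mamba2Config(
...     vocab_size=1024,
...     hidden_size=256,
...     expand=2,            # intermediate size = expand * hidden_size = 512
...     num_heads=16,
...     head_dim=32,         # 16 * 32 == 512
...     state_size=16,
...     n_groups=1,
...     num_hidden_layers=2,
... )
>>> tiny_model = Mamba2Model(tiny_config)
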
Mamba2Model

class transformers.Mamba2Model

( config )

The bare MAMBA2 Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.LongTensor] = None cache_params: typing.Optional[transformers.models.mamba2.modeling_mamba2.Mamba2Cache] = None use_cache: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None **kwargs ) transformers.models.mamba2.modeling_mamba2.Mamba2Output or tuple(torch.FloatTensor)

Returns

transformers.models.mamba2.modeling_mamba2.Mamba2Output or tuple(torch.FloatTensor)

A transformers.models.mamba2.modeling_mamba2.Mamba2Output or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Mamba2Config) and inputs.

The Mamba2Model forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, Mamba2Model
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/mamba-codestral-7B-v0.1")
>>> model = Mamba2Model.from_pretrained("mistralai/mamba-codestral-7B-v0.1")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
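
When use_cache=True, the returned cache_params can be fed back for a single-step update. generate() handles this bookkeeping automatically; the manual cache_position handling below is a hedged sketch based on the forward signature above, continuing the session from the example:

>>> outputs = model(**inputs, use_cache=True)
>>> cache = outputs.cache_params

>>> # feed one new token together with the cache and its position
>>> next_token = inputs["input_ids"][:, -1:]
>>> step = model(
...     input_ids=next_token,
...     cache_params=cache,
...     use_cache=True,
...     cache_position=torch.tensor([inputs["input_ids"].shape[1]]),
... )
>>> step.last_hidden_state.shape
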
Mamba2ForCausalLM

class transformers.Mamba2ForCausalLM

( config )

The MAMBA2 Model transformer with a language modeling head on top (linear layer with weights not tied to the input embeddings).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None cache_params: typing.Optional[transformers.models.mamba2.modeling_mamba2.Mamba2Cache] = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None use_cache: typing.Optional[bool] = None cache_position: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None **kwargs ) transformers.models.mamba2.modeling_mamba2.Mamba2CausalLMOutput or tuple(torch.FloatTensor)

Returns

transformers.models.mamba2.modeling_mamba2.Mamba2CausalLMOutput or tuple(torch.FloatTensor)

A transformers.models.mamba2.modeling_mamba2.Mamba2CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Mamba2Config) and inputs.

The Mamba2ForCausalLM forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> import torch
>>> from transformers import AutoTokenizer, Mamba2ForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/mamba-codestral-7B-v0.1")
>>> model = Mamba2ForCausalLM.from_pretrained("mistralai/mamba-codestral-7B-v0.1")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
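
Since the labels are shifted internally and the returned loss is a mean cross-entropy over them, perplexity can be read off directly (a small illustrative follow-up):

>>> perplexity = torch.exp(loss)
>>> round(perplexity.item(), 2)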