Each framework has a generate method for text generation implemented in its respective GenerationMixin class:
PyTorch generate() is implemented in GenerationMixin.
TensorFlow generate() is implemented in TFGenerationMixin.
Flax/JAX generate() is implemented in FlaxGenerationMixin.
Regardless of your framework of choice, you can parameterize the generate method with a GenerationConfig class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.
To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the text generation strategies guide. The guide also explains how to use related features, like token streaming.
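For example, a minimal PyTorch sketch of parameterizing generate with a GenerationConfig instance (the checkpoint name is illustrative; any causal LM works):

from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# build a reusable configuration instead of passing flags ad hoc
generation_config = GenerationConfig(max_new_tokens=30, do_sample=True, top_p=0.9)

inputs = tokenizer("The generate method", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))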
GenerationConfig class transformers.GenerationConfig < source >( **kwargs )
Parameters that control the length of the output
max_length (int, optional, defaults to 20) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. Its effect is overridden by max_new_tokens, if also set.
max_new_tokens (int, optional) — The maximum number of tokens to generate, ignoring the number of tokens in the prompt.
min_length (int, optional, defaults to 0) — The minimum length of the sequence to be generated. Corresponds to the length of the input prompt + min_new_tokens. Its effect is overridden by min_new_tokens, if also set.
min_new_tokens (int, optional) — The minimum number of tokens to generate, ignoring the number of tokens in the prompt.
early_stopping (bool or str, optional, defaults to False) — Controls the stopping condition for beam-based methods, like beam search. It accepts the following values: True, where the generation stops as soon as there are num_beams complete candidates; False, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; "never", where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm).
max_time (float, optional) — The maximum amount of time you allow the computation to run for, in seconds. Generation will still finish the current pass after the allocated time has passed.
stop_strings (str or list[str], optional) — A string or a list of strings that should terminate generation if the model outputs them.
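As a sketch of how these length parameters are used (continuing the hypothetical setup above; the stop string is arbitrary):

# max_new_tokens counts only newly generated tokens (it takes precedence over max_length)
outputs = model.generate(**inputs, max_new_tokens=40)

# stop early when the model emits a given string; stop_strings requires passing the tokenizer
outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["\n\n"], tokenizer=tokenizer)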
Parameters that control the generation strategy used
do_sample (bool, optional, defaults to False) — Whether or not to use sampling; use greedy decoding otherwise.
num_beams (int, optional, defaults to 1) — Number of beams for beam search. 1 means no beam search.
num_beam_groups (int, optional, defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
penalty_alpha (float, optional) — The value balances the model confidence and the degeneration penalty in contrastive search decoding.
dola_layers (str or list[int], optional) — The layers to use for DoLa decoding. If None, DoLa decoding is not used. If a string, it must be "low" or "high", which means using the lower or higher part of the model layers, respectively. "low" means the first half of the layers up to the first 20 layers, and "high" means the last half of the layers up to the last 20 layers. If a list of integers, it must contain the indices of the layers to use as candidate premature layers in DoLa. The 0-th layer is the word embedding layer of the model. Set to 'low' to improve long-answer reasoning tasks, 'high' to improve short-answer tasks. Check the documentation or the paper for more details.
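A sketch of how these flags select a decoding strategy (same hypothetical model and inputs as above; DoLa additionally requires a supported decoder-only model, so it is left commented):

# greedy decoding: num_beams=1, do_sample=False (the defaults)
out_greedy = model.generate(**inputs, max_new_tokens=20)

# beam search: num_beams > 1
out_beam = model.generate(**inputs, max_new_tokens=20, num_beams=4)

# contrastive search: penalty_alpha > 0 and top_k > 1
out_contrastive = model.generate(**inputs, max_new_tokens=20, penalty_alpha=0.6, top_k=4)

# DoLa decoding (if the model supports it): use the higher layers for short answers
# out_dola = model.generate(**inputs, max_new_tokens=20, dola_layers="high")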
Parameters that control the cache
use_cache (bool, optional, defaults to True) — Whether or not the model should use the past key/values attentions (if applicable to the model) to speed up decoding.
cache_implementation (str, optional, defaults to None) — Name of the cache class that will be instantiated in generate, for faster decoding. Possible values are:
"dynamic": DynamicCache
"static": StaticCache
"offloaded_static": OffloadedStaticCache
"sliding_window": SlidingWindowCache
"hybrid": HybridCache
"mamba": MambaCache
"quantized": QuantizedCache
If none is specified, we will use the default cache for the model (which is often DynamicCache). See our cache documentation for further information.
cache_config (CacheConfig or dict, optional, defaults to None) — Arguments used in the key-value cache class can be passed in cache_config. Can be passed as a dict and it will be converted to its respective CacheConfig internally. Otherwise it can be passed as a CacheConfig class matching the indicated cache_implementation.
return_legacy_cache (bool, optional, defaults to True) — Whether to return the legacy or new format of the cache when DynamicCache is used by default.
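A sketch of selecting a cache implementation at call time (continuing the hypothetical setup above; availability of each cache class depends on the model):

# use a pre-allocated static cache instead of the default dynamic cache
outputs = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")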
Parameters for manipulation of the model output logits
temperature (float, optional, defaults to 1.0) — The value used to modulate the next token probabilities. This value is set in a model's generation_config.json file. If it isn't set, the default value is 1.0.
top_k (int, optional, defaults to 50) — The number of highest probability vocabulary tokens to keep for top-k filtering. This value is set in a model's generation_config.json file. If it isn't set, the default value is 50.
top_p (float, optional, defaults to 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. This value is set in a model's generation_config.json file. If it isn't set, the default value is 1.0.
min_p (float, optional) — Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0 and 1. Typical values are in the 0.01-0.2 range, comparably selective as setting top_p in the 0.99-0.8 range (use the opposite of normal top_p values).
typical_p (float, optional, defaults to 1.0) — Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to a float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details.
epsilon_cutoff (float, optional, defaults to 0.0) — If set to a float strictly between 0 and 1, only tokens with a conditional probability greater than epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details.
eta_cutoff (float, optional, defaults to 0.0) — Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to a float strictly between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details.
diversity_penalty (float, optional, defaults to 0.0) — This value is subtracted from a beam's score if it generates a token that was already generated by any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled.
repetition_penalty (float, optional, defaults to 1.0) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.
encoder_repetition_penalty (float, optional, defaults to 1.0) — The parameter for encoder repetition penalty. An exponential penalty on sequences that are not in the original input. 1.0 means no penalty.
length_penalty (float, optional, defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.
no_repeat_ngram_size (int, optional, defaults to 0) — If set to int > 0, all ngrams of that size can only occur once.
bad_words_ids (list[list[int]], optional) — List of lists of token ids that are not allowed to be generated. Check NoBadWordsLogitsProcessor for further documentation and examples.
force_words_ids (list[list[int]] or list[list[list[int]]], optional) — List of token ids that must be generated. If given a list[list[int]], this is treated as a simple list of words that must be included, the opposite of bad_words_ids. If given list[list[list[int]]], this triggers a disjunctive constraint, where one can allow different forms of each word.
renormalize_logits (bool, optional, defaults to False) — Whether to renormalize the logits after applying all the logits processors (including the custom ones). It's highly recommended to set this flag to True as the search algorithms assume the score logits are normalized, but some logits processors break the normalization.
constraints (list[Constraint], optional) — Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by Constraint objects, in the most sensible way possible.
forced_bos_token_id (int, optional, defaults to model.config.forced_bos_token_id) — The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.
forced_eos_token_id (int or list[int], optional, defaults to model.config.forced_eos_token_id) — The id of the token to force as the last generated token when max_length is reached. Optionally, use a list to set multiple end-of-sequence tokens.
remove_invalid_values (bool, optional, defaults to model.config.remove_invalid_values) — Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
exponential_decay_length_penalty (tuple(int, float), optional) — This tuple adds an exponentially increasing length penalty after a certain number of tokens have been generated. The tuple shall consist of (start_index, decay_factor), where start_index indicates where the penalty starts and decay_factor represents the factor of exponential decay.
suppress_tokens (list[int], optional) — A list of tokens that will be suppressed at generation. The SuppressTokens logits processor will set their log probs to -inf so that they are not sampled.
begin_suppress_tokens (list[int], optional) — A list of tokens that will be suppressed at the beginning of generation. The SuppressBeginTokens logits processor will set their log probs to -inf so that they are not sampled.
sequence_bias (dict[tuple[int], float], optional) — Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. Check SequenceBiasLogitsProcessor for further documentation and examples.
token_healing (bool, optional, defaults to False) — Heal tail tokens of prompts by replacing them with their appropriate extensions. This enhances the quality of completions for prompts affected by greedy tokenization bias.
guidance_scale (float, optional) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality.
low_memory (bool, optional) — Switch to sequential beam search and sequential top-k for contrastive search to reduce peak memory. Used with beam search and contrastive search.
watermarking_config (BaseWatermarkingConfig or dict, optional) — Arguments used to watermark the model outputs by adding a small bias to a randomly selected set of "green" tokens. See the docs of SynthIDTextWatermarkingConfig and WatermarkingConfig for more details. If passed as a dict, it will be converted to a WatermarkingConfig internally.
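A sketch combining the most common logits-manipulation knobs with sampling (continuing the earlier hypothetical setup; the values are illustrative, not recommendations):

outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=40,
    temperature=0.7,          # flatten/sharpen the next-token distribution
    top_k=50,                 # keep only the 50 most likely tokens
    top_p=0.9,                # nucleus sampling
    repetition_penalty=1.2,   # discourage repeated tokens
    no_repeat_ngram_size=3,   # forbid repeating any 3-gram
)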
Parameters that define the output variables of generate
num_return_sequences (int, optional, defaults to 1) — The number of independently computed returned sequences for each element in the batch.
output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.
output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.
output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details.
output_logits (bool, optional) — Whether or not to return the unprocessed prediction logit scores. See logits under returned tensors for more details.
return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput, as opposed to returning exclusively the generated sequence. This flag must be set to True to return the generation cache (when use_cache is True) or optional outputs (see flags starting with output_).
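A sketch of requesting richer outputs; with return_dict_in_generate=True, generate returns a ModelOutput whose extra fields depend on the output_* flags (same hypothetical setup as above):

outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    return_dict_in_generate=True,
    output_scores=True,
)
print(outputs.sequences.shape)  # generated token ids, prompt included for decoder-only models
print(len(outputs.scores))      # one score tensor per generated token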
Special tokens that can be used at generation time
pad_token_id (int, optional) — The id of the padding token.
bos_token_id (int, optional) — The id of the beginning-of-sequence token.
eos_token_id (Union[int, list[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
Generation parameters exclusive to encoder-decoder models
encoder_no_repeat_ngram_size (int, optional, defaults to 0) — If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.
decoder_start_token_id (int or list[int], optional) — If an encoder-decoder model starts decoding with a different token than bos, the id of that token or a list of length batch_size. Indicating a list enables different start ids for each element in the batch (e.g. multilingual models with different target languages in one batch).
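As an encoder-decoder sketch, forcing the first generated token is how multilingual models such as mBART select the target language (the checkpoint and language code below are illustrative):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX")
mbart = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

batch = tok("Hello world", return_tensors="pt")
generated = mbart.generate(
    **batch,
    forced_bos_token_id=tok.lang_code_to_id["fr_XX"],  # force French as the first decoded token
    max_new_tokens=20,
)
print(tok.batch_decode(generated, skip_special_tokens=True))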
Generation parameters exclusive to assistant generation
is_assistant (bool, optional, defaults to False) — Whether the model is an assistant (draft) model.
num_assistant_tokens (int, optional, defaults to 20) — Defines the number of speculative tokens that shall be generated by the assistant model before being checked by the target model at each iteration. Higher values for num_assistant_tokens make the generation more speculative: if the assistant model is performant, larger speed-ups can be reached; if the assistant model requires lots of corrections, lower speed-ups are reached.
num_assistant_tokens_schedule (str, optional, defaults to "constant") — Defines the schedule at which the maximum number of assistant tokens shall be changed during inference.
"heuristic": when all speculative tokens are correct, increase num_assistant_tokens by 2, else reduce by 1. The num_assistant_tokens value is persistent over multiple generation calls with the same assistant model.
"heuristic_transient": same as "heuristic" but num_assistant_tokens is reset to its initial value after each generation call.
"constant": num_assistant_tokens stays unchanged during generation.
assistant_confidence_threshold (float, optional, defaults to 0.4) — The confidence threshold for the assistant model. If the assistant model's confidence in its prediction for the current token is lower than this threshold, the assistant model stops the current token generation iteration, even if the number of speculative tokens (defined by num_assistant_tokens) is not yet reached. The assistant's confidence threshold is adjusted throughout the speculative iterations to reduce the number of unnecessary draft and target forward passes, biased towards avoiding false negatives. The assistant_confidence_threshold value is persistent over multiple generation calls with the same assistant model. It is an unsupervised version of the dynamic speculation lookahead from Dynamic Speculation Lookahead Accelerates Speculative Decoding of Large Language Models (https://huggingface.co/papers/2405.04304).
prompt_lookup_num_tokens (int, optional) — The number of tokens to be output as candidate tokens.
max_matching_ngram_size (int, optional) — The maximum ngram size to be considered for matching in the prompt. Defaults to 2 if not provided.
assistant_early_exit (int, optional) — If set to a positive integer, early exit of the model will be used as an assistant. Can only be used with models that support early exit (i.e. models where logits from intermediate layers can be interpreted by the LM head).
assistant_lookbehind (int, optional, defaults to 10) — If set to a positive integer, the re-encoding process will additionally consider the last assistant_lookbehind assistant tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this blog for more details.
target_lookbehind (int, optional, defaults to 10) — If set to a positive integer, the re-encoding process will additionally consider the last target_lookbehind target tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this blog for more details.
Parameters related to performances and compilation
compile_config (CompileConfig, optional) — If using a compilable cache, this controls how generate will compile the forward pass for faster inference.
disable_compile (bool, optional) — Whether to disable the automatic compilation of the forward pass. Automatic compilation happens when specific criteria are met, including using a compilable cache. Please open an issue if you find the need to use this flag.
Class that holds a configuration for a generation task. A generate call supports the following generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models:
greedy decoding if num_beams=1 and do_sample=False
contrastive search if penalty_alpha>0 and top_k>1
multinomial sampling if num_beams=1 and do_sample=True
beam-search decoding if num_beams>1 and do_sample=False
beam-search multinomial sampling if num_beams>1 and do_sample=True
diverse beam-search decoding if num_beams>1 and num_beam_groups>1
constrained beam-search decoding if constraints!=None or force_words_ids!=None
assisted decoding if assistant_model or prompt_lookup_num_tokens is passed to .generate()
dola decoding if dola_layers is passed to .generate()
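A sketch of assisted (speculative) decoding with a smaller draft model that shares the target model's tokenizer (both checkpoints are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
target = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-large")
draft = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")  # assistant (draft) model

inputs = tokenizer("Speculative decoding works by", return_tensors="pt")
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=30)

# prompt lookup decoding is a draft-model-free variant of assisted decoding
outputs = target.generate(**inputs, prompt_lookup_num_tokens=3, max_new_tokens=30)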
To learn more about decoding strategies refer to the text generation strategies guide.
A large number of these flags control the logits or the stopping criteria of the generation. Make sure you check the generate-related classes for a full description of the possible manipulations, as well as examples of their usage.
from_pretrained < source >( pretrained_model_name: typing.Union[str, os.PathLike] config_file_name: typing.Union[str, os.PathLike, NoneType] = None cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs ) → GenerationConfig
Parameters
pretrained_model_name (str or os.PathLike) — This can be either:
- a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
- a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be loaded from pretrained_model_name.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the configuration files and override the cached versions if they exist.
proxies (dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.
subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.
kwargs (dict[str, Any], optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.
Returns: GenerationConfig — The configuration object instantiated from this pretrained model.
Instantiate a GenerationConfig from a generation configuration file.
Examples:
>>> from transformers import GenerationConfig

>>> # Download the generation configuration from huggingface.co and cache it
>>> generation_config = GenerationConfig.from_pretrained("openai-community/gpt2")

>>> # Save it locally and reload it from the directory
>>> generation_config.save_pretrained("./test/saved_model/")
>>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/")

>>> # You can also specify the configuration file name
>>> generation_config.save_pretrained("./test/saved_model/", config_file_name="my_configuration.json")
>>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/", "my_configuration.json")

>>> # Generation arguments can be passed to from_pretrained(); unused arguments are returned
>>> # when return_unused_kwargs=True
>>> generation_config, unused_kwargs = GenerationConfig.from_pretrained(
...     "openai-community/gpt2", top_k=1, foo=False, do_sample=True, return_unused_kwargs=True
... )
>>> generation_config.top_k
1

>>> unused_kwargs
{'foo': False}

from_model_config < source >
( model_config: PretrainedConfig ) → GenerationConfig
Parameters
model_config (PretrainedConfig) — The model config that will be used to instantiate the generation config.
Returns: GenerationConfig — The configuration object instantiated from those parameters.
Instantiates a GenerationConfig from a PretrainedConfig. This function is useful to convert legacy PretrainedConfig objects, which may contain generation parameters, into a stand-alone GenerationConfig.
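A minimal sketch of converting a legacy model config into a stand-alone GenerationConfig (checkpoint name is illustrative):

from transformers import AutoConfig, GenerationConfig

model_config = AutoConfig.from_pretrained("openai-community/gpt2")
generation_config = GenerationConfig.from_model_config(model_config)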
save_pretrained < source >( save_directory: typing.Union[str, os.PathLike] config_file_name: typing.Union[str, os.PathLike, NoneType] = None push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the configuration JSON file will be saved (will be created if it does not exist).
config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be saved in save_directory.
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method.
Save a generation configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
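For example (the directory name is arbitrary):

from transformers import GenerationConfig

generation_config = GenerationConfig(max_new_tokens=50, do_sample=True, top_p=0.95)
generation_config.save_pretrained("./my_generation_config")  # writes generation_config.json
reloaded = GenerationConfig.from_pretrained("./my_generation_config")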
update < source >( **kwargs ) → dict[str, Any]
Parameters
Returns: dict[str, Any] — Dictionary containing all the key-value pairs that were not used to update the instance.
Updates attributes of this class instance with attributes from kwargs
if they match existing attributes, returning all the unused kwargs.
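A sketch of update(); keys that are not configuration attributes are returned rather than applied:

from transformers import GenerationConfig

config = GenerationConfig()
unused = config.update(temperature=0.7, foo="bar")
print(config.temperature)  # 0.7
print(unused)              # {'foo': 'bar'}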
validate < source >( strict = False )
Parameters
Validates the values of the attributes of the GenerationConfig instance. Raises exceptions in the presence of parameterization that can be detected as incorrect from the configuration instance alone.
Note that some parameters not validated here are best validated at generate runtime, as they may depend on other inputs and/or the model, such as parameters related to the generation length.
get_generation_mode < source >( assistant_model: typing.Optional[ForwardRef('PreTrainedModel')] = None ) → GenerationMode
Parameters
assistant_model (PreTrainedModel, optional) — The assistant model to be used for assisted generation. If set, the generation mode will be assisted generation.
Returns: GenerationMode — The generation mode triggered by the instance.
Returns the generation mode triggered by the GenerationConfig instance.
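A sketch of inspecting which mode a configuration would trigger (the exact enum member names live in GenerationMode):

from transformers import GenerationConfig

config = GenerationConfig(num_beams=4, do_sample=False)
print(config.get_generation_mode())  # e.g. GenerationMode.BEAM_SEARCH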
GenerationMixin
class transformers.GenerationMixin < source >( )
A class containing all functions for auto-regressive text generation, to be used as a mixin in model classes. Inheriting from this class causes the model to have special generation-related behavior, such as loading a GenerationConfig
at initialization time or ensuring generate
-related tests are run in transformers
CI.
A model class should inherit from GenerationMixin
to enable calling methods like generate
, or when it has defined a custom generate
method that relies on GenerationMixin
, directly or indirectly, which approximately shares the same interface to public methods like generate
. Three examples:
LlamaForCausalLM
should inherit from GenerationMixin
to enable calling generate
and other public methods in the mixin;
BlipForQuestionAnswering
has a custom generate
method that approximately shares the same interface as GenerationMixin.generate
(it has a few extra arguments, and the same output). That function also calls GenerationMixin.generate
indirectly, through an inner model. As such, BlipForQuestionAnswering
should inherit from GenerationMixin
to benefit from all generation-related automation in our codebase;
BarkModel
has a custom generate
method and one of its inner models calls GenerationMixin.generate
. However, its generate
does not share the same interface as GenerationMixin.generate
. In this case, BarkModel
should NOT inherit from GenerationMixin
, as it breaks the generate
interface.
The class exposes generate(), which can be used for:
greedy decoding if num_beams=1 and do_sample=False
contrastive search if penalty_alpha>0 and top_k>1
multinomial sampling if num_beams=1 and do_sample=True
beam-search decoding if num_beams>1 and do_sample=False
beam-search multinomial sampling if num_beams>1 and do_sample=True
diverse beam-search decoding if num_beams>1 and num_beam_groups>1
constrained beam-search decoding if constraints!=None or force_words_ids!=None
assisted decoding if assistant_model or prompt_lookup_num_tokens is passed to .generate()
To learn more about decoding strategies refer to the text generation strategies guide.
generate < source >( inputs: typing.Optional[torch.Tensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None prefix_allowed_tokens_fn: typing.Optional[typing.Callable[[int, torch.Tensor], list[int]]] = None synced_gpus: typing.Optional[bool] = None assistant_model: typing.Optional[ForwardRef('PreTrainedModel')] = None streamer: typing.Optional[ForwardRef('BaseStreamer')] = None negative_prompt_ids: typing.Optional[torch.Tensor] = None negative_prompt_attention_mask: typing.Optional[torch.Tensor] = None use_model_defaults: typing.Optional[bool] = None custom_generate: typing.Optional[str] = None **kwargs ) → ModelOutput or torch.LongTensor
Parameters
inputs (torch.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
generation_config (GenerationConfig, optional) — The generation configuration to be used as the base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig's default values, whose documentation should be checked to parameterize generation.
logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
stopping_criteria (StoppingCriteriaList, optional) — Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criterion is passed that is already created with the arguments or a generation config, an error is thrown. If your stopping criteria depend on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate. This feature is intended for advanced users.
prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], list[int]], optional) — If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids. This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval.
synced_gpus (bool, optional) — Whether to continue running the while loop until max_length. Unless overridden, this flag will be set to True if using FullyShardedDataParallel or DeepSpeed ZeRO Stage 3 with multiple GPUs to avoid deadlocking if one GPU finishes generating before other GPUs. Otherwise, it defaults to False.
assistant_model (PreTrainedModel, optional) — An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you're calling generate from. As such, the assistant model should be much smaller.
streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.
negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions.
negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Attention mask for negative_prompt_ids.
use_model_defaults (bool, optional) — When it is True, unset parameters in generation_config will be set to the model-specific default generation configuration (model.generation_config), as opposed to the global defaults (GenerationConfig()). If unset, models saved starting from v4.50 will consider this flag to be True.
custom_generate (str, optional) — A string containing the name of a huggingface.co repository. If provided, the custom generate function defined in that repository's custom_generate/generate.py file will be executed instead of the standard generate method. Note that the generation logic is entirely defined in that repository, and the return type may be different from the standard generate method.
kwargs (dict[str, Any], optional) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.
Returns
ModelOutput or torch.LongTensor
A ModelOutput (if return_dict_in_generate=True
or when config.return_dict_in_generate=True
) or a torch.LongTensor
.
If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False
), the possible ModelOutput types are:
If the model is an encoder-decoder model (model.config.is_encoder_decoder=True
), the possible ModelOutput types are:
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config
which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config
by passing the corresponding parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True)
.
For an overview of generation strategies and code examples, check out the following guide.
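As a sketch of two of the less obvious arguments, a streamer prints tokens as they are produced and prefix_allowed_tokens_fn constrains each decoding step (the setup mirrors the hypothetical GPT-2 example above):

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("The capital of France is", return_tensors="pt")

# stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)
model.generate(**inputs, max_new_tokens=20, streamer=streamer)

# constrain every step to a fixed set of allowed token ids
allowed = tokenizer(" Paris", add_special_tokens=False).input_ids
model.generate(
    **inputs,
    max_new_tokens=3,
    prefix_allowed_tokens_fn=lambda batch_id, input_ids: allowed,
)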
compute_transition_scores < source >( sequences: Tensor scores: tuple beam_indices: typing.Optional[torch.Tensor] = None normalize_logits: bool = False ) → torch.Tensor
Parameters
sequences (torch.LongTensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consist of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
beam_indices (torch.LongTensor, optional) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length). Only required if num_beams>1 at generate-time.
normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized).
Returns: a torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits).
Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.
Examples:
>>> from transformers import GPT2Tokenizer, AutoModelForCausalLM
>>> import numpy as np

>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="pt")

>>> # Example 1: print the scores for each token generated with greedy search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the prompt length for decoder-only models and 1 for encoder-decoder models
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
...     # | token id | token string | log probability | probability
...     print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
|   262 |  the     | -1.414 | 24.33%
|  1110 |  day     | -2.609 | 7.36%
|   618 |  when    | -2.010 | 13.40%
|   356 |  we      | -1.859 | 15.58%
|   460 |  can     | -2.508 | 8.14%

>>> # Example 2: reconstruct the sequence scores from beam search
>>> outputs = model.generate(
...     **inputs,
...     max_new_tokens=5,
...     num_beams=4,
...     num_return_sequences=4,
...     return_dict_in_generate=True,
...     output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # Summing the generated tokens' scores and applying the length penalty recovers the sequence scores.
>>> # Note: the output length does not include the input length.
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True

TFGenerationMixin
class transformers.TFGenerationMixin < source >
( )
A class containing all of the functions supporting generation, to be used as a mixin in TFPreTrainedModel.
The class exposes generate(), which can be used for:
greedy_search() if num_beams=1 and do_sample=False
contrastive_search() if penalty_alpha>0 and top_k>1
sample() if num_beams=1 and do_sample=True
beam_search() if num_beams>1
You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide.
generate < source >( inputs: typing.Optional[tensorflow.python.framework.tensor.Tensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor: typing.Optional[transformers.generation.tf_logits_process.TFLogitsProcessorList] = None seed = None **kwargs ) → ModelOutput or tf.Tensor
Parameters
inputs (tf.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as the base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig's default values, whose documentation should be checked to parameterize generation.
logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
seed (list[int], optional) — Random seed to control sampling, containing two integers, used when do_sample is True. See the seed argument from stateless functions in tf.random.
kwargs (dict[str, Any], optional) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.
Returns
ModelOutput or tf.Tensor
A ModelOutput (if return_dict_in_generate=True
or when config.return_dict_in_generate=True
) or a tf.Tensor
.
If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False
), the possible ModelOutput types are:
If the model is an encoder-decoder model (model.config.is_encoder_decoder=True
), the possible ModelOutput types are:
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config
which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config
by passing the corresponding parameters to generate, e.g. .generate(inputs, num_beams=4, do_sample=True)
.
For an overview of generation strategies and code examples, check out the following guide.
compute_transition_scores < source >( sequences: Tensor scores: tuple beam_indices: typing.Optional[tensorflow.python.framework.tensor.Tensor] = None normalize_logits: bool = False ) → tf.Tensor
Parameters
sequences (tf.Tensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consist of log probabilities of tokens conditioned on log softmax of previously generated tokens. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
beam_indices (tf.Tensor, optional) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length). Only required if num_beams>1 at generate-time.
normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized).
Returns: a tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits).
Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.
Examples:
>>> from transformers import GPT2Tokenizer, TFAutoModelForCausalLM
>>> import numpy as np

>>> tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
>>> model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="tf")

>>> # Example 1: print the scores for each token generated with greedy search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the prompt length for decoder-only models and 1 for encoder-decoder models
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
...     # | token id | token string | log probability | probability
...     print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
|   262 |  the     | -1.414 | 24.33%
|  1110 |  day     | -2.609 | 7.36%
|   618 |  when    | -2.010 | 13.40%
|   356 |  we      | -1.859 | 15.58%
|   460 |  can     | -2.508 | 8.14%

>>> # Example 2: reconstruct the sequence scores from beam search
>>> outputs = model.generate(
...     **inputs,
...     max_new_tokens=5,
...     num_beams=4,
...     num_return_sequences=4,
...     return_dict_in_generate=True,
...     output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
...     outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # Summing the generated tokens' scores and applying the length penalty recovers the sequence scores.
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = np.sum(transition_scores, axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True

FlaxGenerationMixin
class transformers.FlaxGenerationMixin < source >
( )
A class containing all functions for auto-regressive text generation, to be used as a mixin in FlaxPreTrainedModel.
The class exposes generate(), which can be used for:
_greedy_search() if num_beams=1 and do_sample=False
_sample() if num_beams=1 and do_sample=True
_beam_search() if num_beams>1 and do_sample=False
You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide.
generate < source >( input_ids: Array generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None prng_key: typing.Optional[jax.Array] = None trace: bool = True params: typing.Optional[dict[str, jax.Array]] = None logits_processor: typing.Optional[transformers.generation.flax_logits_process.FlaxLogitsProcessorList] = None **kwargs )
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as the base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig's default values, whose documentation should be checked to parameterize generation.
trace (bool, optional, defaults to True) — Whether to trace generation. Setting trace=False should only be used for debugging and will lead to a considerably slower runtime.
params (dict[str, jnp.ndarray], optional) — Optionally the model parameters can be passed. Can be useful for parallelized generation.
logits_processor (FlaxLogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
kwargs (dict[str, Any], optional) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.
Generates sequences of token ids for models with a language modeling head.