Showing content from https://huggingface.co/docs/transformers/v4.51.3/en/main_classes/output below:



Model outputs

All models have outputs that are instances of subclasses of ModelOutput. These are data structures that contain all the information returned by the model and that can also be used as tuples or dictionaries.

Let’s see how this looks in an example:

from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)

The outputs object is a SequenceClassifierOutput. As we can see in the documentation of that class below, this means it has an optional loss, a logits, an optional hidden_states, and an optional attentions attribute. Here we have the loss since we passed along labels, but we don’t have hidden_states or attentions because we didn’t pass output_hidden_states=True or output_attentions=True.

When passing output_hidden_states=True, you may expect outputs.hidden_states[-1] to match outputs.last_hidden_state exactly. However, this is not always the case: some models apply normalization or other subsequent processing to the last hidden state when it is returned.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None. Here for instance outputs.loss is the loss computed by the model, and outputs.attentions is None.
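As a minimal sketch of this behavior, the output can be constructed by hand rather than produced by a model (the field values here are made up, and no checkpoint download is needed):

```python
import torch
from transformers.modeling_outputs import SequenceClassifierOutput

# Build an output by hand: only loss and logits are set, so
# hidden_states and attentions stay None, just as when a model is
# called without output_hidden_states=True / output_attentions=True.
outputs = SequenceClassifierOutput(
    loss=torch.tensor(0.36),
    logits=torch.tensor([[-0.1, 0.2]]),
)

print(outputs.loss)        # the loss tensor that was provided
print(outputs.attentions)  # None, since attentions were not returned
```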

When considering our outputs object as a tuple, only the attributes that don’t have None values are counted. Here, for instance, it has two elements, loss then logits, so outputs[:2] will return the tuple (outputs.loss, outputs.logits).

When considering our outputs object as a dictionary, only the attributes that don’t have None values are counted. Here, for instance, it has two keys, loss and logits.
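Both views can be sketched in code, again with a hand-built output (an assumption made to keep the snippet self-contained):

```python
import torch
from transformers.modeling_outputs import SequenceClassifierOutput

outputs = SequenceClassifierOutput(
    loss=torch.tensor(0.36),
    logits=torch.tensor([[-0.1, 0.2]]),
)

# Tuple view: None attributes are skipped, so only loss and logits remain.
loss, logits = outputs[:2]  # same as (outputs.loss, outputs.logits)
assert loss is outputs.loss

# Dictionary view: string indexing and .keys() also skip None attributes.
assert outputs["logits"] is outputs.logits
assert list(outputs.keys()) == ["loss", "logits"]
```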

We document here the generic model outputs that are used by more than one model type. Specific output types are documented on their corresponding model page.

ModelOutput class transformers.utils.ModelOutput < source >

( *args **kwargs )

Base class for all model outputs as a dataclass. Has a __getitem__ that allows indexing by integer or slice (like a tuple) or by string (like a dictionary), ignoring the None attributes. Otherwise behaves like a regular Python dictionary.

You can’t unpack a ModelOutput directly. Use the to_tuple() method to convert it to a tuple first.

to_tuple < source > ( ) Converts self to a tuple containing all the attributes/keys that are not None.
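A minimal sketch, assuming a hand-built output with only loss and logits set:

```python
import torch
from transformers.modeling_outputs import SequenceClassifierOutput

outputs = SequenceClassifierOutput(
    loss=torch.tensor(0.36),
    logits=torch.tensor([[-0.1, 0.2]]),
)

# Unpacking the ModelOutput directly is unsupported; convert it to a
# plain tuple of the non-None attributes first.
loss, logits = outputs.to_tuple()
print(logits.shape)  # the logits tensor came through unchanged
```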

BaseModelOutput class transformers.modeling_outputs.BaseModelOutput < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs, with potential hidden states and attentions.

BaseModelOutputWithPooling class transformers.modeling_outputs.BaseModelOutputWithPooling < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None pooler_output: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs that also contains a pooling of the last hidden states.

BaseModelOutputWithCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithCrossAttentions < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs, with potential hidden states, attentions, and cross-attentions.

BaseModelOutputWithPoolingAndCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None pooler_output: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs that also contains a pooling of the last hidden states.

BaseModelOutputWithPast class transformers.modeling_outputs.BaseModelOutputWithPast < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).

BaseModelOutputWithPastAndCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).

Seq2SeqModelOutput class transformers.modeling_outputs.Seq2SeqModelOutput < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for model encoder’s outputs that also contains pre-computed hidden states that can speed up sequential decoding.

CausalLMOutput class transformers.modeling_outputs.CausalLMOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

CausalLMOutputWithCrossAttentions class transformers.modeling_outputs.CausalLMOutputWithCrossAttentions < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

CausalLMOutputWithPast class transformers.modeling_outputs.CausalLMOutputWithPast < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

MaskedLMOutput class transformers.modeling_outputs.MaskedLMOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for masked language model outputs.

Seq2SeqLMOutput class transformers.modeling_outputs.Seq2SeqLMOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for sequence-to-sequence language model outputs.

NextSentencePredictorOutput class transformers.modeling_outputs.NextSentencePredictorOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of models predicting if two sentences are consecutive or not.

SequenceClassifierOutput class transformers.modeling_outputs.SequenceClassifierOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of sentence classification models.

Seq2SeqSequenceClassifierOutput class transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of sequence-to-sequence sentence classification models.

MultipleChoiceModelOutput class transformers.modeling_outputs.MultipleChoiceModelOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of multiple choice models.

TokenClassifierOutput class transformers.modeling_outputs.TokenClassifierOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of token classification models.

QuestionAnsweringModelOutput class transformers.modeling_outputs.QuestionAnsweringModelOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None start_logits: typing.Optional[torch.FloatTensor] = None end_logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of question answering models.

Seq2SeqQuestionAnsweringModelOutput class transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None start_logits: typing.Optional[torch.FloatTensor] = None end_logits: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of sequence-to-sequence question answering models.

Seq2SeqSpectrogramOutput class transformers.modeling_outputs.Seq2SeqSpectrogramOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None spectrogram: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for sequence-to-sequence spectrogram outputs.

SemanticSegmenterOutput class transformers.modeling_outputs.SemanticSegmenterOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of semantic segmentation models.

ImageClassifierOutput class transformers.modeling_outputs.ImageClassifierOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of image classification models.

ImageClassifierOutputWithNoAttention class transformers.modeling_outputs.ImageClassifierOutputWithNoAttention < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of image classification models.

DepthEstimatorOutput class transformers.modeling_outputs.DepthEstimatorOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None predicted_depth: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for outputs of depth estimation models.

Wav2Vec2BaseModelOutput class transformers.modeling_outputs.Wav2Vec2BaseModelOutput < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None extract_features: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Base class for models that have been trained with the Wav2Vec2 loss objective.

XVectorOutput class transformers.modeling_outputs.XVectorOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None logits: typing.Optional[torch.FloatTensor] = None embeddings: typing.Optional[torch.FloatTensor] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None )

Parameters

Output type of Wav2Vec2ForXVector.

Seq2SeqTSModelOutput class transformers.modeling_outputs.Seq2SeqTSModelOutput < source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None loc: typing.Optional[torch.FloatTensor] = None scale: typing.Optional[torch.FloatTensor] = None static_features: typing.Optional[torch.FloatTensor] = None )

Parameters

Base class for time series model’s encoder outputs that also contains pre-computed hidden states that can speed up sequential decoding.

Seq2SeqTSPredictionOutput class transformers.modeling_outputs.Seq2SeqTSPredictionOutput < source >

( loss: typing.Optional[torch.FloatTensor] = None params: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None loc: typing.Optional[torch.FloatTensor] = None scale: typing.Optional[torch.FloatTensor] = None static_features: typing.Optional[torch.FloatTensor] = None )

Parameters

Base class for time series model’s decoder outputs that also contain the loss as well as the parameters of the chosen distribution.

SampleTSPredictionOutput class transformers.modeling_outputs.SampleTSPredictionOutput < source >

( sequences: typing.Optional[torch.FloatTensor] = None )

Parameters

Base class for time series model’s prediction outputs that contain the sampled values from the chosen distribution.

TFBaseModelOutput class transformers.modeling_tf_outputs.TFBaseModelOutput < source >

( last_hidden_state: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model’s outputs, with potential hidden states and attentions.

TFBaseModelOutputWithPooling class transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling < source >

( last_hidden_state: Optional[tf.Tensor] = None pooler_output: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model’s outputs that also contains a pooling of the last hidden states.

TFBaseModelOutputWithPoolingAndCrossAttentions class transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions < source >

( last_hidden_state: Optional[tf.Tensor] = None pooler_output: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model’s outputs that also contains a pooling of the last hidden states.

TFBaseModelOutputWithPast class transformers.modeling_tf_outputs.TFBaseModelOutputWithPast < source >

( last_hidden_state: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).

TFBaseModelOutputWithPastAndCrossAttentions class transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions < source >

( last_hidden_state: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).

TFSeq2SeqModelOutput class transformers.modeling_tf_outputs.TFSeq2SeqModelOutput < source >

( last_hidden_state: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for model encoder’s outputs that also contains pre-computed hidden states that can speed up sequential decoding.

TFCausalLMOutput class transformers.modeling_tf_outputs.TFCausalLMOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

TFCausalLMOutputWithCrossAttentions class transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

TFCausalLMOutputWithPast class transformers.modeling_tf_outputs.TFCausalLMOutputWithPast < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

TFMaskedLMOutput class transformers.modeling_tf_outputs.TFMaskedLMOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for masked language model outputs.

TFSeq2SeqLMOutput class transformers.modeling_tf_outputs.TFSeq2SeqLMOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for sequence-to-sequence language model outputs.

TFNextSentencePredictorOutput class transformers.modeling_tf_outputs.TFNextSentencePredictorOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of models predicting if two sentences are consecutive or not.

TFSequenceClassifierOutput class transformers.modeling_tf_outputs.TFSequenceClassifierOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of sentence classification models.

TFSeq2SeqSequenceClassifierOutput class transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of sequence-to-sequence sentence classification models.

TFMultipleChoiceModelOutput class transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of multiple choice models.

TFTokenClassifierOutput class transformers.modeling_tf_outputs.TFTokenClassifierOutput < source >

( loss: tf.Tensor | None = None logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of token classification models.

TFQuestionAnsweringModelOutput class transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput < source >

( loss: tf.Tensor | None = None start_logits: Optional[tf.Tensor] = None end_logits: Optional[tf.Tensor] = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of question answering models.

TFSeq2SeqQuestionAnsweringModelOutput class transformers.modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput < source >

( loss: tf.Tensor | None = None start_logits: Optional[tf.Tensor] = None end_logits: Optional[tf.Tensor] = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None )

Parameters

Base class for outputs of sequence-to-sequence question answering models.

FlaxBaseModelOutput class transformers.modeling_flax_outputs.FlaxBaseModelOutput < source >

( last_hidden_state: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for model’s outputs, with potential hidden states and attentions.

replace(): Returns a new object replacing the specified fields with new values.

FlaxBaseModelOutputWithPast class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPast < source >

( last_hidden_state: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Dict[str, jax.Array]] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for model’s outputs, with potential hidden states and attentions.

replace(): Returns a new object replacing the specified fields with new values.

FlaxBaseModelOutputWithPooling class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling < source >

( last_hidden_state: typing.Optional[jax.Array] = None pooler_output: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for model’s outputs that also contains a pooling of the last hidden states.

replace(): Returns a new object replacing the specified fields with new values.

FlaxBaseModelOutputWithPastAndCrossAttentions class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions < source >

( last_hidden_state: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).

replace(): Returns a new object replacing the specified fields with new values.

FlaxSeq2SeqModelOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput < source >

( last_hidden_state: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for model encoder’s outputs that also contains pre-computed hidden states that can speed up sequential decoding.

replace(): Returns a new object replacing the specified fields with new values.

FlaxCausalLMOutputWithCrossAttentions class transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions < source >

( logits: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for causal language model (or autoregressive) outputs.

replace(): Returns a new object replacing the specified fields with new values.

FlaxMaskedLMOutput class transformers.modeling_flax_outputs.FlaxMaskedLMOutput < source >

( logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for masked language model outputs.

replace(): Returns a new object replacing the specified fields with new values.

FlaxSeq2SeqLMOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput < source >

( logits: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for sequence-to-sequence language model outputs.

replace(): Returns a new object replacing the specified fields with new values.

FlaxNextSentencePredictorOutput class transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput < source >

( logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of models predicting if two sentences are consecutive or not.

Returns a new object replacing the specified fields with new values.

FlaxSequenceClassifierOutput class transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput < source >

( logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of sentence classification models.

Returns a new object replacing the specified fields with new values.
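As with all `ModelOutput` subclasses, fields that were not returned by the model stay `None` and are skipped when the output is viewed as a tuple. A self-contained sketch of that behavior (the real logic lives in transformers' `ModelOutput` base class; `ClassifierOutput` here is an illustrative stand-in):

```python
from dataclasses import dataclass, fields
from typing import Optional

# Sketch of the None-skipping tuple behavior described above.
@dataclass
class ClassifierOutput:
    logits: Optional[list] = None
    hidden_states: Optional[tuple] = None
    attentions: Optional[tuple] = None

    def to_tuple(self):
        # Only fields that actually hold a value make it into the tuple.
        return tuple(
            getattr(self, f.name)
            for f in fields(self)
            if getattr(self, f.name) is not None
        )

out = ClassifierOutput(logits=[0.1, 0.9])
print(out.to_tuple())   # ([0.1, 0.9],) -- hidden_states and attentions skipped
print(out.attentions)   # None -- attribute access still works
```

This is why indexing a real output as `outputs[0]` gives the first non-None field, while attribute access (`outputs.attentions`) returns `None` for anything the model did not compute.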

FlaxSeq2SeqSequenceClassifierOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput < source >

( logits: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of sequence-to-sequence sentence classification models.

Returns a new object replacing the specified fields with new values.

FlaxMultipleChoiceModelOutput class transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput < source >

( logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of multiple choice models.

Returns a new object replacing the specified fields with new values.

FlaxTokenClassifierOutput class transformers.modeling_flax_outputs.FlaxTokenClassifierOutput < source >

( logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of token classification models.

Returns a new object replacing the specified fields with new values.

FlaxQuestionAnsweringModelOutput class transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput < source >

( start_logits: typing.Optional[jax.Array] = None end_logits: typing.Optional[jax.Array] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of question answering models.

Returns a new object replacing the specified fields with new values.

FlaxSeq2SeqQuestionAnsweringModelOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput < source >

( start_logits: typing.Optional[jax.Array] = None end_logits: typing.Optional[jax.Array] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None )

Parameters

Base class for outputs of sequence-to-sequence question answering models.

Returns a new object replacing the specified fields with new values.
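The question answering outputs above carry per-token `start_logits` and `end_logits` rather than a single prediction. A common way to turn them into an answer span is to take the argmax of each; a hedged sketch with illustrative scores (the values and decoding strategy here are assumptions for demonstration, not part of the API):

```python
# One score per input token; higher means more likely span boundary.
start_logits = [0.1, 0.3, 2.5, 0.2]
end_logits   = [0.0, 0.1, 0.4, 3.1]

# Simple greedy decoding: argmax over each logit vector.
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end   = max(range(len(end_logits)), key=end_logits.__getitem__)

print((start, end))  # (2, 3) -> the answer spans tokens 2..3
```

Real pipelines usually add constraints (e.g. `start <= end`, a maximum span length), but the attribute names match those in the QA output classes above.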
