

GLM-4.1V

The example below demonstrates how to generate text based on an image with Pipeline or the AutoModel class.

import torch
from transformers import pipeline
pipe = pipeline(
    task="image-text-to-text",
    model="THUDM/GLM-4.1V-9B-Thinking",
    device=0,
    torch_dtype=torch.bfloat16
)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            { "type": "text", "text": "Describe this image."},
        ]
    }
]
pipe(text=messages, max_new_tokens=20, return_full_text=False)
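
The same image prompt can also be run through the AutoModel path mentioned above. The sketch below mirrors the video example that follows and assumes the same checkpoint and chat template; treat it as illustrative rather than the canonical recipe.

import torch
from transformers import AutoProcessor, Glm4vForConditionalGeneration

processor = AutoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")
model = Glm4vForConditionalGeneration.from_pretrained(
    "THUDM/GLM-4.1V-9B-Thinking",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Build model inputs from the chat template and move them to the model device
inputs = processor.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt").to("cuda:0")
generated_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output_text)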

Using GLM-4.1V with video input is similar to using it with image input. The model can process video data and generate text based on the content of the video.

from transformers import AutoProcessor, Glm4vForConditionalGeneration
import torch

processor = AutoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")
model = Glm4vForConditionalGeneration.from_pretrained(
    pretrained_model_name_or_path="THUDM/GLM-4.1V-9B-Thinking",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "url": "https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/720/Big_Buck_Bunny_720_10s_10MB.mp4",
            },
            {
                "type": "text",
                "text": "discribe this video",
            },
        ],
    }
]
inputs = processor.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt", padding=True).to("cuda:0")
generated_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=1.0)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(output_text)
Glm4vConfig class transformers.Glm4vConfig < source >

( text_config = None vision_config = None image_token_id = 151343 video_token_id = 151344 image_start_token_id = 151339 image_end_token_id = 151340 video_start_token_id = 151341 video_end_token_id = 151342 **kwargs )


This is the configuration class to store the configuration of a Glm4vModel. It is used to instantiate a GLM-4.1V model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of GLM-4.1V-9B-Thinking (THUDM/GLM-4.1V-9B-Thinking).

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

>>> from transformers import Glm4vForConditionalGeneration, Glm4vConfig

>>> # Initializing a GLM-4.1V style configuration
>>> configuration = Glm4vConfig()

>>> # Initializing a model from the GLM-4.1V style configuration
>>> model = Glm4vForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
Glm4vTextConfig class transformers.Glm4vTextConfig < source >

( vocab_size = 151552 hidden_size = 4096 intermediate_size = 13696 num_hidden_layers = 40 num_attention_heads = 32 num_key_value_heads = 2 hidden_act = 'silu' max_position_embeddings = 32768 initializer_range = 0.02 rms_norm_eps = 1e-05 use_cache = True tie_word_embeddings = False rope_theta = 10000.0 attention_dropout = 0.0 rope_scaling = None image_token_id = None video_token_id = None **kwargs )


This is the configuration class to store the configuration of a Glm4vTextModel. It is used to instantiate a GLM-4.1V model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of GLM-4.1V-9B-Thinking (THUDM/GLM-4.1V-9B-Thinking).

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

>>> from transformers import Glm4vTextModel, Glm4vTextConfig

>>> # Initializing a GLM-4.1V style text configuration
>>> configuration = Glm4vTextConfig()

>>> # Initializing a model from the GLM-4.1V style text configuration
>>> model = Glm4vTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
Glm4vImageProcessor class transformers.Glm4vImageProcessor < source >

( do_resize: bool = True size: typing.Optional[dict[str, int]] = None resample: Resampling = <Resampling.BICUBIC: 3> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, list[float], NoneType] = None image_std: typing.Union[float, list[float], NoneType] = None do_convert_rgb: bool = True patch_size: int = 14 temporal_patch_size: int = 2 merge_size: int = 2 **kwargs )


Constructs a GLM-4V image processor that dynamically resizes images based on the original image size.

preprocess < source >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] videos: typing.Union[list['PIL.Image.Image'], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), list['np.ndarray'], list['torch.Tensor'], list[list['PIL.Image.Image']], list[list['np.ndarrray']], list[list['torch.Tensor']]] = None do_resize: typing.Optional[bool] = None size: typing.Optional[dict[str, int]] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, list[float], NoneType] = None image_std: typing.Union[float, list[float], NoneType] = None patch_size: typing.Optional[int] = None temporal_patch_size: typing.Optional[int] = None merge_size: typing.Optional[int] = None do_convert_rgb: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None )

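As a rough illustration of the dynamic resizing described above, the sketch below runs a single image through the processor on its own; the pixel_values and image_grid_thw output keys are assumptions inferred from the model's forward signature later on this page.

import numpy as np
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")

# Any PIL image, NumPy array, or torch tensor works; a dummy RGB array is used here
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

outputs = image_processor(images=image, return_tensors="pt")
# Flattened vision patches and the (temporal, height, width) grid they were cut into (assumed keys)
print(outputs["pixel_values"].shape)
print(outputs["image_grid_thw"])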

Glm4vVideoProcessor class transformers.Glm4vVideoProcessor < source >

( **kwargs: typing_extensions.Unpack[transformers.models.glm4v.video_processing_glm4v.Glm4vVideoProcessorInitKwargs] )


Constructs a fast GLM-4V video processor that dynamically resizes videos based on the original video size.

preprocess < source >

( videos: typing.Union[list['PIL.Image.Image'], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), list['np.ndarray'], list['torch.Tensor'], list[list['PIL.Image.Image']], list[list['np.ndarrray']], list[list['torch.Tensor']]] **kwargs: typing_extensions.Unpack[transformers.processing_utils.VideosKwargs] )

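A similarly hedged sketch for the video side; in practice video inputs usually go through the processor's apply_chat_template as in the generation example above, and the AutoVideoProcessor entry point as well as the pixel_values_videos and video_grid_thw output keys are assumptions inferred from the model's forward signature below.

import numpy as np
from transformers import AutoVideoProcessor

video_processor = AutoVideoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")

# Eight dummy RGB frames standing in for a decoded video clip
video = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(8)]

outputs = video_processor(videos=video, return_tensors="pt")
# Flattened video patches and their (temporal, height, width) grid (assumed keys)
print(outputs["pixel_values_videos"].shape)
print(outputs["video_grid_thw"])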

Glm4vImageProcessorFast class transformers.Glm4vImageProcessorFast < source >

( **kwargs: typing_extensions.Unpack[transformers.models.glm4v.image_processing_glm4v_fast.Glm4vFastImageProcessorKwargs] )

Constructs a fast GLM-4V image processor.

preprocess < source >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] videos: typing.Union[list['PIL.Image.Image'], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), list['np.ndarray'], list['torch.Tensor'], list[list['PIL.Image.Image']], list[list['np.ndarrray']], list[list['torch.Tensor']]] = None do_resize: typing.Optional[bool] = None size: typing.Optional[dict[str, int]] = None resample: typing.Union[ForwardRef('PILImageResampling'), ForwardRef('F.InterpolationMode'), NoneType] = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, list[float], NoneType] = None image_std: typing.Union[float, list[float], NoneType] = None patch_size: typing.Optional[int] = None temporal_patch_size: typing.Optional[int] = None merge_size: typing.Optional[int] = None do_convert_rgb: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None device: typing.Optional[ForwardRef('torch.device')] = None disable_grouping: typing.Optional[bool] = False **kwargs )

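The fast variant accepts a device argument in preprocess, so the heavy tensor work can be pushed to the GPU. A minimal sketch, assuming the argument is forwarded when the processor is called directly and that a CUDA device is available:

import numpy as np
from transformers import Glm4vImageProcessorFast

fast_processor = Glm4vImageProcessorFast.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")

image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
# Resizing, rescaling, and normalization run on the GPU when device is set (assumption: kwarg pass-through)
outputs = fast_processor(images=image, device="cuda", return_tensors="pt")
print(outputs["pixel_values"].device)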

Glm4vProcessor class transformers.Glm4vProcessor < source >

( image_processor = None tokenizer = None video_processor = None chat_template = None **kwargs )


Constructs a GLM-4V processor which wraps a GLM-4V image processor and a GLM-4 tokenizer into a single processor. See __call__() and decode() for more information.

batch_decode

This method forwards all its arguments to Qwen2TokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.

decode

This method forwards all its arguments to Qwen2TokenizerFast’s decode(). Please refer to the docstring of this method for more information.

post_process_image_text_to_text < source >

( generated_outputs skip_special_tokens = True clean_up_tokenization_spaces = False **kwargs ) list[str]

Returns

list[str]

The decoded text.

Post-process the output of the model to decode the text.
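
A hedged sketch of how this fits into a generation loop, reusing the processor, model, and inputs objects from the AutoModel examples above; trimming the prompt tokens before decoding is an assumption about the intended usage, not something this page states.

generated_ids = model.generate(**inputs, max_new_tokens=64)
# Drop the prompt portion of each sequence so only newly generated tokens are decoded
trimmed = [output[len(prompt):] for prompt, output in zip(inputs["input_ids"], generated_ids)]
texts = processor.post_process_image_text_to_text(trimmed, skip_special_tokens=True)
print(texts[0])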

Glm4vTextModel class transformers.Glm4vTextModel < source >

( config: Glm4vTextConfig )


The bare Glm4V Text Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward < source >

( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[list[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Glm4vTextConfig) and inputs.

The Glm4vTextModel forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Glm4vModel class transformers.Glm4vModel < source >

( config )


The bare Glm4V Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward < source >

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[list[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None pixel_values: typing.Optional[torch.Tensor] = None pixel_values_videos: typing.Optional[torch.FloatTensor] = None image_grid_thw: typing.Optional[torch.LongTensor] = None video_grid_thw: typing.Optional[torch.LongTensor] = None rope_deltas: typing.Optional[torch.LongTensor] = None cache_position: typing.Optional[torch.LongTensor] = None **kwargs: typing_extensions.Unpack[transformers.models.glm4v.modeling_glm4v.KwargsForCausalLM] ) transformers.models.glm4v.modeling_glm4v.Glm4vModelOutputWithPast or tuple(torch.FloatTensor)


Returns

transformers.models.glm4v.modeling_glm4v.Glm4vModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.models.glm4v.modeling_glm4v.Glm4vModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Glm4vConfig) and inputs.

The Glm4vModel forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
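
A minimal sketch of calling the bare multimodal model for hidden states rather than generated text; the last_hidden_state attribute is assumed from the Glm4vModelOutputWithPast return type named above.

import torch
from transformers import AutoProcessor, Glm4vModel

processor = AutoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")
model = Glm4vModel.from_pretrained(
    "THUDM/GLM-4.1V-9B-Thinking",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    }
]
inputs = processor.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    outputs = model(**inputs)
# One hidden-state vector per text and vision token (assumed attribute)
print(outputs.last_hidden_state.shape)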

Glm4vForConditionalGeneration class transformers.Glm4vForConditionalGeneration < source >

( config )

forward < source >

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[list[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None pixel_values: typing.Optional[torch.Tensor] = None pixel_values_videos: typing.Optional[torch.FloatTensor] = None image_grid_thw: typing.Optional[torch.LongTensor] = None video_grid_thw: typing.Optional[torch.LongTensor] = None rope_deltas: typing.Optional[torch.LongTensor] = None cache_position: typing.Optional[torch.LongTensor] = None **kwargs: typing_extensions.Unpack[transformers.models.glm4v.modeling_glm4v.KwargsForCausalLM] ) transformers.models.glm4v.modeling_glm4v.Glm4vCausalLMOutputWithPast or tuple(torch.FloatTensor)


Returns

transformers.models.glm4v.modeling_glm4v.Glm4vCausalLMOutputWithPast or tuple(torch.FloatTensor)

A transformers.models.glm4v.modeling_glm4v.Glm4vCausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Glm4vConfig) and inputs.

The Glm4vForConditionalGeneration forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, Glm4vForConditionalGeneration

>>> model = Glm4vForConditionalGeneration.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")
>>> processor = AutoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")

>>> messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
>>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> inputs = processor(text=[text], images=[image], return_tensors="pt")

>>> # Generate
>>> generate_ids = model.generate(**inputs, max_new_tokens=30)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"The image shows a street scene with a red stop sign in the foreground. In the background, there is a large red gate with Chinese characters ..."