Qwen2.5-VL
The Qwen2.5-VL model is a vision-language model designed for tasks such as visual question answering (VQA), image captioning, and video understanding. It can process image and video inputs alongside text, making it highly flexible for multimodal applications. RBLN NPUs can accelerate Qwen2.5-VL model inference using Optimum RBLN.
API Reference
Classes
RBLNQwen2_5_VisionTransformerPretrainedModel
Bases: RBLNModel
RBLN-optimized Qwen2.5-VL vision transformer model.
This class provides hardware-accelerated inference for Qwen2.5-VL vision transformers on RBLN devices, supporting image and video encoding for multimodal vision-language tasks with window-based attention mechanisms.
Functions
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) classmethod
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
| config | Optional[PretrainedConfig] | The configuration object associated with the model. | None |
| rbln_config | Optional[Union[RBLNModelConfig, Dict]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
| kwargs | Any | Additional keyword arguments. | {} |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs) classmethod
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or a model id of a compiled model using the RBLN Compiler. | required |
| export | Optional[bool] | A boolean flag indicating whether the model should be compiled. If None, it is determined based on the existence of the compiled model files in model_id. | None |
| rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
| kwargs | Any | Additional keyword arguments. | {} |
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
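The behavior of export=None described above can be illustrated with a small stand-alone sketch. Note that this helper and the `*.rbln` artifact pattern are assumptions for illustration only, not part of the Optimum RBLN API:

```python
from pathlib import Path
from typing import Optional

def resolve_export(model_id: str, export: Optional[bool]) -> bool:
    """Hypothetical sketch of how export=None could be resolved: compile
    only when no compiled artifacts are found under model_id.
    The '*.rbln' file pattern is an assumption for illustration."""
    if export is not None:
        return export  # an explicit flag always wins
    path = Path(model_id)
    has_compiled = path.is_dir() and any(path.glob("*.rbln"))
    return not has_compiled  # no compiled files found -> compile the model
```
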
save_pretrained(save_directory, push_to_hub=False, **kwargs)
Saves a model and its configuration file to a directory so that it can be re-loaded using the [~optimum.rbln.modeling_base.RBLNBaseModel.from_pretrained] class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| save_directory | Union[str, Path] | Directory where to save the model file. | required |
| push_to_hub | bool | Whether or not to push your model to the HuggingFace model hub after saving it. | False |
RBLNQwen2_5_VLForConditionalGeneration
Bases: RBLNDecoderOnlyModelForCausalLM
RBLNQwen2_5_VLForConditionalGeneration is a multi-modal model that integrates vision and language processing capabilities, optimized for RBLN NPUs. It is designed for conditional generation tasks that involve both image and text inputs.
This model inherits from [RBLNDecoderOnlyModelForCausalLM]. Check the superclass documentation for the generic methods the library implements for all its models.
Important Note
This model includes a Large Language Model (LLM). For optimal performance, it is highly recommended to use tensor parallelism for the language model. This can be achieved via the rbln_config parameter in the from_pretrained method. Refer to the from_pretrained documentation and the RBLNQwen2_5_VLForConditionalGenerationConfig class for details.
Examples:
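As a hedged sketch of what such an example could look like, the following shows an rbln_config requesting tensor parallelism for the LLM as the note above recommends. The model id, tensor_parallel_size value, and max_seq_lens value are illustrative assumptions, not recommendations, and the compilation call itself is shown as a comment because it requires RBLN NPU hardware:

```python
# Hypothetical rbln_config: key names follow the config classes documented
# on this page; the values are illustrative only.
rbln_config = {
    "tensor_parallel_size": 4,         # shard the language model across 4 NPUs
    "visual": {"max_seq_lens": 6400},  # vision-encoder patch budget
}

# Compilation would then look roughly like this (requires optimum-rbln and
# RBLN NPU hardware, so it is shown as a comment sketch):
#
#   from optimum.rbln import RBLNQwen2_5_VLForConditionalGeneration
#   model = RBLNQwen2_5_VLForConditionalGeneration.from_pretrained(
#       "Qwen/Qwen2.5-VL-7B-Instruct", export=True, rbln_config=rbln_config,
#   )
#   model.save_pretrained("qwen2_5_vl_rbln")
```

Note that 6400 satisfies the alignment rule described later on this page for max_seq_lens (it is a multiple of 64).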
Functions
generate(input_ids, attention_mask=None, max_length=None, **kwargs)
The generate function is used in its standard form, as in the HuggingFace transformers library. Users can use this function to generate text from the model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_ids | LongTensor | The input ids to the model. | required |
| attention_mask | Optional[LongTensor] | The attention mask to the model. | None |
| max_length | Optional[int] | The maximum length of the generated text. | None |
| kwargs | | Additional arguments passed to the generate function. See the HuggingFace transformers documentation for more details. | {} |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs) classmethod
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or a model id of a compiled model using the RBLN Compiler. | required |
| export | Optional[bool] | A boolean flag indicating whether the model should be compiled. If None, it is determined based on the existence of the compiled model files in model_id. | None |
| rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
| kwargs | Any | Additional keyword arguments. | {} |
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
Saves a model and its configuration file to a directory so that it can be re-loaded using the [~optimum.rbln.modeling_base.RBLNBaseModel.from_pretrained] class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| save_directory | Union[str, Path] | Directory where to save the model file. | required |
| push_to_hub | bool | Whether or not to push your model to the HuggingFace model hub after saving it. | False |
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) classmethod
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
| config | Optional[PretrainedConfig] | The configuration object associated with the model. | None |
| rbln_config | Optional[Union[RBLNModelConfig, Dict]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
| kwargs | Any | Additional keyword arguments. | {} |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
Classes
RBLNQwen2_5_VLForConditionalGenerationConfig
Bases: RBLNDecoderOnlyModelForCausalLMConfig
Configuration class for RBLNQwen2_5_VLForConditionalGeneration.
This configuration class stores the configuration parameters specific to RBLN-optimized Qwen2.5-VL models for multimodal conditional generation tasks that combine vision and language processing capabilities.
Functions
__init__(use_inputs_embeds=True, visual=None, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| use_inputs_embeds | bool | Whether or not to use inputs_embeds as the model input. | True |
| visual | Optional[RBLNModelConfig] | Configuration for the vision encoder component. | None |
| kwargs | Any | Additional arguments passed to the parent RBLNDecoderOnlyModelForCausalLMConfig. | {} |
Raises:

| Type | Description |
|---|---|
| ValueError | If use_inputs_embeds is False. |
| ValueError | If the visual configuration is provided but contains invalid settings, such as an invalid max_seq_lens (e.g., not a positive integer, not a multiple of the window-based attention unit, or insufficient for the expected resolution). |
| ValueError | If visual is None and no default vision configuration can be inferred for the model architecture. |
| ValueError | If any inherited parameters violate constraints defined in the parent class, such as batch_size not being a positive integer, prefill_chunk_size not being divisible by 64, or max_seq_len not meeting requirements for Flash Attention. |
load(path, **kwargs) classmethod
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the RBLNModelConfig file or directory containing the config file. | required |
| kwargs | Any | Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| RBLNModelConfig | RBLNModelConfig | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
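The 'rbln_' prefix rule for keyword overrides described above can be illustrated with a small stand-alone sketch (this helper is hypothetical, written only to demonstrate the documented rule, not the library's actual implementation):

```python
from typing import Any, Dict

def apply_rbln_overrides(config: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
    """Hypothetical illustration of the documented override rule: keys
    starting with 'rbln_' have the prefix removed and update the config."""
    out = dict(config)
    for key, value in kwargs.items():
        if key.startswith("rbln_"):
            # strip the prefix and override the corresponding config value
            out[key[len("rbln_"):]] = value
    return out
```

For example, apply_rbln_overrides({"max_seq_lens": 256}, rbln_max_seq_lens=512) yields {"max_seq_lens": 512}.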
RBLNQwen2_5_VisionTransformerPretrainedModelConfig
Bases: RBLNModelConfig
Configuration class for RBLNQwen2_5_VisionTransformerPretrainedModel.
This configuration class stores the configuration parameters specific to RBLN-optimized Qwen2.5-VL vision transformer models with window-based attention mechanisms for processing images and videos.
Functions
__init__(max_seq_lens=None, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| max_seq_lens | Optional[Union[int, List[int]]] | Maximum sequence lengths for Vision Transformer attention. Can be an integer or list of integers, each indicating the number of patches in a sequence for an image or video. For example, an image of 224x196 pixels with patch size 14 and window size 112 has its width padded to 224, forming a 224x224 image. This yields 256 patches ((224/14) * (224/14)), so max_seq_lens must be at least 256. | None |
| kwargs | Any | Additional arguments passed to the parent RBLNModelConfig. | {} |
Raises:

| Type | Description |
|---|---|
| ValueError | If max_seq_lens is not a positive integer. |
| ValueError | If max_seq_lens is not a multiple of the window-based attention unit, (window_size / patch_size)^2. |
| ValueError | If max_seq_lens is insufficient for the number of patches generated from the expected input resolution. |
Max Seq Lens
Since Qwen2_5_VLForConditionalGeneration performs inference on a per-image or per-frame basis, max_seq_lens should be set based on the maximum expected resolution of the input images or video frames, according to the following guidelines:
- Minimum Value: max_seq_lens must be greater than or equal to the number of patches generated from the input image. For example, a 224x224 image with a patch size of 14 results in (224 / 14) * (224 / 14) = 256 patches. Therefore, max_seq_lens must be at least 256.
- Alignment Requirement: max_seq_lens must be a multiple of (window_size / patch_size)^2 due to the requirements of the window-based attention mechanism. For instance, if window_size is 112 and patch_size is 14, then (112 / 14)^2 = 64, meaning valid values for max_seq_lens include 64, 128, 192, 256, etc.
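The two guidelines above can be combined into a small helper that computes the smallest valid max_seq_lens for a given image size. This is a sketch under the stated rules (sides padded up to a multiple of window_size, result aligned to the attention unit); the function name is made up for illustration:

```python
import math

def min_valid_max_seq_len(height: int, width: int,
                          patch_size: int = 14, window_size: int = 112) -> int:
    """Smallest max_seq_lens that (a) covers all patches of the image after
    padding each side up to a multiple of window_size, and (b) is a multiple
    of (window_size / patch_size)**2, per the alignment requirement above."""
    pad = lambda x: math.ceil(x / window_size) * window_size
    patches = (pad(height) // patch_size) * (pad(width) // patch_size)
    unit = (window_size // patch_size) ** 2  # 64 for the default sizes
    return math.ceil(patches / unit) * unit
```

For the document's own example, min_valid_max_seq_len(224, 196) pads the width to 224 and returns 256, matching the 256-patch minimum derived above.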
load(path, **kwargs) classmethod
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the RBLNModelConfig file or directory containing the config file. | required |
| kwargs | Any | Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| RBLNModelConfig | RBLNModelConfig | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.