Qwen3-VL¶
Qwen3-VL is the latest generation of vision–language models in the Qwen series. It improves text understanding and generation, visual perception and reasoning, spatial and temporal (video) understanding, and visual-agent capabilities such as GUI understanding and tool use. It supports a long multimodal context (native 256K tokens, extendable up to 1M), expanded OCR across 32 languages, and strong performance on multimodal reasoning, including STEM-oriented tasks. The family is offered in dense and MoE sizes, with Instruct and reasoning-focused variants for different deployment needs. RBLN NPUs can accelerate Qwen3-VL inference using Optimum RBLN.
API Reference¶
Classes¶
RBLNQwen3VLVisionModel
¶
Bases: RBLNModel
RBLN optimized Qwen3-VL vision transformer model.
This class provides hardware-accelerated inference for Qwen3-VL vision transformers on RBLN devices, supporting image and video encoding for multimodal vision-language tasks.
Functions¶
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNQwen3VLVisionModelConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the others are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
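As a sketch of the flow above: load the PyTorch model with the HuggingFace library, then hand its vision tower to `from_model` for compilation. The model id, the `visual` attribute name, the transformers class name, and the `max_seq_lens` value are illustrative assumptions, not values taken from this reference.

```python
def compile_vision_tower(model_id="Qwen/Qwen3-VL-8B-Instruct"):
    # Hypothetical model id; running this requires optimum-rbln,
    # transformers with Qwen3-VL support, and an RBLN NPU toolchain.
    from transformers import Qwen3VLForConditionalGeneration
    from optimum.rbln import RBLNQwen3VLVisionModel

    # Load the full PyTorch model, then compile only its vision tower
    # into an optimized RBLN graph ready for NPU inference.
    hf_model = Qwen3VLForConditionalGeneration.from_pretrained(model_id)
    rbln_vision = RBLNQwen3VLVisionModel.from_model(
        hf_model.visual,
        rbln_config={"max_seq_lens": 6400},  # assumed value, tune per workload
    )
    # Persist the compiled model for later from_pretrained() loading.
    rbln_vision.save_pretrained("qwen3_vl_vision_rbln")
    return rbln_vision
```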
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Use it to load a pre-trained model from the HuggingFace hub or a local path and convert it into a RBLN model that runs on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a compiled model created with the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag indicating whether the model should be compiled. If `None`, it is determined by whether compiled model files exist at `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNQwen3VLVisionModelConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the others are passed to the HuggingFace library. | `{}` |
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the
[~optimum.rbln.modeling_base.RBLNBaseModel.from_pretrained] class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
RBLNQwen3VLForConditionalGeneration
¶
Bases: RBLNQwen3VLModel, RBLNDecoderOnlyModelForCausalLM
RBLNQwen3VLForConditionalGeneration is a multi-modal model that integrates vision and language processing capabilities, optimized for RBLN NPUs. It is designed for conditional generation tasks that involve both image and text inputs.
This model inherits from [RBLNDecoderOnlyModelForCausalLM]. Check the superclass documentation for the generic methods the library implements for all its models.
Important Note
This model includes a Large Language Model (LLM). For optimal performance, it is highly recommended to use
tensor parallelism for the language model. This can be achieved by using the rbln_config parameter in the
from_pretrained method. Refer to the from_pretrained documentation and the RBLNQwen3VLForConditionalGenerationConfig class for details.
Examples:
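A minimal end-to-end sketch: compile the model with tensor parallelism for the language model (as recommended in the note above), then run image-conditioned generation. The model id, `tensor_parallel_size`, `max_seq_len`, and the processor chat-template flow are illustrative assumptions; running this requires optimum-rbln, transformers, and RBLN NPUs.

```python
def generate_caption(image_url, model_id="Qwen/Qwen3-VL-8B-Instruct"):
    # Hypothetical model id and rbln_config values; adjust to your deployment.
    from transformers import AutoProcessor
    from optimum.rbln import RBLNQwen3VLForConditionalGeneration

    # Compile with tensor parallelism for the LLM component
    # (tensor_parallel_size=4 is an illustrative value).
    model = RBLNQwen3VLForConditionalGeneration.from_pretrained(
        model_id,
        export=True,
        rbln_config={"tensor_parallel_size": 4, "max_seq_len": 32768},
    )
    processor = AutoProcessor.from_pretrained(model_id)

    messages = [{"role": "user", "content": [
        {"type": "image", "image": image_url},
        {"type": "text", "text": "Describe this image."},
    ]}]
    inputs = processor.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True,
        return_dict=True, return_tensors="pt",
    )
    # generate() behaves as in HuggingFace transformers.
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0]
```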
Functions¶
generate(input_ids, attention_mask=None, generation_config=None, **kwargs)
¶
The generate function is used in its standard form, as in the HuggingFace transformers library. Use it to generate text from the model; check the HuggingFace transformers documentation for more details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_ids` | `LongTensor` | The input ids to the model. | required |
| `attention_mask` | `LongTensor` | The attention mask to the model. | `None` |
| `generation_config` | `GenerationConfig` | The generation configuration to be used as the base parametrization for the generation call. `**kwargs` passed to `generate` matching the attributes of `generation_config` will override them. If `generation_config` is not provided, the default will be used, which has the following loading priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit `GenerationConfig`'s default values. | `None` |
| `kwargs` | `dict[str, Any]` | Additional arguments passed to the generate function. See the HuggingFace transformers documentation for more details. | `{}` |
Returns:

| Type | Description |
|---|---|
| `Union[ModelOutput, LongTensor]` | A `ModelOutput` (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.LongTensor`. |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Use it to load a pre-trained model from the HuggingFace hub or a local path and convert it into a RBLN model that runs on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be a model id on the HuggingFace model hub, a local path, or the model id of a compiled model created with the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag indicating whether the model should be compiled. If `None`, it is determined by whether compiled model files exist at `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNQwen3VLForConditionalGenerationConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the others are passed to the HuggingFace library. | `{}` |
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the
[~optimum.rbln.modeling_base.RBLNBaseModel.from_pretrained] class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., `RBLNQwen3VLForConditionalGenerationConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the others are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
set_adapter(adapter_name)
¶
Sets the active adapter(s) for the model using adapter name(s).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `adapter_name` | `Union[str, List[str]]` | The name(s) of the adapter(s) to be activated. Can be a single adapter name or a list of adapter names. | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the model is not configured with LoRA or if the adapter name is not found. |
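A minimal sketch of switching adapters at runtime. The adapter names are hypothetical: they must match LoRA adapters the model was compiled with via its `rbln_config`, and an unknown name raises `ValueError`.

```python
def switch_adapter(model, adapter_name):
    # Activate one adapter (str) or several (list of str). The model must
    # have been compiled with LoRA adapters; otherwise ValueError is raised.
    model.set_adapter(adapter_name)
    return adapter_name
```

For example, `switch_adapter(model, "ocr_adapter")` or `switch_adapter(model, ["ocr_adapter", "chart_adapter"])`, where the names are illustrative.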
Classes¶
RBLNQwen3VLForConditionalGenerationConfig
¶
Bases: RBLNDecoderOnlyModelForCausalLMConfig
Configuration class for RBLNQwen3VLForConditionalGeneration.
This configuration class stores the configuration parameters specific to RBLN-optimized Qwen3-VL models for multimodal conditional generation tasks that combine vision and language processing capabilities.
Functions¶
__init__(use_inputs_embeds=True, visual=None, **kwargs)
¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `use_inputs_embeds` | `bool` | Whether or not to use `inputs_embeds` as the model input instead of `input_ids`. Required for multimodal generation. | `True` |
| `visual` | `Optional[RBLNModelConfig]` | Configuration for the vision encoder component. | `None` |
| `kwargs` | `Any` | Additional arguments passed to the parent `RBLNDecoderOnlyModelForCausalLMConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `use_inputs_embeds` is `False`, since multimodal generation requires input embeddings. |
RBLNQwen3VLVisionModelConfig
¶
Bases: RBLNModelConfig
Configuration class for RBLNQwen3VLVisionModel.
This configuration class stores the configuration parameters specific to RBLN-optimized Qwen3-VL vision transformer models for processing images and videos.
Functions¶
__init__(max_seq_lens=None, **kwargs)
¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_seq_lens` | `Optional[Union[int, List[int]]]` | Maximum sequence lengths for Vision Transformer attention. Can be an integer or a list of integers, each indicating the number of patches in a sequence for an image or video. For example, an image of 224x224 pixels with patch size 16 and spatial_merge_size 2 yields (224/16/2) * (224/16/2) = 49 merged patches. RBLN optimization runs inference per image or video frame, so set `max_seq_lens` to cover the largest number of merged patches expected per image or frame. | `None` |
| `kwargs` | `Any` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If an invalid `max_seq_lens` value is provided. |
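The patch arithmetic above can be sketched as a small helper for choosing `max_seq_lens`. Patch size 16 and spatial_merge_size 2 follow the example in the table; verify both against your model's actual configuration.

```python
def merged_patch_count(height, width, patch_size=16, spatial_merge_size=2):
    # Number of merged patches the vision transformer sees for one image:
    # each spatial_merge_size x spatial_merge_size group of raw patches is
    # merged into a single token.
    merged = patch_size * spatial_merge_size
    return (height // merged) * (width // merged)

# A 224x224 image yields (224/16/2) * (224/16/2) = 7 * 7 = 49 merged patches,
# so max_seq_lens must be at least 49 to cover it.
print(merged_patch_count(224, 224))  # 49
```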
from_pretrained(path, rbln_config=None, return_unused_kwargs=False, **kwargs)
classmethod
¶
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the RBLNModelConfig file or directory containing the config file. | required |
| `rbln_config` | `Optional[Dict[str, Any]]` | Additional configuration to override. | `None` |
| `return_unused_kwargs` | `bool` | Whether to return unused kwargs. | `False` |
| `kwargs` | `Optional[Dict[str, Any]]` | Additional keyword arguments to override configuration values. Keys starting with `rbln_` will have the prefix removed and be used to update the configuration. | `{}` |

Returns:

| Name | Type | Description |
|---|---|---|
| `RBLNModelConfig` | `Union[RBLNModelConfig, Tuple[RBLNModelConfig, Dict[str, Any]]]` | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
Examples:
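A minimal sketch of loading a stored configuration with an override. The directory name and the override value are hypothetical; running this requires optimum-rbln and a previously compiled model at that path.

```python
def load_vision_config(path="qwen3_vl_vision_rbln"):
    # Hypothetical directory produced by an earlier save_pretrained() call.
    from optimum.rbln import RBLNQwen3VLVisionModelConfig

    # Keyword arguments prefixed with rbln_ have the prefix stripped and
    # override the stored values, per the table above.
    config = RBLNQwen3VLVisionModelConfig.from_pretrained(
        path, rbln_max_seq_lens=[1024, 4096]
    )
    return config
```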
load(path, rbln_config=None, return_unused_kwargs=False, **kwargs)
classmethod
¶
Load a RBLNModelConfig from a path.
Deprecated
This method is deprecated and will be removed in version 0.11.0.
Use from_pretrained instead.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the RBLNModelConfig file or directory containing the config file. | required |
| `rbln_config` | `Optional[Dict[str, Any]]` | Additional configuration to override. | `None` |
| `return_unused_kwargs` | `bool` | Whether to return unused kwargs. | `False` |
| `kwargs` | `Optional[Dict[str, Any]]` | Additional keyword arguments to override configuration values. Keys starting with `rbln_` will have the prefix removed and be used to update the configuration. | `{}` |

Returns:

| Name | Type | Description |
|---|---|---|
| `RBLNModelConfig` | `Union[RBLNModelConfig, Tuple[RBLNModelConfig, Dict[str, Any]]]` | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
Examples:
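A sketch of the deprecated call, shown only for migration purposes; the directory name is hypothetical, and `from_pretrained` takes the same arguments and should be preferred.

```python
def load_config_legacy(path="qwen3_vl_vision_rbln"):
    # Deprecated: load() is removed in 0.11.0; use from_pretrained() instead.
    from optimum.rbln import RBLNQwen3VLVisionModelConfig
    return RBLNQwen3VLVisionModelConfig.load(path)
```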