Gemma3
Gemma3 is a multimodal model that takes text and images as input and generates text as output, with open weights available for both the pre-trained and instruction-tuned variants. RBLN NPUs can accelerate Gemma3 model inference using Optimum RBLN.
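As a quick orientation before the API reference, the following minimal sketch compiles a Gemma3 checkpoint for RBLN NPUs with `from_pretrained(export=True)` and saves the compiled artifacts; the checkpoint id and output directory are assumptions for illustration.

```python
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Compile the HuggingFace checkpoint for RBLN NPUs.
# export=True triggers compilation; the checkpoint id below is an assumption.
model = RBLNGemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    export=True,
)

# Save the compiled model so it can be reloaded later without recompiling.
model.save_pretrained("gemma-3-4b-it-rbln")
```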
API Reference
Classes
RBLNGemma3ForConditionalGeneration
Bases: RBLNModel, RBLNDecoderOnlyGenerationMixin
Functions
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) (classmethod)
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNGemma3ForConditionalGenerationConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the remaining arguments are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
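For instance, a model that has already been instantiated with transformers can be converted as in the sketch below; the checkpoint id and save directory are assumptions.

```python
from transformers import Gemma3ForConditionalGeneration
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Load the original PyTorch model with transformers first.
torch_model = Gemma3ForConditionalGeneration.from_pretrained("google/gemma-3-4b-it")

# Convert and compile the in-memory model into an RBLN model, saving the
# compiled artifacts to model_save_dir.
rbln_model = RBLNGemma3ForConditionalGeneration.from_model(
    torch_model,
    model_save_dir="gemma-3-4b-it-rbln",
)
```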
generate(input_ids, attention_mask=None, generation_config=None, **kwargs)
The generate function is utilized in its standard form as in the HuggingFace transformers library. Users can use this function to generate text from the model. Check the HuggingFace transformers documentation for more details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_ids` | `LongTensor` | The input ids to the model. | required |
| `attention_mask` | `LongTensor` | The attention mask to the model. | `None` |
| `generation_config` | `GenerationConfig` | The generation configuration to be used as base parametrization for the generation call. `**kwargs` passed to generate matching the attributes of `generation_config` will override them. If `generation_config` is not provided, the default will be used, which has the following loading priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit `GenerationConfig`'s default values. | `None` |
| `kwargs` | `dict[str, Any]` | Additional arguments passed to the generate function. See the HuggingFace transformers documentation for more details. | `{}` |
Returns:

| Type | Description |
|---|---|
| `Union[ModelOutput, LongTensor]` | A `ModelOutput` (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.LongTensor`. |
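A typical multimodal call looks like the sketch below; it assumes `model` is a compiled `RBLNGemma3ForConditionalGeneration` (see `from_pretrained` below), and the checkpoint id, image URL, and prompt are placeholders.

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build input_ids, attention_mask, and pixel_values from the chat template.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

# Extra kwargs such as max_new_tokens are forwarded as in the standard
# transformers generate API.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```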
from_pretrained(model_id, export=None, rbln_config=None, **kwargs) (classmethod)
The from_pretrained() function is utilized in its standard form as in the HuggingFace transformers library.
Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or a model id of a compiled model using the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag to indicate whether the model should be compiled. If `None`, it will be determined based on the existence of the compiled model files in the `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNGemma3ForConditionalGenerationConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the remaining arguments are passed to the HuggingFace library. | `{}` |
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
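For example, a previously compiled model can be reloaded directly from disk; with `export` left as `None`, compilation is skipped when compiled model files are found. The directory name is an assumption carried over from the earlier sketches.

```python
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Reload a previously compiled model; no recompilation is performed because
# the directory already contains the compiled model files.
model = RBLNGemma3ForConditionalGeneration.from_pretrained("gemma-3-4b-it-rbln")
```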
save_pretrained(save_directory, push_to_hub=False, **kwargs)
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `RBLNBaseModel.from_pretrained` class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
RBLNGemma3ForCausalLM
Bases: RBLNDecoderOnlyModelForCausalLM
The Gemma3 Model transformer with a language modeling head (linear layer) on top.
This model inherits from [RBLNDecoderOnlyModelForCausalLM]. Check the superclass documentation for the generic methods the library implements for all its models.
A class to convert and run a pre-trained transformers-based Gemma3ForCausalLM model on RBLN devices. It implements the methods to convert a pre-trained transformers Gemma3ForCausalLM model into a RBLN transformer model by:
- transferring the checkpoint weights of the original into an optimized RBLN graph,
- compiling the resulting graph using the RBLN compiler.
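A text-only, end-to-end sketch is shown below; it assumes a text-only Gemma3 checkpoint, and the checkpoint id and prompt are placeholders.

```python
from transformers import AutoTokenizer
from optimum.rbln import RBLNGemma3ForCausalLM

# Compile the text-only Gemma3 language model for RBLN NPUs.
model = RBLNGemma3ForCausalLM.from_pretrained("google/gemma-3-1b-it", export=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

inputs = tokenizer("The RBLN NPU accelerates", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```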
Functions
generate(input_ids, attention_mask=None, generation_config=None, **kwargs)
The generate function is utilized in its standard form as in the HuggingFace transformers library. Users can use this function to generate text from the model. Check the HuggingFace transformers documentation for more details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_ids` | `LongTensor` | The input ids to the model. | required |
| `attention_mask` | `LongTensor` | The attention mask to the model. | `None` |
| `generation_config` | `GenerationConfig` | The generation configuration to be used as base parametrization for the generation call. `**kwargs` passed to generate matching the attributes of `generation_config` will override them. If `generation_config` is not provided, the default will be used, which has the following loading priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit `GenerationConfig`'s default values. | `None` |
| `kwargs` | `dict[str, Any]` | Additional arguments passed to the generate function. See the HuggingFace transformers documentation for more details. | `{}` |
Returns:

| Type | Description |
|---|---|
| `Union[ModelOutput, LongTensor]` | A `ModelOutput` (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.LongTensor`. |
from_pretrained(model_id, export=None, rbln_config=None, **kwargs) (classmethod)
The from_pretrained() function is utilized in its standard form as in the HuggingFace transformers library.
Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or a model id of a compiled model using the RBLN Compiler. | required |
| `export` | `Optional[bool]` | A boolean flag to indicate whether the model should be compiled. If `None`, it will be determined based on the existence of the compiled model files in the `model_id`. | `None` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNGemma3ForCausalLMConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the remaining arguments are passed to the HuggingFace library. | `{}` |
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory, push_to_hub=False, **kwargs)
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `RBLNBaseModel.from_pretrained` class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `save_directory` | `Union[str, Path]` | Directory where to save the model file. | required |
| `push_to_hub` | `bool` | Whether or not to push your model to the HuggingFace model hub after saving it. | `False` |
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) (classmethod)
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `config` | `Optional[PretrainedConfig]` | The configuration object associated with the model. | `None` |
| `rbln_config` | `Optional[Union[RBLNModelConfig, Dict]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNGemma3ForCausalLMConfig` for this model). | `None` |
| `kwargs` | `Any` | Additional keyword arguments. Arguments with the prefix `rbln_` are passed to `rbln_config`, while the remaining arguments are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| `RBLNModel` | A RBLN model instance ready for inference on RBLN NPU devices. |
set_adapter(adapter_name)
Sets the active adapter(s) for the model using adapter name(s).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `adapter_name` | `Union[str, List[str]]` | The name(s) of the adapter(s) to be activated. Can be a single adapter name or a list of adapter names. | required |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the model is not configured with LoRA or if the adapter name is not found. |
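A hypothetical usage sketch follows; it assumes the model was compiled with LoRA adapters named "summarize" and "translate" (how adapters are attached is outside the scope of this page).

```python
# Activate a single adapter by name.
model.set_adapter("summarize")

# Or activate several adapters at once.
model.set_adapter(["summarize", "translate"])

# A ValueError is raised if the model has no LoRA configuration
# or if an adapter name is not found.
```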
Classes
RBLNGemma3ForCausalLMConfig
Bases: RBLNDecoderOnlyModelForCausalLMConfig
Functions
__init__(use_position_ids=None, use_attention_mask=None, prefill_chunk_size=None, image_prefill_chunk_size=None, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `use_position_ids` | `Optional[bool]` | Whether or not to use `position_ids`. | `None` |
| `use_attention_mask` | `Optional[bool]` | Whether or not to use `attention_mask`. | `None` |
| `prefill_chunk_size` | `Optional[int]` | The chunk size used during the prefill phase for processing input sequences. Defaults to 256. Must be a positive integer divisible by 64. Affects prefill performance and memory usage. | `None` |
| `image_prefill_chunk_size` | `Optional[int]` | The chunk size used during the prefill phase for processing images. This config is used when image inputs are processed during prefill. | `None` |
| `kwargs` | `Any` | Additional arguments passed to the parent `RBLNDecoderOnlyModelForCausalLMConfig`. | `{}` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If `prefill_chunk_size` is not a positive integer divisible by 64. |
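A sketch of constructing this configuration and passing it at compile time; the checkpoint id and specific values are assumptions, and `prefill_chunk_size` must be a positive multiple of 64.

```python
from optimum.rbln import RBLNGemma3ForCausalLM, RBLNGemma3ForCausalLMConfig

# Build an explicit compilation/runtime configuration.
rbln_config = RBLNGemma3ForCausalLMConfig(
    prefill_chunk_size=256,   # must be a positive multiple of 64
    use_attention_mask=True,
)

model = RBLNGemma3ForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    export=True,
    rbln_config=rbln_config,
)
```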
load(path, **kwargs) (classmethod)
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the RBLNModelConfig file or directory containing the config file. | required |
| `kwargs` | `Any` | Additional keyword arguments to override configuration values. Keys starting with `rbln_` will have the prefix removed and be used to update the configuration. | `{}` |
Returns:

| Name | Type | Description |
|---|---|---|
| RBLNModelConfig | `RBLNModelConfig` | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
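For example, a saved configuration can be reloaded and selectively overridden with `rbln_`-prefixed keyword arguments; the directory path below is an assumption.

```python
from optimum.rbln import RBLNGemma3ForCausalLMConfig

# Load the config saved alongside a compiled model and override one value;
# the 'rbln_' prefix is stripped before the key is applied.
config = RBLNGemma3ForCausalLMConfig.load(
    "gemma-3-1b-it-rbln",
    rbln_prefill_chunk_size=512,
)
```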
RBLNGemma3ForConditionalGenerationConfig
Bases: RBLNModelConfig
Functions
__init__(batch_size=None, vision_tower=None, language_model=None, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch_size` | `Optional[int]` | The batch size for inference. Defaults to 1. | `None` |
| `vision_tower` | `Optional[RBLNModelConfig]` | Configuration for the vision encoder component. | `None` |
| `language_model` | `Optional[RBLNModelConfig]` | Configuration for the language model component. | `None` |
| `kwargs` | `Any` | Additional arguments passed to the parent RBLNModelConfig. | `{}` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If `batch_size` is not a positive integer. |
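A minimal construction sketch, assuming batch size 1 and default sub-module configurations (the checkpoint id is a placeholder); `vision_tower` and `language_model` can be supplied as nested configurations for the two components.

```python
from optimum.rbln import (
    RBLNGemma3ForConditionalGeneration,
    RBLNGemma3ForConditionalGenerationConfig,
)

# Configure the multimodal model; vision_tower and language_model are left as
# None here so the default component configurations are used.
rbln_config = RBLNGemma3ForConditionalGenerationConfig(batch_size=1)

model = RBLNGemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    export=True,
    rbln_config=rbln_config,
)
```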
load(path, **kwargs) (classmethod)
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the RBLNModelConfig file or directory containing the config file. | required |
| `kwargs` | `Any` | Additional keyword arguments to override configuration values. Keys starting with `rbln_` will have the prefix removed and be used to update the configuration. | `{}` |
Returns:

| Name | Type | Description |
|---|---|---|
| RBLNModelConfig | `RBLNModelConfig` | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.