Gemma3

Gemma3 is a multimodal model that takes text and images as input and generates text as output, with open weights available for both the pre-trained and instruction-tuned variants. Gemma3 model inference can be accelerated on RBLN NPUs using Optimum RBLN.
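For example, a Gemma3 checkpoint can be compiled and queried in a few lines. The sketch below is illustrative rather than prescriptive: the model id, image URL, and generation settings are placeholder choices, and the chat-message format follows the standard HuggingFace Gemma3 usage.

```python
from transformers import AutoProcessor
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Compile the HuggingFace checkpoint for RBLN NPUs (model id is illustrative).
model = RBLNGemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    export=True,
)
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")

# Multimodal chat input: one image plus a text instruction (placeholder URL).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```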

API Reference

Classes

RBLNGemma3ForConditionalGeneration

Bases: RBLNModel

Functions

from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

  - model (PreTrainedModel, required): The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.
  - config (Optional[PretrainedConfig], default None): The configuration object associated with the model.
  - rbln_config (Optional[Union[RBLNModelConfig, Dict]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Any, default {}): Additional keyword arguments. Arguments with the prefix rbln_ are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

  - RBLNModel: A RBLN model instance ready for inference on RBLN NPU devices.
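As a minimal sketch of this flow (the model id and batch size are illustrative, and the dict form of rbln_config is one of the two documented options):

```python
from transformers import Gemma3ForConditionalGeneration
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Load the original PyTorch model through transformers first.
hf_model = Gemma3ForConditionalGeneration.from_pretrained("google/gemma-3-4b-it")

# Convert and compile it; rbln_config may also be a config class instance.
rbln_model = RBLNGemma3ForConditionalGeneration.from_model(
    hf_model,
    rbln_config={"batch_size": 1},
)
```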

from_pretrained(model_id, export=None, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in the same way as in the HuggingFace transformers library. Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.

Parameters:

  - model_id (Union[str, Path], required): The model id of the pre-trained model to be loaded. It can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler.
  - export (Optional[bool], default None): A boolean flag indicating whether the model should be compiled. If None, it is determined based on the existence of compiled model files in model_id.
  - rbln_config (Optional[Union[Dict, RBLNModelConfig]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Any, default {}): Additional keyword arguments. Arguments with the prefix rbln_ are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

Returns:

  - RBLNModel: A RBLN model instance ready for inference on RBLN NPU devices.
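The two common call patterns might look as follows; the model id and local path are placeholders:

```python
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Compile from HuggingFace weights. Keyword arguments prefixed with rbln_
# (e.g. rbln_batch_size) are routed into rbln_config.
model = RBLNGemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    export=True,
    rbln_batch_size=1,
)

# Reload an already-compiled model. With export left as None, compilation
# is skipped when compiled model files are found under the given path.
model = RBLNGemma3ForConditionalGeneration.from_pretrained("./gemma3-rbln")
```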

save_pretrained(save_directory, push_to_hub=False, **kwargs)

Saves a model and its configuration file to a directory, so that it can be re-loaded using the RBLNBaseModel.from_pretrained() class method.

Parameters:

  - save_directory (Union[str, Path], required): Directory where the model file will be saved.
  - push_to_hub (bool, default False): Whether or not to push your model to the HuggingFace model hub after saving it.
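For instance, continuing from a model compiled as above (the directory path is a placeholder):

```python
from optimum.rbln import RBLNGemma3ForConditionalGeneration

# Persist the compiled model and its configuration to disk...
model.save_pretrained("./gemma3-rbln")

# ...so it can later be reloaded without recompiling.
reloaded = RBLNGemma3ForConditionalGeneration.from_pretrained("./gemma3-rbln")
```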

RBLNGemma3ForCausalLM

Bases: RBLNDecoderOnlyModelForCausalLM

The Gemma3 Model transformer with a language modeling head (linear layer) on top. This model inherits from RBLNDecoderOnlyModelForCausalLM. Check the superclass documentation for the generic methods the library implements for all its models.

A class to convert and run a pre-trained transformers-based Gemma3ForCausalLM model on RBLN devices. It implements the methods to convert a pre-trained transformers Gemma3ForCausalLM model into a RBLN transformer model by:

  - transferring the checkpoint weights of the original into an optimized RBLN graph,
  - compiling the resulting graph using the RBLN compiler.

Functions

generate(input_ids, attention_mask=None, max_length=None, **kwargs)

The generate function is used in the same way as in the HuggingFace transformers library. Users can use this function to generate text from the model.

Parameters:

  - input_ids (LongTensor, required): The input ids to the model.
  - attention_mask (Optional[LongTensor], default None): The attention mask to the model.
  - max_length (Optional[int], default None): The maximum length of the generated text.
  - kwargs (default {}): Additional arguments passed to the generate function. See the HuggingFace transformers documentation for more details.

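A minimal text-generation sketch for this text-only variant (the model id and prompt are illustrative):

```python
from transformers import AutoTokenizer
from optimum.rbln import RBLNGemma3ForCausalLM

# Compile the text-only Gemma3 variant (model id is illustrative).
model = RBLNGemma3ForCausalLM.from_pretrained("google/gemma-3-1b-it", export=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
output_ids = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=128,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
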
from_pretrained(model_id, export=None, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in the same way as in the HuggingFace transformers library. Users can use this function to load a pre-trained model from the HuggingFace library and convert it to a RBLN model to be run on RBLN NPUs.

Parameters:

  - model_id (Union[str, Path], required): The model id of the pre-trained model to be loaded. It can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler.
  - export (Optional[bool], default None): A boolean flag indicating whether the model should be compiled. If None, it is determined based on the existence of compiled model files in model_id.
  - rbln_config (Optional[Union[Dict, RBLNModelConfig]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Any, default {}): Additional keyword arguments. Arguments with the prefix rbln_ are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

Returns:

  - RBLNModel: A RBLN model instance ready for inference on RBLN NPU devices.

save_pretrained(save_directory, push_to_hub=False, **kwargs)

Saves a model and its configuration file to a directory, so that it can be re-loaded using the RBLNBaseModel.from_pretrained() class method.

Parameters:

  - save_directory (Union[str, Path], required): Directory where the model file will be saved.
  - push_to_hub (bool, default False): Whether or not to push your model to the HuggingFace model hub after saving it.
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

  - model (PreTrainedModel, required): The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.
  - config (Optional[PretrainedConfig], default None): The configuration object associated with the model.
  - rbln_config (Optional[Union[RBLNModelConfig, Dict]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Any, default {}): Additional keyword arguments. Arguments with the prefix rbln_ are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

  - RBLNModel: A RBLN model instance ready for inference on RBLN NPU devices.

Classes

RBLNGemma3ForCausalLMConfig

Bases: RBLNDecoderOnlyModelForCausalLMConfig

Functions

__init__(use_position_ids=None, use_attention_mask=None, prefill_chunk_size=None, image_prefill_chunk_size=None, **kwargs)

Parameters:

  - use_position_ids (Optional[bool], default None): Whether or not to use position_ids, which are the indices of each input sequence token in the position embeddings.
  - use_attention_mask (Optional[bool], default None): Whether or not to use attention_mask to avoid performing attention on padding token indices.
  - prefill_chunk_size (Optional[int], default None): The chunk size used during the prefill phase for processing input sequences. Defaults to 256. Must be a positive integer divisible by 64. Affects prefill performance and memory usage.
  - image_prefill_chunk_size (Optional[int], default None): The chunk size used during the prefill phase for processing images. This config is used when use_image_prefill is True. Currently, prefill_chunk_size and image_prefill_chunk_size must be set to the same value.
  - kwargs (Any, default {}): Additional arguments passed to the parent RBLNDecoderOnlyModelForCausalLMConfig.

Raises:

  - ValueError: If use_attention_mask or use_position_ids is False.
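A sketch of building this config explicitly and passing it at compile time; the values are illustrative, and since Gemma3 rejects use_attention_mask=False and use_position_ids=False, those defaults are left untouched:

```python
from optimum.rbln import RBLNGemma3ForCausalLM, RBLNGemma3ForCausalLMConfig

# prefill_chunk_size must be a positive multiple of 64 (defaults to 256).
config = RBLNGemma3ForCausalLMConfig(prefill_chunk_size=256)

model = RBLNGemma3ForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",  # illustrative model id
    export=True,
    rbln_config=config,
)
```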

load(path, **kwargs) classmethod

Load a RBLNModelConfig from a path.

Parameters:

  - path (str, required): Path to the RBLNModelConfig file or directory containing the config file.
  - kwargs (Any, default {}): Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration.

Returns:

  - RBLNModelConfig: The loaded configuration instance.

Note

This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
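For example (the path is a placeholder):

```python
from optimum.rbln import RBLNGemma3ForCausalLMConfig

# Load a stored config; the rbln_ prefix is stripped, so batch_size=2
# overrides the value saved in the config file.
config = RBLNGemma3ForCausalLMConfig.load("./gemma3-rbln", rbln_batch_size=2)
```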

RBLNGemma3ForConditionalGenerationConfig

Bases: RBLNModelConfig

Functions

__init__(batch_size=None, vision_tower=None, language_model=None, **kwargs)

Parameters:

  - batch_size (Optional[int], default None): The batch size for inference. Defaults to 1.
  - vision_tower (Optional[RBLNModelConfig], default None): Configuration for the vision encoder component.
  - language_model (Optional[RBLNModelConfig], default None): Configuration for the language model component.
  - kwargs (Any, default {}): Additional arguments passed to the parent RBLNModelConfig.

Raises:

  - ValueError: If batch_size is not a positive integer.
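A sketch of supplying per-component configs; it assumes the language model component accepts an RBLNGemma3ForCausalLMConfig, and all values are illustrative:

```python
from optimum.rbln import (
    RBLNGemma3ForCausalLMConfig,
    RBLNGemma3ForConditionalGeneration,
    RBLNGemma3ForConditionalGenerationConfig,
)

# Configure the wrapper and its language model component explicitly;
# vision_tower is omitted here and falls back to its defaults.
config = RBLNGemma3ForConditionalGenerationConfig(
    batch_size=1,
    language_model=RBLNGemma3ForCausalLMConfig(prefill_chunk_size=256),
)

model = RBLNGemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",  # illustrative model id
    export=True,
    rbln_config=config,
)
```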

load(path, **kwargs) classmethod

Load a RBLNModelConfig from a path.

Parameters:

  - path (str, required): Path to the RBLNModelConfig file or directory containing the config file.
  - kwargs (Any, default {}): Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration.

Returns:

  - RBLNModelConfig: The loaded configuration instance.

Note

This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.