T5

T5 (Text-to-Text Transfer Transformer) is a transformer-based language model that frames every NLP task as a text-to-text problem. It was pre-trained on a diverse mix of unsupervised and supervised tasks, which makes it applicable to a wide range of language tasks. RBLN NPUs can accelerate T5 model inference through Optimum RBLN.
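
As a quick illustration of the text-to-text framing, the sketch below compiles a small checkpoint and runs a translation prompt. It assumes the matching HuggingFace tokenizer and the usual transformers generate() conventions; depending on the compiled sequence lengths, the tokenizer call may also need fixed-length padding.

from transformers import AutoTokenizer
from optimum.rbln import RBLNT5ForConditionalGeneration

# Compile the model for RBLN NPUs on first load.
model = RBLNT5ForConditionalGeneration.from_pretrained("google-t5/t5-small", export=True)
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")

# The task is part of the input text: here, translation.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))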

Key Classes

RBLNT5EncoderModel: T5 encoder-only model for feature extraction on RBLN NPUs.
RBLNT5ForConditionalGeneration: T5 model with a language modeling head for sequence-to-sequence generation on RBLN NPUs.
RBLNT5EncoderModelConfig: Configuration class for RBLNT5EncoderModel.
RBLNT5ForConditionalGenerationConfig: Configuration class for RBLNT5ForConditionalGeneration.

API Reference

Classes

RBLNT5EncoderModel

Bases: RBLNTransformerEncoderForFeatureExtraction

The T5 Model transformer with an encoder-only architecture for feature extraction. This model inherits from RBLNTransformerEncoderForFeatureExtraction; check the superclass documentation for the generic methods the library implements for all its models.

Important Note

This model supports various sizes of the T5EncoderModel. For optimal performance, it is highly recommended to adjust the tensor parallelism setting based on the model size. Please refer to the Optimum RBLN Overview for guidance on choosing the appropriate tensor parallelism size for your model.

Examples:

from optimum.rbln import RBLNT5EncoderModel

model = RBLNT5EncoderModel.from_pretrained(
    "sentence-transformers/sentence-t5-xxl",
    export=True,
    rbln_tensor_parallel_size=4,
)

model.save_pretrained("compiled-sentence-t5-xxl")
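
Once compiled and saved, the model can be reloaded and used like a regular encoder. A minimal feature-extraction sketch, assuming the matching HuggingFace tokenizer and the standard last_hidden_state output convention; depending on the compiled max_seq_len, the tokenizer call may also need fixed-length padding:

from transformers import AutoTokenizer
from optimum.rbln import RBLNT5EncoderModel

# Reload the compiled artifact; no recompilation is needed.
model = RBLNT5EncoderModel.from_pretrained("compiled-sentence-t5-xxl", export=False)
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/sentence-t5-xxl")

inputs = tokenizer("RBLN NPUs accelerate T5 inference.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)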

Functions

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() method works the same way as in the HuggingFace transformers library. Use it to load a pre-trained model from the HuggingFace hub or a local path and convert it into an RBLN model that runs on RBLN NPUs.

Parameters:

model_id (Union[str, Path], required):
    The model id of the pre-trained model to be loaded. This can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler.

export (bool, defaults to False):
    Whether the model should be compiled.

rbln_config (Optional[Union[Dict, RBLNModelConfig]], defaults to None):
    Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNT5EncoderModelConfig for this model). For detailed configuration options, see the configuration class documentation below.

**kwargs (Dict[str, Any]):
    Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config; the remaining arguments are passed to the HuggingFace library.

Returns:

Self:
    An RBLN model instance ready for inference on RBLN NPU devices.
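
The rbln_config argument also accepts a plain dictionary, and any keyword argument prefixed with 'rbln_' is routed into it. A minimal sketch of both loading paths, using an illustrative smaller checkpoint:

from optimum.rbln import RBLNT5EncoderModel

# Pass the compilation/runtime configuration as a dictionary...
model = RBLNT5EncoderModel.from_pretrained(
    "sentence-transformers/sentence-t5-large",
    export=True,
    rbln_config={"batch_size": 1, "max_seq_len": 512},
)

# ...or equivalently via 'rbln_'-prefixed keyword arguments.
model = RBLNT5EncoderModel.from_pretrained(
    "sentence-transformers/sentence-t5-large",
    export=True,
    rbln_batch_size=1,
    rbln_max_seq_len=512,
)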

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

model (PreTrainedModel, required):
    The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.

rbln_config (Optional[Union[Dict, RBLNModelConfig]], defaults to None):
    Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNT5EncoderModelConfig for this model). For detailed configuration options, see the configuration class documentation below.

**kwargs (Dict[str, Any]):
    Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config; the remaining arguments are passed to the HuggingFace library.

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

Self:
    An RBLN model instance ready for inference on RBLN NPU devices.
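
When the PyTorch model is already in memory, for example after fine-tuning, from_model skips the extra load that from_pretrained would perform. A minimal sketch, assuming a standard T5 encoder checkpoint:

from transformers import T5EncoderModel
from optimum.rbln import RBLNT5EncoderModel

# Load (or fine-tune) the PyTorch model first, then compile it directly.
torch_model = T5EncoderModel.from_pretrained("google-t5/t5-base")
model = RBLNT5EncoderModel.from_model(torch_model, rbln_max_seq_len=512)
model.save_pretrained("compiled-t5-base-encoder")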

save_pretrained(save_directory)

Saves a model and its configuration file to a directory so that it can be reloaded using the from_pretrained class method.

Parameters:

save_directory (Union[str, PathLike], required):
    The directory to save the model and its configuration files. It will be created if it doesn't exist.

RBLNT5ForConditionalGeneration

Bases: RBLNModelForSeq2SeqLM

The T5 Model transformer with a language modeling head for conditional generation. This model inherits from RBLNModelForSeq2SeqLM; check the superclass documentation for the generic methods the library implements for all its models.

Important Note

This model supports various sizes of the T5ForConditionalGeneration. For optimal performance, it is highly recommended to adjust the tensor parallelism setting based on the model size. Please refer to the Optimum RBLN Overview for guidance on choosing the appropriate tensor parallelism size for your model.

Examples:

from optimum.rbln import RBLNT5ForConditionalGeneration

model = RBLNT5ForConditionalGeneration.from_pretrained(
    "google-t5/t5-11b",
    export=True,
    rbln_tensor_parallel_size=4,
)

model.save_pretrained("compiled-t5-11b")
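
Once saved, the compiled model can be reloaded for generation without recompiling. A minimal sketch, assuming the matching tokenizer and the usual transformers generate() conventions:

from transformers import AutoTokenizer
from optimum.rbln import RBLNT5ForConditionalGeneration

# Reload the compiled artifact produced above; no recompilation happens here.
model = RBLNT5ForConditionalGeneration.from_pretrained("compiled-t5-11b", export=False)
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-11b")

inputs = tokenizer("summarize: T5 casts every NLP task as text generation, so one model handles translation, summarization, and classification alike.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))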

Functions

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() method works the same way as in the HuggingFace transformers library. Use it to load a pre-trained model from the HuggingFace hub or a local path and convert it into an RBLN model that runs on RBLN NPUs.

Parameters:

model_id (Union[str, Path], required):
    The model id of the pre-trained model to be loaded. This can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler.

export (bool, defaults to False):
    Whether the model should be compiled.

rbln_config (Optional[Union[Dict, RBLNModelConfig]], defaults to None):
    Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNT5ForConditionalGenerationConfig for this model). For detailed configuration options, see the configuration class documentation below.

**kwargs (Dict[str, Any]):
    Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config; the remaining arguments are passed to the HuggingFace library.

Returns:

Self:
    An RBLN model instance ready for inference on RBLN NPU devices.

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

model (PreTrainedModel, required):
    The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.

rbln_config (Optional[Union[Dict, RBLNModelConfig]], defaults to None):
    Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNT5ForConditionalGenerationConfig for this model). For detailed configuration options, see the configuration class documentation below.

**kwargs (Dict[str, Any]):
    Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config; the remaining arguments are passed to the HuggingFace library.

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

Self:
    An RBLN model instance ready for inference on RBLN NPU devices.

save_pretrained(save_directory)

Saves a model and its configuration file to a directory so that it can be reloaded using the from_pretrained class method.

Parameters:

save_directory (Union[str, PathLike], required):
    The directory to save the model and its configuration files. It will be created if it doesn't exist.

Configuration Classes

RBLNT5EncoderModelConfig

Bases: RBLNTransformerEncoderForFeatureExtractionConfig

Functions

__init__(max_seq_len=None, batch_size=None, model_input_names=None, **kwargs)

Parameters:

max_seq_len (Optional[int], defaults to None):
    Maximum sequence length supported by the model.

batch_size (Optional[int], defaults to None):
    The batch size for inference. Resolves to 1 when not specified.

model_input_names (Optional[List[str]], defaults to None):
    Names of the input tensors for the model. Defaults to the class-specific rbln_model_input_names if not provided.

**kwargs (Dict[str, Any]):
    Additional arguments passed to the parent RBLNModelConfig.

Raises:

ValueError:
    If batch_size is not a positive integer.
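
A minimal sketch of building this configuration explicitly and handing it to from_pretrained (the checkpoint and values are illustrative):

from optimum.rbln import RBLNT5EncoderModel, RBLNT5EncoderModelConfig

# Construct the configuration object instead of passing a dictionary.
config = RBLNT5EncoderModelConfig(max_seq_len=512, batch_size=1)

model = RBLNT5EncoderModel.from_pretrained(
    "sentence-transformers/sentence-t5-large",
    export=True,
    rbln_config=config,
)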

RBLNT5ForConditionalGenerationConfig

Bases: RBLNModelForSeq2SeqLMConfig

Functions

__init__(batch_size=None, enc_max_seq_len=None, dec_max_seq_len=None, use_attention_mask=None, pad_token_id=None, **kwargs)

Parameters:

batch_size (Optional[int], defaults to None):
    The batch size for inference. Resolves to 1 when not specified.

enc_max_seq_len (Optional[int], defaults to None):
    Maximum sequence length for the encoder.

dec_max_seq_len (Optional[int], defaults to None):
    Maximum sequence length for the decoder.

use_attention_mask (Optional[bool], defaults to None):
    Whether to use attention masks during inference. This is automatically set to True for RBLN-CA02 devices.

pad_token_id (Optional[int], defaults to None):
    The ID of the padding token in the vocabulary.

**kwargs (Dict[str, Any]):
    Additional arguments passed to the parent RBLNModelConfig.
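
Likewise, a minimal sketch for the generation configuration (the checkpoint and values are illustrative; tensor_parallel_size is assumed to be a field of the parent RBLNModelConfig, matching the rbln_tensor_parallel_size shorthand used in the examples above):

from optimum.rbln import (
    RBLNT5ForConditionalGeneration,
    RBLNT5ForConditionalGenerationConfig,
)

config = RBLNT5ForConditionalGenerationConfig(
    batch_size=1,
    enc_max_seq_len=512,
    dec_max_seq_len=256,
    tensor_parallel_size=4,  # assumed parent RBLNModelConfig field
)

model = RBLNT5ForConditionalGeneration.from_pretrained(
    "google-t5/t5-11b",
    export=True,
    rbln_config=config,
)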