T5¶
T5 (Text-to-Text Transfer Transformer) is a transformer-based language model that frames every NLP task as a text-to-text problem. It is pre-trained on a mixture of unsupervised and supervised tasks, which makes it highly flexible across a wide range of language tasks. RBLN NPUs can accelerate T5 model inference using Optimum RBLN.
Key Classes¶
RBLNT5EncoderModel
: T5 encoder model implementation for feature extraction on RBLN NPUs
RBLNT5EncoderModelConfig
: Configuration class for the T5 encoder model
RBLNT5ForConditionalGeneration
: T5 model for conditional text generation tasks
RBLNT5ForConditionalGenerationConfig
: Configuration class for the T5 conditional generation model
API 참조¶
Classes¶
RBLNT5EncoderModel¶
Bases: RBLNTransformerEncoderForFeatureExtraction
The T5 Model transformer with an encoder-only architecture for feature extraction.
This model inherits from [RBLNTransformerEncoderForFeatureExtraction]. Check the superclass documentation for the generic methods the library implements for all its models.
Important Note
This model supports various sizes of the T5EncoderModel. For optimal performance, it is highly recommended to adjust the tensor parallelism setting based on the model size. Please refer to the Optimum RBLN Overview for guidance on choosing the appropriate tensor parallelism size for your model.
Examples:
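A minimal usage sketch follows. The checkpoint name is illustrative, and compilation requires an RBLN NPU environment with the optimum-rbln package installed; consult your installed version for the exact supported options.

```python
from transformers import AutoTokenizer
from optimum.rbln import RBLNT5EncoderModel

model_id = "sentence-transformers/sentence-t5-base"  # illustrative checkpoint

# export=True compiles the HuggingFace checkpoint for the RBLN NPU.
model = RBLNT5EncoderModel.from_pretrained(model_id, export=True)
model.save_pretrained("t5-encoder-rbln")  # save the compiled artifact for reuse

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("RBLN NPUs accelerate T5 inference.", return_tensors="pt")

# Encoder-only forward pass; outputs contain the hidden states
# of shape (batch, seq_len, hidden_size).
outputs = model(**inputs)
```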
Functions¶
from_pretrained(model_id, export=False, rbln_config=None, **kwargs)
classmethod¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Users can use this function to load a pre-trained model from the HuggingFace library and convert it into an RBLN model to be run on RBLN NPUs.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or the model id of a model compiled with the RBLN Compiler. | required
export | bool | A boolean flag indicating whether the model should be compiled. | False
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNT5EncoderModelConfig). | None
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {}
Returns:
Type | Description
---|---
Self | An RBLN model instance ready for inference on RBLN NPU devices.
from_model(model, *, rbln_config=None, **kwargs)
classmethod¶
Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
Name | Type | Description | Default
---|---|---|---
model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNT5EncoderModelConfig). | None
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {}
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
Type | Description
---|---
Self | An RBLN model instance ready for inference on RBLN NPU devices.
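The from_model() path can be sketched as follows, assuming an already-loaded PyTorch model. The checkpoint name and the rbln_config dictionary keys (max_seq_len, batch_size) are illustrative and may vary by optimum-rbln version; compilation requires an RBLN NPU environment.

```python
from transformers import T5EncoderModel
from optimum.rbln import RBLNT5EncoderModel

# Start from a regular PyTorch model (e.g., after fine-tuning)...
pt_model = T5EncoderModel.from_pretrained("google-t5/t5-small")

# ...then convert and compile it directly, passing rbln_config as a dict.
rbln_model = RBLNT5EncoderModel.from_model(
    pt_model,
    rbln_config={"max_seq_len": 512, "batch_size": 1},  # illustrative keys
)
rbln_model.save_pretrained("t5-small-encoder-rbln")
```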
save_pretrained(save_directory)¶
Saves a model and its configuration file to a directory so that it can be re-loaded using the [from_pretrained] class method.
Parameters:
Name | Type | Description | Default
---|---|---|---
save_directory | Union[str, PathLike] | The directory in which to save the model and its configuration files. Will be created if it doesn't exist. | required
RBLNT5ForConditionalGeneration¶
Bases: RBLNModelForSeq2SeqLM
The T5 Model transformer with a language modeling head for conditional generation.
This model inherits from [RBLNModelForSeq2SeqLM]. Check the superclass documentation for the generic methods the library implements for all its models.
Important Note
This model supports various sizes of the T5ForConditionalGeneration. For optimal performance, it is highly recommended to adjust the tensor parallelism setting based on the model size. Please refer to the Optimum RBLN Overview for guidance on choosing the appropriate tensor parallelism size for your model.
Examples:
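A minimal generation sketch follows. The checkpoint name and generation settings are illustrative, and running it requires an RBLN NPU environment with the optimum-rbln package installed.

```python
from transformers import AutoTokenizer
from optimum.rbln import RBLNT5ForConditionalGeneration

model_id = "google-t5/t5-small"  # illustrative checkpoint

# export=True compiles the seq2seq model (encoder + decoder) for the RBLN NPU.
model = RBLNT5ForConditionalGeneration.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# T5 uses task prefixes for conditional generation.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```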
Functions¶
from_pretrained(model_id, export=False, rbln_config=None, **kwargs)
classmethod¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Users can use this function to load a pre-trained model from the HuggingFace library and convert it into an RBLN model to be run on RBLN NPUs.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub or a local path, or the model id of a model compiled with the RBLN Compiler. | required
export | bool | A boolean flag indicating whether the model should be compiled. | False
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNT5ForConditionalGenerationConfig). | None
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {}
Returns:
Type | Description
---|---
Self | An RBLN model instance ready for inference on RBLN NPU devices.
from_model(model, *, rbln_config=None, **kwargs)
classmethod¶
Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
Name | Type | Description | Default
---|---|---|---
model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNT5ForConditionalGenerationConfig). | None
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {}
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
Type | Description
---|---
Self | An RBLN model instance ready for inference on RBLN NPU devices.
save_pretrained(save_directory)¶
Saves a model and its configuration file to a directory so that it can be re-loaded using the [from_pretrained] class method.
Parameters:
Name | Type | Description | Default
---|---|---|---
save_directory | Union[str, PathLike] | The directory in which to save the model and its configuration files. Will be created if it doesn't exist. | required
Classes¶
RBLNT5EncoderModelConfig¶
Bases: RBLNTransformerEncoderForFeatureExtractionConfig
Functions¶
__init__(max_seq_len=None, batch_size=None, model_input_names=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
max_seq_len | Optional[int] | Maximum sequence length supported by the model. | None
batch_size | Optional[int] | The batch size for inference. Defaults to 1. | None
model_input_names | Optional[List[str]] | Names of the input tensors for the model. Defaults to the class-specific rbln_model_input_names if not provided. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Raises:
Type | Description
---|---
ValueError | If batch_size is not a positive integer.
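A short sketch of passing an explicit config object instead of rbln_-prefixed keyword arguments. The checkpoint name and parameter values are illustrative, and compilation requires an RBLN NPU environment.

```python
from optimum.rbln import RBLNT5EncoderModel, RBLNT5EncoderModelConfig

# Build the compilation/runtime config explicitly.
config = RBLNT5EncoderModelConfig(max_seq_len=512, batch_size=2)

# Equivalent to passing rbln_max_seq_len=512, rbln_batch_size=2 as kwargs.
model = RBLNT5EncoderModel.from_pretrained(
    "google-t5/t5-small",  # illustrative checkpoint
    export=True,
    rbln_config=config,
)
```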
RBLNT5ForConditionalGenerationConfig¶
Bases: RBLNModelForSeq2SeqLMConfig
Functions¶
__init__(batch_size=None, enc_max_seq_len=None, dec_max_seq_len=None, use_attention_mask=None, pad_token_id=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
batch_size | Optional[int] | The batch size for inference. Defaults to 1. | None
enc_max_seq_len | Optional[int] | Maximum sequence length for the encoder. | None
dec_max_seq_len | Optional[int] | Maximum sequence length for the decoder. | None
use_attention_mask | Optional[bool] | Whether to use attention masks during inference. This is automatically set to True for RBLN-CA02 devices. | None
pad_token_id | Optional[int] | The ID of the padding token in the vocabulary. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}