Whisper¶
Whisper is a general-purpose speech recognition model trained on a diverse range of audio datasets. It can perform multilingual speech recognition, speech translation, and language identification. The model uses an encoder-decoder architecture and delivers robust performance across a wide variety of acoustic environments. RBLN NPUs can accelerate Whisper model inference using Optimum RBLN.
API 참조¶
Classes¶
RBLNWhisperForConditionalGeneration
¶
Bases: RBLNModel, RBLNWhisperGenerationMixin
Whisper model for speech recognition and transcription optimized for RBLN NPU.
This model inherits from [RBLNModel]. It implements the methods to convert and run
pre-trained transformers-based Whisper models on RBLN devices by:
- transferring the checkpoint weights of the original model into an optimized RBLN graph,
- compiling the resulting graph using the RBLN compiler.
Example (Short form):
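A minimal short-form transcription sketch (assuming `optimum-rbln` and `transformers` are installed and an RBLN NPU is available; the model id and the silent dummy waveform are illustrative, not prescriptive):

```python
import numpy as np
from transformers import AutoProcessor
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Compile the HuggingFace checkpoint for the RBLN NPU (export=True triggers compilation).
processor = AutoProcessor.from_pretrained("openai/whisper-small")
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    export=True,
)

# A 1-second silent waveform stands in for real 16 kHz audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```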
Functions¶
from_model(model, config=None, rbln_config=None, model_save_dir=None, subfolder='', **kwargs)
classmethod
¶
Converts and compiles a pre-trained HuggingFace library model into a RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
| config | Optional[PretrainedConfig] | The configuration object associated with the model. | None |
| rbln_config | Optional[Union[RBLNModelConfig, Dict]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNWhisperForConditionalGenerationConfig). | None |
| kwargs | Any | Additional keyword arguments. Arguments with the prefix rbln_ are forwarded to the RBLN model configuration. | {} |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
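A hedged sketch of converting an already-instantiated PyTorch model with from_model (the model id and output directory are illustrative):

```python
from transformers import WhisperForConditionalGeneration
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Load the original PyTorch checkpoint, then convert and compile it in one call.
torch_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
rbln_model = RBLNWhisperForConditionalGeneration.from_model(
    torch_model,
    rbln_config={"batch_size": 1},  # forwarded to the RBLN compilation config
)

# Persist the compiled artifacts so they can be reloaded with from_pretrained().
rbln_model.save_pretrained("whisper-tiny-rbln")
```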
generate(input_features=None, attention_mask=None, generation_config=None, return_segments=None, return_timestamps=None, return_token_timestamps=None, **kwargs)
¶
The generate function is used in its standard form, as in the HuggingFace transformers library. Users can call it to generate text from the model; see the HuggingFace transformers documentation for more details.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_features | Tensor | The input features to the model. | None |
| attention_mask | Tensor | Attention mask; must be passed when doing long-form transcription with a batch size > 1. | None |
| generation_config | GenerationConfig | The generation configuration used as the base parametrization for the generation call. **kwargs passed to generate that match attributes of generation_config will override them. If generation_config is not provided, the default is used, with the following loading priority: 1) the generation_config.json model file, if it exists; 2) the model configuration. Note that unspecified parameters inherit GenerationConfig's default values. | None |
| return_segments | bool | Whether to return segments. | None |
| return_timestamps | bool | Whether to return timestamps with the text. For audio longer than 30 seconds, return_timestamps=True must be set. | None |
| return_token_timestamps | bool | Whether to return token timestamps. | None |
| kwargs | dict[str, Any] | Additional arguments passed to the generate function. | {} |
Returns:

| Type | Description |
|---|---|
| Union[ModelOutput, Dict[str, Any], LongTensor] | Transcribes or translates log-mel input features into a sequence of auto-regressively generated token ids. |
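A hedged long-form sketch: for audio longer than 30 seconds, return_timestamps=True is required, and the attention mask should be passed for batched long-form inputs (the model id, the silent 60-second waveform, and the processor flags are illustrative assumptions based on the standard HuggingFace Whisper long-form workflow):

```python
import numpy as np
from transformers import AutoProcessor
from optimum.rbln import RBLNWhisperForConditionalGeneration

processor = AutoProcessor.from_pretrained("openai/whisper-small")
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    export=True,
)

# 60 seconds of dummy 16 kHz audio; keep the full length (no 30 s truncation).
audio = np.zeros(16000 * 60, dtype=np.float32)
inputs = processor(
    audio,
    sampling_rate=16000,
    return_tensors="pt",
    truncation=False,
    padding="longest",
    return_attention_mask=True,
)

outputs = model.generate(
    input_features=inputs.input_features,
    attention_mask=inputs["attention_mask"],
    return_timestamps=True,  # required for audio longer than 30 seconds
    return_segments=True,    # also return per-segment timing information
)
```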
from_pretrained(model_id, export=None, rbln_config=None, **kwargs)
classmethod
¶
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library.
Users can use it to load a pre-trained model from the HuggingFace library and convert it into a RBLN model to be run on RBLN NPUs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| export | Optional[bool] | A boolean flag indicating whether the model should be compiled. If None, it is determined from the existence of compiled model files at model_id. | None |
| rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or as an instance of the model's configuration class (e.g., RBLNWhisperForConditionalGenerationConfig). | None |
| kwargs | Any | Additional keyword arguments. Arguments with the prefix rbln_ are forwarded to the RBLN model configuration. | {} |
Returns:

| Type | Description |
|---|---|
| RBLNModel | A RBLN model instance ready for inference on RBLN NPU devices. |
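A hedged sketch of the two loading paths the export flag controls (model id and directory name are illustrative):

```python
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Path 1: compile a HuggingFace checkpoint on the fly.
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-base",
    export=True,
    rbln_config={"batch_size": 1},
)
model.save_pretrained("whisper-base-rbln")

# Path 2: load previously compiled artifacts (no recompilation needed).
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "whisper-base-rbln",
    export=False,
)
```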
save_pretrained(save_directory, push_to_hub=False, **kwargs)
¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the
[~optimum.rbln.modeling_base.RBLNBaseModel.from_pretrained] class method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| save_directory | Union[str, Path] | Directory where the model file is saved. | required |
| push_to_hub | bool | Whether or not to push the model to the HuggingFace model hub after saving it. | False |
Classes¶
RBLNWhisperForConditionalGenerationConfig
¶
Bases: RBLNModelConfig
Configuration class for RBLNWhisperForConditionalGeneration.
This configuration class stores the configuration parameters specific to RBLN-optimized Whisper models for speech recognition and transcription tasks.
Functions¶
__init__(batch_size=None, token_timestamps=None, use_attention_mask=None, enc_max_seq_len=None, dec_max_seq_len=None, kvcache_num_blocks=None, kvcache_block_size=None, **kwargs)
¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_size | int | The batch size for inference. Defaults to 1. | None |
| token_timestamps | bool | Whether to output token timestamps during generation. Defaults to False. | None |
| use_attention_mask | bool | Whether to use attention masks during inference. This is determined automatically when not specified. | None |
| enc_max_seq_len | int | Maximum sequence length for the encoder. | None |
| dec_max_seq_len | int | Maximum sequence length for the decoder. | None |
| kvcache_num_blocks | int | The total number of blocks allocated for the PagedAttention KV cache of the SelfAttention. Defaults to batch_size. | None |
| kvcache_block_size | int | The size (in number of tokens) of each block in the PagedAttention KV cache of the SelfAttention. Defaults to dec_max_seq_len. | None |
| kwargs | Any | Additional arguments passed to the parent RBLNModelConfig. | {} |
Raises:

| Type | Description |
|---|---|
| ValueError | If batch_size is not a positive integer. |
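A hedged sketch of constructing the configuration explicitly and passing it at load time (the model id and parameter values are illustrative):

```python
from optimum.rbln import (
    RBLNWhisperForConditionalGeneration,
    RBLNWhisperForConditionalGenerationConfig,
)

# batch_size must be a positive integer, otherwise ValueError is raised.
rbln_config = RBLNWhisperForConditionalGenerationConfig(
    batch_size=2,
    token_timestamps=True,
)

model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    export=True,
    rbln_config=rbln_config,
)
```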
load(path, **kwargs)
classmethod
¶
Load a RBLNModelConfig from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the RBLNModelConfig file or to a directory containing the config file. | required |
| kwargs | Any | Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| RBLNModelConfig | RBLNModelConfig | The loaded configuration instance. |
Note
This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.