
Whisper

Whisper is a general-purpose speech recognition model trained on a diverse range of audio datasets. It can perform multilingual speech recognition, speech translation, and language identification. The model uses an encoder-decoder architecture and shows robust performance across varied acoustic environments. RBLN NPUs can accelerate Whisper model inference using Optimum RBLN.

Key Classes

API Reference

Classes

RBLNWhisperForConditionalGeneration

Bases: RBLNModel

Whisper model for speech recognition and transcription optimized for RBLN NPU.

This model inherits from [RBLNModel]. It implements the methods to convert and run a pre-trained transformers-based Whisper model on RBLN devices by:

  - transferring the checkpoint weights of the original into an optimized RBLN graph,
  - compiling the resulting graph using the RBLN compiler.

Example (Short form):

import torch
from transformers import AutoProcessor
from datasets import load_dataset
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Load processor and dataset
model_id = "openai/whisper-tiny"
processor = AutoProcessor.from_pretrained(model_id)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# Prepare input features
input_features = processor(
    ds[0]["audio"]["array"],
    sampling_rate=ds[0]["audio"]["sampling_rate"],
    return_tensors="pt"
).input_features

# Load and compile model (or load pre-compiled model)
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    model_id=model_id,
    export=True,
    rbln_batch_size=1
)

# Generate transcription
outputs = model.generate(input_features=input_features, return_timestamps=True)
transcription = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(f"Transcription: {transcription}")

Functions

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can call this function to load a pre-trained model from the HuggingFace hub or a local path and convert it to an RBLN model to be run on RBLN NPUs.

Parameters:

  - model_id (Union[str, Path], required): The model id of the pre-trained model to be loaded. It can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled using the RBLN Compiler.
  - export (bool, default False): Whether the model should be compiled.
  - rbln_config (Optional[Union[Dict, RBLNModelConfig]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Dict[str, Any], default {}): Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

Returns:

  - Self: An RBLN model instance ready for inference on RBLN NPU devices.
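As noted for kwargs, arguments prefixed with 'rbln_' are routed to rbln_config while the rest go to the HuggingFace loader. A minimal, illustrative sketch of that routing rule follows; the helper name split_rbln_kwargs is hypothetical, and the real implementation inside optimum-rbln may differ:

```python
def split_rbln_kwargs(kwargs):
    """Hypothetical helper illustrating the documented 'rbln_' routing rule."""
    rbln_kwargs = {}
    hf_kwargs = {}
    for key, value in kwargs.items():
        if key.startswith("rbln_"):
            # Strip the prefix so "rbln_batch_size" is seen as "batch_size"
            # by the RBLN configuration.
            rbln_kwargs[key[len("rbln_"):]] = value
        else:
            # Everything else is forwarded to the HuggingFace library.
            hf_kwargs[key] = value
    return rbln_kwargs, hf_kwargs

rbln, hf = split_rbln_kwargs({"rbln_batch_size": 1, "revision": "main"})
print(rbln)  # {'batch_size': 1}
print(hf)    # {'revision': 'main'}
```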

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

  - model (PreTrainedModel, required): The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class.
  - rbln_config (Optional[Union[Dict, RBLNModelConfig]], default None): Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., RBLNLlamaForCausalLMConfig for Llama models). For detailed configuration options, see the specific model's configuration class documentation.
  - kwargs (Dict[str, Any], default {}): Additional keyword arguments. Arguments prefixed with 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library.

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

  - Self: An RBLN model instance ready for inference on RBLN NPU devices.

save_pretrained(save_directory)

Saves a model and its configuration file to a directory, so that it can be re-loaded using the [from_pretrained] class method.

Parameters:

  - save_directory (Union[str, PathLike], required): The directory to save the model and its configuration files. Will be created if it doesn't exist.
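A typical round trip pairs save_pretrained with from_pretrained: compile once with export=True, save the artifacts, then reload them later without recompiling. The sketch below follows the API described on this page; the directory name is arbitrary, and running it requires an RBLN SDK environment:

```python
from optimum.rbln import RBLNWhisperForConditionalGeneration

# Compile once from the HuggingFace checkpoint and save the artifacts.
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-tiny",
    export=True,
    rbln_batch_size=1,
)
model.save_pretrained("whisper-tiny-rbln")  # directory is created if missing

# Later, reload the pre-compiled model without recompiling.
model = RBLNWhisperForConditionalGeneration.from_pretrained(
    "whisper-tiny-rbln",
    export=False,
)
```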

Classes

RBLNWhisperForConditionalGenerationConfig

Bases: RBLNModelConfig

Functions

__init__(batch_size=None, token_timestamps=None, use_attention_mask=None, enc_max_seq_len=None, dec_max_seq_len=None, **kwargs)

Parameters:

  - batch_size (int, default None): The batch size for inference. Defaults to 1.
  - token_timestamps (bool, default None): Whether to output token timestamps during generation. Defaults to False.
  - use_attention_mask (bool, default None): Whether to use attention masks during inference. This is automatically set to True for RBLN-CA02 devices.
  - enc_max_seq_len (int, default None): Maximum sequence length for the encoder.
  - dec_max_seq_len (int, default None): Maximum sequence length for the decoder.
  - **kwargs (Dict[str, Any], default {}): Additional arguments passed to the parent RBLNModelConfig.

Raises:

  - ValueError: If batch_size is not a positive integer.
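The batch_size constraint above can be sketched as follows. The helper validate_batch_size is hypothetical, not part of the library; it only mirrors the documented rule (None falls back to the default of 1, anything that is not a positive integer raises ValueError):

```python
def validate_batch_size(batch_size=None):
    """Hypothetical sketch of the documented batch_size validation."""
    if batch_size is None:
        return 1  # documented default
    # bool is a subclass of int in Python, so reject it explicitly.
    if not isinstance(batch_size, int) or isinstance(batch_size, bool) or batch_size <= 0:
        raise ValueError(f"batch_size must be a positive integer, got {batch_size!r}")
    return batch_size

print(validate_batch_size())   # 1
print(validate_batch_size(4))  # 4
```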