Variational Autoencoder (VAE)
The variational autoencoder (VAE) is a core component of diffusion models such as Stable Diffusion: it encodes images into a latent space and decodes latent representations back into images. RBLN NPUs can accelerate VAE inference through Optimum RBLN.
Key Classes
- RBLNAutoencoderKL: the main model class for running a VAE on RBLN NPUs
- RBLNAutoencoderKLConfig: the configuration class for the VAE model
Usage in Diffusion Models
In diffusion-based image generation models such as Stable Diffusion, the VAE performs two main functions:
- Encoder: converts an input image into a latent representation, for tasks such as image-to-image translation or inpainting
- Decoder: converts the denoised latent representation back into pixel space at the final step of generation
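To make the encoder/decoder roles concrete, the sketch below shows how a VAE's scale factor maps image shapes to latent shapes. The specific values (scale factor 8, 4 latent channels) are typical of Stable Diffusion's VAE and are assumptions here, not values guaranteed by this API.

```python
# Sketch: how a VAE's scale factor maps image shapes to latent shapes.
# Scale factor 8 and 4 latent channels are typical of Stable Diffusion's
# AutoencoderKL; they are illustrative assumptions, not API guarantees.

def latent_shape(batch, height, width, vae_scale_factor=8, latent_channels=4):
    """Spatial dims shrink by vae_scale_factor; channels become latent_channels."""
    return (batch, latent_channels, height // vae_scale_factor, width // vae_scale_factor)

print(latent_shape(1, 512, 512))  # (1, 4, 64, 64)
```

A 512x512 RGB image therefore becomes a 4x64x64 latent, which is why diffusion in latent space is much cheaper than diffusion in pixel space.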
API Reference
Classes
RBLNAutoencoderKL
Bases: RBLNModel
RBLN implementation of AutoencoderKL (VAE) for diffusion models.
This model is used to accelerate AutoencoderKL (VAE) models from the diffusers library on RBLN NPUs. It can be configured to include both the encoder and the decoder, or only the decoder for latent-to-image conversion.
This class inherits from RBLNModel. Check the superclass documentation for the generic methods the library implements for all its models.
Functions
encode(x, return_dict=True, **kwargs)
Encode an input image into a latent representation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | FloatTensor | The input image to encode. | required |
return_dict | bool | Whether to return the output as a dictionary. Defaults to True. | True |
**kwargs | Dict[str, Any] | Additional arguments to pass to the encoder. | {} |
Returns:
Type | Description |
---|---|
Union[FloatTensor, AutoencoderKLOutput] | The latent representation, or an AutoencoderKLOutput if return_dict=True |
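The return_dict switch follows the usual diffusers convention: a bare result when False, a small output object when True. A minimal pure-Python mock of that contract is sketched below; FakeAutoencoderKLOutput and fake_encode are illustrative stand-ins, not real optimum-rbln or diffusers APIs.

```python
from dataclasses import dataclass

# Mock of the diffusers-style return_dict contract used by encode()/decode().
# FakeAutoencoderKLOutput and fake_encode are illustrative stand-ins only.

@dataclass
class FakeAutoencoderKLOutput:
    latent_dist: list  # stands in for the posterior distribution over latents

def fake_encode(x, return_dict=True):
    latents = [v / 2 for v in x]  # pretend "encoding"
    if return_dict:
        return FakeAutoencoderKLOutput(latent_dist=latents)
    return (latents,)  # bare tuple when return_dict=False

print(fake_encode([2.0, 4.0]).latent_dist)         # [1.0, 2.0]
print(fake_encode([2.0, 4.0], return_dict=False))  # ([1.0, 2.0],)
```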
decode(z, return_dict=True, **kwargs)
Decode a latent representation into an image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
z | FloatTensor | The latent representation to decode. | required |
return_dict | bool | Whether to return the output as a dictionary. Defaults to True. | True |
**kwargs | Dict[str, Any] | Additional arguments to pass to the decoder. | {} |
Returns:
Type | Description |
---|---|
Union[FloatTensor, DecoderOutput] | The decoded image, or a DecoderOutput if return_dict=True |
from_pretrained(model_id, export=False, rbln_config=None, **kwargs)
classmethod
The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can use this function to load a pre-trained model from the HuggingFace hub and convert it into an RBLN model that runs on RBLN NPUs.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model_id | Union[str, Path] | The model id of the pre-trained model to be loaded. It can be downloaded from the HuggingFace model hub, loaded from a local path, or be the model id of a model already compiled with the RBLN Compiler. | required |
export | bool | A boolean flag indicating whether the model should be compiled. | False |
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments with the prefix 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {} |
Returns:
Type | Description |
---|---|
Self | An RBLN model instance ready for inference on RBLN NPU devices. |
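The kwargs routing described above (keys prefixed with rbln_ go to rbln_config, everything else to the HuggingFace loader) can be sketched in plain Python. split_kwargs below is a hypothetical helper written for illustration, not part of the optimum-rbln API.

```python
# Sketch of the documented kwargs routing: keys prefixed with "rbln_" feed the
# RBLN config; everything else is forwarded to the HuggingFace loader.
# split_kwargs is a hypothetical helper, not an optimum-rbln API.

def split_kwargs(**kwargs):
    rbln_kwargs = {k[len("rbln_"):]: v for k, v in kwargs.items() if k.startswith("rbln_")}
    hf_kwargs = {k: v for k, v in kwargs.items() if not k.startswith("rbln_")}
    return rbln_kwargs, hf_kwargs

rbln, hf = split_kwargs(rbln_batch_size=2, revision="main")
print(rbln)  # {'batch_size': 2}
print(hf)    # {'revision': 'main'}
```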
from_model(model, *, rbln_config=None, **kwargs)
classmethod
Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model | PreTrainedModel | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
rbln_config | Optional[Union[Dict, RBLNModelConfig]] | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class. | None |
kwargs | Dict[str, Any] | Additional keyword arguments. Arguments with the prefix 'rbln_' are passed to rbln_config, while the remaining arguments are passed to the HuggingFace library. | {} |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:
Type | Description |
---|---|
Self | An RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory)
Saves the model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained class method.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
save_directory | Union[str, PathLike] | The directory to save the model and its configuration files. Will be created if it doesn't exist. | required |
Classes
RBLNAutoencoderKLConfig
Bases: RBLNModelConfig
Configuration class for RBLN Variational Autoencoder (VAE) models.
This class inherits from RBLNModelConfig and provides specific configuration options for VAE models used in diffusion-based image generation.
Functions
__init__(batch_size=None, sample_size=None, uses_encoder=None, vae_scale_factor=None, in_channels=None, latent_channels=None, **kwargs)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch_size | Optional[int] | The batch size for inference. Defaults to 1. | None |
sample_size | Optional[Tuple[int, int]] | The spatial dimensions (height, width) of the input/output images. If an integer is provided, it is used for both height and width. | None |
uses_encoder | Optional[bool] | Whether to include the encoder part of the VAE in the model. When False, only the decoder is used (for latent-to-image conversion). | None |
vae_scale_factor | Optional[float] | The scaling factor between pixel space and latent space. Determines how much smaller the latent representations are compared to the original images. | None |
in_channels | Optional[int] | Number of input channels for the model. | None |
latent_channels | Optional[int] | Number of channels in the latent space. | None |
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {} |
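Since from_pretrained also accepts rbln_config as a plain dictionary, a decoder-only configuration for 512x512 generation might look like the fragment below. All values are illustrative choices (the scale factor 8 and 4 latent channels are typical of Stable Diffusion's VAE), not defaults guaranteed by the library.

```python
# Illustrative rbln_config dictionary for a decoder-only VAE at 512x512.
# Keys mirror the RBLNAutoencoderKLConfig.__init__ parameters documented above;
# the values are example choices, not library defaults.
rbln_config = {
    "batch_size": 1,
    "sample_size": (512, 512),   # (height, width) of the output images
    "uses_encoder": False,       # decoder-only: latent-to-image conversion
    "vae_scale_factor": 8,
    "latent_channels": 4,
}
print(rbln_config["uses_encoder"])  # False
```

A decoder-only configuration like this suits pure text-to-image generation, where the VAE encoder is never called.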