VQModel (Vector Quantized Model)¶
The VQModel (Vector Quantized Model) is often used in diffusion models like Kandinsky V2.2. It functions similarly to a Variational Autoencoder (VAE) but uses vector quantization instead of a continuous latent space. It's responsible for encoding images into discrete latent representations and decoding those representations back into images. RBLN NPUs can accelerate VQModel inference using Optimum RBLN.
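The difference from a continuous-latent VAE is the quantization step: each encoder output vector is snapped to its nearest entry in a learned codebook, so the latent is represented by discrete codebook indices. A minimal, framework-free sketch of that nearest-neighbor lookup (the codebook values here are illustrative, not taken from any real model):

```python
# Sketch of vector quantization: map each latent vector to the index of its
# nearest codebook entry (illustrative values, not a real model's codebook).

def quantize(vectors, codebook):
    """Return (indices, quantized_vectors) for a nearest-neighbor lookup."""
    indices = []
    quantized = []
    for v in vectors:
        # Squared Euclidean distance to every codebook entry.
        dists = [sum((a - b) ** 2 for a, b in zip(v, entry)) for entry in codebook]
        i = dists.index(min(dists))
        indices.append(i)
        quantized.append(codebook[i])
    return indices, quantized

codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]   # num_vq_embeddings = 3
latents = [(0.9, 1.2), (0.1, -0.2)]
idx, q = quantize(latents, codebook)
print(idx)  # → [1, 0]: each latent snapped to its nearest codebook index
```

In the real model this lookup runs on high-dimensional feature vectors produced by a convolutional encoder, but the snapping logic is the same.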
Key Classes¶
RBLNVQModel
: The main model class for running VQModel on RBLN NPUs

RBLNVQModelConfig
: Configuration class for VQModel models
API Reference¶
Classes¶
RBLNVQModel¶
Bases: RBLNModel
RBLN implementation of VQModel used in Kandinsky models.
This model accelerates the VQModel (Vector Quantized Model), which serves a similar purpose to the VAE in other diffusion models (encoding images to latents and decoding latents to images), but utilizes vector quantization.
It can be configured to include both encoder and decoder, or just the decoder part.
This class inherits from `RBLNModel`.
Functions¶
encode(x, return_dict=True, **kwargs)¶
Encode an input image into a quantized latent representation.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`x` | `FloatTensor` | The input image to encode. | required |
`return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |
`**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the encoder/quantizer. | `{}` |
Returns:

Type | Description |
---|---|
`Union[FloatTensor, VQEncoderOutput]` | The quantized latent representation, or a `VQEncoderOutput` object when `return_dict=True`. |
decode(z, return_dict=True, **kwargs)¶
Decode a quantized latent representation back into an image.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`z` | `FloatTensor` | The quantized latent representation to decode. | required |
`return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |
`**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the decoder. | `{}` |
Returns:

Type | Description |
---|---|
`Union[FloatTensor, DecoderOutput]` | The decoded image, or a `DecoderOutput` object when `return_dict=True`. |
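Conceptually, `decode` reverses the quantization step before running the decoder network: each codebook index is looked up back into its embedding vector. A framework-free sketch of that dequantization step (illustrative values only, not the library's internals):

```python
# Sketch of the dequantization step inside decode: codebook indices are
# looked up back into embedding vectors, which the decoder network then
# turns into an image (illustrative values, not a real model's codebook).

def dequantize(indices, codebook):
    """Map codebook indices back to their embedding vectors."""
    return [codebook[i] for i in indices]

codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]
print(dequantize([1, 0, 2], codebook))  # → [(1.0, 1.0), (0.0, 0.0), (-1.0, 1.0)]
```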
from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod¶
The `from_pretrained()` function works as it does in the HuggingFace transformers library. Use it to load a pre-trained model from the HuggingFace hub or a local path and convert it to an RBLN model that runs on RBLN NPUs.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_id` | `Union[str, Path]` | The model id of the pre-trained model to load. It can be a model id from the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
`export` | `bool` | Whether the model should be compiled. | `False` |
`rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNVQModelConfig`). | `None` |
`kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
Returns:

Type | Description |
---|---|
`Self` | An RBLN model instance ready for inference on RBLN NPU devices. |
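The `rbln_` prefix convention for `kwargs` can be illustrated with a small routing sketch. This mimics the behavior documented above (prefixed arguments go to `rbln_config`, the rest to the HuggingFace library); it is not Optimum RBLN's actual implementation:

```python
# Sketch of the documented kwargs routing: 'rbln_'-prefixed arguments are
# collected for rbln_config, everything else is forwarded to the
# HuggingFace library. (Not the actual Optimum RBLN internals.)

def split_kwargs(kwargs):
    prefix = "rbln_"
    rbln_kwargs = {k[len(prefix):]: v for k, v in kwargs.items() if k.startswith(prefix)}
    hf_kwargs = {k: v for k, v in kwargs.items() if not k.startswith(prefix)}
    return rbln_kwargs, hf_kwargs

rbln, hf = split_kwargs({"rbln_batch_size": 2, "subfolder": "movq", "rbln_uses_encoder": False})
print(rbln)  # → {'batch_size': 2, 'uses_encoder': False}
print(hf)    # → {'subfolder': 'movq'}
```

The argument names in the example (`rbln_batch_size`, `subfolder`, `rbln_uses_encoder`) are only illustrative inputs to the sketch.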
from_model(model, *, rbln_config=None, **kwargs) classmethod¶
Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
`rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNVQModelConfig`). | `None` |
`kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
The method performs the following steps:
- Compiles the PyTorch model into an optimized RBLN graph
- Configures the model for the specified NPU device
- Creates the necessary runtime objects if requested
- Saves the compiled model and configurations
Returns:

Type | Description |
---|---|
`Self` | An RBLN model instance ready for inference on RBLN NPU devices. |
save_pretrained(save_directory)¶
Saves a model and its configuration file to a directory, so that it can be re-loaded using the `from_pretrained` class method.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`save_directory` | `Union[str, PathLike]` | The directory to save the model and its configuration files in. It will be created if it does not exist. | required |
Classes¶
RBLNVQModelConfig¶
Bases: RBLNModelConfig
Configuration class for RBLN VQModel models, used in Kandinsky.
This class inherits from RBLNModelConfig and provides specific configuration options for VQModel, which acts similarly to a VAE but uses vector quantization.
Functions¶
__init__(batch_size=None, sample_size=None, uses_encoder=None, scaling_factor=None, in_channels=None, latent_channels=None, num_vq_embeddings=None, **kwargs)¶
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`batch_size` | `Optional[int]` | The batch size for inference. Defaults to 1. | `None` |
`sample_size` | `Optional[Tuple[int, int]]` | The spatial dimensions (height, width) of the input/output images. If an integer is provided, it is used for both height and width. | `None` |
`uses_encoder` | `Optional[bool]` | Whether to include the encoder part of the VQModel. When `False`, only the decoder is used (for latent-to-image conversion). | `None` |
`scaling_factor` | `Optional[int]` | The integer downsampling factor between pixel space and latent space. | `None` |
`in_channels` | `Optional[int]` | Number of input channels for the model. | `None` |
`latent_channels` | `Optional[int]` | Number of channels in the latent space. | `None` |
`num_vq_embeddings` | `Optional[int]` | Number of embeddings in the VQ codebook. | `None` |
`**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |
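How `sample_size` and `scaling_factor` interact can be sketched as follows. This is a simplified illustration of the semantics documented above (an integer `sample_size` applies to both dimensions; `scaling_factor` is the integer spatial downsampling between pixel and latent space), not the library's actual code:

```python
# Sketch of the documented config semantics: an int sample_size is used for
# both height and width, and scaling_factor is the integer spatial
# downsampling between pixel space and latent space.
# (Simplified illustration, not Optimum RBLN's actual implementation.)

def normalize_sample_size(sample_size):
    """Expand an int sample_size to a (height, width) tuple."""
    if isinstance(sample_size, int):
        return (sample_size, sample_size)
    return tuple(sample_size)

def latent_hw(sample_size, scaling_factor):
    """Spatial dimensions of the latent for a given image size."""
    h, w = normalize_sample_size(sample_size)
    return (h // scaling_factor, w // scaling_factor)

print(normalize_sample_size(512))   # → (512, 512)
print(latent_hw((512, 768), 8))     # → (64, 96)
```

The value `8` for the downsampling factor here is only an example input to the sketch, not a documented default.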