
VQModel (Vector Quantized Model)

VQModel (Vector Quantized Model) is frequently used in diffusion models such as Kandinsky V2.2. It works similarly to a VAE (Variational Autoencoder), but uses vector quantization instead of a continuous latent space. Its role is to encode images into discrete latent representations and to decode those representations back into images. RBLN NPUs can accelerate VQModel inference through Optimum RBLN.
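The core of vector quantization is simple: each continuous latent vector is snapped to its nearest entry in a learned codebook, yielding a discrete code. The toy NumPy sketch below illustrates only this lookup step (the codebook values and shapes are invented for illustration; the real model uses learned embeddings and a convolutional encoder/decoder):

```python
import numpy as np

# Toy codebook: num_vq_embeddings entries of dimension latent_channels.
# Values here are illustrative; a real VQModel learns these embeddings.
codebook = np.array([
    [0.0, 0.0],
    [1.0, 1.0],
    [-1.0, 1.0],
], dtype=np.float32)

def quantize(latents: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Snap each latent vector to its nearest codebook entry (L2 distance)."""
    # (N, 1, D) - (1, K, D) -> (N, K) squared distances
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=-1)       # nearest codebook index per vector
    return codebook[indices], indices     # quantized vectors + discrete codes

latents = np.array([[0.9, 1.1], [0.1, -0.2]], dtype=np.float32)
quantized, codes = quantize(latents)
print(codes)  # [1 0]
```

The discrete `codes` are what makes the latent space of a VQModel different from the continuous latents of a standard VAE.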

Key Classes

API Reference

Classes

RBLNVQModel

Bases: RBLNModel

RBLN implementation of VQModel used in Kandinsky models.

This model accelerates the VQModel (Vector Quantized Model), which serves a similar purpose to the VAE in other diffusion models (encoding images to latents and decoding latents to images), but utilizes vector quantization.

It can be configured to include both encoder and decoder, or just the decoder part.

This class inherits from RBLNModel.

Functions

encode(x, return_dict=True, **kwargs)

Encode an input image into a quantized latent representation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | `FloatTensor` | The input image to encode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. Defaults to `True`. | `True` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the encoder/quantizer. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Union[FloatTensor, VQEncoderOutput]` | The quantized latent representation or a `VQEncoderOutput` object. |

decode(z, return_dict=True, **kwargs)

Decode a quantized latent representation back into an image.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `z` | `FloatTensor` | The quantized latent representation to decode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. Defaults to `True`. | `True` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the decoder. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Union[FloatTensor, DecoderOutput]` | The decoded image or a `DecoderOutput` object. |
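Conceptually, decoding starts by looking the discrete codes back up in the codebook before a decoder network upsamples the result to pixels. The lookup itself is plain indexing, as this toy sketch shows (codebook values and grid size are invented for illustration):

```python
import numpy as np

# Same toy codebook idea as in the encode sketch; values are illustrative.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]], dtype=np.float32)

# A 2x2 spatial grid of discrete codes, e.g. what encoding would produce.
codes = np.array([[1, 0],
                  [2, 1]])

# Embedding lookup: replace each code with its codebook vector.
# Result shape: (H, W, latent_channels); in the real model, a decoder
# network then upsamples this grid back to an image.
z_q = codebook[codes]
print(z_q.shape)  # (2, 2, 2)
```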

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can call it to load a pre-trained model from the HuggingFace hub and convert it into an RBLN model that runs on RBLN NPUs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to load. It can be a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `bool` | A boolean flag indicating whether the model should be compiled. | `False` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNVQModelConfig` for VQModel). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |
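A typical compile-then-reload flow might look like the sketch below. The model id is a placeholder, not a real checkpoint, and actually running this requires an environment with the RBLN SDK and an RBLN NPU:

```python
from optimum.rbln import RBLNVQModel

# Compile a pre-trained VQModel for RBLN NPUs (export=True triggers compilation).
# "my-org/kandinsky-vqmodel" is a hypothetical model id used for illustration.
model = RBLNVQModel.from_pretrained(
    "my-org/kandinsky-vqmodel",
    export=True,
    rbln_config={"batch_size": 1},
)
model.save_pretrained("vqmodel-rbln")

# Later, load the already-compiled model directly (no export needed).
model = RBLNVQModel.from_pretrained("vqmodel-rbln")
```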

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation process.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers `PreTrainedModel` class. | required |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNVQModelConfig` for VQModel). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |

save_pretrained(save_directory)

Saves a model and its configuration file to a directory so that it can be re-loaded using the from_pretrained class method.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `save_directory` | `Union[str, PathLike]` | The directory in which to save the model and its configuration files. Will be created if it doesn't exist. | required |

Classes

RBLNVQModelConfig

Bases: RBLNModelConfig

Configuration class for RBLN VQModel models, used in Kandinsky.

This class inherits from RBLNModelConfig and provides specific configuration options for VQModel, which acts similarly to a VAE but uses vector quantization.

Functions

__init__(batch_size=None, sample_size=None, uses_encoder=None, scaling_factor=None, in_channels=None, latent_channels=None, num_vq_embeddings=None, **kwargs)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `batch_size` | `Optional[int]` | The batch size for inference. Defaults to 1. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | The spatial dimensions (height, width) of the input/output images. If an integer is provided, it is used for both height and width. | `None` |
| `uses_encoder` | `Optional[bool]` | Whether to include the encoder part of the VQModel in the model. When `False`, only the decoder is used (for latent-to-image conversion). | `None` |
| `scaling_factor` | `Optional[int]` | The integer downsampling factor between pixel space and latent space. | `None` |
| `in_channels` | `Optional[int]` | Number of input channels for the model. | `None` |
| `latent_channels` | `Optional[int]` | Number of channels in the latent space. | `None` |
| `num_vq_embeddings` | `Optional[int]` | Number of embeddings in the VQ codebook. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |