Variational Autoencoder (VAE)

The Variational Autoencoder (VAE) is a key component in diffusion models like Stable Diffusion, responsible for encoding images into latent space and decoding latent representations back into images. RBLN NPUs can accelerate VAE inference using Optimum RBLN.

Usage in Diffusion Models

In diffusion-based image generation models like Stable Diffusion, the VAE serves two primary functions:

  1. Encoder: Converts input images to latent representations for tasks like image-to-image or inpainting
  2. Decoder: Converts the denoised latent representations back to pixel space as the final step in generation
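To make these two roles concrete, the shape arithmetic below sketches the pixel-to-latent mapping for a Stable Diffusion-style VAE. The scale factor of 8, 4 latent channels, and 3 output channels are typical values but are assumptions here; the actual values come from the model's configuration.

```python
# Illustrative shape bookkeeping for a Stable Diffusion-style VAE.
# Assumed values (model-dependent): vae_scale_factor=8, latent_channels=4.

def latent_shape(batch, height, width, vae_scale_factor=8, latent_channels=4):
    """Shape of the latent tensor produced by the VAE encoder."""
    return (batch, latent_channels,
            height // vae_scale_factor, width // vae_scale_factor)

def pixel_shape(batch, latent_h, latent_w, vae_scale_factor=8, out_channels=3):
    """Shape of the image tensor produced by the VAE decoder."""
    return (batch, out_channels,
            latent_h * vae_scale_factor, latent_w * vae_scale_factor)

print(latent_shape(1, 512, 512))  # encoder: pixels -> latents, (1, 4, 64, 64)
print(pixel_shape(1, 64, 64))     # decoder: latents -> pixels, (1, 3, 512, 512)
```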

API Reference

Classes

RBLNAutoencoderKL

Bases: RBLNModel

RBLN implementation of AutoencoderKL (VAE) for diffusion models.

This model accelerates AutoencoderKL (VAE) models from the diffusers library on RBLN NPUs. It can be configured to include both the encoder and decoder, or only the decoder for latent-to-image conversion.

This class inherits from RBLNModel. Check the superclass documentation for the generic methods the library implements for all its models.

Functions

encode(x, return_dict=True, **kwargs)

Encode an input image into a latent representation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | `FloatTensor` | The input image to encode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the encoder. | `{}` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `FloatTensor` | `Union[FloatTensor, AutoencoderKLOutput]` | The latent representation, or an `AutoencoderKLOutput` if `return_dict=True`. |

decode(z, return_dict=True, **kwargs)

Decode a latent representation into an image.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `z` | `FloatTensor` | The latent representation to decode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the decoder. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Union[FloatTensor, DecoderOutput]` | The decoded image, or a `DecoderOutput` if `return_dict=True`. |
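The `return_dict` convention follows the usual diffusers pattern. The sketch below uses a hypothetical stand-in class to illustrate that convention only; it is not the library's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical stand-in for diffusers' DecoderOutput, used only to
# illustrate the return_dict convention documented above.
@dataclass
class FakeDecoderOutput:
    sample: list  # stands in for the decoded FloatTensor

def decode_result(decoded, return_dict=True):
    if return_dict:
        return FakeDecoderOutput(sample=decoded)  # attribute access: out.sample
    return (decoded,)  # a plain tuple when return_dict=False

out = decode_result([0.1, 0.2], return_dict=True)
print(out.sample)   # [0.1, 0.2]
tup = decode_result([0.1, 0.2], return_dict=False)
print(tup[0])       # [0.1, 0.2]
```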

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can call it to load a pre-trained model from the HuggingFace Hub or a local path and convert it into an RBLN model that runs on RBLN NPUs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded: a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `bool` | Whether the model should be compiled. | `False` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNLlamaForCausalLMConfig` for Llama models). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |
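The `rbln_`-prefix routing described for `**kwargs` can be sketched as follows. This is an illustration of the documented convention, not the library's actual code, and the detail of stripping the prefix from the routed keys is an assumption.

```python
def split_kwargs(kwargs):
    """Route 'rbln_'-prefixed kwargs to rbln_config; pass the rest to HF.

    Illustrative only; prefix stripping is an assumption about the
    convention, not verified library behavior.
    """
    rbln_config = {}
    hf_kwargs = {}
    for key, value in kwargs.items():
        if key.startswith("rbln_"):
            rbln_config[key[len("rbln_"):]] = value  # e.g. rbln_batch_size -> batch_size
        else:
            hf_kwargs[key] = value
    return rbln_config, hf_kwargs

rbln, hf = split_kwargs({"rbln_batch_size": 2, "torch_dtype": "float16"})
print(rbln)  # {'batch_size': 2}
print(hf)    # {'torch_dtype': 'float16'}
```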

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNLlamaForCausalLMConfig` for Llama models). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |

save_pretrained(save_directory)

Saves a model and its configuration file to a directory so that it can be re-loaded using the from_pretrained class method.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `save_directory` | `Union[str, PathLike]` | The directory in which to save the model and its configuration files. Created if it does not exist. | required |

Classes

RBLNAutoencoderKLConfig

Bases: RBLNModelConfig

Configuration class for RBLN Variational Autoencoder (VAE) models.

This class inherits from RBLNModelConfig and provides specific configuration options for VAE models used in diffusion-based image generation.

Functions

__init__(batch_size=None, sample_size=None, uses_encoder=None, vae_scale_factor=None, in_channels=None, latent_channels=None, **kwargs)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `batch_size` | `Optional[int]` | The batch size for inference. Defaults to 1. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | The spatial dimensions (height, width) of the input/output images. If an integer is provided, it is used for both height and width. | `None` |
| `uses_encoder` | `Optional[bool]` | Whether to include the encoder part of the VAE in the model. When `False`, only the decoder is used (for latent-to-image conversion). | `None` |
| `vae_scale_factor` | `Optional[float]` | The scaling factor between pixel space and latent space. Determines how much smaller the latent representations are than the original images. | `None` |
| `in_channels` | `Optional[int]` | Number of input channels for the model. | `None` |
| `latent_channels` | `Optional[int]` | Number of channels in the latent space. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |
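The `sample_size` handling described above (an integer applying to both height and width) can be sketched as follows. This is an illustration of the documented behavior, not the library's actual code.

```python
def normalize_sample_size(sample_size):
    """Expand an int sample_size to a (height, width) tuple.

    Illustrative sketch of the documented behavior: an integer is used
    for both height and width; a pair is kept as (height, width).
    """
    if isinstance(sample_size, int):
        return (sample_size, sample_size)
    return tuple(sample_size)

print(normalize_sample_size(512))          # (512, 512)
print(normalize_sample_size((704, 1280)))  # (704, 1280)
```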

Classes

RBLNAutoencoderKLCosmos

Bases: RBLNModel

RBLN implementation of AutoencoderKLCosmos for diffusion models.

This model accelerates AutoencoderKLCosmos models from the diffusers library on RBLN NPUs. It can be configured to include both the encoder and decoder, or only the decoder for latent-to-video conversion.

This class inherits from RBLNModel. Check the superclass documentation for the generic methods the library implements for all its models.

Functions

encode(x, return_dict=True, **kwargs)

Encode an input video into a latent representation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | `FloatTensor` | The input video to encode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the encoder. | `{}` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `FloatTensor` | `Union[FloatTensor, AutoencoderKLOutput]` | The latent representation, or an `AutoencoderKLOutput` if `return_dict=True`. |

decode(z, return_dict=True)

Decode a latent representation into a video.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `z` | `FloatTensor` | The latent representation to decode. | required |
| `return_dict` | `bool` | Whether to return the output as a dictionary. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `Union[FloatTensor, DecoderOutput]` | The decoded video, or a `DecoderOutput` if `return_dict=True`. |

from_pretrained(model_id, export=False, rbln_config=None, **kwargs) classmethod

The from_pretrained() function is used in its standard form, as in the HuggingFace transformers library. Users can call it to load a pre-trained model from the HuggingFace Hub or a local path and convert it into an RBLN model that runs on RBLN NPUs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `Union[str, Path]` | The model id of the pre-trained model to be loaded: a model id on the HuggingFace model hub, a local path, or the model id of a model compiled with the RBLN Compiler. | required |
| `export` | `bool` | Whether the model should be compiled. | `False` |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNLlamaForCausalLMConfig` for Llama models). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |

from_model(model, *, rbln_config=None, **kwargs) classmethod

Converts and compiles a pre-trained HuggingFace library model into an RBLN model. This method performs the actual model conversion and compilation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `PreTrainedModel` | The PyTorch model to be compiled. The object must be an instance of the HuggingFace transformers PreTrainedModel class. | required |
| `rbln_config` | `Optional[Union[Dict, RBLNModelConfig]]` | Configuration for RBLN model compilation and runtime. This can be provided as a dictionary or an instance of the model's configuration class (e.g., `RBLNLlamaForCausalLMConfig` for Llama models). For detailed configuration options, see the specific model's configuration class documentation. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional keyword arguments. Arguments prefixed with `rbln_` are passed to `rbln_config`; the remaining arguments are passed to the HuggingFace library. | `{}` |

The method performs the following steps:

  1. Compiles the PyTorch model into an optimized RBLN graph
  2. Configures the model for the specified NPU device
  3. Creates the necessary runtime objects if requested
  4. Saves the compiled model and configurations

Returns:

| Type | Description |
| --- | --- |
| `Self` | An RBLN model instance ready for inference on RBLN NPU devices. |

save_pretrained(save_directory)

Saves a model and its configuration file to a directory so that it can be re-loaded using the from_pretrained class method.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `save_directory` | `Union[str, PathLike]` | The directory in which to save the model and its configuration files. Created if it does not exist. | required |

Classes

RBLNAutoencoderKLCosmosConfig

Bases: RBLNModelConfig

Configuration class for RBLN Cosmos Variational Autoencoder (VAE) models.

Functions

__init__(batch_size=None, uses_encoder=None, num_frames=None, height=None, width=None, num_channels_latents=None, vae_scale_factor_temporal=None, vae_scale_factor_spatial=None, use_slicing=None, **kwargs)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `batch_size` | `Optional[int]` | The batch size for inference. Defaults to 1. | `None` |
| `uses_encoder` | `Optional[bool]` | Whether to include the encoder part of the VAE in the model. When `False`, only the decoder is used (for latent-to-video conversion). | `None` |
| `num_frames` | `Optional[int]` | The number of frames in the generated video. Defaults to 121. | `None` |
| `height` | `Optional[int]` | The height in pixels of the generated video. Defaults to 704. | `None` |
| `width` | `Optional[int]` | The width in pixels of the generated video. Defaults to 1280. | `None` |
| `num_channels_latents` | `Optional[int]` | The number of channels in the latent space. | `None` |
| `vae_scale_factor_temporal` | `Optional[int]` | The temporal scaling factor between video frames and latent frames. Determines how much shorter the latent representations are than the original videos. | `None` |
| `vae_scale_factor_spatial` | `Optional[int]` | The spatial scaling factor between pixel space and latent space. Determines how much smaller the latent representations are than the original videos. | `None` |
| `use_slicing` | `Optional[bool]` | Enable sliced VAE encoding and decoding. If `True`, the VAE splits the input tensor into slices and computes encoding or decoding in several steps. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |
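To relate the defaults above (121 frames, 704x1280 pixels) to the latent grid, the sketch below assumes temporal and spatial scale factors of 8 and the common causal video-VAE convention in which the first frame is kept and the remaining frames are compressed temporally. Both the scale factors and the frame formula are assumptions, not values stated on this page; the actual values come from the model's configuration.

```python
def cosmos_latent_dims(num_frames=121, height=704, width=1280,
                       scale_t=8, scale_s=8):
    """Latent grid for the documented Cosmos defaults.

    Assumptions: scale factors of (8, 8) and a causal-VAE frame count
    of (num_frames - 1) // scale_t + 1. Illustrative only.
    """
    latent_frames = (num_frames - 1) // scale_t + 1
    return latent_frames, height // scale_s, width // scale_s

print(cosmos_latent_dims())  # (16, 88, 160)
```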

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `batch_size` is not a positive integer. |