Kandinsky V2.2¶
Kandinsky V2.2 is a text-to-image latent diffusion model. RBLN NPUs can accelerate Kandinsky V2.2 pipelines using Optimum RBLN.
Supported Pipelines¶
Optimum RBLN supports several Kandinsky V2.2 pipelines:
- Text-to-Image: Generate images from text prompts (using Prior + Decoder)
- Image-to-Image: Modify existing images based on text prompts (using Prior + Img2Img Decoder)
- Inpainting: Fill masked regions of an image guided by text prompts (using Prior + Inpaint Decoder)
Key Classes¶
- RBLNKandinskyV22PriorPipeline: Pipeline for the Prior stage (Text/Image -> Image Embedding)
- RBLNKandinskyV22PriorPipelineConfig: Configuration for the Prior pipeline
- RBLNKandinskyV22Pipeline: Text-to-image Decoder pipeline (Image Embedding -> Image)
- RBLNKandinskyV22PipelineConfig: Configuration for the Text-to-image Decoder pipeline
- RBLNKandinskyV22Img2ImgPipeline: Image-to-image Decoder pipeline
- RBLNKandinskyV22Img2ImgPipelineConfig: Configuration for the Image-to-image Decoder pipeline
- RBLNKandinskyV22InpaintPipeline: Inpainting Decoder pipeline
- RBLNKandinskyV22InpaintPipelineConfig: Configuration for the Inpainting Decoder pipeline
- RBLNKandinskyV22CombinedPipeline: Combined pipeline (Prior + Text-to-Image Decoder)
- RBLNKandinskyV22CombinedPipelineConfig: Configuration for the Combined pipeline
- RBLNKandinskyV22Img2ImgCombinedPipeline: Combined pipeline (Prior + Image-to-Image Decoder)
- RBLNKandinskyV22Img2ImgCombinedPipelineConfig: Configuration for the Combined pipeline (Prior + Image-to-Image Decoder)
- RBLNKandinskyV22InpaintCombinedPipeline: Combined pipeline (Prior + Inpainting Decoder)
- RBLNKandinskyV22InpaintCombinedPipelineConfig: Configuration for the Combined pipeline (Prior + Inpainting Decoder)
Important: Batch Size Configuration for Guidance Scale¶
Batch Size and Guidance Scale
When using Kandinsky V2.2 with a guidance scale > 1.0 (the default), the effective batch sizes of the UNet and the Prior are doubled at runtime by the classifier-free guidance technique.
Because RBLN NPUs use static graph compilation, the batch sizes of these components at compilation time must match their runtime batch sizes; otherwise you will encounter errors during inference.
Default Behavior¶
By default, if you don't explicitly specify the UNet's or Prior's batch size, Optimum RBLN will:
- Assume you'll use the default guidance scale (which is > 1.0)
- Automatically set the UNet's and Prior's batch sizes to 2× your pipeline's batch size
If you plan to use the default guidance scale, this automatic configuration will work correctly. However, if you plan to use a different guidance scale or want more control, you should explicitly configure the batch sizes.
Example: Explicitly Setting Batch Sizes (Guidance Scale = 1.0)¶
If you plan to use a guidance scale of exactly 1.0 (which doesn't use classifier-free guidance), you should explicitly set the batch sizes to match your inference batch size:
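A minimal sketch of such a configuration, assuming the Hugging Face Hub model ID `kandinsky-community/kandinsky-2-2-decoder` and that submodule batch sizes can be set through nested `rbln_config` entries (the exact nesting of submodule keys may differ across Optimum RBLN versions):

```python
from optimum.rbln import RBLNKandinskyV22CombinedPipeline

BATCH_SIZE = 1

# Compile with guidance_scale=1.0 in mind: no classifier-free guidance,
# so the UNet and Prior are compiled at the inference batch size rather
# than the default 2x.
pipe = RBLNKandinskyV22CombinedPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder",
    export=True,
    rbln_config={
        "batch_size": BATCH_SIZE,
        "guidance_scale": 1.0,  # tells the compiler not to double batch sizes
        "unet": {"batch_size": BATCH_SIZE},
        "prior_prior": {"batch_size": BATCH_SIZE},
    },
)

# At inference time the guidance scale must match the compile-time assumption.
image = pipe(prompt="A photo of a cat", guidance_scale=1.0).images[0]
```

If you later call the pipeline with a different guidance scale (for example, the default > 1.0), the runtime batch size will no longer match the compiled graph and inference will fail.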
Usage Examples¶
Option 1: Using Separate Prior and Decoder Pipelines¶
This approach gives you more control over the intermediate image embeddings:
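A sketch of the two-stage flow, assuming the Hub model IDs `kandinsky-community/kandinsky-2-2-prior` and `kandinsky-community/kandinsky-2-2-decoder` and illustrative image dimensions:

```python
from optimum.rbln import RBLNKandinskyV22PriorPipeline, RBLNKandinskyV22Pipeline

# Stage 1: compile and load the Prior (text -> image embedding)
prior_pipe = RBLNKandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior",
    export=True,
)

# Stage 2: compile and load the Decoder (image embedding -> image)
decoder_pipe = RBLNKandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder",
    export=True,
    rbln_config={"img_height": 512, "img_width": 512},
)

prompt = "A watercolor painting of a lighthouse at dawn"
image_embeds, negative_image_embeds = prior_pipe(prompt).to_tuple()

# The intermediate embeddings can be inspected or manipulated here
# before being handed to the Decoder.
image = decoder_pipe(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=512,
    width=512,
).images[0]
image.save("lighthouse.png")
```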
Option 2: Using Combined Pipeline¶
The combined pipeline integrates the Prior and Decoder stages into a single workflow:
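A sketch of the combined workflow, again assuming the Hub model ID `kandinsky-community/kandinsky-2-2-decoder` (the combined pipeline resolves the Prior from the decoder checkpoint's metadata, as in diffusers):

```python
from optimum.rbln import RBLNKandinskyV22CombinedPipeline

# Compile both stages in one call on the first run...
pipe = RBLNKandinskyV22CombinedPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder",
    export=True,
)
# ...and save the compiled artifacts for reuse.
pipe.save_pretrained("kandinsky-2-2-rbln")

image = pipe(prompt="A red panda wearing a scarf", num_inference_steps=50).images[0]
image.save("red_panda.png")

# Later, reload the compiled pipeline without recompiling:
# pipe = RBLNKandinskyV22CombinedPipeline.from_pretrained("kandinsky-2-2-rbln", export=False)
```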
API Reference¶
Classes¶
RBLNKandinskyV22PriorPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22PriorPipeline
RBLN wrapper for the Kandinsky V2.2 Prior pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22Pipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22Pipeline
RBLN wrapper for the Kandinsky V2.2 text-to-image pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22Img2ImgPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22Img2ImgPipeline
RBLN wrapper for the Kandinsky V2.2 image-to-image pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22InpaintPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22InpaintPipeline
RBLN wrapper for the Kandinsky V2.2 inpainting pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22CombinedPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22CombinedPipeline
RBLN wrapper for the Kandinsky V2.2 Combined (Prior + Text-to-Image Decoder) pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22Img2ImgCombinedPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22Img2ImgCombinedPipeline
RBLN wrapper for the Kandinsky V2.2 Combined (Prior + Image-to-Image Decoder) pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
RBLNKandinskyV22InpaintCombinedPipeline¶
Bases: RBLNDiffusionMixin, KandinskyV22InpaintCombinedPipeline
RBLN wrapper for the Kandinsky V2.2 Combined (Prior + Inpainting Decoder) pipeline.
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
- When export=False: Loads an already compiled RBLN model from model_id without recompilation
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:
Name | Type | Description | Default
---|---|---|---
model_id | str | The model ID or path of the pretrained model to load. Can be either a model ID on the Hugging Face Hub or a path to a local directory. | required
export | bool | If True, takes a PyTorch model from model_id, compiles it for RBLN NPUs, and loads the compiled model. If False, loads an already compiled RBLN model. | False
model_save_dir | Optional[PathLike] | Directory to save the compiled model artifacts. Only used when export=True. | None
rbln_config | Dict[str, Any] | Configuration options for RBLN compilation. Can include settings for specific submodules. | {}
lora_ids | Optional[Union[str, List[str]]] | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True. | None
lora_weights_names | Optional[Union[str, List[str]]] | Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True. | None
lora_scales | Optional[Union[float, List[float]]] | Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True. | None
**kwargs | Dict[str, Any] | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | {}
Returns:
Type | Description
---|---
Self | A compiled diffusion pipeline that can be used for inference on RBLN NPUs. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
Classes¶
RBLNKandinskyV22PipelineBaseConfig¶
Bases: RBLNModelConfig
Base configuration class for Kandinsky V2.2 decoder pipelines.
Functions¶
__init__(unet=None, movq=None, *, sample_size=None, batch_size=None, guidance_scale=None, image_size=None, img_height=None, img_width=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
unet | Optional[RBLNUNet2DConditionModelConfig] | Configuration for the UNet model component. Initialized as RBLNUNet2DConditionModelConfig if not provided. | None
movq | Optional[RBLNVQModelConfig] | Configuration for the MoVQ (VQ-GAN) model component. Initialized as RBLNVQModelConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Raises:
Type | Description
---|---
ValueError | If both image_size and img_height/img_width are provided.
Note
When guidance_scale > 1.0, the UNet batch size is automatically doubled to accommodate classifier-free guidance.
RBLNKandinskyV22PipelineConfig¶
Bases: RBLNKandinskyV22PipelineBaseConfig
Configuration class for the Kandinsky V2.2 text-to-image decoder pipeline.
Functions¶
__init__(unet=None, movq=None, *, sample_size=None, batch_size=None, guidance_scale=None, image_size=None, img_height=None, img_width=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
unet | Optional[RBLNUNet2DConditionModelConfig] | Configuration for the UNet model component. Initialized as RBLNUNet2DConditionModelConfig if not provided. | None
movq | Optional[RBLNVQModelConfig] | Configuration for the MoVQ (VQ-GAN) model component. Initialized as RBLNVQModelConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Raises:
Type | Description
---|---
ValueError | If both image_size and img_height/img_width are provided.
Note
When guidance_scale > 1.0, the UNet batch size is automatically doubled to accommodate classifier-free guidance.
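For instance, a decoder-pipeline configuration might be constructed like this (a sketch; the argument values are illustrative, not defaults):

```python
from optimum.rbln import RBLNKandinskyV22PipelineConfig

config = RBLNKandinskyV22PipelineConfig(
    batch_size=2,
    guidance_scale=7.5,  # > 1.0, so the UNet is compiled with batch size 4
    img_height=768,      # img_height/img_width and image_size are
    img_width=768,       # mutually exclusive; passing both raises ValueError
)
```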
RBLNKandinskyV22Img2ImgPipelineConfig¶
Bases: RBLNKandinskyV22PipelineBaseConfig
Configuration class for the Kandinsky V2.2 image-to-image decoder pipeline.
Functions¶
__init__(unet=None, movq=None, *, sample_size=None, batch_size=None, guidance_scale=None, image_size=None, img_height=None, img_width=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
unet | Optional[RBLNUNet2DConditionModelConfig] | Configuration for the UNet model component. Initialized as RBLNUNet2DConditionModelConfig if not provided. | None
movq | Optional[RBLNVQModelConfig] | Configuration for the MoVQ (VQ-GAN) model component. Initialized as RBLNVQModelConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Raises:
Type | Description
---|---
ValueError | If both image_size and img_height/img_width are provided.
Note
When guidance_scale > 1.0, the UNet batch size is automatically doubled to accommodate classifier-free guidance.
RBLNKandinskyV22InpaintPipelineConfig¶
Bases: RBLNKandinskyV22PipelineBaseConfig
Configuration class for the Kandinsky V2.2 inpainting decoder pipeline.
Functions¶
__init__(unet=None, movq=None, *, sample_size=None, batch_size=None, guidance_scale=None, image_size=None, img_height=None, img_width=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
unet | Optional[RBLNUNet2DConditionModelConfig] | Configuration for the UNet model component. Initialized as RBLNUNet2DConditionModelConfig if not provided. | None
movq | Optional[RBLNVQModelConfig] | Configuration for the MoVQ (VQ-GAN) model component. Initialized as RBLNVQModelConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Raises:
Type | Description
---|---
ValueError | If both image_size and img_height/img_width are provided.
Note
When guidance_scale > 1.0, the UNet batch size is automatically doubled to accommodate classifier-free guidance.
RBLNKandinskyV22PriorPipelineConfig¶
Bases: RBLNModelConfig
Configuration class for the Kandinsky V2.2 Prior pipeline.
Functions¶
__init__(text_encoder=None, image_encoder=None, prior=None, *, batch_size=None, guidance_scale=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
text_encoder | Optional[RBLNCLIPTextModelWithProjectionConfig] | Configuration for the text encoder component. Initialized as RBLNCLIPTextModelWithProjectionConfig if not provided. | None
image_encoder | Optional[RBLNCLIPVisionModelWithProjectionConfig] | Configuration for the image encoder component. Initialized as RBLNCLIPVisionModelWithProjectionConfig if not provided. | None
prior | Optional[RBLNPriorTransformerConfig] | Configuration for the prior transformer component. Initialized as RBLNPriorTransformerConfig if not provided. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
Note
When guidance_scale > 1.0, the prior batch size is automatically doubled to accommodate classifier-free guidance.
RBLNKandinskyV22CombinedPipelineBaseConfig¶
Bases: RBLNModelConfig
Base configuration class for Kandinsky V2.2 combined pipelines.
Functions¶
__init__(prior_pipe=None, decoder_pipe=None, *, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, prior_prior=None, prior_image_encoder=None, prior_text_encoder=None, unet=None, movq=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
prior_pipe | Optional[RBLNKandinskyV22PriorPipelineConfig] | Configuration for the prior pipeline. Initialized as RBLNKandinskyV22PriorPipelineConfig if not provided. | None
decoder_pipe | Optional[RBLNKandinskyV22PipelineConfig] | Configuration for the decoder pipeline. Initialized as RBLNKandinskyV22PipelineConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
prior_prior | Optional[RBLNPriorTransformerConfig] | Direct configuration for the prior transformer. Used if prior_pipe is not provided. | None
prior_image_encoder | Optional[RBLNCLIPVisionModelWithProjectionConfig] | Direct configuration for the image encoder. Used if prior_pipe is not provided. | None
prior_text_encoder | Optional[RBLNCLIPTextModelWithProjectionConfig] | Direct configuration for the text encoder. Used if prior_pipe is not provided. | None
unet | Optional[RBLNUNet2DConditionModelConfig] | Direct configuration for the UNet. Used if decoder_pipe is not provided. | None
movq | Optional[RBLNVQModelConfig] | Direct configuration for the MoVQ (VQ-GAN) model. Used if decoder_pipe is not provided. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
RBLNKandinskyV22CombinedPipelineConfig¶
Bases: RBLNKandinskyV22CombinedPipelineBaseConfig
Configuration class for the Kandinsky V2.2 combined text-to-image pipeline.
Functions¶
__init__(prior_pipe=None, decoder_pipe=None, *, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, prior_prior=None, prior_image_encoder=None, prior_text_encoder=None, unet=None, movq=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
prior_pipe | Optional[RBLNKandinskyV22PriorPipelineConfig] | Configuration for the prior pipeline. Initialized as RBLNKandinskyV22PriorPipelineConfig if not provided. | None
decoder_pipe | Optional[RBLNKandinskyV22PipelineConfig] | Configuration for the decoder pipeline. Initialized as RBLNKandinskyV22PipelineConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
prior_prior | Optional[RBLNPriorTransformerConfig] | Direct configuration for the prior transformer. Used if prior_pipe is not provided. | None
prior_image_encoder | Optional[RBLNCLIPVisionModelWithProjectionConfig] | Direct configuration for the image encoder. Used if prior_pipe is not provided. | None
prior_text_encoder | Optional[RBLNCLIPTextModelWithProjectionConfig] | Direct configuration for the text encoder. Used if prior_pipe is not provided. | None
unet | Optional[RBLNUNet2DConditionModelConfig] | Direct configuration for the UNet. Used if decoder_pipe is not provided. | None
movq | Optional[RBLNVQModelConfig] | Direct configuration for the MoVQ (VQ-GAN) model. Used if decoder_pipe is not provided. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
RBLNKandinskyV22InpaintCombinedPipelineConfig¶
Bases: RBLNKandinskyV22CombinedPipelineBaseConfig
Configuration class for the Kandinsky V2.2 combined inpainting pipeline.
Functions¶
__init__(prior_pipe=None, decoder_pipe=None, *, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, prior_prior=None, prior_image_encoder=None, prior_text_encoder=None, unet=None, movq=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
prior_pipe | Optional[RBLNKandinskyV22PriorPipelineConfig] | Configuration for the prior pipeline. Initialized as RBLNKandinskyV22PriorPipelineConfig if not provided. | None
decoder_pipe | Optional[RBLNKandinskyV22PipelineConfig] | Configuration for the decoder pipeline. Initialized as RBLNKandinskyV22PipelineConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
prior_prior | Optional[RBLNPriorTransformerConfig] | Direct configuration for the prior transformer. Used if prior_pipe is not provided. | None
prior_image_encoder | Optional[RBLNCLIPVisionModelWithProjectionConfig] | Direct configuration for the image encoder. Used if prior_pipe is not provided. | None
prior_text_encoder | Optional[RBLNCLIPTextModelWithProjectionConfig] | Direct configuration for the text encoder. Used if prior_pipe is not provided. | None
unet | Optional[RBLNUNet2DConditionModelConfig] | Direct configuration for the UNet. Used if decoder_pipe is not provided. | None
movq | Optional[RBLNVQModelConfig] | Direct configuration for the MoVQ (VQ-GAN) model. Used if decoder_pipe is not provided. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}
RBLNKandinskyV22Img2ImgCombinedPipelineConfig¶
Bases: RBLNKandinskyV22CombinedPipelineBaseConfig
Configuration class for the Kandinsky V2.2 combined image-to-image pipeline.
Functions¶
__init__(prior_pipe=None, decoder_pipe=None, *, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, prior_prior=None, prior_image_encoder=None, prior_text_encoder=None, unet=None, movq=None, **kwargs)¶
Parameters:
Name | Type | Description | Default
---|---|---|---
prior_pipe | Optional[RBLNKandinskyV22PriorPipelineConfig] | Configuration for the prior pipeline. Initialized as RBLNKandinskyV22PriorPipelineConfig if not provided. | None
decoder_pipe | Optional[RBLNKandinskyV22PipelineConfig] | Configuration for the decoder pipeline. Initialized as RBLNKandinskyV22PipelineConfig if not provided. | None
sample_size | Optional[Tuple[int, int]] | Spatial dimensions for the UNet model. | None
image_size | Optional[Tuple[int, int]] | Dimensions for the generated images. Cannot be used together with img_height/img_width. | None
batch_size | Optional[int] | Batch size for inference, applied to all submodules. | None
img_height | Optional[int] | Height of the generated images. | None
img_width | Optional[int] | Width of the generated images. | None
guidance_scale | Optional[float] | Scale for classifier-free guidance. | None
prior_prior | Optional[RBLNPriorTransformerConfig] | Direct configuration for the prior transformer. Used if prior_pipe is not provided. | None
prior_image_encoder | Optional[RBLNCLIPVisionModelWithProjectionConfig] | Direct configuration for the image encoder. Used if prior_pipe is not provided. | None
prior_text_encoder | Optional[RBLNCLIPTextModelWithProjectionConfig] | Direct configuration for the text encoder. Used if prior_pipe is not provided. | None
unet | Optional[RBLNUNet2DConditionModelConfig] | Direct configuration for the UNet. Used if decoder_pipe is not provided. | None
movq | Optional[RBLNVQModelConfig] | Direct configuration for the MoVQ (VQ-GAN) model. Used if decoder_pipe is not provided. | None
**kwargs | Dict[str, Any] | Additional arguments passed to the parent RBLNModelConfig. | {}