Stable Diffusion 3¶
Stable Diffusion 3 (SD3) is the latest generation text-to-image model from Stability AI, featuring a Multimodal Diffusion Transformer (MMDiT) architecture. It excels at handling complex prompts involving multiple subjects, spatial relationships, and diverse text styles. SD3 utilizes three distinct text encoders (CLIP-L, OpenCLIP-G, T5-XXL) for enhanced prompt interpretation. RBLN NPUs can accelerate Stable Diffusion 3 pipelines using Optimum RBLN.
Supported Pipelines¶
Optimum RBLN supports several Stable Diffusion 3 pipelines:
- Text-to-Image: Generate high-quality images from text prompts.
- Image-to-Image: Modify existing images based on text prompts.
- Inpainting: Fill masked regions of an image guided by text prompts.
Key Classes¶
- `RBLNStableDiffusion3Pipeline`: Text-to-image pipeline for Stable Diffusion 3.
- `RBLNStableDiffusion3PipelineConfig`: Configuration for the text-to-image pipeline.
- `RBLNStableDiffusion3Img2ImgPipeline`: Image-to-image pipeline for Stable Diffusion 3.
- `RBLNStableDiffusion3Img2ImgPipelineConfig`: Configuration for the image-to-image pipeline.
- `RBLNStableDiffusion3InpaintPipeline`: Inpainting pipeline for Stable Diffusion 3.
- `RBLNStableDiffusion3InpaintPipelineConfig`: Configuration for the inpainting pipeline.
Important: Batch Size Configuration for Guidance Scale¶
Batch Size and Guidance Scale
When using Stable Diffusion 3 with a guidance scale > 1.0 (the default is typically around 5.0-7.0), the MMDiT Transformer's effective batch size is doubled during runtime because of the classifier-free guidance technique.
Since RBLN NPU uses static graph compilation, the Transformer's batch size at compilation time must match its runtime batch size, or you'll encounter errors during inference.
Default Behavior¶
By default, if you don't explicitly specify the Transformer's batch size, Optimum RBLN will:
- Assume you'll use a guidance scale > 1.0.
- Automatically set the Transformer's batch size to 2× your pipeline's batch size.
If you plan to use the default guidance scale (which is > 1.0), this automatic configuration will work correctly. However, if you plan to use a different guidance scale or want more control, you should explicitly configure the Transformer's batch size.
Example: Explicitly Setting the Transformer Batch Size (guidance_scale > 1.0)¶
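A minimal sketch of explicit configuration, assuming the Hugging Face model ID `stabilityai/stable-diffusion-3-medium-diffusers` and a nested `rbln_config` entry for the `transformer` submodule; the transformer batch size is set to 2× the pipeline batch size because classifier-free guidance doubles the effective batch at runtime:

```python
from optimum.rbln import RBLNStableDiffusion3Pipeline

# Pipeline batch size is 1. With guidance_scale > 1.0, classifier-free
# guidance doubles the transformer's runtime batch, so compile the
# transformer with batch_size=2 to match.
pipe = RBLNStableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    export=True,
    rbln_config={
        "batch_size": 1,
        "transformer": {"batch_size": 2},
    },
)
pipe.save_pretrained("sd3_compiled")
```

This mirrors what the default behavior does automatically; writing it out makes the compiled transformer shape explicit and self-documenting.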
Example: Disabling Guidance (guidance_scale = 0.0)¶
If you plan to use a guidance scale of exactly 0.0 (disabling classifier-free guidance), you should explicitly set the Transformer batch size to match your inference batch size:
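A sketch of that configuration, under the same model-ID and `rbln_config` assumptions as above; here the transformer batch size matches the pipeline batch size because no guidance doubling occurs:

```python
from optimum.rbln import RBLNStableDiffusion3Pipeline

# guidance_scale=0.0 disables classifier-free guidance, so the transformer
# runs at the pipeline batch size; compile it with a matching batch_size.
pipe = RBLNStableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    export=True,
    rbln_config={
        "batch_size": 1,
        "transformer": {"batch_size": 1},
    },
)
pipe.save_pretrained("sd3_compiled_no_cfg")
```

At inference time, pass `guidance_scale=0.0` to the pipeline call so the runtime batch size matches the compiled shape.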
Usage Example (Text-to-Image)¶
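A minimal compile-then-infer sketch, assuming the `stabilityai/stable-diffusion-3-medium-diffusers` checkpoint and the `rbln_config` keys (`batch_size`, `img_height`, `img_width`) described in the configuration classes below; the prompt and step count are illustrative:

```python
from optimum.rbln import RBLNStableDiffusion3Pipeline

# One-time compilation: convert the PyTorch checkpoint for RBLN NPUs
# and save the compiled artifacts to disk.
pipe = RBLNStableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    export=True,
    rbln_config={"batch_size": 1, "img_height": 1024, "img_width": 1024},
)
pipe.save_pretrained("sd3_compiled")

# Later: load the pre-compiled model (no recompilation) and run inference.
pipe = RBLNStableDiffusion3Pipeline.from_pretrained("sd3_compiled", export=False)
image = pipe(
    prompt="A photo of an astronaut riding a horse on Mars",
    guidance_scale=7.0,
    num_inference_steps=28,
).images[0]
image.save("astronaut.png")
```

Because `guidance_scale` here is above 1.0, the default compilation (transformer batch size 2× the pipeline batch size) matches the runtime shape.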
API Reference¶
Classes¶
RBLNStableDiffusion3Pipeline¶
Bases: RBLNDiffusionMixin, StableDiffusion3Pipeline
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When `export=True`: takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model.
- When `export=False`: loads an already compiled RBLN model from `model_id` without recompilation.
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The model ID or path to the pretrained model to load. Can be either a Hugging Face Hub model ID or a local directory path. | required |
| `export` | `bool` | If `True`, takes a PyTorch model from `model_id` and compiles it for RBLN NPU execution; if `False`, loads an already compiled model. | `False` |
| `model_save_dir` | `Optional[PathLike]` | Directory to save the compiled model artifacts. Only used when `export=True`. | `None` |
| `rbln_config` | `Dict[str, Any]` | Configuration options for RBLN compilation. Can include settings for specific submodules such as `transformer`, `text_encoder`, and `vae`. | `{}` |
| `lora_ids` | `Optional[Union[str, List[str]]]` | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when `export=True`. | `None` |
| `lora_weights_names` | `Optional[Union[str, List[str]]]` | Names of specific LoRA weight files to load, corresponding to `lora_ids`. Only used when `export=True`. | `None` |
| `lora_scales` | `Optional[Union[float, List[float]]]` | Scaling factor(s) to apply to the LoRA adapter(s). Only used when `export=True`. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | `{}` |

Returns:

| Type | Description |
|---|---|
| `Self` | A compiled diffusion pipeline that can be used for inference on RBLN NPU. The returned object is an instance of the class that called this method, inheriting from `RBLNDiffusionMixin`. |
RBLNStableDiffusion3InpaintPipeline¶
Bases: RBLNDiffusionMixin, StableDiffusion3InpaintPipeline
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When `export=True`: takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model.
- When `export=False`: loads an already compiled RBLN model from `model_id` without recompilation.
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The model ID or path to the pretrained model to load. Can be either a Hugging Face Hub model ID or a local directory path. | required |
| `export` | `bool` | If `True`, takes a PyTorch model from `model_id` and compiles it for RBLN NPU execution; if `False`, loads an already compiled model. | `False` |
| `model_save_dir` | `Optional[PathLike]` | Directory to save the compiled model artifacts. Only used when `export=True`. | `None` |
| `rbln_config` | `Dict[str, Any]` | Configuration options for RBLN compilation. Can include settings for specific submodules such as `transformer`, `text_encoder`, and `vae`. | `{}` |
| `lora_ids` | `Optional[Union[str, List[str]]]` | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when `export=True`. | `None` |
| `lora_weights_names` | `Optional[Union[str, List[str]]]` | Names of specific LoRA weight files to load, corresponding to `lora_ids`. Only used when `export=True`. | `None` |
| `lora_scales` | `Optional[Union[float, List[float]]]` | Scaling factor(s) to apply to the LoRA adapter(s). Only used when `export=True`. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | `{}` |

Returns:

| Type | Description |
|---|---|
| `Self` | A compiled diffusion pipeline that can be used for inference on RBLN NPU. The returned object is an instance of the class that called this method, inheriting from `RBLNDiffusionMixin`. |
RBLNStableDiffusion3Img2ImgPipeline¶
Bases: RBLNDiffusionMixin, StableDiffusion3Img2ImgPipeline
Functions¶
from_pretrained(model_id, *, export=False, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod¶
Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.
This method has two distinct operating modes:
- When `export=True`: takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model.
- When `export=False`: loads an already compiled RBLN model from `model_id` without recompilation.
It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | The model ID or path to the pretrained model to load. Can be either a Hugging Face Hub model ID or a local directory path. | required |
| `export` | `bool` | If `True`, takes a PyTorch model from `model_id` and compiles it for RBLN NPU execution; if `False`, loads an already compiled model. | `False` |
| `model_save_dir` | `Optional[PathLike]` | Directory to save the compiled model artifacts. Only used when `export=True`. | `None` |
| `rbln_config` | `Dict[str, Any]` | Configuration options for RBLN compilation. Can include settings for specific submodules such as `transformer`, `text_encoder`, and `vae`. | `{}` |
| `lora_ids` | `Optional[Union[str, List[str]]]` | LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when `export=True`. | `None` |
| `lora_weights_names` | `Optional[Union[str, List[str]]]` | Names of specific LoRA weight files to load, corresponding to `lora_ids`. Only used when `export=True`. | `None` |
| `lora_scales` | `Optional[Union[float, List[float]]]` | Scaling factor(s) to apply to the LoRA adapter(s). Only used when `export=True`. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used. | `{}` |

Returns:

| Type | Description |
|---|---|
| `Self` | A compiled diffusion pipeline that can be used for inference on RBLN NPU. The returned object is an instance of the class that called this method, inheriting from `RBLNDiffusionMixin`. |
Classes¶
RBLNStableDiffusion3PipelineBaseConfig¶
Bases: RBLNModelConfig
Functions¶
__init__(transformer=None, text_encoder=None, text_encoder_2=None, text_encoder_3=None, vae=None, *, max_seq_len=None, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, **kwargs)¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transformer` | `Optional[RBLNSD3Transformer2DModelConfig]` | Configuration for the transformer model component. Initialized as `RBLNSD3Transformer2DModelConfig` if not provided. | `None` |
| `text_encoder` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the primary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_2` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the secondary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_3` | `Optional[RBLNT5EncoderModelConfig]` | Configuration for the tertiary text encoder. Initialized as `RBLNT5EncoderModelConfig` if not provided. | `None` |
| `vae` | `Optional[RBLNAutoencoderKLConfig]` | Configuration for the VAE model component. Initialized as `RBLNAutoencoderKLConfig` if not provided. | `None` |
| `max_seq_len` | `Optional[int]` | Maximum sequence length for text inputs. Defaults to 256. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | Spatial dimensions for the transformer model. | `None` |
| `image_size` | `Optional[Tuple[int, int]]` | Dimensions for the generated images. Cannot be used together with `img_height`/`img_width`. | `None` |
| `batch_size` | `Optional[int]` | Batch size for inference, applied to all submodules. | `None` |
| `img_height` | `Optional[int]` | Height of the generated images. | `None` |
| `img_width` | `Optional[int]` | Width of the generated images. | `None` |
| `guidance_scale` | `Optional[float]` | Scale for classifier-free guidance. Deprecated parameter. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If both `image_size` and `img_height`/`img_width` are provided. |

Note
When `guidance_scale` > 1.0, the transformer batch size is automatically doubled to accommodate classifier-free guidance.
RBLNStableDiffusion3PipelineConfig¶
Bases: RBLNStableDiffusion3PipelineBaseConfig
Functions¶
__init__(transformer=None, text_encoder=None, text_encoder_2=None, text_encoder_3=None, vae=None, *, max_seq_len=None, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, **kwargs)¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transformer` | `Optional[RBLNSD3Transformer2DModelConfig]` | Configuration for the transformer model component. Initialized as `RBLNSD3Transformer2DModelConfig` if not provided. | `None` |
| `text_encoder` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the primary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_2` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the secondary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_3` | `Optional[RBLNT5EncoderModelConfig]` | Configuration for the tertiary text encoder. Initialized as `RBLNT5EncoderModelConfig` if not provided. | `None` |
| `vae` | `Optional[RBLNAutoencoderKLConfig]` | Configuration for the VAE model component. Initialized as `RBLNAutoencoderKLConfig` if not provided. | `None` |
| `max_seq_len` | `Optional[int]` | Maximum sequence length for text inputs. Defaults to 256. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | Spatial dimensions for the transformer model. | `None` |
| `image_size` | `Optional[Tuple[int, int]]` | Dimensions for the generated images. Cannot be used together with `img_height`/`img_width`. | `None` |
| `batch_size` | `Optional[int]` | Batch size for inference, applied to all submodules. | `None` |
| `img_height` | `Optional[int]` | Height of the generated images. | `None` |
| `img_width` | `Optional[int]` | Width of the generated images. | `None` |
| `guidance_scale` | `Optional[float]` | Scale for classifier-free guidance. Deprecated parameter. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If both `image_size` and `img_height`/`img_width` are provided. |

Note
When `guidance_scale` > 1.0, the transformer batch size is automatically doubled to accommodate classifier-free guidance.
RBLNStableDiffusion3Img2ImgPipelineConfig¶
Bases: RBLNStableDiffusion3PipelineBaseConfig
Functions¶
__init__(transformer=None, text_encoder=None, text_encoder_2=None, text_encoder_3=None, vae=None, *, max_seq_len=None, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, **kwargs)¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transformer` | `Optional[RBLNSD3Transformer2DModelConfig]` | Configuration for the transformer model component. Initialized as `RBLNSD3Transformer2DModelConfig` if not provided. | `None` |
| `text_encoder` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the primary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_2` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the secondary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_3` | `Optional[RBLNT5EncoderModelConfig]` | Configuration for the tertiary text encoder. Initialized as `RBLNT5EncoderModelConfig` if not provided. | `None` |
| `vae` | `Optional[RBLNAutoencoderKLConfig]` | Configuration for the VAE model component. Initialized as `RBLNAutoencoderKLConfig` if not provided. | `None` |
| `max_seq_len` | `Optional[int]` | Maximum sequence length for text inputs. Defaults to 256. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | Spatial dimensions for the transformer model. | `None` |
| `image_size` | `Optional[Tuple[int, int]]` | Dimensions for the generated images. Cannot be used together with `img_height`/`img_width`. | `None` |
| `batch_size` | `Optional[int]` | Batch size for inference, applied to all submodules. | `None` |
| `img_height` | `Optional[int]` | Height of the generated images. | `None` |
| `img_width` | `Optional[int]` | Width of the generated images. | `None` |
| `guidance_scale` | `Optional[float]` | Scale for classifier-free guidance. Deprecated parameter. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If both `image_size` and `img_height`/`img_width` are provided. |

Note
When `guidance_scale` > 1.0, the transformer batch size is automatically doubled to accommodate classifier-free guidance.
RBLNStableDiffusion3InpaintPipelineConfig¶
Bases: RBLNStableDiffusion3PipelineBaseConfig
Functions¶
__init__(transformer=None, text_encoder=None, text_encoder_2=None, text_encoder_3=None, vae=None, *, max_seq_len=None, sample_size=None, image_size=None, batch_size=None, img_height=None, img_width=None, guidance_scale=None, **kwargs)¶
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transformer` | `Optional[RBLNSD3Transformer2DModelConfig]` | Configuration for the transformer model component. Initialized as `RBLNSD3Transformer2DModelConfig` if not provided. | `None` |
| `text_encoder` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the primary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_2` | `Optional[RBLNCLIPTextModelWithProjectionConfig]` | Configuration for the secondary text encoder. Initialized as `RBLNCLIPTextModelWithProjectionConfig` if not provided. | `None` |
| `text_encoder_3` | `Optional[RBLNT5EncoderModelConfig]` | Configuration for the tertiary text encoder. Initialized as `RBLNT5EncoderModelConfig` if not provided. | `None` |
| `vae` | `Optional[RBLNAutoencoderKLConfig]` | Configuration for the VAE model component. Initialized as `RBLNAutoencoderKLConfig` if not provided. | `None` |
| `max_seq_len` | `Optional[int]` | Maximum sequence length for text inputs. Defaults to 256. | `None` |
| `sample_size` | `Optional[Tuple[int, int]]` | Spatial dimensions for the transformer model. | `None` |
| `image_size` | `Optional[Tuple[int, int]]` | Dimensions for the generated images. Cannot be used together with `img_height`/`img_width`. | `None` |
| `batch_size` | `Optional[int]` | Batch size for inference, applied to all submodules. | `None` |
| `img_height` | `Optional[int]` | Height of the generated images. | `None` |
| `img_width` | `Optional[int]` | Width of the generated images. | `None` |
| `guidance_scale` | `Optional[float]` | Scale for classifier-free guidance. Deprecated parameter. | `None` |
| `**kwargs` | `Dict[str, Any]` | Additional arguments passed to the parent `RBLNModelConfig`. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If both `image_size` and `img_height`/`img_width` are provided. |

Note
When `guidance_scale` > 1.0, the transformer batch size is automatically doubled to accommodate classifier-free guidance.