
Stable Video Diffusion

Stable Video Diffusion is an image-to-video latent diffusion model that synthesizes a short video from a single input image. Optimum RBLN provides accelerated Stable Video Diffusion pipelines on RBLN NPUs.

Supported Pipelines

Optimum RBLN supports the following Stable Video Diffusion pipelines:

  • Image-to-Video: Generate a video sequence from an input image

Important: Batch Size Configuration for Guidance Scale

When running Stable Video Diffusion with max_guidance_scale > 1.0 (default: 3.0), classifier-free guidance doubles the UNet's effective batch size during inference.

Because RBLN NPUs rely on static graph compilation, the UNet batch size configured at compile time must match the runtime batch size. Otherwise, the compiled graph cannot execute and inference fails.

Default Behavior

If you do not set the UNet batch size explicitly, Optimum RBLN will:

  • Assume you'll use the default max_guidance_scale (3.0)
  • Automatically set the UNet batch size to 2× the pipeline batch size

If you plan to use the default max guidance scale (which is > 1.0), this automatic configuration will work correctly. However, if you plan to use a different guidance scale or want more control, you should explicitly configure the UNet's batch size.
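As a quick illustration, here is a minimal sketch (values are illustrative) showing that, under the default max_guidance_scale, leaving the unet entry out is expected to produce the same compiled UNet batch size as spelling it out explicitly:

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig

# Implicit: the UNet batch size is derived automatically as 2x the pipeline batch size,
# because the default max_guidance_scale (3.0) enables classifier-free guidance.
config_implicit = RBLNStableVideoDiffusionPipelineConfig(
    batch_size=1,
    height=576,
    width=1024,
)

# Explicit: equivalent configuration with the doubled UNet batch size spelled out.
config_explicit = RBLNStableVideoDiffusionPipelineConfig(
    batch_size=1,
    height=576,
    width=1024,
    unet=dict(batch_size=2),  # 2x the pipeline batch size
)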

Example: Explicitly Setting the UNet Batch Size

import torch

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig, RBLNStableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# For max_guidance_scale > 1.0 (default: 3.0)
# Double the UNet batch size at compile time
config = RBLNStableVideoDiffusionPipelineConfig(
    batch_size=2,  # Inference batch size
    height=576,
    width=1024,
    unet=dict(batch_size=4, device=1)  # 2x the pipeline batch size for classifier-free guidance; run the UNet on device 1
)

pipe = RBLNStableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    export=True,
    rbln_config=config
)

# Standard inference with default max_guidance_scale
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png"
image = load_image(url).resize((1024, 576))
generator = torch.manual_seed(42)
frames = pipe(image=[image, image], generator=generator).frames[0]

# Save the generated video
export_to_video(frames, "svd_guided.mp4", fps=7)
print("Video saved as svd_guided.mp4")

Example: Using max_guidance_scale = 1.0

If you plan to use a max guidance scale of exactly 1.0, classifier-free guidance is disabled, so you should explicitly set the UNet batch size to match your inference batch size:

import torch

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig, RBLNStableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

config = RBLNStableVideoDiffusionPipelineConfig(
    height=576,
    width=1024,
    unet=dict(batch_size=1)  # Match runtime batch size when guidance is disabled
)

pipe = RBLNStableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    export=True,
    rbln_config=config
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png"
image = load_image(url).resize((1024, 576))
generator = torch.manual_seed(42)
frames = pipe(image=image, generator=generator, max_guidance_scale=1.0).frames[0]

export_to_video(frames, "svd_no_guidance.mp4", fps=7)
print("Video saved as svd_no_guidance.mp4")

Usage Example

import torch

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig, RBLNStableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Create a configuration with your preferred resolution
config = RBLNStableVideoDiffusionPipelineConfig(
    height=576,
    width=1024,
)

# Compile Stable Video Diffusion for RBLN NPU
pipe = RBLNStableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    export=True,
    rbln_config=config
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png"
image = load_image(url).resize((1024, 576))
generator = torch.manual_seed(42)
frames = pipe(image=image, generator=generator).frames[0]

export_to_video(frames, "svd_output.mp4", fps=7)
print("Video saved as svd_output.mp4")

API Reference

Classes

RBLNStableVideoDiffusionPipeline

Bases: RBLNDiffusionMixin, StableVideoDiffusionPipeline

RBLN-accelerated implementation of Stable Video Diffusion pipeline for image-to-video generation.

This pipeline compiles Stable Video Diffusion models to run efficiently on RBLN NPUs, enabling high-performance inference for generating videos from images with optimized memory usage and throughput.

Functions

from_pretrained(model_id, *, export=None, model_save_dir=None, rbln_config={}, lora_ids=None, lora_weights_names=None, lora_scales=None, **kwargs) classmethod

Load a pretrained diffusion pipeline from a model checkpoint, with optional compilation for RBLN NPUs.

This method has two distinct operating modes:
  • When export=True: Takes a PyTorch-based diffusion model, compiles it for RBLN NPUs, and loads the compiled model
  • When export=False: Loads an already compiled RBLN model from model_id without recompilation

It supports various diffusion pipelines including Stable Diffusion, Kandinsky, ControlNet, and other diffusers-based models.

Parameters:

  • model_id (str, required): The model ID or path to the pretrained model to load. Can be either a model ID from the HuggingFace Hub or a local path to a saved model directory.
  • export (bool, default None): If True, takes a PyTorch model from model_id and compiles it for RBLN NPU execution. If False, loads an already compiled RBLN model from model_id without recompilation.
  • model_save_dir (Optional[PathLike], default None): Directory to save the compiled model artifacts. Only used when export=True. If not provided and export=True, a temporary directory is used.
  • rbln_config (Dict[str, Any], default {}): Configuration options for RBLN compilation. Can include settings for specific submodules such as text_encoder, unet, and vae. Configuration can be tailored to the specific pipeline being compiled.
  • lora_ids (Optional[Union[str, List[str]]], default None): LoRA adapter ID(s) to load and apply before compilation. LoRA weights are fused into the model weights during compilation. Only used when export=True.
  • lora_weights_names (Optional[Union[str, List[str]]], default None): Names of specific LoRA weight files to load, corresponding to lora_ids. Only used when export=True.
  • lora_scales (Optional[Union[float, List[float]]], default None): Scaling factor(s) to apply to the LoRA adapter(s). Only used when export=True.
  • kwargs (Any): Additional arguments to pass to the underlying diffusion pipeline constructor or the RBLN compilation process. These may include parameters specific to individual submodules or the particular diffusion pipeline being used.

Returns:

  • RBLNDiffusionMixin: A compiled or loaded diffusion pipeline that can be used for inference on RBLN NPU. The returned object is an instance of the class that called this method, inheriting from RBLNDiffusionMixin.
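Since rbln_config is typed as a plain dictionary here, it can presumably also be passed without building a config object, with submodule settings such as unet or vae given as nested dictionaries. A minimal sketch (values are illustrative):

from optimum.rbln import RBLNStableVideoDiffusionPipeline

pipe = RBLNStableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    export=True,
    rbln_config={
        "height": 576,
        "width": 1024,
        "unet": {"batch_size": 2},  # 2x the pipeline batch size for classifier-free guidance
    },
)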


Classes

RBLNStableVideoDiffusionPipelineConfig

Bases: RBLNModelConfig

Functions

__init__(image_encoder=None, unet=None, vae=None, *, batch_size=None, height=None, width=None, num_frames=None, decode_chunk_size=None, guidance_scale=None, **kwargs)

Parameters:

  • image_encoder (Optional[RBLNCLIPVisionModelWithProjectionConfig], default None): Configuration for the image encoder component. Initialized as RBLNCLIPVisionModelWithProjectionConfig if not provided.
  • unet (Optional[RBLNUNetSpatioTemporalConditionModelConfig], default None): Configuration for the UNet model component. Initialized as RBLNUNetSpatioTemporalConditionModelConfig if not provided.
  • vae (Optional[RBLNAutoencoderKLTemporalDecoderConfig], default None): Configuration for the VAE model component. Initialized as RBLNAutoencoderKLTemporalDecoderConfig if not provided.
  • batch_size (Optional[int], default None): Batch size for inference, applied to all submodules.
  • height (Optional[int], default None): Height of the generated images.
  • width (Optional[int], default None): Width of the generated images.
  • num_frames (Optional[int], default None): The number of frames in the generated video.
  • decode_chunk_size (Optional[int], default None): The number of frames to decode at once during VAE decoding. Useful for managing memory usage during video generation.
  • guidance_scale (Optional[float], default None): Scale for classifier-free guidance.
  • kwargs (Any): Additional arguments passed to the parent RBLNModelConfig.

Raises:

  • ValueError: If both image_size and height/width are provided.

Note

When guidance_scale > 1.0, the UNet batch size is automatically doubled to accommodate classifier-free guidance.
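As a minimal sketch of the video-specific options above (the numeric values are illustrative, not library defaults):

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig

config = RBLNStableVideoDiffusionPipelineConfig(
    height=576,
    width=1024,
    num_frames=25,         # number of frames in the generated video
    decode_chunk_size=8,   # frames decoded per VAE pass, to limit peak memory
    guidance_scale=3.0,    # > 1.0 doubles the UNet batch size (see the note above)
)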

load(path, **kwargs) classmethod

Load a RBLNModelConfig from a path.

Parameters:

  • path (str, required): Path to the RBLNModelConfig file or directory containing the config file.
  • kwargs (Any): Additional keyword arguments to override configuration values. Keys starting with 'rbln_' will have the prefix removed and be used to update the configuration.

Returns:

  • RBLNModelConfig: The loaded configuration instance.

Note

This method loads the configuration from the specified path and applies any provided overrides. If the loaded configuration class doesn't match the expected class, a warning will be logged.
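For example, a saved configuration can be reloaded and selectively overridden like this (the path is an arbitrary example, and rbln_decode_chunk_size is shown only to illustrate the rbln_ prefix handling):

from optimum.rbln import RBLNStableVideoDiffusionPipelineConfig

# "./svd_compiled" is an example directory containing a saved config file.
config = RBLNStableVideoDiffusionPipelineConfig.load("./svd_compiled")

# Keys prefixed with "rbln_" have the prefix stripped and override saved values.
config = RBLNStableVideoDiffusionPipelineConfig.load(
    "./svd_compiled",
    rbln_decode_chunk_size=4,
)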