
Tutorials

Explore the following tutorials to better understand how to use the RBLN SDK.

RBLN Compiler

These tutorials demonstrate how to use the RBLN Python API (Compile and Runtime APIs) with PyTorch and TensorFlow models.

RBLN C/C++ Runtime API

These tutorials demonstrate how to deploy precompiled models using the RBLN C/C++ Runtime API.

RBLN Optimum (HuggingFace Model Support)

These tutorials demonstrate how to compile and run inference on HuggingFace models using optimum-rbln.

RBLN Profiler

These tutorials demonstrate how to profile and analyze models during inference using the RBLN Runtime.

RBLN PyTorch

These tutorials demonstrate how to run PyTorch models using torch-rbln.

Model Serving

These tutorials demonstrate how to serve precompiled AI models using the NVIDIA Triton Inference Server and TorchServe, both of which support vLLM.