PyTorch TorchVision ResNet50

PyTorch is one of the most popular open-source deep learning frameworks. TorchVision, an extension library of PyTorch, provides a rich set of pre-trained models and related datasets.

In this tutorial, we will learn how to compile and deploy the ResNet50 model (ImageNet classification) from the TorchVision library on an RBLN NPU with the RBLN SDK.

The tutorial is divided into two parts:

  1. How to compile the PyTorch ResNet50 model and save it to local storage
  2. How to deploy the compiled model in the runtime-based inference environment

Prerequisite

Before we start, please make sure you have installed the required pip packages on your system: torch, torchvision, and the RBLN SDK (which provides the rebel module).

Note

If you want to skip the details and quickly compile and deploy the model on an RBLN NPU, you can jump directly to the summary section of this tutorial. The code in that section includes all the steps required to compile and deploy the model, so it can serve as a quick starting point for your own project.

Native RBLN API

Step 1. How to compile

Prepare the model

First, we can import the ResNet50 model from the TorchVision library.

import torch
from torchvision.models import resnet50, ResNet50_Weights
import rebel  # RBLN Compiler

# Instantiate TorchVision ResNet50 model
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

Compile the model

Once the torch model (a torch.nn.Module) is prepared, we can compile it with the rebel.compile_from_torch() method.

# Compile the model
compiled_model = rebel.compile_from_torch(
    model,
    [("input", [1, 3, 224, 224], torch.float32)],
    # If the NPU is installed on your host machine, you can omit the `npu` argument.
    # The function will automatically detect and use the installed NPU.
    npu="RBLN-CA12",
)

If the NPU is installed on your host machine, you can omit the npu argument in the rebel.compile_from_torch() function. In this case, the function will automatically detect and use the installed NPU. However, if the NPU is not installed on your host machine, you need to specify the target NPU using the npu argument to avoid any errors.

Currently, two NPU names are supported: RBLN-CA02 and RBLN-CA12. If you are unsure of the name of your target NPU, you can check it by running the rbln-stat command in the shell on the host machine where the NPU is installed.
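
For example, on a host with an RBLN NPU installed, the compile call can be written without the npu argument. This is a minimal sketch of the auto-detection path described above; it assumes model and rebel are already set up as in the previous snippet:

# Sketch: compile with NPU auto-detection
# (assumes an RBLN NPU is installed on this host, so `npu` can be omitted)
compiled_model = rebel.compile_from_torch(
    model,
    [("input", [1, 3, 224, 224], torch.float32)],
)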

Save the compiled model

To save the compiled model to local storage, we can use the compiled_model.save() method as shown below.

# Save the compiled model to local storage
compiled_model.save("resnet50.rbln")

Step 2. How to deploy

Now, we can deploy the model by loading the compiled model, running the inference, and checking the output results.

Prepare the input

We need to prepare a preprocessed image as the input data required for the pre-trained ResNet50 model. torchvision.io.image.read_image() is used to load the input image, and ResNet50_Weights.DEFAULT.transforms() provides the default preprocessing transforms for ResNet50.

import torch
from torchvision.io.image import read_image
from torchvision.models import resnet50, ResNet50_Weights
import urllib.request
import rebel  # RBLN Runtime

# Prepare the input
img_url = "https://rbln-public.s3.ap-northeast-2.amazonaws.com/images/tabby.jpg"
img_path = "./tabby.jpg"
with urllib.request.urlopen(img_url) as response, open(img_path, "wb") as f:
    f.write(response.read())
img = read_image(img_path)
weights = ResNet50_Weights.DEFAULT
preprocess = weights.transforms()
batch = preprocess(img).unsqueeze(0)

Run inference

The RBLN Runtime module rebel.Runtime() is used to load the compiled model. It can be initialized in two ways:

# By passing the path of the saved model as an input argument:
module = rebel.Runtime("resnet50.rbln", tensor_type="pt")

# By directly passing a compiled model object:
module = rebel.Runtime(compiled_model, tensor_type="pt")

The tensor_type argument in rebel.Runtime() specifies the type of tensor to be used for input and output data. It can be set to either "pt" for PyTorch tensors or "np" for NumPy arrays.
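
For instance, if the surrounding code works with NumPy arrays rather than PyTorch tensors, the runtime can be created and used as follows (a minimal sketch based on the description above; module_np is just an illustrative name):

# Sketch: a runtime that accepts and returns NumPy arrays
module_np = rebel.Runtime("resnet50.rbln", tensor_type="np")
np_result = module_np.run(batch.numpy())  # convert the PyTorch batch to a NumPy array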

We can use the run() method of the instantiated runtime module rebel.Runtime() for running inference. Additionally, the forward() method and the __call__ magic method can also be used to run inference, maintaining compatibility with PyTorch's interface.

# Run inference using the `run()` method
rebel_result = module.run(batch)

# Alternatively, use the `forward()` method
rebel_result = module.forward(batch)

# Or use the `__call__` magic method
rebel_result = module(batch)

Using forward() or __call__ allows you to use the loaded RBLN model in the same way as a PyTorch model, enabling seamless integration with existing PyTorch code.
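
As a small illustration of this drop-in behavior, the same helper function can accept either the original PyTorch model or the loaded RBLN runtime module, since both are callable and return a (1, 1000) score tensor (a sketch; classify_top1 is a hypothetical helper, not part of the RBLN API):

def classify_top1(any_model, batch, categories):
    # `any_model` can be the original torch.nn.Module or the RBLN runtime module
    scores = any_model(batch)
    _, class_id = torch.topk(scores, 1, dim=1)
    return categories[class_id]

print(classify_top1(module, batch, weights.meta["categories"]))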

You can see basic information about the runtime module, such as the input/output shapes and the compiled model size, by calling print(module).
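
For example:

# Print basic runtime information (input/output shapes, compiled model size, etc.)
print(module)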

Check results

The output rebel_result is a PyTorch tensor of size (1, 1000), where each element is the score of the corresponding category in the ImageNet dataset. We can use torch.topk() to get the index of the Top1 class along with its score. The category name of the Top1 class can then be looked up by this index in the list ResNet50_Weights.DEFAULT.meta["categories"].

# Check results
score, class_id = torch.topk(rebel_result, 1, dim=1)
category_name = weights.meta["categories"][class_id]
print("Top1 category: ", category_name)

The results will look like:

Top1 category:  tabby

Summary

Here is the complete code snippet for compiling the TorchVision ResNet50 model:

import torch
from torchvision.models import resnet50, ResNet50_Weights
import rebel  # RBLN Compiler

# Instantiate TorchVision ResNet50 model
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

# Compile the model
compiled_model = rebel.compile_from_torch(
    model,
    [("input", [1, 3, 224, 224], torch.float32)],
    # If the NPU is installed on your host machine, you can omit the `npu` argument.
    # The function will automatically detect and use the installed NPU.
    npu="RBLN-CA12",
)

# Save the compiled model to local storage
compiled_model.save("resnet50.rbln")

The complete code for deployment of the compiled ResNet50 model is as follows:

import torch
from torchvision.io.image import read_image
from torchvision.models import resnet50, ResNet50_Weights
import urllib.request
import rebel  # RBLN Runtime

# Prepare the input
img_url = "https://rbln-public.s3.ap-northeast-2.amazonaws.com/images/tabby.jpg"
img_path = "./tabby.jpg"
with urllib.request.urlopen(img_url) as response, open(img_path, "wb") as f:
    f.write(response.read())
img = read_image(img_path)
weights = ResNet50_Weights.DEFAULT
preprocess = weights.transforms()
batch = preprocess(img).unsqueeze(0)

# Load the compiled model
module = rebel.Runtime("resnet50.rbln", tensor_type="pt")

# Run inference
rebel_result = module(batch)

# Check results
score, class_id = torch.topk(rebel_result, 1, dim=1)
category_name = weights.meta["categories"][class_id]
print("Top1 category: ", category_name)

torch.compile() API

The RBLN SDK not only offers its native API but also supports PyTorch's torch.compile feature. This integration allows developers to harness the power of PyTorch's just-in-time (JIT) compilation for optimized model execution directly within the RBLN SDK. By incorporating RBLN's custom backend into any workflow that utilizes torch.compile, you can achieve enhanced performance while maintaining full compatibility with RBLN's native features.

Prepare the Model

The process of preparing a model for torch.compile is identical to using the native RBLN API. In this example, we'll use the ResNet50 model from the TorchVision library.

First, import the necessary libraries and instantiate the ResNet50 model with pre-trained weights.

import torch
from torchvision.models import resnet50, ResNet50_Weights
import rebel  # RBLN Compiler

# Load the pre-trained ResNet50 model from TorchVision
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

Prepare the Input

Next, you'll need to prepare the input data. This step is also identical to using the native API. We'll use torchvision.io.image.read_image() to load an image and apply the default preprocessing transforms for the ResNet50 model.

import torch
from torchvision.io.image import read_image
from torchvision.models import ResNet50_Weights
import urllib.request

# Download the sample image
img_url = "https://rbln-public.s3.ap-northeast-2.amazonaws.com/images/tabby.jpg"
img_path = "./tabby.jpg"
with urllib.request.urlopen(img_url) as response, open(img_path, "wb") as f:
    f.write(response.read())

# Load and preprocess the image
img = read_image(img_path)
weights = ResNet50_Weights.DEFAULT
preprocess = weights.transforms()
batch = preprocess(img).unsqueeze(0)

Compiling and Running the Model

With the model and input prepared, you're ready to compile and run the model using torch.compile(). Unlike the native RBLN API, torch.compile() performs just-in-time (JIT) compilation, which means the model is compiled at runtime during the first forward pass. You can still control certain aspects of the compilation process, such as caching, through the options passed to the RBLN backend.

# Compile the model with the RBLN backend
model = torch.compile(model, 
                      backend="rbln",  # Set the target backend to 'rbln'
                      options={"cache_dir": "PATH/TO/rbln_cache_dir"},  # Specify a directory for caching compiled artifacts
                      dynamic=False)  # Disable dynamic shape support, as the RBLN backend currently does not support it

# Run the model (The first forward pass triggers the JIT compilation)
rbln_result = model(batch)

# Display results
class_idx = torch.argmax(rbln_result).item()
print("Top-1 Classification Index: ", class_idx)  # Expected output: 281, corresponding to "tabby, tabby cat"

Understanding torch.compile() Parameters

backend="rbln":

  • Description: Specifies the backend to use for model compilation.
  • Purpose: By setting this to "rbln", you direct the compilation process to utilize the RBLN SDK’s custom backend, which is optimized for performance within the RBLN environment.

options={"cache_dir": "PATH/TO/rbln_cache_dir", "npu" : "TARGET_NPU_DEVICE"}:

  • Description: Provides additional options for the compilation process.
  • Purpose:

    • cache_dir : "cache_dir" option specifies the directory where compiled artifacts should be stored.
      • Usage: This is similar to using compiled_model.save("resnet50.rbln") in the native API, creating an RBLN artifact at the specified path.
      • Caching: If a compiled model already exists in the specified directory, the RBLN backend will use the cached version instead of recompiling the model. This helps to reduce compilation time and overhead when the model is reused.
    • npu : The identifier of the target NPU for compilation. Refer to the npu option in the native API documentation for more details on specifying the device identifier.

dynamic=False:

  • Description: Indicates whether the model should support dynamic input shapes.
  • Purpose:
    • Setting dynamic to False is recommended for the RBLN backend because it currently does not support dynamic shapes.
    • Behavior: With this option set to False, the model assumes fixed input shapes, and any inputs with different shapes will trigger a recompilation. This ensures that the compilation is optimized for the specific shapes used in inference but means that you may need to recompile if the input shapes change.
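
Putting these parameters together, the call below is a minimal sketch that passes both cache_dir and npu through options and disables dynamic shapes. The device identifier "RBLN-CA12" is only an example; use the name reported by rbln-stat on your host.

# Sketch: torch.compile() with the RBLN backend, a cache directory,
# and an explicit target NPU (the device name here is just an example)
model = torch.compile(
    model,
    backend="rbln",
    options={"cache_dir": "./rbln_cache_dir/", "npu": "RBLN-CA12"},
    dynamic=False,  # the RBLN backend currently does not support dynamic shapes
)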

Summary

Here is the complete code snippet for compiling and running the TorchVision ResNet50 model with torch.compile():

import argparse
import urllib.request
import rebel  # noqa: F401  # Needed to use torch dynamo's "rbln" backend.
import torch
import torchvision
from torchvision.io.image import read_image

def parsing_argument():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_name",
        type=str,
        default="resnet50",
        help="(str) Type of TorchVision model name.",
    )
    return parser.parse_args()

def main():
    args = parsing_argument()
    model_name = args.model_name

    # Instantiate TorchVision model
    weights = torchvision.models.get_model_weights(model_name).DEFAULT
    model = getattr(torchvision.models, model_name)(weights=weights).eval()

    # Prepare input image
    img_url = "https://rbln-public.s3.ap-northeast-2.amazonaws.com/images/tabby.jpg"
    img_path = "./tabby.jpg"
    with urllib.request.urlopen(img_url) as response, open(img_path, "wb") as f:
        f.write(response.read())
    img = read_image(img_path)
    preprocess = weights.transforms()
    batch = preprocess(img).unsqueeze(0)

    # Compile the model
    model = torch.compile(model, backend="rbln", options={"cache_dir": "./rbln_cache_dir/"}, dynamic=False)

    # The first call to forward() triggers the compilation
    model(batch)

    # Subsequent calls run the compiled model on the RBLN NPU
    rbln_result = model(batch)

    # Display results
    score, class_id = torch.topk(rbln_result, 1, dim=1)
    category_name = weights.meta["categories"][class_id]
    print("Top-1 category: ", category_name)

if __name__ == "__main__":
    main()