
Serving LLM with Triton Inference Server

This tutorial introduces how to serve large language models (LLMs) with the Triton Inference Server using the pre-compiled Llama2-7B model.

Note

This tutorial assumes that you have completed the Nvidia Triton Inference Server and Llama2-7B tutorials.

Getting started with Llama2-7B

Step 1. Prepare the compiled Llama2-7B model

Place the compiled model directory (rbln-Llama-2-7b-chat-hf) from the Llama2-7B tutorial into python_backend/examples/rbln/llama-2-7b-chat-hf/1.

$ mkdir -p python_backend/examples/rbln/llama-2-7b-chat-hf/1
$ cp -r rbln-Llama-2-7b-chat-hf python_backend/examples/rbln/llama-2-7b-chat-hf/1/

Step 2. Write the Llama2-7B TritonPythonModel

Below is a model.py example that demonstrates static batching for LLM inference and uses Triton's decoupled model execution to stream generated text back to the client over gRPC. Save this code to python_backend/examples/rbln/llama-2-7b-chat-hf/1/model.py.

# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import json
import os

import numpy as np
import triton_python_backend_utils as pb_utils
from optimum.rbln import BatchTextIteratorStreamer, RBLNLlamaForCausalLM
from transformers import AutoTokenizer
from threading import Thread

DEFAULT_PROMPT = "what is the first letter in alphabet?"

class TritonPythonModel:
    def initialize(self, args):
        """`initialize` is called only once when the model is being loaded.

        Parameters
        ----------
        args : dict
          Both keys and values are strings. The dictionary keys and values are:
          * model_config: A JSON string containing the model configuration
          * model_instance_kind: A string containing model instance kind
          * model_instance_device_id: A string containing model instance device ID
          * model_instance_name: A string containing model instance name in form of <model_name>_<instance_group_id>_<instance_id>
          * model_repository: Model repository path
          * model_version: Model version
          * model_name: Model name
        """

        self.model_config = model_config = json.loads(args["model_config"])
        self.max_batch_size = model_config["max_batch_size"]

        output0_config = pb_utils.get_output_config_by_name(model_config, "OUTPUT__0")
        self.output0_dtype = pb_utils.triton_string_to_numpy(output0_config["data_type"])
        model_dir = os.path.join(
            args["model_repository"],
            args["model_version"],
            "rbln-Llama-2-7b-chat-hf",
        )

        self.model = RBLNLlamaForCausalLM.from_pretrained(
            model_id=model_dir,
            export=False,
        )
        self.tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", pad_token="[PAD]", padding_side="left")
        self.streamer = BatchTextIteratorStreamer(
            tokenizer=self.tokenizer, batch_size=self.max_batch_size, skip_prompt=True, skip_special_tokens=True
        )

    def execute(self, requests):
        """`execute` MUST be implemented in every Python model. `execute`
        function receives a list of pb_utils.InferenceRequest as the only
        argument. This function is called when an inference request is made
        for this model. Depending on the batching configuration (e.g. Dynamic
        Batching) used, `requests` may contain multiple requests. Every
        Python model must create one pb_utils.InferenceResponse for every
        pb_utils.InferenceRequest in `requests`. If there is an error, you can
        set the error argument when creating a pb_utils.InferenceResponse.

        Parameters
        ----------
        requests : list
          A list of pb_utils.InferenceRequest

        Returns
        -------
        list
          A list of pb_utils.InferenceResponse. The length of this list must
          be the same as `requests`
        """
        inputs = []
        num_requests = len(requests)
        batch_sentences = [DEFAULT_PROMPT] * self.max_batch_size
        for i in range(num_requests):
            sentence = pb_utils.get_input_tensor_by_name(requests[i], "INPUT__0").as_numpy()[0][0]
            sentence = str(sentence.decode("utf-8")).strip()
            batch_sentences[i] = sentence
            print(sentence)

        output0_dtype = self.output0_dtype
        inputs = self.tokenizer(batch_sentences, return_tensors="pt", padding=True)

        generation_kwargs = dict(
            **inputs,
            streamer=self.streamer,
            do_sample=False,
            max_length=self.model.max_seq_len,
        )

        thread = Thread(target=self.model.generate, kwargs=generation_kwargs)
        thread.start()

        for new_text in self.streamer:
            for i in range(num_requests):
                out_data = np.array([new_text[i].encode("utf-8")])
                out_tensor = pb_utils.Tensor("OUTPUT__0", out_data.astype(output0_dtype))
                inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor])
                response_sender = requests[i].get_response_sender()
                response_sender.send(inference_response)

        for i in range(num_requests):
            response_sender = requests[i].get_response_sender()
            out_data = np.array(["".encode("utf-8")])
            out_tensor = pb_utils.Tensor("OUTPUT__0", out_data.astype(output0_dtype))
            inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor])
            response_sender.send(
                inference_response,
                flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL,
            )
        return None

    def finalize(self):
        print("Cleaning up...")

Next, save the following config.pbtxt file as python_backend/examples/rbln/llama-2-7b-chat-hf/config.pbtxt.

name: "llama-2-7b-chat-hf"
backend: "python"

input [
  {
    name: "INPUT__0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

instance_group [
    {
      count: 1
      kind: KIND_MODEL
    }
]

max_batch_size: 1

model_transaction_policy {
  decoupled: True
}

Note that model_transaction_policy should be set to decoupled to enable streaming inference.

If you have successfully completed the steps so far, you will have the following directory structure:

+--llama-2-7b-chat-hf/
|      +-- config.pbtxt
|      +-- 1/
|      |   +-- model.py
|      |   +-- rbln-Llama-2-7b-chat-hf/

Step 3. Run the inference server in the container

Follow Step 3 from the Triton Inference Server tutorial. Additionally, install optimum-rbln inside the container:

$ pip3 install -i https://pypi.rbln.ai/simple/ optimum-rbln
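
Optionally, before starting the server, you can sanity-check inside the container that optimum-rbln can load the compiled artifacts. This is a minimal sketch that mirrors what model.py does in initialize; the path below is an assumption based on the Step 1 layout, so adjust it to wherever python_backend/examples/rbln is mounted in your container.

# Optional sanity check (sketch): load the compiled model the same way
# model.py's initialize() does. The path assumes the repository layout
# from Step 1 and must be adjusted to your container mount point.
from optimum.rbln import RBLNLlamaForCausalLM

model = RBLNLlamaForCausalLM.from_pretrained(
    model_id="python_backend/examples/rbln/llama-2-7b-chat-hf/1/rbln-Llama-2-7b-chat-hf",
    export=False,  # load the pre-compiled artifacts instead of re-exporting
)
print("Compiled model loaded, max_seq_len =", model.max_seq_len)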

Step 4. Make an inference request via gRPC

Below is a client.py example to make inference requests to Llama-2:

# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from dataclasses import dataclass

import asyncio
import fire
import sys

import numpy as np
import tritonclient.grpc.aio as grpcclient
from tritonclient.utils import *

@dataclass
class Flags:
    model: str 
    url: str
    input_prompts: str
    results_file: str
    offset: int
    iterations: int
    verbose: bool

class LLMClient:
    def __init__(self, flags: Flags):
        self._client = grpcclient.InferenceServerClient(url=flags.url, verbose=flags.verbose)
        self._flags = flags
        self._loop = asyncio.get_event_loop()
        self._results_dict = {}

    async def async_request_iterator(self, prompts):
        try:
            for iter in range(self._flags.iterations):
                for i, prompt in enumerate(prompts):
                    prompt_id = self._flags.offset + (len(prompts) * iter) + i
                    self._results_dict[str(prompt_id)] = []
                    yield self.create_request(
                        prompt,
                        prompt_id,
                    )
        except Exception as error:
            print(f"Caught an error in the request iterator: {error}")

    async def stream_infer(self, prompts):
        try:
            # Start streaming
            response_iterator = self._client.stream_infer(
                inputs_iterator=self.async_request_iterator(prompts),
            )
            async for response in response_iterator:
                yield response
        except InferenceServerException as error:
            print(error)
            sys.exit(1)

    async def process_stream(self, prompts):
        # Clear results in between process_stream calls
        self._results_dict = {}
        success = True
        # Read response from the stream
        async for response in self.stream_infer(prompts):
            result, error = response
            if error:
                print(f"Encountered error while processing: {error}")
                success = False
            else:
                output = result.as_numpy("OUTPUT__0")
                for i in output:
                    self._results_dict[result.get_response().id].append(i)
                    print(i.decode("utf-8"), end="", flush=True)

        return success

    async def run(self):
        with open(self._flags.input_prompts, "r") as file:
            print(f"Loading inputs from `{self._flags.input_prompts}`...")
            prompts = file.readlines()

        success = await self.process_stream(prompts)

        with open(self._flags.results_file, "w") as file:
            for id in self._results_dict.keys():
                for result in self._results_dict[id]:
                    file.write(result.decode("utf-8"))

                file.write("\n")
                file.write("\n=========\n\n")
            print(f"\n=========\nStoring results into `{self._flags.results_file}`...")

        if self._flags.verbose:
            with open(self._flags.results_file, "r") as file:
                print(f"\nContents of `{self._flags.results_file}` ===>")
                print(file.read())
        if success:
            print("PASS")
        else:
            print("FAIL")

    def run_async(self):
        self._loop.run_until_complete(self.run())

    def create_request(
        self,
        prompt,
        request_id,
    ):
        inputs = []
        prompt_data = np.array([prompt.encode("utf-8")])
        print(prompt_data)
        try:
            in0 = grpcclient.InferInput("INPUT__0", [1, 1], "BYTES")
            in0.set_data_from_numpy(prompt_data.reshape(1, 1))
            inputs.append(in0)

        except Exception as error:
            print(f"Encountered an error during request creation: {error}")

        # Add requested outputs
        outputs = []
        outputs.append(grpcclient.InferRequestedOutput("OUTPUT__0"))

        # Issue the asynchronous sequence inference.
        return {
            "model_name": self._flags.model,
            "inputs": inputs,
            "outputs": outputs,
            "request_id": str(request_id),
        }


def main(
    model: str = "llama-2-7b-chat-hf",
    url: str = "localhost:8001",
    input_prompts: str = "prompts.txt",
    results_file: str = "results.txt",
    offset: int = 0,
    iterations: int = 1,
    verbose: bool = False,
):
    """LLM request example via gRPC in streaming mode.

    Args:
        model (str, optional): Triton server model name. Defaults to "llama-2-7b-chat-hf".
        url (str, optional): Inference server URL and its gRPC port. Defaults to "localhost:8001".
        input_prompts (str, optional): Text file with input prompts. Defaults to "prompts.txt".
        results_file (str, optional): The file with output results. Defaults to "results.txt".
        offset (int, optional): Add offset to request IDs used. Defaults to 0.
        iterations (int, optional): Number of iterations through the prompts file. Defaults to 1.
        verbose (bool, optional): Enable verbose output. Defaults to False.
    """
    client = LLMClient(Flags(model, url, input_prompts, results_file, offset, iterations, verbose))
    client.run_async()


if __name__ == "__main__":
    fire.Fire(main)

  1. Install the required Python packages:
    $ pip3 install tritonclient==2.41.1 gevent geventhttpclient fire grpcio

  2. Create a text file named prompts.txt and add your prompts:
    $ echo "Hey, are you conscious? Can you talk to me?" > prompts.txt

  3. Run client.py:
    $ python3 client.py
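
As an alternative to the asyncio-based client.py above, the same decoupled endpoint can be exercised with the synchronous gRPC streaming API. The sketch below is illustrative only: it assumes the standard tritonclient.grpc streaming interface (start_stream / async_stream_infer), reuses the INPUT__0 / OUTPUT__0 tensor names from config.pbtxt, and treats the empty final response sent by model.py as the end-of-stream marker.

# Minimal single-prompt streaming request (sketch) using the synchronous
# tritonclient.grpc streaming API instead of the asyncio client above.
import queue

import numpy as np
import tritonclient.grpc as grpcclient

responses = queue.Queue()

def callback(result, error):
    # Called once per decoupled response (one per generated text chunk).
    responses.put((result, error))

client = grpcclient.InferenceServerClient(url="localhost:8001")

prompt = np.array(["Hey, are you conscious? Can you talk to me?".encode("utf-8")]).reshape(1, 1)
infer_input = grpcclient.InferInput("INPUT__0", [1, 1], "BYTES")
infer_input.set_data_from_numpy(prompt)

client.start_stream(callback=callback)
client.async_stream_infer(
    model_name="llama-2-7b-chat-hf",
    inputs=[infer_input],
    outputs=[grpcclient.InferRequestedOutput("OUTPUT__0")],
    request_id="0",
)

# Print chunks as they arrive; model.py sends an empty string together with
# the TRITONSERVER_RESPONSE_COMPLETE_FINAL flag to mark the end of a stream.
while True:
    result, error = responses.get()
    if error:
        raise error
    chunk = "".join(t.decode("utf-8") for t in result.as_numpy("OUTPUT__0").flatten())
    if chunk == "":
        break
    print(chunk, end="", flush=True)

client.stop_stream()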

Continuous Batching Support with vllm-rbln

To serve large language models (LLMs) with maximum utilization, a popular serving optimization technique known as continuous batching is required. This tutorial guides you through implementing continuous batching with vllm-rbln to reduce LLM serving costs. vllm-rbln is an extension of the well-known LLM serving library vLLM, modified so that vLLM works with optimum-rbln.

Prerequisites

Step 1. Compiling Llama2-7B with vllm Option

First, compile the Llama2-7B model with the vllm option using optimum-rbln.

from optimum.rbln import RBLNLlamaForCausalLM

# Export the Hugging Face PyTorch Llama2 model to an RBLN compiled model
model_id = "meta-llama/Llama-2-7b-chat-hf"
compiled_model = RBLNLlamaForCausalLM.from_pretrained(
    model_id=model_id,
    export=True,
    rbln_max_seq_len=4096,        
    rbln_tensor_parallel_size=4,  # number of ATOM+ for Rebellions Scalable Design (RSD)
    rbln_batch_size=4,            # batch_size > 1 is recommended for continuous batching
    rbln_batching="vllm",         # compile with `vllm` option for continuous batching
)

Ensure rbln_batching is set to vllm and choose an appropriate batch size for your serving needs. Here, it is set to 4.
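
The compiled model also needs to be written to disk so it can be copied into the vllm_backend model repository in Step 3 and referenced by compiled_model_dir. A minimal sketch, assuming the optimum-style save_pretrained API exposed by optimum-rbln (the directory name is just an example):

# Save the compiled artifacts to a local directory (name is an example);
# this is the directory referenced later by compiled_model_dir in model.json.
compiled_model.save_pretrained("rbln-Llama-2-7b-chat-hf")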

If you are using Backend.AI, refer to Step 2. If you are using an on-premise server, skip Step 2 and proceed directly to Step 3.

Step 2. Setting Up the Backend.AI Environment

  1. Start a session via Backend.AI.
  2. Select Triton Server (ngc-triton) as your environment and choose the 24.01 / vllm / x86_64 / python-py3 version.

Step 3. Prepare Nvidia Triton vllm_backend and Modify Model Configurations for Llama2-7B

A. Clone the Nvidia Triton Inference Server vllm_backend repository:

$ git clone https://github.com/triton-inference-server/vllm_backend.git -b r24.01

B. Place the precompiled rbln-Llama-2-7b-chat-hf directory into the cloned vllm_backend/samples/model_repository/vllm_model/1 directory:

$ cp -R /PATH/TO/YOUR/rbln-Llama-2-7b-chat-hf /PATH/TO/YOUR/CLONED/vllm_backend/samples/model_repository/vllm_model/1

C. Modify model.json

Modify vllm_backend/samples/model_repository/vllm_model/1/model.json.

{
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "device": "rbln",
    "max_num_seqs": 4,
    "compiled_model_dir": "/ABSOLUTE/PATH/TO/rbln-Llama-2-7b-chat-hf",
    "max_num_batched_tokens": 4096,
    "max_model_len": 4096,
    "block_size": 4096
}

  • model : The name or path of a Hugging Face Transformers model.
  • device : Device type for vLLM execution. Set this to rbln.
  • max_num_seqs : Maximum number of sequences per iteration. This MUST match the batch size used at compile time (rbln_batch_size).
  • compiled_model_dir : Absolute path to the RBLN compiled model directory (optimum-rbln).

Note that max_num_batched_tokens, max_model_len, and block_size must all be set to the same value as the maximum sequence length (rbln_max_seq_len) when targeting the RBLN device.
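
Because these values must stay consistent with the compile-time settings from Step 1, it can help to generate model.json from the same parameters. The following is a small illustrative sketch (the output path and the placeholder compiled_model_dir are assumptions; adjust them to your setup):

import json

# Compile-time settings from Step 1; keeping them in one place prevents
# model.json from drifting out of sync with the compiled artifacts.
RBLN_BATCH_SIZE = 4      # must equal max_num_seqs
RBLN_MAX_SEQ_LEN = 4096  # must equal max_num_batched_tokens, max_model_len, block_size

model_json = {
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "device": "rbln",
    "max_num_seqs": RBLN_BATCH_SIZE,
    "compiled_model_dir": "/ABSOLUTE/PATH/TO/rbln-Llama-2-7b-chat-hf",
    "max_num_batched_tokens": RBLN_MAX_SEQ_LEN,
    "max_model_len": RBLN_MAX_SEQ_LEN,
    "block_size": RBLN_MAX_SEQ_LEN,
}

with open("vllm_backend/samples/model_repository/vllm_model/1/model.json", "w") as f:
    json.dump(model_json, f, indent=4)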

Step 4. Run the Inference Server

We are now ready to run the inference server. If you are using Backend.AI, please refer to the A. Backend.AI section. If you are not a Backend.AI user, proceed to the B. On-premise server section.

A. Backend.AI

Before proceeding, install the required dependencies:

$ pip3 install -i https://pypi.rbln.ai/simple/ "rebel-compiler>=0.5.2" "optimum-rbln>=0.1.4" vllm-rbln

Start the Triton server:

$ tritonserver --model-repository PATH/TO/YOUR/vllm_backend/samples/model_repository

You will see the following messages indicating that the server started successfully:

Started GRPCInferenceService at 0.0.0.0:8001
Started HTTPService at 0.0.0.0:8000
Started Metrics Service at 0.0.0.0:8002

B. On-premise server

If you are not using Backend.AI, follow these steps to start the inference server in the Docker container. (Backend.AI users can skip to Step 5.)

To access the RBLN NPU devices, the inference server container must be run in privileged mode. Add a mount option for the cloned vllm_backend repository as below:

sudo docker run --privileged --shm-size=1g --ulimit memlock=-1 \
   -v /PATH/TO/YOUR/vllm_backend:/opt/tritonserver/vllm_backend \
   -p 8000:8000 -p 8001:8001 -p 8002:8002 --ulimit stack=67108864 -ti nvcr.io/nvidia/tritonserver:24.01-vllm-python-py3

Install the required dependencies inside the container:

$ pip3 install -i https://pypi.rbln.ai/simple/ "rebel-compiler>=0.5.2" "optimum-rbln>=0.1.4" vllm-rbln

Start the Triton Server inside the container:

$ tritonserver --model-repository /opt/tritonserver/vllm_backend/samples/model_repository

You will see the following messages indicating that the server started successfully:

Started GRPCInferenceService at 0.0.0.0:8001
Started HTTPService at 0.0.0.0:8000
Started Metrics Service at 0.0.0.0:8002

Step 5. Requesting Inference via gRPC API

  1. Before proceeding, install the required dependencies:
    $ pip3 install tritonclient==2.41.1 gevent geventhttpclient fire grpcio
    
  2. Run client.py:
    $ cd /PATH/TO/YOUR/vllm_backend/samples
    $ python3 client.py -s

  3. Check that the results are correct by opening the results.txt file.

If you need to change other sampling parameters (such as temperature, top_p, top_k, max_tokens, early_stopping, ...), please refer to client.py.
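
For reference, the vllm_backend sample client forwards sampling parameters to vLLM as a JSON string in an optional input tensor. The snippet below is a hedged sketch of how such a request input might be built; the tensor name "sampling_parameters" is an assumption based on the vllm_backend sample configuration and should be verified against the client.py and config.pbtxt in your cloned repository.

import json

import numpy as np
import tritonclient.grpc as grpcclient

# Sampling parameters forwarded to vLLM's SamplingParams (values are examples).
sampling_parameters = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50,
    "max_tokens": 256,
}

# The tensor name "sampling_parameters" is assumed from the vllm_backend
# sample; check it against the model's config.pbtxt before relying on it.
data = np.array([json.dumps(sampling_parameters).encode("utf-8")], dtype=np.object_)
sampling_input = grpcclient.InferInput("sampling_parameters", [1], "BYTES")
sampling_input.set_data_from_numpy(data)
# Append sampling_input to the inputs list built by create_request in client.py.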