Serving a Large Language Model with Triton Inference Server

This tutorial describes how to serve one of the most famous large language models (LLMs), Llama2-7B, using Triton Inference Server.

Note

This tutorial assumes that you have completed the Nvidia Triton Inference Server and Llama2-7B tutorials.

Serving with Triton Inference Server

Step 0. Clone python_backend

Clone the python_backend repository of triton-inference-server as instructed in Step 1 of the Nvidia Triton Inference Server tutorial.

$ git clone https://github.com/triton-inference-server/python_backend -b r24.01

Step 1. Prepare the compiled Llama2-7B model

Place the compiled directory from the Llama2-7B tutorial into python_backend/examples/rbln/llama-2-7b-chat-hf/1.

$ mkdir -p python_backend/examples/rbln/llama-2-7b-chat-hf/1
$ cp -r rbln-Llama-2-7b-chat-hf python_backend/examples/rbln/llama-2-7b-chat-hf/1/

After you have finished the preparation step, your directory should look like the following:

+--python_backend/
|   +-- examples/
|   |   +-- rbln/
|   |   |   +-- llama-2-7b-chat-hf/
|   |   |   |   +-- 1/
|   |   |   |   |   +-- rbln-Llama-2-7b-chat-hf/
|   |   |   |   |   |   +-- compiled_model.rbln
|   |   |   |   |   |   +-- config.json
|   |   |   |   |   |   +-- (and others)
|   |   +-- (and others)
|   +-- (and others)

Step 2. Write the Llama2-7B TritonPythonModel

First, create a file at python_backend/examples/rbln/llama-2-7b-chat-hf/config.pbtxt and copy the following content to the new file. This file describes the input/output signature and some properties of the model.

config.pbtxt
name: "llama-2-7b-chat-hf"
backend: "python"

input [  # (1)
  {
    name: "INPUT__0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [  # (2)
  {
    name: "OUTPUT__0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

instance_group [
    {
      count: 1
      kind: KIND_MODEL
    }
]

max_batch_size: 1

model_transaction_policy {
  decoupled: True  # (3)
}
  1. Describes the input signature of the model: it takes a single input named INPUT__0, whose data type is a string.
  2. Describes the output signature of the model: it produces a single string output named OUTPUT__0.
  3. model_transaction_policy.decoupled must be set to True to enable streaming inference, in which the server sends multiple responses for a single request.

Next, create a file at python_backend/examples/rbln/llama-2-7b-chat-hf/1/model.py and copy the following script into the new file. This script runs the LLM with static batching and uses Triton's decoupled execution mode so that generated text can be streamed back to gRPC clients.

model.py
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import json
import os

import numpy as np
import triton_python_backend_utils as pb_utils
from optimum.rbln import BatchTextIteratorStreamer, RBLNLlamaForCausalLM
from transformers import AutoTokenizer
from threading import Thread

DEFAULT_PROMPT = "What is the first letter in the alphabet?"

class TritonPythonModel:
    def initialize(self, args):
        """`initialize` is called only once when the model is being loaded.

        Parameters
        ----------
        args : dict
          Both keys and values are strings. The dictionary keys and values are:
          * model_config: A JSON string containing the model configuration
          * model_instance_kind: A string containing model instance kind
          * model_instance_device_id: A string containing model instance device ID
          * model_instance_name: A string containing model instance name in form of <model_name>_<instance_group_id>_<instance_id>
          * model_repository: Model repository path
          * model_version: Model version
          * model_name: Model name
        """

        self.model_config = model_config = json.loads(args["model_config"])
        self.max_batch_size = model_config["max_batch_size"]

        output0_config = pb_utils.get_output_config_by_name(model_config, "OUTPUT__0")
        self.output0_dtype = pb_utils.triton_string_to_numpy(output0_config["data_type"])
        model_dir = os.path.join(
            args["model_repository"],
            args["model_version"],
            "rbln-Llama-2-7b-chat-hf",
        )

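        # Load the precompiled RBLN model from the model version directory;
        # export=False loads the already-compiled artifacts instead of recompiling.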
        self.model = RBLNLlamaForCausalLM.from_pretrained(
            model_id=model_dir,
            export=False,
        )
        self.tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", pad_token="[PAD]", padding_side="left")
        self.streamer = BatchTextIteratorStreamer(
            tokenizer=self.tokenizer, batch_size=self.max_batch_size, skip_prompt=True, skip_special_tokens=True
        )

    def execute(self, requests):
        """`execute` MUST be implemented in every Python model. `execute`
        function receives a list of pb_utils.InferenceRequest as the only
        argument. This function is called when an inference request is made
        for this model. Depending on the batching configuration (e.g. Dynamic
        Batching) used, `requests` may contain multiple requests. Every
        Python model must create one pb_utils.InferenceResponse for every
        pb_utils.InferenceRequest in `requests`. If there is an error, you can
        set the error argument when creating a pb_utils.InferenceResponse.

        Parameters
        ----------
        requests : list
          A list of pb_utils.InferenceRequest

        Returns
        -------
        list
          A list of pb_utils.InferenceResponse. The length of this list must
          be the same as `requests`
        """
        inputs = []
        num_requests = len(requests)
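        # Fill the static batch with a default prompt, then overwrite the first
        # `num_requests` slots with the prompts from the incoming requests.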
        batch_sentences = [DEFAULT_PROMPT] * self.max_batch_size
        for i in range(num_requests):
            sentence = pb_utils.get_input_tensor_by_name(requests[i], "INPUT__0").as_numpy()[0][0]
            sentence = str(sentence.decode("utf-8")).strip()
            batch_sentences[i] = sentence
            print(sentence)

        output0_dtype = self.output0_dtype
        inputs = self.tokenizer(batch_sentences, return_tensors="pt", padding=True)

        generation_kwargs = dict(
            **inputs,
            streamer=self.streamer,
            do_sample=False,
            max_length=self.model.max_seq_len,
        )

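        # Run generation in a background thread so that generated text can be
        # consumed from the streamer while generation is still in progress.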
        thread = Thread(target=self.model.generate, kwargs=generation_kwargs)
        thread.start()

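        # Forward each newly generated text chunk to its request's response
        # sender as soon as the streamer yields it (decoupled/streaming mode).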
        for new_text in self.streamer:
            for i in range(num_requests):
                out_data = np.array([new_text[i].encode("utf-8")])
                out_tensor = pb_utils.Tensor("OUTPUT__0", out_data.astype(output0_dtype))
                inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor])
                response_sender = requests[i].get_response_sender()
                response_sender.send(inference_response)

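        # Once generation is complete, send an empty response carrying the
        # FINAL flag to close the response stream for each request.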
        for i in range(num_requests):
            response_sender = requests[i].get_response_sender()
            out_data = np.array(["".encode("utf-8")])
            out_tensor = pb_utils.Tensor("OUTPUT__0", out_data.astype(output0_dtype))
            inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor])
            response_sender.send(
                inference_response,
                flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL,
            )
        return None

    def finalize(self):
        print("Cleaning up...")

If you have successfully completed the steps so far, you will have the following directory structure:

+--python_backend/
|   +-- examples/
|   |   +-- rbln/
|   |   |   +-- llama-2-7b-chat-hf/
|   |   |   |   +-- config.pbtxt   ============== (new file)
|   |   |   |   +-- 1/
|   |   |   |   |   +-- model.py   ============== (new file)
|   |   |   |   |   +-- rbln-Llama-2-7b-chat-hf/
|   |   |   |   |   |   +-- compiled_model.rbln
|   |   |   |   |   |   +-- config.json
|   |   |   |   |   |   +-- (and others)
|   |   +-- (and others)
|   +-- (and others)

Step 3. Run the inference server in the container

Follow Step 3 from the Triton Inference Server tutorial. Additionally, install optimum-rbln inside the container:

$ pip3 install -i https://pypi.rbln.ai/simple/ optimum-rbln
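
Once the container is running, launch Triton with the model repository that contains the llama-2-7b-chat-hf model. The command below is a minimal sketch that assumes the python_backend directory is available at /workspace/python_backend inside the container; adjust the path to match your setup from the Triton Inference Server tutorial.

$ tritonserver --model-repository=/workspace/python_backend/examples/rbln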

Step 4. Make an inference request via gRPC

This section describes how to make inference requests using the gRPC client in Python. The client code below requires the tritonclient and grpcio packages. Run the following command to install them:

$ pip3 install tritonclient==2.41.1 grpcio

The following script shows how to make an inference request.

simple_client.py
import asyncio
import numpy as np
import tritonclient.grpc.aio as grpcclient

async def try_request():
  url = "<host and port number of the triton inference server>"  # e.g. "localhost:8001"
  client = grpcclient.InferenceServerClient(url=url, verbose=False)

  model_name = "llama-2-7b-chat-hf"

  def create_request(prompt, request_id):
    prompt_data = np.array([prompt.encode("utf-8")])

    input = grpcclient.InferInput("INPUT__0", [1, 1], "BYTES")
    input.set_data_from_numpy(prompt_data.reshape(1, 1))
    inputs = [input]

    output = grpcclient.InferRequestedOutput("OUTPUT__0")
    outputs = [output]

    return {
      "model_name": model_name,
      "inputs": inputs,
      "outputs": outputs,
      "request_id": request_id
    }

  prompt = "What is the first letter of the English alphabet?"

  async def requests_gen():
    yield create_request(prompt, "req-0")

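  # stream_infer opens a bidirectional gRPC stream; this is required because the
  # model is decoupled and returns multiple responses per request.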
  response_stream = client.stream_infer(requests_gen())

  async for response in response_stream:
    result, error = response
    if error:
      print(f"Error occurred: {error}")
    else:
      output = result.as_numpy("OUTPUT__0")
      for i in output:
          print(i.decode("utf-8"), end="", flush=True)

asyncio.run(try_request())
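
Save the script as simple_client.py, set url to the gRPC endpoint of your Triton server (Triton's default gRPC port is 8001), and run it:

$ python3 simple_client.py

The generated text is printed incrementally as responses are streamed back from the server.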

Continuous Batching

To serve large language models (LLMs) with maximum hardware utilization, a popular serving optimization technique known as continuous batching is required. The LLM Serving with Continuous Batching Enabled doc explains how to run the Llama2-7B model with vLLM, which implements continuous batching.