Llama2-7B with Continuous Batching
Continuous batching is a popular serving optimization that is required to serve Large Language Models (LLMs) with maximum hardware utilization. This tutorial guides you through enabling continuous batching with `vllm-rbln` to reduce LLM serving costs.
The exact commands used to compile the model and to initialize the Triton `vllm_backend` can be found in our model zoo.
Prerequisites
- Ubuntu 20.04 LTS (Debian bullseye) or higher
- System equipped with RBLN NPUs (RBLN ATOM+)
- Python (supports 3.9 - 3.12)
- Docker
- RBLN SDK Driver >= 1.2.92
- rebel-compiler >= 0.7.1
- optimum-rbln >= 0.2.0
- vllm-rbln >= 0.2.0
Note

Since the `vllm-rbln` package does not depend on the `vllm` package, installing both may cause operational issues. If you installed the `vllm` package after `vllm-rbln`, please reinstall the `vllm-rbln` package to ensure proper functionality.
Compile Llama2-7B
You need to compile the Llama2-7B model using `optimum-rbln`. Choose an appropriate batch size for your serving needs; here it is set to 4, as in the sketch below.
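The following is a minimal compilation sketch. The keyword argument names (`rbln_batch_size`, `rbln_max_seq_len`, `rbln_tensor_parallel_size`) and their values are assumptions based on typical `optimum-rbln` usage; verify them against the model zoo example.

```python
# Sketch: compile Llama2-7B for RBLN NPUs with optimum-rbln.
# Argument names and values are assumptions; check the model zoo for the exact command.
from optimum.rbln import RBLNLlamaForCausalLM

model = RBLNLlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    export=True,                  # compile the model for RBLN NPUs
    rbln_batch_size=4,            # must match max_num_seqs in model.json (assumed kwarg name)
    rbln_max_seq_len=4096,        # Llama2 context length (assumed kwarg name)
    rbln_tensor_parallel_size=4,  # number of NPUs to shard across (assumed kwarg name)
)
model.save_pretrained("rbln-Llama-2-7b-chat-hf")
```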
Triton Inference Server with vLLM enabled
The Triton Inference Server provides a vLLM backend (`vllm_backend`).
If you are using Backend.AI, refer to Step 1. If you are using an on-premise server, skip Step 1 and proceed directly to Step 2.
Step 1. Setting Up the Backend.AI Environment
- Start a session via Backend.AI.
- Select Triton Server (`ngc-triton`) as your environment. You should see the version `24.12 / vllm / x86_64 / python-py3`.
Step 2. Prepare the Nvidia Triton `vllm_backend` and Modify Model Configurations for Llama2-7B
A. Clone the Nvidia Triton Inference Server `vllm_backend` repository:
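For example (the release branch is an assumption; pick the one matching your Triton version):

```bash
# Clone the Triton vLLM backend repository (branch name is an assumption; match your Triton version).
git clone -b r24.12 https://github.com/triton-inference-server/vllm_backend.git
```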
Note

Nvidia Triton Inference Server's vLLM backend has its own `model.py`; a separate user-defined `model.py` is not required.
B. Place the precompiled `rbln-Llama-2-7b-chat-hf` directory into the cloned `vllm_backend/samples/model_repository/vllm_model/1` directory:
Your directory should look like the following at this point:
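A sketch of the expected layout (only the paths mentioned above are authoritative; everything else is illustrative):

```
vllm_backend/
└── samples/
    └── model_repository/
        └── vllm_model/
            ├── 1/
            │   ├── model.json
            │   └── rbln-Llama-2-7b-chat-hf/   # precompiled model artifacts
            └── config.pbtxt
```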
Note

- The vLLM backend for the Nvidia Triton Server doesn't need a `model.py` file, unlike other vision model backends. All model processing logic is pre-included in the Docker container at `backends/vllm/model.py`, so you only need `model.json` for configuration.
- You can either use the default `config.pbtxt` from the repository or create a new one using the template below. Note that the input and output formats must match exactly as shown, since they're required by the vLLM backend (see Step 4: gRPC Client Inference Request).
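The template is not reproduced on this page, so here is a sketch modeled on the upstream `vllm_backend` sample; verify the field values (especially `dims`, the optional inputs, and `model_transaction_policy`) against `samples/model_repository/vllm_model/config.pbtxt` in the cloned repository.

```protobuf
# Sketch of config.pbtxt for the vLLM backend; verify against the upstream sample.
name: "vllm_model"
backend: "vllm"
max_batch_size: 0
model_transaction_policy {
  decoupled: true
}
input [
  {
    name: "text_input"
    data_type: TYPE_STRING
    dims: [ 1 ]
  },
  {
    name: "stream"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    optional: true
  },
  {
    name: "sampling_parameters"
    data_type: TYPE_STRING
    dims: [ 1 ]
    optional: true
  }
]
output [
  {
    name: "text_output"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
instance_group [
  {
    count: 1
    kind: KIND_MODEL
  }
]
```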
C. Modify `model.json`

Modify `vllm_backend/samples/model_repository/vllm_model/1/model.json`:
- `model`: Absolute path of the compiled model.
- `device`: Device type for vLLM execution. Set this to `rbln`.
- `max_num_seqs`: Maximum number of sequences per iteration. This MUST match the compiled `batch_size`.
- When targeting an RBLN device, the `max_model_len`, `block_size`, and `max_num_batched_tokens` fields should be set to the same value as the max sequence length.
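A sketch of a complete `model.json` following these rules is shown below. The path is a placeholder, and the value 4096 assumes the Llama2 max sequence length used at compile time.

```json
{
    "model": "/absolute/path/to/vllm_backend/samples/model_repository/vllm_model/1/rbln-Llama-2-7b-chat-hf",
    "device": "rbln",
    "max_num_seqs": 4,
    "max_model_len": 4096,
    "block_size": 4096,
    "max_num_batched_tokens": 4096
}
```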
Step 3. Run the Inference Server
We are now ready to run the inference server. If you are using Backend.AI, please refer to the A. Backend.AI section. If you are not a Backend.AI user, proceed to the B. On-premise server section.
A. Backend.AI
Before proceeding, install the required dependencies:
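A sketch of the install-and-launch commands is shown below. The package source and the model repository path are assumptions; follow the installation guide that came with your RBLN SDK and point `--model-repository` at the directory prepared in Step 2.

```bash
# Install the RBLN serving packages in the session
# (package source/index is an assumption; follow your RBLN SDK installation guide).
pip3 install optimum-rbln vllm-rbln

# Launch Triton against the prepared model repository (path is a placeholder).
tritonserver --model-repository=vllm_backend/samples/model_repository
```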
You will see the following messages that indicate successful initiation of the server:
B. On-premise server
If you are not using Backend.AI, follow these steps to start the inference server in a Docker container. (Backend.AI users can skip ahead to Step 4.)
To access the RBLN NPU devices, the inference server container must be run in privileged mode. Add a mount option for the cloned `vllm_backend` repository as shown below:
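A sketch of the container launch; the image tag, host path, and port mappings are assumptions that should match the Triton version and environment you actually use.

```bash
# Run the Triton container in privileged mode so it can access the RBLN NPUs,
# mounting the cloned vllm_backend repository (image tag and paths are assumptions).
docker run -it --privileged \
  -v /path/to/vllm_backend:/opt/vllm_backend \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  nvcr.io/nvidia/tritonserver:24.12-vllm-python-py3
```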
Install the required dependencies inside the container:
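For example (package source is an assumption; follow your RBLN SDK installation guide), then start the server against the mounted model repository:

```bash
# Inside the container: install the RBLN serving packages
# (package source/index is an assumption; follow your RBLN SDK installation guide).
pip3 install optimum-rbln vllm-rbln

# Start the inference server (path matches the mount point used above).
tritonserver --model-repository=/opt/vllm_backend/samples/model_repository
```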
You will see the following messages indicating successful initiation of the server:
Step 4. Requesting Inference via gRPC API
The vLLM backend ships its own `model.py`, whereas we defined our own `model.py` in the ResNet50 tutorial, where the input parameter was called `INPUT__0` and the output was called `OUTPUT__0`. The vLLM backend instead names its input `text_input` and its output `text_output`, so our client must be modified accordingly. Please refer to the vLLM `model.py` for more details.
The following shows the client code for the vLLM backend. This client also requires the `tritonclient` and `grpcio` packages.
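Below is a minimal sketch of such a client, assuming the server's gRPC endpoint is `localhost:8001`, the model is registered as `vllm_model`, and the model runs in decoupled mode (hence the streaming API); the prompt is a placeholder.

```python
# Minimal gRPC client sketch for the Triton vLLM backend.
# Server address, model name, and prompt are placeholders.
import queue
from functools import partial

import numpy as np
import tritonclient.grpc as grpcclient

MODEL_NAME = "vllm_model"
PROMPT = "What is the capital of France?"

results = queue.Queue()

def callback(result_queue, result, error):
    # Collect streamed responses (or errors) from the server.
    result_queue.put(error if error is not None else result)

client = grpcclient.InferenceServerClient(url="localhost:8001")

# The vLLM backend expects a string tensor named "text_input".
text_input = grpcclient.InferInput("text_input", [1], "BYTES")
text_input.set_data_from_numpy(np.array([PROMPT.encode("utf-8")], dtype=np.object_))

# Request a single final response instead of token-by-token streaming.
stream_flag = grpcclient.InferInput("stream", [1], "BOOL")
stream_flag.set_data_from_numpy(np.array([False]))

outputs = [grpcclient.InferRequestedOutput("text_output")]

client.start_stream(callback=partial(callback, results))
client.async_stream_infer(
    model_name=MODEL_NAME,
    inputs=[text_input, stream_flag],
    outputs=outputs,
)
client.stop_stream()  # close the stream once the request has completed
client.close()

response = results.get()
if isinstance(response, Exception):
    raise response
print(response.as_numpy("text_output"))
```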
If you need to change other sampling parameters (such as `temperature`, `top_p`, `top_k`, `max_tokens`, `early_stopping`, ...), please refer to vLLM's Python client.
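As one possible approach, the upstream backend also accepts an optional `sampling_parameters` input tensor carrying a JSON string, which could be added to the client sketch above; this is an assumption to verify against the backend's `model.py` and sample client.

```python
# Hypothetical extension of the client above: attach sampling parameters as a JSON
# string via the optional "sampling_parameters" input (verify against the backend's model.py).
import json

params = {"temperature": 0.7, "top_p": 0.95, "max_tokens": 256}
sampling = grpcclient.InferInput("sampling_parameters", [1], "BYTES")
sampling.set_data_from_numpy(np.array([json.dumps(params).encode("utf-8")], dtype=np.object_))
# ...then include `sampling` in the inputs list passed to async_stream_infer().
```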