
OpenAI Compatible Server

Overview

vLLM provides an OpenAI compatible HTTP server that implements OpenAI's completions API and chat API. Please refer to the vLLM documentation for more details about the OpenAI compatible server. In this tutorial, we will guide you through setting up an OpenAI compatible server using the Llama3-8B and Llama3.1-8B models with Eager and Flash Attention, respectively. You'll learn how to deploy these models to create your own OpenAI API server.

Setup & Install

Before you begin, ensure that your system environment is properly configured and that all required packages are installed. This includes the rebel-compiler, optimum-rbln, and vllm-rbln packages used throughout this tutorial.

Note

Please note that rebel-compiler requires an RBLN Portal account.

Standard Model Example: Llama3-8B

Compile Llama3-8B

To begin, import the RBLNLlamaForCausalLM class from optimum-rbln. This class's from_pretrained() method downloads the Llama 3 model from the HuggingFace Hub and compiles it using the RBLN Compiler. When exporting the model, specify the following parameters:

  • export: Must be True to compile the model.
  • rbln_batch_size: Defines the batch size for compilation.
  • rbln_max_seq_len: Defines the maximum sequence length.
  • rbln_tensor_parallel_size: Defines the number of NPUs to be used for inference.

After compilation, save the model artifacts to disk using the save_pretrained() method. This will create a directory (e.g., rbln-Llama-3-8B-Instruct) containing the compiled model. The complete compilation script using optimum-rbln is shown below.

from optimum.rbln import RBLNLlamaForCausalLM

# Define the HuggingFace model ID
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Compile the model for 4 RBLN NPUs
compiled_model = RBLNLlamaForCausalLM.from_pretrained(
    model_id=model_id,
    export=True,
    rbln_batch_size=4,
    rbln_max_seq_len=8192,
    rbln_tensor_parallel_size=4,
)

compiled_model.save_pretrained("rbln-Llama-3-8B-Instruct")
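
If you want to verify the saved artifacts before serving, the compiled model can be reloaded from the directory created above. This is a minimal sketch, assuming the rbln-Llama-3-8B-Instruct directory from the previous step; passing export=False loads the precompiled artifacts instead of recompiling.

from optimum.rbln import RBLNLlamaForCausalLM

# Reload the precompiled artifacts from disk; export=False skips recompilation.
compiled_model = RBLNLlamaForCausalLM.from_pretrained(
    "rbln-Llama-3-8B-Instruct",
    export=False,
)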

Run OpenAI API server

First, make sure that vllm-rbln is installed. Then you can start the API server by running the vllm.entrypoints.openai.api_server module as shown below.

$ python3 -m vllm.entrypoints.openai.api_server \
  --model rbln-Llama-3-8B-Instruct \
  --device rbln \
  --max-num-seqs 4 \
  --max-num-batched-tokens 8192 \
  --max-model-len 8192 \
  --block-size 8192

  • model: Absolute path of the compiled model.
  • device: Device type for vLLM execution. Please set this to rbln.
  • max_num_seqs: Maximum number of sequences per iteration. This MUST match the rbln_batch_size used at compile time.
  • block_size: This should be set to the same value as max_model_len. (When using Flash Attention, it must be set differently; see the advanced example below.)
  • When targeting an RBLN device in Eager Attention mode, block_size and max_num_batched_tokens should be set to the same value as max_model_len.
  • You may want to add --api-key <random string to be used as API key> to enable authentication.

Once your API server is running, you can call it using the OpenAI Python and Node.js clients or a curl command like the following.

$ curl http://<host and port number of the server>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API key, if specified when running the server>" \
  -d '{
    "model": "rbln-Llama-3-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'
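
The same request can be sent with the OpenAI Python client. The following is a minimal sketch, assuming the server listens on localhost:8000 and that no --api-key was set (in that case any placeholder string works for api_key); adjust both to match your deployment.

from openai import OpenAI

# Host, port, and API key below are assumptions; adjust to your deployment.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # replace with the --api-key value if one was set
)

# Stream a chat completion, mirroring the curl request above.
stream = client.chat.completions.create(
    model="rbln-Llama-3-8B-Instruct",  # must match the --model value
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)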

Note

When running an API server, the --model value is used as the unique ID for that API server. Therefore, the "model" value in the curl command should be exactly the same as the --model value used when starting the API server.

Advanced Example: Llama3.1-8B with Flash Attention

Flash Attention improves memory efficiency and throughput, enabling better performance for models handling long contexts. In optimum-rbln, Flash Attention mode is activated by adding the rbln_kvcache_partition_len parameter during compilation.

Compile Llama3.1-8B

from optimum.rbln import RBLNLlamaForCausalLM

model_id = "meta-llama/Llama-3.1-8B-Instruct"

# Compile and export
model = RBLNLlamaForCausalLM.from_pretrained(
    model_id=model_id,
    export=True,
    rbln_batch_size=1,
    rbln_max_seq_len=131_072,
    rbln_tensor_parallel_size=8,
    rbln_kvcache_partition_len=16_384,
)

# Save compiled results to disk
model.save_pretrained("rbln-Llama-3-1-8B-Instruct")

Run OpenAI API server

First, make sure that vllm-rbln is installed. Then you can start the API server by running the vllm.entrypoints.openai.api_server module as shown below.

$ python3 -m vllm.entrypoints.openai.api_server \
  --model rbln-Llama-3-1-8B-Instruct \
  --device rbln \
  --max-num-seqs 1 \
  --max-num-batched-tokens 131072 \
  --max-model-len 131072 \
  --block-size 16384

  • model: Absolute path of the compiled model.
  • device: Device type for vLLM execution. Please set this to rbln.
  • max_num_seqs: Maximum number of sequences per iteration. This MUST match the rbln_batch_size used at compile time.
  • block_size: The size of the block for Paged Attention. When using Flash Attention, the block size must be equal to rbln_kvcache_partition_len.
  • The max_num_batched_tokens field should be set to the same value as max_model_len.
  • You may want to add --api-key <random string to be used as API key> to enable authentication.

Once your API server is running, you can call it using the OpenAI Python and Node.js clients or curl commands like the following.

$ curl http://<host and port number of the server>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API key, if specified when running the server>" \
  -d '{
    "model": "rbln-Llama-3-1-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'
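
As with the standard example, the OpenAI Python client can be used instead of curl. The following is a minimal non-streaming sketch, assuming the server listens on localhost:8000 and that no --api-key was set; adjust both to match your deployment.

from openai import OpenAI

# Host, port, and API key below are assumptions; adjust to your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Request a complete (non-streamed) chat response.
response = client.chat.completions.create(
    model="rbln-Llama-3-1-8B-Instruct",  # must match the --model value
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(response.choices[0].message.content)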

Note

When running an API server, the --model value is used as the unique ID for that API server. Therefore, the "model" value in the curl command should be exactly the same as the --model value used when starting the API server.

Please refer to the OpenAI Docs for more information.