OpenAI Compatible Server¶
Overview¶
vLLM provides an OpenAI compatible HTTP server that implements OpenAI's completions API and chat API. Please refer to the vLLM documentation for more details about the OpenAI compatible server. In this tutorial, we will guide you through setting up an OpenAI compatible server using the Llama3-8B and Llama3.1-8B models with Eager and Flash Attention, respectively. You'll learn how to deploy these models to create your own OpenAI API server.
Setup & Install¶
Before you begin, ensure that your system environment is properly configured and that all required packages are installed. This includes:
- System Requirements:
    - Python: 3.9–3.12
    - RBLN Driver
- Package Requirements:
    - `torch`
    - `transformers`
    - `numpy`
    - RBLN Compiler
    - `optimum-rbln`
    - `huggingface_hub[cli]`
    - `vllm-rbln`
    - `vllm` (installed automatically alongside `vllm-rbln`)
- Installation Command:
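A minimal installation sketch, assuming the packages above are available from your configured package index (access to `rebel-compiler` may require the RBLN Portal account mentioned in the note below):

```bash
# Installs optimum-rbln, vllm-rbln, and the Hugging Face CLI;
# torch, transformers, numpy, and vllm are expected to come in as dependencies.
pip install optimum-rbln vllm-rbln "huggingface_hub[cli]"
```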
Note

Please note that `rebel-compiler` requires an RBLN Portal account.
Standard Model Example: Llama3-8B¶
Compile Llama3-8B¶
To begin, import the `RBLNLlamaForCausalLM` class from `optimum-rbln`. This class's `from_pretrained()` method downloads the Llama 3 model from the HuggingFace Hub and compiles it using the RBLN Compiler. When exporting the model, specify the following parameters:

- `export`: Must be `True` to compile the model.
- `rbln_batch_size`: Defines the batch size for compilation.
- `rbln_max_seq_len`: Defines the maximum sequence length.
- `rbln_tensor_parallel_size`: Defines the number of NPUs to be used for inference.
After compilation, save the model artifacts to disk using the `save_pretrained()` method. This will create a directory (e.g., `rbln-Llama-3-8B-Instruct`) containing the compiled model.
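A minimal compilation sketch, assuming the `meta-llama/Meta-Llama-3-8B-Instruct` checkpoint and illustrative values for the `rbln_*` parameters (adjust the batch size, sequence length, and NPU count to your setup):

```python
from optimum.rbln import RBLNLlamaForCausalLM

# Download Llama3-8B from the HuggingFace Hub and compile it with the RBLN Compiler.
compiled_model = RBLNLlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    export=True,                  # must be True to compile the model
    rbln_batch_size=4,            # illustrative; must match max_num_seqs at serving time
    rbln_max_seq_len=8192,        # maximum sequence length
    rbln_tensor_parallel_size=4,  # illustrative number of NPUs for inference
)

# Save the compiled artifacts; this creates the rbln-Llama-3-8B-Instruct directory.
compiled_model.save_pretrained("rbln-Llama-3-8B-Instruct")
```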
Run OpenAI API server¶
First, make sure that `vllm-rbln` is installed. Then you can start the API server by running the `vllm.entrypoints.openai.api_server` module as shown below.
- `model`: Absolute path of the compiled model.
- `device`: Device type for vLLM execution. Please set this to `rbln`.
- `max_num_seqs`: Maximum number of sequences per iteration. This MUST match the batch size used at compilation (`rbln_batch_size`).
- `block_size`: This should be set to the same value as `max_model_len`. (When applying Flash Attention, this needs to be set differently; please refer to the Flash Attention example below.)
- When targeting an RBLN device with Eager Attention mode, the `block_size` and `max_num_batched_tokens` fields should be set to the same value as `max_model_len`.
- You may want to add `--api-key <random string to be used as API key>` to enable authentication.
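A sketch of the launch command, assuming the compiled model sits at `/path/to/rbln-Llama-3-8B-Instruct` (hypothetical path) and was compiled with the illustrative values above (batch size 4, maximum sequence length 8192):

```bash
# Eager Attention: block-size and max-num-batched-tokens equal max-model-len.
python -m vllm.entrypoints.openai.api_server \
    --model /path/to/rbln-Llama-3-8B-Instruct \
    --device rbln \
    --max-num-seqs 4 \
    --max-model-len 8192 \
    --max-num-batched-tokens 8192 \
    --block-size 8192
```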
Once your API server is running, you can call it using the OpenAI Python and Node.js clients or a curl command like the following.
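For example, with curl (the model path is the hypothetical one used above; vLLM serves on port 8000 by default):

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "/path/to/rbln-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}]
    }'
```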
Note

When running an API server, the `--model` value is used as the unique ID for that API server. Therefore, the `"model"` value in the curl command should be exactly the same as the `--model` value used when starting the API server.
Advanced Example: Llama3.1-8B with Flash Attention¶
Flash Attention improves memory efficiency and throughput, enabling better performance for models handling long contexts. In `optimum-rbln`, Flash Attention mode is activated by adding the `rbln_kvcache_partition_len` parameter during compilation.
Compile Llama3.1-8B¶
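A minimal compilation sketch, assuming the `meta-llama/Llama-3.1-8B-Instruct` checkpoint; the `rbln_*` values, including the `rbln_kvcache_partition_len` that enables Flash Attention, are illustrative:

```python
from optimum.rbln import RBLNLlamaForCausalLM

# Adding rbln_kvcache_partition_len activates Flash Attention mode.
compiled_model = RBLNLlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    export=True,                       # must be True to compile the model
    rbln_batch_size=1,                 # illustrative; must match max_num_seqs at serving time
    rbln_max_seq_len=131072,           # illustrative long-context sequence length
    rbln_tensor_parallel_size=8,       # illustrative number of NPUs
    rbln_kvcache_partition_len=16384,  # illustrative KV-cache partition length
)

compiled_model.save_pretrained("rbln-Llama-3.1-8B-Instruct")
```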
Run OpenAI API server¶
First, make sure that `vllm-rbln` is installed. Then you can start the API server by running the `vllm.entrypoints.openai.api_server` module as shown below.
- `model`: Absolute path of the compiled model.
- `device`: Device type for vLLM execution. Please set this to `rbln`.
- `max_num_seqs`: Maximum number of sequences per iteration. This MUST match the batch size used at compilation (`rbln_batch_size`).
- `block_size`: The size of the block for Paged Attention. When using Flash Attention, the block size must be equal to `rbln_kvcache_partition_len`.
- The `max_num_batched_tokens` field should be set to the same value as `max_model_len`.
- You may want to add `--api-key <random string to be used as API key>` to enable authentication.
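A sketch of the launch command under the illustrative compile-time values above; note that `--block-size` now matches `rbln_kvcache_partition_len` rather than `max_model_len`:

```bash
# Flash Attention: block-size equals rbln_kvcache_partition_len.
python -m vllm.entrypoints.openai.api_server \
    --model /path/to/rbln-Llama-3.1-8B-Instruct \
    --device rbln \
    --max-num-seqs 1 \
    --max-model-len 131072 \
    --max-num-batched-tokens 131072 \
    --block-size 16384
```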
Once your API server is running, you can call it using the OpenAI Python and Node.js clients or curl commands like the following.
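For example, with the OpenAI Python client (the base URL assumes the default local port; the model path is the hypothetical one used above):

```python
from openai import OpenAI

# api_key can be any string unless the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

completion = client.chat.completions.create(
    model="/path/to/rbln-Llama-3.1-8B-Instruct",  # must match the server's --model value
    messages=[{"role": "user", "content": "Explain Flash Attention in one sentence."}],
)
print(completion.choices[0].message.content)
```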
Note

When running an API server, the `--model` value is used as the unique ID for that API server. Therefore, the `"model"` value in the curl command should be exactly the same as the `--model` value used when starting the API server.
Please refer to the OpenAI Docs for more information.