
Commit

Merge pull request #1459 from madeline-underwood/vLLM
vLLM_AP approved
pareenaverma authored Dec 20, 2024
2 parents 37c3ead + 78ba5bc commit 0f845e3
Showing 6 changed files with 43 additions and 47 deletions.
16 changes: 6 additions & 10 deletions content/learning-paths/servers-and-cloud-computing/vLLM/_index.md
@@ -1,22 +1,18 @@
---
title: Large language models (LLMs) on Arm servers with vLLM

draft: true
cascade:
draft: true
title: Build and Run a Virtual Large Language Model on Arm Servers

minutes_to_complete: 45

who_is_this_for: This is an introductory topic for software developers and AI engineers interested in learning how to use vLLM (Virtual Large Language Model) on Arm servers.
who_is_this_for: This is an introductory topic for software developers and AI engineers interested in learning how to use a vLLM (Virtual Large Language Model) on Arm servers.

learning_objectives:
- Build vLLM from source on an Arm server.
- Build a vLLM from source on an Arm server.
- Download a Qwen LLM from Hugging Face.
- Run local batch inference using vLLM.
- Create and interact with an OpenAI compatible server provided by vLLM on your Arm server..
- Run local batch inference using a vLLM.
- Create and interact with an OpenAI-compatible server provided by a vLLM on your Arm server.

prerequisites:
- An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider or a local Arm Linux computer with at least 8 CPUs and 16 GB RAM.
- An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider, or a local Arm Linux computer with at least 8 CPUs and 16 GB RAM.

author_primary: Jason Andrews

@@ -1,6 +1,6 @@
---
next_step_guidance: >
Thank you for completing this learning path on how to build and run vLLM on Arm servers. You might be interested in learning how to further optimize and benchmark LLM performance on Arm-based platforms.
Thank you for completing this Learning Path on how to build and run vLLM on Arm servers. You might be interested in learning how to further optimize and benchmark LLM performance on Arm-based platforms.
recommended_path: "/learning-paths/servers-and-cloud-computing/benchmark-nlp/"

20 changes: 10 additions & 10 deletions content/learning-paths/servers-and-cloud-computing/vLLM/_review.md
@@ -5,9 +5,9 @@ review:
question: >
What is the primary purpose of vLLM?
answers:
- "Operating System Development"
- "Large Language Model Inference and Serving"
- "Database Management"
- "Operating System Development."
- "Large Language Model Inference and Serving."
- "Database Management."
correct_answer: 2
explanation: >
vLLM is designed for fast and efficient Large Language Model inference and serving.
@@ -16,10 +16,10 @@ review:
question: >
In addition to Python, which extra programming languages are required by the vLLM build system?
answers:
- "Java"
- "Rust"
- "C++"
- "Rust and C++"
- "Java."
- "Rust."
- "C++."
- "Rust and C++."
correct_answer: 4
explanation: >
The vLLM build system requires the Rust toolchain and GCC for its compilation.
@@ -28,9 +28,9 @@ review:
question: >
What is the VLLM_TARGET_DEVICE environment variable set to for building vLLM for Arm CPUs?
answers:
- "cuda"
- "gpu"
- "cpu"
- "cuda."
- "gpu."
- "cpu."
correct_answer: 3
explanation: >
The VLLM_TARGET_DEVICE environment variable needs to be set to cpu to target the Arm processor.
@@ -8,27 +8,27 @@ layout: learningpathall

## Use a model from Hugging Face

vLLM is designed to work seamlessly with models from the Hugging Face Hub,
vLLM is designed to work seamlessly with models from the Hugging Face Hub.

The first time you run vLLM it downloads the required model. This means you don't have to explicitly download any models.
The first time you run vLLM, it downloads the required model. This means that you do not have to explicitly download any models.

If you want to use a model that requires you to request access or accept terms, you need to log in to Hugging Face using a token.
If you want to use a model that requires you to request access or accept the terms, you need to log in to Hugging Face using a token.

```bash
huggingface-cli login
```

Enter your Hugging Face token. You can generate a token from [Hugging Face Hub](https://huggingface.co/) by clicking your profile on the top right corner and selecting `Access Tokens`.
Enter your Hugging Face token. You can generate a token from [Hugging Face Hub](https://huggingface.co/) by clicking your profile on the top right corner and selecting **Access Tokens**.

You also need to visit the Hugging Face link printed in the login output and accept the terms by clicking the "Agree and access repository" button or filling out the request for access form (depending on the model).
You also need to visit the Hugging Face link printed in the login output and accept the terms by clicking the **Agree and access repository** button or filling out the request-for-access form, depending on the model.
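As a non-interactive alternative, the Hugging Face tooling also reads a token from the `HF_TOKEN` environment variable, so you can export it before running vLLM instead of logging in. The value below is a placeholder:

```bash
# Placeholder: substitute a read-access token generated from your Hugging Face account
export HF_TOKEN=<your-hugging-face-token>
```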

To run batched inference without the need for a login, you can use the `Qwen/Qwen2.5-0.5B-Instruct` model.

## Create a batch script

To run inference with multiple prompts you can create a simple Python script to load a model and run the prompts.
To run inference with multiple prompts, you can create a simple Python script to load a model and run the prompts.

Use a text editor to save the Python script below in a file called `batch.py`.
Use a text editor to save the Python script below in a file called `batch.py`:

```python
import json
@@ -72,7 +72,7 @@ Run the Python script:
python ./batch.py
```
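The body of `batch.py` is collapsed in this diff, so only its first line appears above. As a point of reference, a minimal batch-inference script using vLLM's `LLM` and `SamplingParams` API might look like the sketch below; the prompts, sampling settings, and output format are illustrative assumptions, not the exact contents of the original file:

```python
import json

from vllm import LLM, SamplingParams

# Illustrative prompts; the original script's prompts are collapsed in this diff
prompts = [
    "Write a hello world program in C",
    "What is the capital of France?",
    "Explain what vLLM is in one sentence",
]

# Sampling settings are assumptions, not the values from the original script
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

# Load the model; the first run downloads it from the Hugging Face Hub
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

# Run all prompts as a single batch
outputs = llm.generate(prompts, sampling_params)

# Print each prompt and its generated completion as JSON
for output in outputs:
    print(json.dumps({"prompt": output.prompt, "text": output.outputs[0].text}, indent=2))
```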

The output shows vLLM starting, the model loading, and the batch processing of the 3 prompts:
The output shows vLLM starting, the model loading, and the batch processing of the three prompts:

```output
INFO 12-12 22:52:57 config.py:441] This model supports multiple tasks: {'generate', 'reward', 'embed', 'score', 'classify'}. Defaulting to 'generate'.
@@ -107,4 +107,4 @@ Processed prompts: 100%|██████████████████

You can try with other prompts and models such as `meta-llama/Llama-3.2-1B`.

Continue to learn how to setup an OpenAI compatible server.
Continue to learn how to set up an OpenAI-compatible server.
@@ -1,20 +1,20 @@
---
title: Run an OpenAI compatible server
title: Run an OpenAI-compatible server
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

Instead of a batch run from Python, you can create an OpenAI compatible server. This allows you to leverage the power of large language models without relying on external APIs.
Instead of a batch run from Python, you can create an OpenAI-compatible server. This allows you to leverage the power of Large Language Models without relying on external APIs.

Running a local LLM offers several advantages:

Cost-Effective: Avoids the costs associated with using external APIs, especially for high-usage scenarios.  
Privacy: Keeps your data and prompts within your local environment, enhancing privacy and security.
Offline Capability: Enables operation without an internet connection, making it ideal for scenarios with limited or unreliable network access.
* Cost-effective - it avoids the costs associated with using external APIs, especially for high-usage scenarios.  
* Privacy - it keeps your data and prompts within your local environment, which enhances privacy and security.
* Offline Capability - it enables operation without an internet connection, making it ideal for scenarios with limited or unreliable network access.

OpenAI compatibility means you can reuse existing software which was designed to communicate with OpenAI and have it talk to your local vLLM service.
OpenAI compatibility means that you can reuse existing software which was designed to communicate with OpenAI and use it to communicate with your local vLLM service.

Run vLLM with the same `Qwen/Qwen2.5-0.5B-Instruct` model:

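The command that starts the server is collapsed in this diff. As a sketch, assuming the `vllm serve` entry point installed with the package, a typical way to launch the OpenAI-compatible server with this model is:

```bash
# Starts an OpenAI-compatible HTTP server, listening on port 8000 by default
vllm serve Qwen/Qwen2.5-0.5B-Instruct
```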
@@ -72,12 +72,12 @@ curl http://0.0.0.0:8000/v1/chat/completions \
}'
```

The server processes the request and the output prints the results:
The server processes the request, and the output prints the results:

```output
"id":"chatcmpl-6677cb4263b34d18b436b9cb8c6a5a65","object":"chat.completion","created":1734044182,"model":"Qwen/Qwen2.5-0.5B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"Certainly! Here is a simple \"Hello, World!\" program in C:\n\n```c\n#include <stdio.h>\n\nint main() {\n printf(\"Hello, World!\\n\");\n return 0;\n}\n```\n\nThis program defines a function called `main` which contains the body of the program. Inside the `main` function, it calls the `printf` function to display the text \"Hello, World!\" to the console. The `return 0` statement indicates that the program was successful and the program has ended.\n\nTo compile and run this program:\n\n1. Save the code above to a file named `hello.c`.\n2. Open a terminal or command prompt.\n3. Navigate to the directory where you saved the file.\n4. Compile the program using the following command:\n ```\n gcc hello.c -o hello\n ```\n5. Run the compiled program using the following command:\n ```\n ./hello\n ```\n Or simply type `hello` in the terminal.\n\nYou should see the output:\n\n```\nHello, World!\n```","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":26,"total_tokens":241,"completion_tokens":215,"prompt_tokens_details":null},"prompt_logprobs":null}
```

There are many other experiments you can try. Most Hugging Face models have a `Use this model` button on the top right of the model card with the instructions for vLLM. You can now use these instructions on your Arm Linux computer.
There are many other experiments you can try. Most Hugging Face models have a **Use this model** button on the top-right of the model card with the instructions for vLLM. You can now use these instructions on your Arm Linux computer.

You can also try out OpenAI compatible chat clients to connect to the served model.
You can also try out OpenAI-compatible chat clients to connect to the served model.
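For example, the official `openai` Python package can talk to the local server by overriding its base URL. This is a sketch, assuming the server from the previous step is listening on port 8000; vLLM does not validate the API key unless you configure one, so a placeholder value is used:

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    messages=[{"role": "user", "content": "Write a hello world program in C"}],
)

print(response.choices[0].message.content)
```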
@@ -1,5 +1,5 @@
---
title: Build vLLM from source code
title: Build a vLLM from Source Code
weight: 2

### FIXED, DO NOT MODIFY
@@ -8,13 +8,13 @@ layout: learningpathall

## Before you begin

You can follow the instructions for this Learning Path using an Arm server running Ubuntu 24.04 LTS with at least 8 cores, 16GB of RAM, and 50GB of disk storage.
To follow the instructions for this Learning Path, you will need an Arm server running Ubuntu 24.04 LTS with at least 8 cores, 16GB of RAM, and 50GB of disk storage.

## What is vLLM?

[vLLM](https://github.com/vllm-project/vllm) stands for Virtual Large Language Model, and is a fast and easy-to-use library for inference and model serving.

vLLM can be used in batch mode or by running an OpenAI compatible server.
You can use vLLM in batch mode, or by running an OpenAI-compatible server.

In this Learning Path, you will learn how to build vLLM from source and run inference on an Arm-based server, highlighting its effectiveness.

@@ -33,7 +33,7 @@ Set the default GCC to version 12:
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 10 --slave /usr/bin/g++ g++ /usr/bin/g++-12
```

Install Rust, refer to the [Rust install guide](/install-guides/rust/) if necessary.
Next, install Rust. For more information, see the [Rust install guide](/install-guides/rust/).

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
@@ -42,7 +42,7 @@ source "$HOME/.cargo/env"

Four environment variables are required. You can enter these at the command line or add them to your `$HOME/.bashrc` file and source the file.

To add them at the command line:
To add them at the command line, use the following:

```bash
export CCACHE_DIR=/home/ubuntu/.cache/ccache
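# The remaining three exports are collapsed in this diff view.
# One of them, per the review questions in this Learning Path, targets the Arm CPU:
export VLLM_TARGET_DEVICE=cpu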
@@ -58,9 +58,9 @@ python -m venv env
source env/bin/activate
```

Your command line prompt has `(env)` in front of it indicating you are in the Python virtual environment.
Your command-line prompt is prefixed by `(env)`, which indicates that you are in the Python virtual environment.

Update Pip and install Python packages:
Now update Pip and install Python packages:

```bash
pip install --upgrade pip
@@ -69,7 +69,7 @@ pip install py-cpuinfo

### How do I download vLLM and build it?

Clone the vLLM repository from GitHub:
First, clone the vLLM repository from GitHub:

```bash
git clone https://github.com/vllm-project/vllm.git
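# The build steps that follow are collapsed in this diff.
# A typical CPU build, shown as a sketch rather than the exact commands from the Learning Path:
cd vllm
pip install -r requirements-cpu.txt           # CPU-only Python dependencies
VLLM_TARGET_DEVICE=cpu pip install -v .       # build and install vLLM for the Arm CPU target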
