
[Usage]: vllm can't run qwen 32B inference #193

Open
kunger97 opened this issue Aug 17, 2024 · 2 comments
Labels: external (Issues or PRs submitted by external users)

@kunger97
Your current environment

PyTorch version: 2.2.0a0+git8964477
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-116-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               10
On-line CPU(s) list:                  0-9
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Xeon(R) Platinum 8368 CPU @ 2.40GHz
CPU family:                           6
Model:                                106
Thread(s) per core:                   1
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             6
BogoMIPS:                             4799.99
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Virtualization:                       VT-x
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            320 KiB (10 instances)
L1i cache:                            320 KiB (10 instances)
L2 cache:                             40 MiB (10 instances)
L3 cache:                             16 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-9
Vulnerability Gather data sampling:   Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] habana-torch-dataloader==1.15.0.479
[pip3] habana-torch-plugin==1.15.0.479
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.2.0a0+git8964477
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.2.0+08901ad
[pip3] torchdata==0.7.1+5e6f7b7
[pip3] torchmetrics==1.4.1
[pip3] torchtext==0.17.0+400da5c
[pip3] torchvision==0.17.0+b2383d4
[pip3] transformers==4.44.0
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

How would you like to use vllm

I'm trying to run an LLM server on Gaudi2 on Intel Developer Cloud. I've installed vllm-fork and I'm using the command below, but it seems to hit an HPU OOM:

PT_HPU_LAZY_MODE=1 vllm serve Qwen/Qwen1.5-32B-Chat --dtype bfloat16 --block-size 128 --device hpu

I also tried Qwen 13B, and it works normally.
In addition, when I run inference with optimum-habana, it generates text normally.
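For reference, a minimal offline-inference sketch using vLLM's Python API with the same model and settings, assuming the same vllm-fork install with HPU support; the max_model_len value is an illustrative assumption, not a setting from this thread:

# Hedged sketch: the same model via vLLM's offline Python API instead of the
# OpenAI-compatible server, assuming the vllm-fork build with HPU support.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen1.5-32B-Chat",
    dtype="bfloat16",
    block_size=128,
    device="hpu",
    max_model_len=4096,  # illustrative cap to reduce KV-cache pressure
)
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)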

============================= HABANA PT BRIDGE CONFIGURATION =========================== 
 PT_HPU_LAZY_MODE = 1
 PT_RECIPE_CACHE_PATH = 
 PT_CACHE_FOLDER_DELETE = 0
 PT_HPU_RECIPE_CACHE_CONFIG = 
 PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
 PT_HPU_LAZY_ACC_PAR_MODE = 1
 PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
---------------------------: System Configuration :---------------------------
Num CPU Cores : 10
CPU RAM       : 100936596 KB
------------------------------------------------------------------------------
INFO 08-17 07:58:41 selector.py:85] Using HabanaAttention backend.
INFO 08-17 07:58:41 loader.py:284] Loading weights on hpu ...
Loading safetensors checkpoint shards:   0% Completed | 0/14 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:   7% Completed | 1/14 [00:11<02:33, 11.83s/it]
Loading safetensors checkpoint shards:  14% Completed | 2/14 [00:23<02:21, 11.81s/it]
Loading safetensors checkpoint shards:  21% Completed | 3/14 [00:35<02:09, 11.79s/it]
Loading safetensors checkpoint shards:  29% Completed | 4/14 [00:47<01:58, 11.81s/it]
Loading safetensors checkpoint shards:  36% Completed | 5/14 [00:53<01:27,  9.71s/it]
Loading safetensors checkpoint shards:  43% Completed | 6/14 [01:04<01:21, 10.13s/it]
Loading safetensors checkpoint shards:  50% Completed | 7/14 [01:15<01:13, 10.44s/it]
Loading safetensors checkpoint shards:  57% Completed | 8/14 [01:26<01:04, 10.69s/it]
Loading safetensors checkpoint shards:  64% Completed | 9/14 [01:37<00:54, 10.93s/it]
Loading safetensors checkpoint shards:  71% Completed | 10/14 [01:49<00:44, 11.12s/it]
Loading safetensors checkpoint shards:  79% Completed | 11/14 [02:01<00:33, 11.30s/it]
Loading safetensors checkpoint shards:  86% Completed | 12/14 [02:13<00:22, 11.49s/it]
Loading safetensors checkpoint shards:  93% Completed | 13/14 [02:25<00:11, 11.72s/it]
Loading safetensors checkpoint shards: 100% Completed | 14/14 [02:38<00:00, 12.06s/it]
Loading safetensors checkpoint shards: 100% Completed | 14/14 [02:38<00:00, 11.30s/it]

INFO 08-17 08:01:20 habana_model_runner.py:433] Pre-loading model weights on hpu:0 took 60.59 GiB of device memory (60.59 GiB/94.62 GiB used) and 1.67 GiB of host memory (11.17 GiB/96.26 GiB used)
Traceback (most recent call last):
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/GaudiVllm/bin/vllm", line 33, in <module>
    sys.exit(load_entry_point('vllm', 'console_scripts', 'vllm')())
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/scripts.py", line 149, in main
    args.dispatch_function(args)
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/scripts.py", line 29, in serve
    asyncio.run(run_server(args))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/entrypoints/openai/api_server.py", line 289, in run_server
    app = await init_app(args, llm_engine)
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/entrypoints/openai/api_server.py", line 229, in init_app
    if llm_engine is not None else AsyncLLMEngine.from_engine_args(
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/engine/async_llm_engine.py", line 479, in from_engine_args
    engine = cls(
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/engine/async_llm_engine.py", line 380, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/engine/async_llm_engine.py", line 560, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/engine/llm_engine.py", line 252, in __init__
    self.model_executor = executor_class(
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/executor/habana_executor.py", line 27, in _init_executor
    self._init_worker()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/executor/habana_executor.py", line 71, in _init_worker
    self.driver_worker.load_model()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/worker/habana_worker.py", line 121, in load_model
    self.model_runner.load_model()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/vllm-fork/vllm/worker/habana_model_runner.py", line 453, in load_model
    torch.hpu.synchronize()
  File "/home/u76dd8763cba39e0b015b7769dfb6510/GaudiWorkspace/GaudiVllm/lib/python3.10/site-packages/habana_frameworks/torch/hpu/__init__.py", line 138, in synchronize
    return _hpu_C.synchronize_device()
RuntimeError: [Rank:0] FATAL ERROR :: MODULE:PT_DEVMEM Allocation failed for size::560988160 (535)MB
inc shutdown
inc shutdown
inc shutdown
@kunger97 changed the title from "[Usage]:" to "[Usage]: vllm can't run qwen 32B inference" on Aug 17, 2024
@kzawora-intel commented Aug 26, 2024

@afierka-intel can you check this out? I remember you've experienced a similar bug in the weight loading phase of large models (Llama 405B or Mixtral 8x7B) on HPU. It should be simple to check whether it's caused by the same bug.

@kzawora-intel added the external (Issues or PRs submitted by external users) label on Aug 29, 2024
@LeoZhao-Intel

Referring to the change in #55 could fix this bug; the same change is needed in Qwen, at least.

like this:

diff --git a/vllm/model_executor/models/qwen2.py b/vllm/model_executor/models/qwen2.py
index f38be0e9..c42b67d4 100644
--- a/vllm/model_executor/models/qwen2.py
+++ b/vllm/model_executor/models/qwen2.py
@@ -371,3 +371,6 @@ class Qwen2ForCausalLM(nn.Module):
                 weight_loader = getattr(param, "weight_loader",
                                         default_weight_loader)
                 weight_loader(param, loaded_weight)
+
+                if is_hpu():
+                    torch.hpu.synchronize()
