
Conversation


@sunchendd sunchendd commented Nov 28, 2025

What this PR does / why we need it?

Fix the Eagle3 inference failure issue.
error message: "EngineCore encountered an issue. See stack trace (above) for the root cause."

Fixes #4323

How was this patch tested?

vllm serve /nfs/1_AscendPackage/05_weights_public/Qwen3-32B \
    --served-model-name Qwen3-32B \
    -tp 4 \
    --host "0.0.0.0" \
    --port "8000" \
    --trust-remote-code \
    --speculative-config '{"method":"eagle3","model":"/home/scd/qwen3_32b_eagle3/","num_speculative_tokens":4,"draft_tensor_parallel_size":1}' \
    --max-num-batched-tokens 4096 \
    --max-model-len 4096

curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen3-32B",
        "prompt": "hi, where is the capital of France?",
        "max_tokens": 10,
        "temperature": 0
    }' | python3 -m json.tool

vLLM version: v0.11.0
vLLM-ascend version: v0.11.0rc2

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix an inference failure with Eagle3 speculative decoding. The changes involve correctly setting the attention state for Eagle3, adjusting how attention masks are generated for prefill with cache hits, and reusing pre-computed attention masks. Overall, the changes appear to correctly address the issue. However, I've found a critical bug in the calculation of max_seq_len within get_splitfuse_attn_mask which could lead to a runtime error. I've provided a specific suggestion to fix this.

Comment on lines 108 to 112
max_seq_len = max(seq_lens, default=0)
if hasattr(max_seq_len, "item"):
    max_seq_len = int(max_seq_len.item())
else:
    max_seq_len = int(max_seq_len) if max_seq_len else 0
Contributor


critical

The method for calculating max_seq_len is incorrect when seq_lens is a tensor. The max() built-in function cannot be used on a tensor to find its maximum value; seq_lens.max() should be used instead. The current implementation will raise a ValueError if seq_lens is a tensor. This logic can be simplified and corrected to properly handle a tensor input as specified by the type hint.

Suggested change
  - max_seq_len = max(seq_lens, default=0)
  - if hasattr(max_seq_len, "item"):
  -     max_seq_len = int(max_seq_len.item())
  - else:
  -     max_seq_len = int(max_seq_len) if max_seq_len else 0
  + max_seq_len = seq_lens.max().item() if seq_lens.numel() > 0 else 0
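The suggested one-liner can be exercised in isolation. The sketch below is illustrative only: `compute_max_seq_len` and `FakeTensor` are hypothetical names, and `FakeTensor` is a duck-typed stand-in for a 1-D `torch.Tensor` (exposing `.numel()`/`.max()`/`.item()`) so the example runs without torch installed.

```python
def compute_max_seq_len(seq_lens):
    """Largest sequence length, or 0 when seq_lens is empty.

    Tensor-like inputs (anything with .numel()) take the suggested
    seq_lens.max().item() path; plain sequences fall back to built-in max().
    """
    if hasattr(seq_lens, "numel"):
        return int(seq_lens.max().item()) if seq_lens.numel() > 0 else 0
    return int(max(seq_lens, default=0))


class FakeTensor:
    """Minimal stand-in for a 1-D torch.Tensor of sequence lengths."""

    def __init__(self, values):
        self._values = list(values)

    def numel(self):
        return len(self._values)

    def max(self):
        return FakeTensor([max(self._values)])

    def item(self):
        if len(self._values) != 1:
            raise ValueError("only one-element tensors can be converted")
        return self._values[0]


print(compute_max_seq_len(FakeTensor([3, 7, 2])))  # 7
print(compute_max_seq_len(FakeTensor([])))         # 0 (empty guard, no .max() call)
print(compute_max_seq_len([4, 1]))                 # 4
```

Note how the `numel() > 0` guard keeps `.max()` from ever being called on an empty tensor, which is the edge case the original `default=0` was trying to cover.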

Author


fixed

@sunchendd sunchendd force-pushed the v0.11.0-dev branch 4 times, most recently from bd75832 to 64e221d Compare November 28, 2025 09:45
@sunchendd sunchendd changed the title Fix the Eagle3 inference failure issue. [Bugfix] Fix the Eagle3 inference failure issue. Nov 29, 2025
@zhangxinyuehfad
Contributor

Qwen/Qwen3-32B eagle3 accuracy failed

The output of `python collect_env.py`

Collecting environment information...
PyTorch version: 2.7.1+cpu
Is debug build: False

OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: Could not collect
CMake version: version 4.2.0
Libc version: glibc-2.35

Python version: 3.11.13 (main, Nov 20 2025, 16:57:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-182.0.0.95.r1941_123.hce2.aarch64-aarch64-with-glibc2.35

CPU:
Architecture:                         aarch64
CPU op-mode(s):                       64-bit
Byte Order:                           Little Endian
CPU(s):                               320
On-line CPU(s) list:                  0-319
Vendor ID:                            HiSilicon
Model:                                0
Thread(s) per core:                   1
Core(s) per cluster:                  80
Socket(s):                            -
Cluster(s):                           4
Stepping:                             0x0
Frequency boost:                      disabled
CPU max MHz:                          3000.0000
CPU min MHz:                          400.0000
BogoMIPS:                             200.00
Flags:                                fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint svei8mm svef32mm svef64mm svebf16 i8mm bf16 dgh rng ecv
L1d cache:                            20 MiB (320 instances)
L1i cache:                            20 MiB (320 instances)
L2 cache:                             400 MiB (320 instances)
L3 cache:                             560 MiB (8 instances)
NUMA node(s):                         8
NUMA node0 CPU(s):                    0-39
NUMA node1 CPU(s):                    40-79
NUMA node2 CPU(s):                    80-119
NUMA node3 CPU(s):                    120-159
NUMA node4 CPU(s):                    160-199
NUMA node5 CPU(s):                    200-239
NUMA node6 CPU(s):                    240-279
NUMA node7 CPU(s):                    280-319
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; __user pointer sanitization
Vulnerability Spectre v2:             Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] mypy==1.11.1
[pip3] mypy_extensions==1.1.0
[pip3] numpy==1.26.4
[pip3] pyzmq==27.1.0
[pip3] sentence-transformers==5.1.2
[pip3] torch==2.7.1+cpu
[pip3] torch_npu==2.7.1
[pip3] torchvision==0.22.1
[pip3] transformers==4.57.1
[pip3] zmq==0.0.0
[conda] Could not collect
vLLM Version: 0.11.0
vLLM Ascend Version: 0.11.0rc3.dev7+g6e4dd8e1a (git sha: 6e4dd8e1a)

ENV Variables:
ATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1
ATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0
ATB_OPSRUNNER_SETUP_CACHE_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_GLOBAL=1
ATB_DEVICE_TILING_BUFFER_BLOCK_NUM=32
ASCEND_VISIBLE_DEVICES=7,2,9,15,4,6,11,5,13,0,3,12,8,1,10,14
ATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0
ASCEND_RUNTIME_OPTIONS=
ATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5
ATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1
ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
ATB_COMPARE_TILING_EVERY_KERNEL=0
ASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp
LD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_1/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling:/usr/local/Ascend/driver/lib64/common/:/usr/local/Ascend/driver/lib64/driver/:
ASCEND_AICPU_PATH=/usr/local/Ascend/ascend-toolkit/latest
ATB_STREAM_SYNC_EVERY_OPERATION_ENABLE=0
ASCEND_HOME_PATH=/usr/local/Ascend/ascend-toolkit/latest
ATB_MATMUL_SHUFFLE_K_ENABLE=1
ATB_WORKSPACE_MEM_ALLOC_ALG_TYPE=1
ATB_HOST_TILING_BUFFER_BLOCK_NUM=128
ATB_SHARE_MEMORY_NAME_SUFFIX=
TORCH_DEVICE_BACKEND_AUTOLOAD=1
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1


NPU:
+------------------------------------------------------------------------------------------------+
| npu-smi 25.2.1                   Version: 25.2.1                                               |
+---------------------------+---------------+----------------------------------------------------+
| NPU   Name                | Health        | Power(W)    Temp(C)           Hugepages-Usage(page)|
| Chip  Phy-ID              | Bus-Id        | AICore(%)   Memory-Usage(MB)  HBM-Usage(MB)        |
+===========================+===============+====================================================+
| 0     Ascend910           | OK            | 168.2       35                0    / 0             |
| 0     0                   | 0000:9D:00.0  | 0           0    / 0          19311/ 65536         |
+------------------------------------------------------------------------------------------------+
| 0     Ascend910           | OK            | -           35                0    / 0             |
| 1     1                   | 0000:9F:00.0  | 0           0    / 0          19054/ 65536         |
+===========================+===============+====================================================+
| 1     Ascend910           | OK            | 162.2       35                0    / 0             |
| 0     2                   | 0000:99:00.0  | 0           0    / 0          19308/ 65536         |
+------------------------------------------------------------------------------------------------+
| 1     Ascend910           | OK            | -           34                0    / 0             |
| 1     3                   | 0000:9B:00.0  | 0           0    / 0          19057/ 65536         |
+===========================+===============+====================================================+
| 2     Ascend910           | OK            | 159.7       35                0    / 0             |
| 0     4                   | 0000:95:00.0  | 0           0    / 0          3144 / 65536         |
+------------------------------------------------------------------------------------------------+
| 2     Ascend910           | OK            | -           35                0    / 0             |
| 1     5                   | 0000:97:00.0  | 0           0    / 0          2894 / 65536         |
+===========================+===============+====================================================+
| 3     Ascend910           | OK            | 167.3       36                0    / 0             |
| 0     6                   | 0000:91:00.0  | 0           0    / 0          3142 / 65536         |
+------------------------------------------------------------------------------------------------+
| 3     Ascend910           | OK            | -           36                0    / 0             |
| 1     7                   | 0000:93:00.0  | 0           0    / 0          2892 / 65536         |
+===========================+===============+====================================================+
| 4     Ascend910           | OK            | 169.8       35                0    / 0             |
| 0     8                   | 0000:8D:00.0  | 0           0    / 0          3144 / 65536         |
+------------------------------------------------------------------------------------------------+
| 4     Ascend910           | OK            | -           36                0    / 0             |
| 1     9                   | 0000:8F:00.0  | 0           0    / 0          2892 / 65536         |
+===========================+===============+====================================================+
| 5     Ascend910           | OK            | 170.5       35                0    / 0             |
| 0     10                  | 0000:89:00.0  | 0           0    / 0          3143 / 65536         |
+------------------------------------------------------------------------------------------------+
| 5     Ascend910           | OK            | -           35                0    / 0             |
| 1     11                  | 0000:8B:00.0  | 0           0    / 0          2891 / 65536         |
+===========================+===============+====================================================+
| 6     Ascend910           | OK            | 159.6       36                0    / 0             |
| 0     12                  | 0000:85:00.0  | 0           0    / 0          3148 / 65536         |
+------------------------------------------------------------------------------------------------+
| 6     Ascend910           | OK            | -           36                0    / 0             |
| 1     13                  | 0000:87:00.0  | 0           0    / 0          2888 / 65536         |
+===========================+===============+====================================================+
| 7     Ascend910           | OK            | 169.2       35                0    / 0             |
| 0     14                  | 0000:81:00.0  | 0           0    / 0          3139 / 65536         |
+------------------------------------------------------------------------------------------------+
| 7     Ascend910           | OK            | -           35                0    / 0             |
| 1     15                  | 0000:83:00.0  | 0           0    / 0          2897 / 65536         |
+===========================+===============+====================================================+
+---------------------------+---------------+----------------------------------------------------+
| NPU     Chip              | Process id    | Process name             | Process memory(MB)      |
+===========================+===============+====================================================+
| 0       0                 | 2934          | VLLMWorker_TP            | 16223                   |
| 0       1                 | 2935          | VLLMWorker_TP            | 16223                   |
+===========================+===============+====================================================+
| 1       0                 | 2936          | VLLMWorker_TP            | 16223                   |
| 1       1                 | 2937          | VLLMWorker_TP            | 16223                   |
+===========================+===============+====================================================+
| No running processes found in NPU 2                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 3                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 4                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 5                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 6                                                            |
+===========================+===============+====================================================+
| No running processes found in NPU 7                                                            |
+===========================+===============+====================================================+

CANN:
package_name=Ascend-cann-toolkit
version=8.3.RC2
innerversion=V100R001C23SPC002B210
compatible_version=[V100R001C15],[V100R001C18],[V100R001C19],[V100R001C20],[V100R001C21],[V100R001C23]
arch=aarch64
os=linux
path=/usr/local/Ascend/ascend-toolkit/8.3.RC2/aarch64-linux


command:

vllm serve Qwen/Qwen3-32B  --served-model-name Qwen3-32B  --tensor-parallel-size 4  --trust-remote-code  --speculative-config '{"method":"eagle3","model":"/root/.cache/modelscope/hub/models/AngelSlim/Qwen3-32B_eagle3","num_speculative_tokens":4,"draft_tensor_parallel_size":1}'  --max-num-batched-tokens 4096  --max-model-len 4096
(APIServer pid=10368) INFO 12-02 14:43:59 [serving_chat.py:139] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
(APIServer pid=10368) Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/Qwen/Qwen3-32B
(APIServer pid=10368) INFO 12-02 14:44:00 [serving_completion.py:76] Using default completion sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
(APIServer pid=10368) INFO 12-02 14:44:00 [api_server.py:1912] Starting vLLM API server 0 on http://0.0.0.0:8000
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:34] Available routes are:
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /openapi.json, Methods: HEAD, GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /docs, Methods: HEAD, GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /docs/oauth2-redirect, Methods: HEAD, GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /redoc, Methods: HEAD, GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /health, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /load, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /ping, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /ping, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /tokenize, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /detokenize, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/models, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /version, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/responses, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/responses/{response_id}, Methods: GET
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/responses/{response_id}/cancel, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/chat/completions, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/completions, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/embeddings, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /pooling, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /classify, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /score, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/score, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/audio/transcriptions, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/audio/translations, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /rerank, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v1/rerank, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /v2/rerank, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /scale_elastic_ep, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /is_scaling_elastic_ep, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /invocations, Methods: POST
(APIServer pid=10368) INFO 12-02 14:44:00 [launcher.py:42] Route: /metrics, Methods: GET
(APIServer pid=10368) INFO:     Started server process [10368]
(APIServer pid=10368) INFO:     Waiting for application startup.
(APIServer pid=10368) INFO:     Application startup complete.
[rank3]:[W1202 14:44:21.297963606 compiler_depend.ts:117] Warning: Driver Version: ▒▒{▒▒ is invalid or not supported yet. (function operator())
[rank1]:[W1202 14:44:21.298480790 compiler_depend.ts:117] Warning: Driver Version: ▒▒p▒▒▒ is invalid or not supported yet. (function operator())
[rank2]:[W1202 14:44:21.300224365 compiler_depend.ts:117] Warning: Driver Version: ▒▒▒ is invalid or not supported yet. (function operator())
[rank0]:[W1202 14:44:21.300443377 compiler_depend.ts:117] Warning: Driver Version: ▒▒p▒▒ is invalid or not supported yet. (function operator())
(Worker_TP3 pid=10941) INFO 12-02 14:44:21 [acl_graph.py:187] Replaying aclgraph
(Worker_TP2 pid=10940) INFO 12-02 14:44:21 [acl_graph.py:187] Replaying aclgraph
(Worker_TP1 pid=10939) INFO 12-02 14:44:21 [acl_graph.py:187] Replaying aclgraph
(Worker_TP0 pid=10938) INFO 12-02 14:44:21 [acl_graph.py:187] Replaying aclgraph
(APIServer pid=10368) INFO:     127.0.0.1:46416 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=10368) INFO 12-02 14:44:31 [loggers.py:127] Engine 000: Avg prompt throughput: 0.3 tokens/s, Avg generation throughput: 10.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=10368) INFO 12-02 14:44:31 [metrics.py:96] SpecDecoding metrics: Mean acceptance length: 2.83, Accepted throughput: 1.95 tokens/s, Drafted throughput: 4.25 tokens/s, Accepted: 66 tokens, Drafted: 144 tokens, Per-position acceptance rate: 0.639, 0.556, 0.389, 0.250, Avg Draft acceptance rate: 45.8%
(APIServer pid=10368) INFO 12-02 14:44:41 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=10368) INFO 12-02 14:45:51 [loggers.py:127] Engine 000: Avg prompt throughput: 0.6 tokens/s, Avg generation throughput: 9.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=10368) INFO 12-02 14:45:51 [metrics.py:96] SpecDecoding metrics: Mean acceptance length: 2.09, Accepted throughput: 0.59 tokens/s, Drafted throughput: 2.15 tokens/s, Accepted: 47 tokens, Drafted: 172 tokens, Per-position acceptance rate: 0.721, 0.279, 0.093, 0.000, Avg Draft acceptance rate: 27.3%
(APIServer pid=10368) INFO:     127.0.0.1:51582 - "POST /v1/completions HTTP/1.1" 200 OK
(APIServer pid=10368) INFO 12-02 14:46:01 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3.7 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=10368) INFO 12-02 14:46:01 [metrics.py:96] SpecDecoding metrics: Mean acceptance length: 2.00, Accepted throughput: 1.90 tokens/s, Drafted throughput: 7.60 tokens/s, Accepted: 19 tokens, Drafted: 76 tokens, Per-position acceptance rate: 1.000, 0.000, 0.000, 0.000, Avg Draft acceptance rate: 25.0%
(APIServer pid=10368) INFO 12-02 14:46:11 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%



root@vllm-0:/vllm-workspace/vllm-ascend# curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen3-32B",
        "prompt": "what is AI",
        "max_tokens": 100,
        "temperature": 0
    }'
{"id":"cmpl-d46756f480c74be29cc520d02e8dc8af","object":"text_completion","created":1764686661,"model":"Qwen3-32B","choices":[{"index":0,"text":"?\n\n: 1. 1.1.2 1.3 1.5 1.5 1.6 1.7 1.8 1.9 1.1 1.11 1.12 1.13 1.14 1.15 1.16\n1.17\n1.18\n1.19\n1.20\n1.21\n1","logprobs":null,"finish_reason":"length","stop_reason":null,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":3,"total_tokens":103,"completion_tokens":100,"prompt_tokens_details":null},"kv_transfer_params":null}root@vllm-0:/vllm-workspllm-0:/vllm-workspace/vllm-ascend# curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen3-32B",
        "prompt": "what is large language model?",
        "max_tokens": "128",
        "top_p": "0.95",
        "top_k": "40",
        "temperature": "0.0"
    }'
{"id":"cmpl-a55d887808b344a9b5e57d16c268c1d6","object":"text_completion","created":1764686744,"model":"Qwen3-32B","choices":[{"index":0,"text":" AA1llll L\n\n```\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n>\n#include <conio.h>\n#include <conio.h>\n#include <string.h>\n#include <string.h>\n#include <string.h>\n#include <string.h\n#include <string.h\n#include <string.h\n#include <string.h\n#include <string.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h\n#includestring.h","logprobs":null,"finish_reason":"length","stop_reason":null,"token_ids":null,"prompt_logprobs":null,"prompt_token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":134,"completion_tokens":128,"prompt_tokens_details":null},"kv_transfer_params":null}root@vllm-0:/vllm-workspace/vllm-ascend#

cann_version = getattr(torch.version, "cann", "")
target_device = device or self.device
use_chunked_mask = (seq_lens is None or position is None
                    or dtype is None or cann_version.startswith("8.3"))


Hardcoding the version string "8.3" in this check hurts maintainability. It is recommended to define the version condition as a named constant (e.g., MIN_CANN_VERSION_FOR_OPTIMIZED_MASK = "8.3").
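The reviewer's suggestion might look like the following sketch. The constant name comes from the review comment and the helper `use_chunked_mask` is illustrative, not the actual patch:

```python
# Illustrative constant per the review comment; the actual patch may use
# a different name or a proper version comparison instead of a prefix match.
MIN_CANN_VERSION_FOR_OPTIMIZED_MASK = "8.3"


def use_chunked_mask(cann_version, seq_lens, position, dtype):
    """Mirror of the PR's condition, with the version prefix named."""
    return (seq_lens is None or position is None or dtype is None
            or cann_version.startswith(MIN_CANN_VERSION_FOR_OPTIMIZED_MASK))


print(use_chunked_mask("8.3.RC2", [4], 0, "fp16"))  # True
print(use_chunked_mask("8.2.RC1", [4], 0, "fp16"))  # False
```

A named constant also gives a single place to update when a future CANN release changes the mask behavior again.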

Author


fixed

if target_device is None:
    raise ValueError(
        "splitfuse_attn_mask requires device for non-chunked mask")
max_seq_len = seq_lens.max().item() if seq_lens.numel() > 0 else 0


Magic number: The value 0 for max_seq_len should be defined as a constant (e.g., DEFAULT_MAX_SEQ_LEN = 0).

Author


fixed

@sunchendd sunchendd force-pushed the v0.11.0-dev branch 6 times, most recently from 12b7323 to 3388cc6 Compare December 4, 2025 12:32
@sunchendd sunchendd closed this Dec 4, 2025