
[BUG] VLLM Code Generate hangs

Open berserkr opened this issue 5 months ago • 0 comments

Describe the bug

When running the LCB code generation task (extended|lcb:codegeneration) with the vllm backend, generation appears to complete, but the process then hangs indefinitely:

(VllmWorkerProcess pid=1325213) INFO 08-22 01:44:02 [model_runner.py:1671] Graph capturing finished in 17 secs, took 0.38 GiB
(VllmWorkerProcess pid=1325212) INFO 08-22 01:44:02 [model_runner.py:1671] Graph capturing finished in 17 secs, took 0.38 GiB
[2025-08-22 01:44:02,664] [    INFO]: Graph capturing finished in 17 secs, took 0.39 GiB (model_runner.py:1671)
[2025-08-22 01:44:02,665] [    INFO]: init engine (profile, create kv cache, warmup model) took 58.78 seconds (llm_engine.py:428)
[2025-08-22 01:44:21,275] [    INFO]: --- INIT SEEDS --- (pipeline.py:258)
[2025-08-22 01:44:21,275] [    INFO]: --- LOADING TASKS --- (pipeline.py:212)
[2025-08-22 01:44:21,275] [    INFO]: Found 1 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/ifeval/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 6 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/tiny_benchmarks/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 1 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/mt_bench/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 4 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/mix_eval/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 5 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/olympiade_bench/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 1 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/hle/main.py (registry.py:142)
[2025-08-22 01:44:21,275] [    INFO]: Found 23 custom tasks in /proj/checkpoints/shared_data/envs/code_eval/lib/python3.10/site-packages/lighteval/tasks/extended/lcb/main.py (registry.py:142)
[2025-08-22 01:44:21,292] [    INFO]: livecodebench/code_generation_lite v4_v5 (lighteval_task.py:187)
[2025-08-22 01:44:21,292] [ WARNING]: Careful, the task extended|lcb:codegeneration is using evaluation data to build the few shot examples. (lighteval_task.py:260)
[2025-08-22 01:44:46,703] [    INFO]: --- RUNNING MODEL --- (pipeline.py:482)
[2025-08-22 01:44:46,703] [    INFO]: Running RequestType.GREEDY_UNTIL requests (pipeline.py:468)
[2025-08-22 01:44:46,904] [ WARNING]: You cannot select the number of dataset splits for a generative evaluation at the moment. Automatically inferring. (data.py:237)
Adding requests: 100%|██████████| 268/268 [00:00<00:00, 572.17it/s]
Processed prompts: 100%|██████████| 4288/4288 [51:08<00:00, 1.40it/s, est. speed input: 796.88 toks/s, output: 1017.20 toks/s]
Splits: 100%|██████████| 1/1 [51:09<00:00, 3069.67s/it]
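At this point the progress bars are at 100% but the process never exits. In case it helps with triage, one way to see where it is stuck would be to dump the Python stacks of the hung processes; py-spy is just one option here, not something this run used, and the worker PIDs are the ones from the log above:

pip install py-spy

# Dump the stacks of the main lighteval process and the two VllmWorkerProcess
# PIDs, to see whether the hang is in engine teardown, an NCCL collective, or
# result collection.
for pid in $(pgrep -f lighteval) 1325212 1325213; do
    py-spy dump --pid "$pid"
done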

To Reproduce

Pull from main as of today: 0e06249d1caa3c1d27a93e590929531291c9c493

#!/bin/bash

export CUDA_VISIBLE_DEVICES=0,1,2,3
export NCCL_IGNORE_DISABLED_P2P=1
export VLLM_SKIP_P2P_CHECK=1
export VLLM_WORKER_MULTIPROC_METHOD=spawn

NUM_GPUS=4
MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
MODEL_ARGS="model_name=$MODEL,dtype=bfloat16,tensor_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
TASK=aime24
OUTPUT_DIR=data/evals/$MODEL

lighteval vllm $MODEL_ARGS "extended|lcb:codegeneration|0|0" \
    --use-chat-template \
    --output-dir $OUTPUT_DIR
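If more logging would help narrow this down, the same command can be re-run with standard vLLM/NCCL debug knobs enabled. This is only a diagnostic sketch; I have not confirmed it changes the behavior:

export VLLM_LOGGING_LEVEL=DEBUG   # verbose vLLM engine logging
export NCCL_DEBUG=INFO            # log NCCL collective setup/teardown

lighteval vllm $MODEL_ARGS "extended|lcb:codegeneration|0|0" \
    --use-chat-template \
    --output-dir $OUTPUT_DIR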

Expected behavior

Results are printed to the console and saved to the output directory.

Version info

Commit: 0e06249d1caa3c1d27a93e590929531291c9c493

transformers==4.52.3
datasets==3.6.0
vllm==0.9.2
torch==2.7.0
torchaudio==2.7.0
torchvision==0.22.0
