
GPU is not being used

Open wilsonlv opened this issue 1 year ago • 3 comments

System Info

win11 python3.11 cuda 12.4

Running Xinference with Docker?

  • [ ] docker
  • [x] pip install
  • [ ] installation from source

Version info

The default (latest) version.

The command used to start Xinference

xinference-local --host 127.0.0.1 --port 9997

Reproduction

It starts up normally, but loading DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf does not use the GPU, even though torch.cuda.is_available() returns True.

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 7B
llama_model_loader: - kv 5: qwen2.block_count u32 = 28
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = deepseek-r1-qwen
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t", ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - kv 25: general.file_type u32 = 15
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 4.36 GiB (4.91 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151647 '<|EOT|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151646 '<|begin▁of▁sentence|>' is not marked as EOG
load: control token: 151643 '<|end▁of▁sentence|>' is not marked as EOG
load: control token: 151644 '<|User|>' is not marked as EOG
load: control token: 151645 '<|Assistant|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 3584
print_info: n_layer = 28
print_info: n_head = 28
print_info: n_head_kv = 4
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 7
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 18944
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 7B
print_info: model params = 7.62 B
print_info: general.name = DeepSeek R1 Distill Qwen 7B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 148848 'ÄĬ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: layer 0 assigned to device CPU
load_tensors: layer 1 assigned to device CPU
load_tensors: layer 2 assigned to device CPU
load_tensors: layer 3 assigned to device CPU
load_tensors: layer 4 assigned to device CPU
load_tensors: layer 5 assigned to device CPU
load_tensors: layer 6 assigned to device CPU
load_tensors: layer 7 assigned to device CPU
load_tensors: layer 8 assigned to device CPU
load_tensors: layer 9 assigned to device CPU
load_tensors: layer 10 assigned to device CPU
load_tensors: layer 11 assigned to device CPU
load_tensors: layer 12 assigned to device CPU
load_tensors: layer 13 assigned to device CPU
load_tensors: layer 14 assigned to device CPU
load_tensors: layer 15 assigned to device CPU
load_tensors: layer 16 assigned to device CPU
load_tensors: layer 17 assigned to device CPU
load_tensors: layer 18 assigned to device CPU
load_tensors: layer 19 assigned to device CPU
load_tensors: layer 20 assigned to device CPU
load_tensors: layer 21 assigned to device CPU
load_tensors: layer 22 assigned to device CPU
load_tensors: layer 23 assigned to device CPU
load_tensors: layer 24 assigned to device CPU
load_tensors: layer 25 assigned to device CPU
load_tensors: layer 26 assigned to device CPU
load_tensors: layer 27 assigned to device CPU
load_tensors: layer 28 assigned to device CPU
load_tensors: tensor 'token_embd.weight' (q4_K) (and 338 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
load_tensors: CPU model buffer size = 4460.45 MiB
load_all_data: no device found for buffer type CPU for async uploads
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch = 512
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 10000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 1: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 2: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 3: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 4: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 5: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 6: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 7: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 8: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 9: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 10: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 11: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 12: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 13: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 14: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 15: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 16: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 17: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 18: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 19: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 20: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 21: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 22: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 23: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 24: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 25: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 26: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: layer 27: n_embd_k_gqa = 512, n_embd_v_gqa = 512
llama_kv_cache_init: CPU KV buffer size = 224.00 MiB
llama_init_from_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_init_from_model: CPU output buffer size = 0.58 MiB
llama_init_from_model: CPU compute buffer size = 304.00 MiB
llama_init_from_model: graph nodes = 986
llama_init_from_model: graph splits = 1
CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
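
For reference: every "load_tensors: layer N assigned to device CPU" line above means the model was placed entirely on the CPU, and torch.cuda.is_available() only proves that PyTorch can see CUDA, not that llama-cpp-python was built with it. A minimal check of the llama.cpp side, assuming the installed llama-cpp-python exposes the low-level llama_supports_gpu_offload binding (it mirrors the function of the same name in llama.h):

import llama_cpp
import torch

# True here only shows PyTorch's CUDA setup works; it says nothing about llama.cpp
print(torch.cuda.is_available())
# False means the installed llama-cpp-python wheel is a CPU-only build
print(llama_cpp.llama_supports_gpu_offload())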

Expected behavior

The GPU should be used for computation.

wilsonlv avatar Feb 07 '25 13:02 wilsonlv

GGUF models use the llama.cpp backend, so this is most likely because the llama-cpp-python you installed is the CPU-only build.

Install it like this instead:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
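
Note that the command above uses POSIX shell syntax. Since this report is on Windows, a rough PowerShell equivalent would be the following sketch (--force-reinstall and --no-cache-dir keep pip from silently reusing a previously built CPU-only wheel):

$env:CMAKE_ARGS = "-DGGML_CUDA=on"
pip install llama-cpp-python --force-reinstall --no-cache-dir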

qinxuye avatar Feb 08 '25 02:02 qinxuye

I gave up on the manually registered GGUF model and switched to the Transformers model downloaded by the program itself; that works fine.
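
For anyone reproducing this workaround: a model launched in pytorch format goes through the Transformers backend, which drives the GPU via torch directly, so no llama.cpp rebuild is involved. A hedged sketch using the Xinference Python client (the model_name here is an assumption; check it against the built-in model list):

from xinference.client import Client

client = Client("http://127.0.0.1:9997")
# "pytorch" format loads through the Transformers backend, so GPU use follows torch.cuda
uid = client.launch_model(
    model_name="deepseek-r1-distill-qwen",  # assumed built-in name
    model_format="pytorch",
    model_size_in_billions=7,
)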

wilsonlv avatar Feb 10 '25 02:02 wilsonlv

GGUF models use the llama.cpp backend, so this is most likely because the llama-cpp-python you installed is the CPU-only build.

Install it like this instead:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

What about now that xllamacpp is used? On my machine, vLLM can use the GPU but cannot offload to the CPU (in which case the WebUI shouldn't be showing a GPU-layers option, should it?), so I don't have enough VRAM for it; meanwhile llama.cpp doesn't use the GPU at all, no matter how many GPU layers I set.
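
In case it helps narrow this down: when launching a GGUF model through the client, the GPU-layers setting should reach the llama.cpp backend as n_gpu_layers, and if the GPU stays idle even with every layer offloaded, the backend build itself lacks CUDA support. A hedged sketch, assuming extra keyword arguments to launch_model are forwarded to the backend:

from xinference.client import Client

client = Client("http://127.0.0.1:9997")
uid = client.launch_model(
    model_name="deepseek-r1-distill-qwen",  # assumed built-in name
    model_format="ggufv2",
    model_size_in_billions=7,
    quantization="Q4_K_M",
    n_gpu_layers=-1,  # llama.cpp convention: -1 offloads all layers; only effective with a CUDA build
)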

ZhangTianrong avatar Aug 10 '25 15:08 ZhangTianrong