No GPU usage: VRAM is occupied but only the CPU is working
Good morning all! I am running dual NVIDIA RTX 3090s at x8/x8 with NVLink, a 7950X3D, and 128 GB of RAM, yet only the CPU is being used.
Configuration section of my Python script:
```python
import os

from llama_cpp import Llama

# --- Configuration ---
MODEL_PATH = "/models/DeepSeek-V3-Q2_K_XS/DeepSeek-V3-Q2_K_XS-00001-of-00005.gguf"
N_GPU_LAYERS = 12   # Reduce the number of layers offloaded to the GPUs
N_CTX = 512         # Reduce the context window size
TEMPERATURE = 0.7
TOP_P = 0.95
MAX_TOKENS = 256

# Make both GPUs visible; set this before the model is loaded.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# --- Initialize Llama ---
llm = Llama(
    model_path=MODEL_PATH,
    n_gpu_layers=N_GPU_LAYERS,
    n_ctx=N_CTX,
    tensor_split=[0.5, 0.5],  # split the offloaded layers evenly across the two GPUs
    use_cublas=True,          # note: CUDA support comes from how llama-cpp-python was built, not from this flag
    use_mmap=True,
    verbose=True,
)
```
The rest of the code is not relevant; it is just a basic chat loop around the model. As you can see, the model loads into VRAM with only 12 layers offloaded, which is the most that fits. If I try to offload more layers across the two GPUs, I get the following error:
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 101554.98 MiB on device 0: cudaMalloc failed: out of memory
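For context, here is a rough back-of-the-envelope estimate (my own numbers, taken from the file size and layer count that llama.cpp reports in the logs below) of why roughly 12 layers is the ceiling:

```python
# Rough estimate only; assumes the 61 repeating layers are all about the same size.
model_size_mib = 206.05 * 1024   # Q2_K file size reported by llama.cpp
n_layers = 61                    # deepseek2.block_count
per_layer_mib = model_size_mib / n_layers        # ~3459 MiB per layer

vram_per_gpu_mib = 24 * 1024     # RTX 3090
overhead_mib = 1400 + 240        # approx. compute buffer + KV cache per GPU (from the logs)

layers_per_gpu = int((vram_per_gpu_mib - overhead_mib) / per_layer_mib)
print(layers_per_gpu)            # -> 6 layers per GPU
print(2 * layers_per_gpu)        # -> ~12 layers across both 3090s
```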
Tuning it, I found that 12 is the maximum number of layers I can offload and still load the model. When I run it, resource consumption looks like this:
`llama-1 | Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes llama-1 | Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes llama-1 | llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23590 MiB free llama-1 | llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23858 MiB free llama-1 | llama_model_loader: additional 4 GGUFs metadata loaded. llama-1 | llama_model_loader: loaded meta data with 46 key-value pairs and 1025 tensors from /models/DeepSeek-V3-Q2_K_XS/DeepSeek-V3-Q2_K_XS-00001-of-00005.gguf (version GGUF V3 (latest)) llama-1 | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama-1 | llama_model_loader: - kv 0: general.architecture str = deepseek2 llama-1 | llama_model_loader: - kv 1: general.type str = model llama-1 | llama_model_loader: - kv 2: general.name str = DeepSeek V3 BF16 llama-1 | llama_model_loader: - kv 3: general.size_label str = 256x20B llama-1 | llama_model_loader: - kv 4: deepseek2.block_count u32 = 61 llama-1 | llama_model_loader: - kv 5: deepseek2.context_length u32 = 163840 llama-1 | llama_model_loader: - kv 6: deepseek2.embedding_length u32 = 7168 llama-1 | llama_model_loader: - kv 7: deepseek2.feed_forward_length u32 = 18432 llama-1 | llama_model_loader: - kv 8: deepseek2.attention.head_count u32 = 128 llama-1 | llama_model_loader: - kv 9: deepseek2.attention.head_count_kv u32 = 128 llama-1 | llama_model_loader: - kv 10: deepseek2.rope.freq_base f32 = 10000.000000 llama-1 | llama_model_loader: - kv 11: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama-1 | llama_model_loader: - kv 12: deepseek2.expert_used_count u32 = 8 llama-1 | llama_model_loader: - kv 13: general.file_type u32 = 10 llama-1 | llama_model_loader: - kv 14: deepseek2.leading_dense_block_count u32 = 3 llama-1 | llama_model_loader: - kv 15: deepseek2.vocab_size u32 = 129280 llama-1 | llama_model_loader: - kv 16: deepseek2.attention.q_lora_rank u32 = 1536 llama-1 | llama_model_loader: - kv 17: deepseek2.attention.kv_lora_rank u32 = 512 llama-1 | llama_model_loader: - kv 18: deepseek2.attention.key_length u32 = 192 llama-1 | llama_model_loader: - kv 19: deepseek2.attention.value_length u32 = 128 llama-1 | llama_model_loader: - kv 20: deepseek2.expert_feed_forward_length u32 = 2048 llama-1 | llama_model_loader: - kv 21: deepseek2.expert_count u32 = 256 llama-1 | llama_model_loader: - kv 22: deepseek2.expert_shared_count u32 = 1 llama-1 | llama_model_loader: - kv 23: deepseek2.expert_weights_scale f32 = 2.500000 llama-1 | llama_model_loader: - kv 24: deepseek2.expert_weights_norm bool = true llama-1 | llama_model_loader: - kv 25: deepseek2.expert_gating_func u32 = 2 llama-1 | llama_model_loader: - kv 26: deepseek2.rope.dimension_count u32 = 64 llama-1 | llama_model_loader: - kv 27: deepseek2.rope.scaling.type str = yarn llama-1 | llama_model_loader: - kv 28: deepseek2.rope.scaling.factor f32 = 40.000000 llama-1 | llama_model_loader: - kv 29: deepseek2.rope.scaling.original_context_length u32 = 4096 llama-1 | llama_model_loader: - kv 30: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000 llama-1 | llama_model_loader: - kv 31: tokenizer.ggml.model str = gpt2 llama-1 | llama_model_loader: - kv 32: tokenizer.ggml.pre str = deepseek-v3 llama-1 | Exception ignored on calling ctypes callback function: <function llama_log_callback at 0x76f6193dfd00> llama-1 | Traceback (most recent call last): llama-1 | File "/app/llama_cpp/_logger.py", line 39, in 
llama_log_callback llama-1 | print(text.decode("utf-8"), end="", flush=True, file=sys.stderr) llama-1 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0xef in position 128: invalid continuation byte llama-1 | llama_model_loader: - kv 34: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama-1 | llama_model_loader: - kv 35: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e... llama-1 | llama_model_loader: - kv 36: tokenizer.ggml.bos_token_id u32 = 0 llama-1 | llama_model_loader: - kv 37: tokenizer.ggml.eos_token_id u32 = 1 llama-1 | llama_model_loader: - kv 38: tokenizer.ggml.padding_token_id u32 = 1 llama-1 | llama_model_loader: - kv 39: tokenizer.ggml.add_bos_token bool = true llama-1 | llama_model_loader: - kv 40: tokenizer.ggml.add_eos_token bool = false llama-1 | llama_model_loader: - kv 41: tokenizer.chat_template str = {% if not add_generation_prompt is de... llama-1 | llama_model_loader: - kv 42: general.quantization_version u32 = 2 llama-1 | llama_model_loader: - kv 43: split.no u16 = 0 llama-1 | llama_model_loader: - kv 44: split.count u16 = 5 llama-1 | llama_model_loader: - kv 45: split.tensors.count i32 = 1025 llama-1 | llama_model_loader: - type f32: 361 tensors llama-1 | llama_model_loader: - type q2_K: 662 tensors llama-1 | llama_model_loader: - type q4_K: 1 tensors llama-1 | llama_model_loader: - type q6_K: 1 tensors llama-1 | print_info: file format = GGUF V3 (latest) llama-1 | print_info: file type = Q2_K - Medium llama-1 | print_info: file size = 206.05 GiB (2.64 BPW) llama-1 | init_tokenizer: initializing tokenizer for type 2 llama-1 | load: control token: 128813 '<|tool▁output▁end|>' is not marked as EOG llama-1 | load: control token: 128812 '<|tool▁output▁begin|>' is not marked as EOG llama-1 | load: control token: 128811 '<|tool▁outputs▁end|>' is not marked as EOG llama-1 | load: control token: 128810 '<|tool▁outputs▁begin|>' is not marked as EOG
'<|place▁holder▁no▁355|>' is not marked as EOG llama-1 | load: control token: 128382 '<|place▁holder▁no▁382|>' is not marked as EOG llama-1 | load: control token: 128520 '<|place▁holder▁no▁520|>' is not marked as EOG llama-1 | load: control token: 128040 '<|place▁holder▁no▁40|>' is not marked as EOG llama-1 | load: control token: 128814 '<|tool▁sep|>' is not marked as EOG llama-1 | load: control token: 128586 '<|place▁holder▁no▁586|>' is not marked as EOG llama-1 | load: control token: 128151 '<|place▁holder▁no▁151|>' is not marked as EOG llama-1 | load: control token: 128388 '<|place▁holder▁no▁388|>' is not marked as EOG llama-1 | load: control token: 128743 '<|place▁holder▁no▁743|>' is not marked as EOG llama-1 | load: control token: 128374 '<|place▁holder▁no▁374|>' is not marked as EOG llama-1 | load: control token: 128083 '<|place▁holder▁no▁83|>' is not marked as EOG llama-1 | load: control token: 128775 '<|place▁holder▁no▁775|>' is not marked as EOG llama-1 | load: control token: 128363 '<|place▁holder▁no▁363|>' is not marked as EOG llama-1 | load: control token: 128432 '<|place▁holder▁no▁432|>' is not marked as EOG llama-1 | load: control token: 128809 '<|tool▁call▁end|>' is not marked as EOG llama-1 | load: control token: 128726 '<|place▁holder▁no▁726|>' is not marked as EOG llama-1 | load: control token: 128351 '<|place▁holder▁no▁351|>' is not marked as EOG llama-1 | load: control token: 128214 '<|place▁holder▁no▁214|>' is not marked as EOG llama-1 | load: control token: 128604 '<|place▁holder▁no▁604|>' is not marked as EOG llama-1 | load: control token: 128314 '<|place▁holder▁no▁314|>' is not marked as EOG llama-1 | load: control token: 128644 '<|place▁holder▁no▁644|>' is not marked as EOG llama-1 | load: control token: 128241 '<|place▁holder▁no▁241|>' is not marked as EOG llama-1 | load: control token: 128104 '<|place▁holder▁no▁104|>' is not marked as EOG llama-1 | load: control token: 128702 '<|place▁holder▁no▁702|>' is not marked as EOG llama-1 | load: control token: 128000 '<|place▁holder▁no▁0|>' is not marked as EOG llama-1 | load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llama-1 | load: special tokens cache size = 818 llama-1 | load: token to piece cache size = 0.8223 MB llama-1 | print_info: arch = deepseek2 llama-1 | print_info: vocab_only = 0 llama-1 | print_info: n_ctx_train = 163840 llama-1 | print_info: n_embd = 7168 llama-1 | print_info: n_layer = 61 llama-1 | print_info: n_head = 128 llama-1 | print_info: n_head_kv = 128 llama-1 | print_info: n_rot = 64 llama-1 | print_info: n_swa = 0 llama-1 | print_info: n_embd_head_k = 192 llama-1 | print_info: n_embd_head_v = 128 llama-1 | print_info: n_gqa = 1 llama-1 | print_info: n_embd_k_gqa = 24576 llama-1 | print_info: n_embd_v_gqa = 16384 llama-1 | print_info: f_norm_eps = 0.0e+00 llama-1 | print_info: f_norm_rms_eps = 1.0e-06 llama-1 | print_info: f_clamp_kqv = 0.0e+00 llama-1 | print_info: f_max_alibi_bias = 0.0e+00 llama-1 | print_info: f_logit_scale = 0.0e+00 llama-1 | print_info: n_ff = 18432 llama-1 | print_info: n_expert = 256 llama-1 | print_info: n_expert_used = 8 llama-1 | print_info: causal attn = 1 llama-1 | print_info: pooling type = 0 llama-1 | print_info: rope type = 0 llama-1 | print_info: rope scaling = yarn llama-1 | print_info: freq_base_train = 10000.0 llama-1 | print_info: freq_scale_train = 0.025 llama-1 | print_info: n_ctx_orig_yarn = 4096 llama-1 | print_info: rope_finetuned = unknown llama-1 | print_info: ssm_d_conv = 0 llama-1 | print_info: ssm_d_inner = 0 
llama-1 | print_info: ssm_d_state = 0 llama-1 | print_info: ssm_dt_rank = 0 llama-1 | print_info: ssm_dt_b_c_rms = 0 llama-1 | print_info: model type = 671B llama-1 | print_info: model params = 671.03 B llama-1 | print_info: general.name = DeepSeek V3 BF16 llama-1 | print_info: n_layer_dense_lead = 3 llama-1 | print_info: n_lora_q = 1536 llama-1 | print_info: n_lora_kv = 512 llama-1 | print_info: n_ff_exp = 2048 llama-1 | print_info: n_expert_shared = 1 llama-1 | print_info: expert_weights_scale = 2.5 llama-1 | print_info: expert_weights_norm = 1 llama-1 | print_info: expert_gating_func = sigmoid llama-1 | print_info: rope_yarn_log_mul = 0.1000 llama-1 | print_info: vocab type = BPE llama-1 | print_info: n_vocab = 129280 llama-1 | print_info: n_merges = 127741 llama-1 | print_info: BOS token = 0 '<|begin▁of▁sentence|>' llama-1 | print_info: EOS token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: EOT token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: PAD token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: LF token = 131 'Ä' llama-1 | print_info: FIM PRE token = 128801 '<|fim▁begin|>' llama-1 | print_info: FIM SUF token = 128800 '<|fim▁hole|>' llama-1 | print_info: FIM MID token = 128802 '<|fim▁end|>' llama-1 | print_info: EOG token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: max token length = 256 llama-1 | load_tensors: layer 0 assigned to device CPU llama-1 | load_tensors: layer 1 assigned to device CPU llama-1 | load_tensors: layer 2 assigned to device CPU llama-1 | load_tensors: layer 3 assigned to device CPU llama-1 | load_tensors: layer 4 assigned to device CPU llama-1 | load_tensors: layer 5 assigned to device CPU llama-1 | load_tensors: layer 6 assigned to device CPU llama-1 | load_tensors: layer 7 assigned to device CPU llama-1 | load_tensors: layer 8 assigned to device CPU llama-1 | load_tensors: layer 9 assigned to device CPU llama-1 | load_tensors: layer 10 assigned to device CPU llama-1 | load_tensors: layer 11 assigned to device CPU llama-1 | load_tensors: layer 12 assigned to device CPU llama-1 | load_tensors: layer 13 assigned to device CPU llama-1 | load_tensors: layer 14 assigned to device CPU llama-1 | load_tensors: layer 15 assigned to device CPU llama-1 | load_tensors: layer 16 assigned to device CPU llama-1 | load_tensors: layer 17 assigned to device CPU llama-1 | load_tensors: layer 18 assigned to device CPU llama-1 | load_tensors: layer 19 assigned to device CPU llama-1 | load_tensors: layer 20 assigned to device CPU llama-1 | load_tensors: layer 21 assigned to device CPU llama-1 | load_tensors: layer 22 assigned to device CPU llama-1 | load_tensors: layer 23 assigned to device CPU llama-1 | load_tensors: layer 24 assigned to device CPU llama-1 | load_tensors: layer 25 assigned to device CPU llama-1 | load_tensors: layer 26 assigned to device CPU llama-1 | load_tensors: layer 27 assigned to device CPU llama-1 | load_tensors: layer 28 assigned to device CPU llama-1 | load_tensors: layer 29 assigned to device CPU llama-1 | load_tensors: layer 30 assigned to device CPU llama-1 | load_tensors: layer 31 assigned to device CPU llama-1 | load_tensors: layer 32 assigned to device CPU llama-1 | load_tensors: layer 33 assigned to device CPU llama-1 | load_tensors: layer 34 assigned to device CPU llama-1 | load_tensors: layer 35 assigned to device CPU llama-1 | load_tensors: layer 36 assigned to device CPU llama-1 | load_tensors: layer 37 assigned to device CPU llama-1 | load_tensors: layer 38 assigned to device CPU llama-1 | load_tensors: layer 39 assigned to device 
CPU llama-1 | load_tensors: layer 40 assigned to device CPU llama-1 | load_tensors: layer 41 assigned to device CPU llama-1 | load_tensors: layer 42 assigned to device CPU llama-1 | load_tensors: layer 43 assigned to device CPU llama-1 | load_tensors: layer 44 assigned to device CPU llama-1 | load_tensors: layer 45 assigned to device CPU llama-1 | load_tensors: layer 46 assigned to device CPU llama-1 | load_tensors: layer 47 assigned to device CPU llama-1 | load_tensors: layer 48 assigned to device CPU llama-1 | load_tensors: layer 49 assigned to device CUDA0 llama-1 | load_tensors: layer 50 assigned to device CUDA0 llama-1 | load_tensors: layer 51 assigned to device CUDA0 llama-1 | load_tensors: layer 52 assigned to device CUDA0 llama-1 | load_tensors: layer 53 assigned to device CUDA0 llama-1 | load_tensors: layer 54 assigned to device CUDA0 llama-1 | load_tensors: layer 55 assigned to device CUDA1 llama-1 | load_tensors: layer 56 assigned to device CUDA1 llama-1 | load_tensors: layer 57 assigned to device CUDA1 llama-1 | load_tensors: layer 58 assigned to device CUDA1 llama-1 | load_tensors: layer 59 assigned to device CUDA1 llama-1 | load_tensors: layer 60 assigned to device CUDA1 llama-1 | load_tensors: layer 61 assigned to device CPU llama-1 | load_tensors: tensor 'token_embd.weight' (q4_K) (and 820 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead llama-1 | load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llama-1 | load: special tokens cache size = 818 llama-1 | load: token to piece cache size = 0.8223 MB llama-1 | print_info: arch = deepseek2 llama-1 | print_info: vocab_only = 0 llama-1 | print_info: n_ctx_train = 163840 llama-1 | print_info: n_embd = 7168 llama-1 | print_info: n_layer = 61 llama-1 | print_info: n_head = 128 llama-1 | print_info: n_head_kv = 128 llama-1 | print_info: n_rot = 64 llama-1 | print_info: n_swa = 0 llama-1 | print_info: n_embd_head_k = 192 llama-1 | print_info: n_embd_head_v = 128 llama-1 | print_info: n_gqa = 1 llama-1 | print_info: n_embd_k_gqa = 24576 llama-1 | print_info: n_embd_v_gqa = 16384 llama-1 | print_info: f_norm_eps = 0.0e+00 llama-1 | print_info: f_norm_rms_eps = 1.0e-06 llama-1 | print_info: f_clamp_kqv = 0.0e+00 llama-1 | print_info: f_max_alibi_bias = 0.0e+00 llama-1 | print_info: f_logit_scale = 0.0e+00 llama-1 | print_info: n_ff = 18432 llama-1 | print_info: n_expert = 256 llama-1 | print_info: n_expert_used = 8 llama-1 | print_info: causal attn = 1 llama-1 | print_info: pooling type = 0 llama-1 | print_info: rope type = 0 llama-1 | print_info: rope scaling = yarn llama-1 | print_info: freq_base_train = 10000.0 llama-1 | print_info: freq_scale_train = 0.025 llama-1 | print_info: n_ctx_orig_yarn = 4096 llama-1 | print_info: rope_finetuned = unknown llama-1 | print_info: ssm_d_conv = 0 llama-1 | print_info: ssm_d_inner = 0 llama-1 | print_info: ssm_d_state = 0 llama-1 | print_info: ssm_dt_rank = 0 llama-1 | print_info: ssm_dt_b_c_rms = 0 llama-1 | print_info: model type = 671B llama-1 | print_info: model params = 671.03 B llama-1 | print_info: general.name = DeepSeek V3 BF16 llama-1 | print_info: n_layer_dense_lead = 3 llama-1 | print_info: n_lora_q = 1536 llama-1 | print_info: n_lora_kv = 512 llama-1 | print_info: n_ff_exp = 2048 llama-1 | print_info: n_expert_shared = 1 llama-1 | print_info: expert_weights_scale = 2.5 llama-1 | print_info: expert_weights_norm = 1 llama-1 | print_info: expert_gating_func = sigmoid llama-1 | print_info: rope_yarn_log_mul = 0.1000 
llama-1 | print_info: vocab type = BPE llama-1 | print_info: n_vocab = 129280 llama-1 | print_info: n_merges = 127741 llama-1 | print_info: BOS token = 0 '<|begin▁of▁sentence|>' llama-1 | print_info: EOS token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: EOT token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: PAD token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: LF token = 131 'Ä' llama-1 | print_info: FIM PRE token = 128801 '<|fim▁begin|>' llama-1 | print_info: FIM SUF token = 128800 '<|fim▁hole|>' llama-1 | print_info: FIM MID token = 128802 '<|fim▁end|>' llama-1 | print_info: EOG token = 1 '<|end▁of▁sentence|>' llama-1 | print_info: max token length = 256 llama-1 | load_tensors: layer 0 assigned to device CPU llama-1 | load_tensors: layer 1 assigned to device CPU llama-1 | load_tensors: layer 2 assigned to device CPU llama-1 | load_tensors: layer 3 assigned to device CPU llama-1 | load_tensors: layer 4 assigned to device CPU llama-1 | load_tensors: layer 5 assigned to device CPU llama-1 | load_tensors: layer 6 assigned to device CPU llama-1 | load_tensors: layer 7 assigned to device CPU llama-1 | load_tensors: layer 8 assigned to device CPU llama-1 | load_tensors: layer 9 assigned to device CPU llama-1 | load_tensors: layer 10 assigned to device CPU llama-1 | load_tensors: layer 11 assigned to device CPU llama-1 | load_tensors: layer 12 assigned to device CPU llama-1 | load_tensors: layer 13 assigned to device CPU llama-1 | load_tensors: layer 14 assigned to device CPU llama-1 | load_tensors: layer 15 assigned to device CPU llama-1 | load_tensors: layer 16 assigned to device CPU llama-1 | load_tensors: layer 17 assigned to device CPU llama-1 | load_tensors: layer 18 assigned to device CPU llama-1 | load_tensors: layer 19 assigned to device CPU llama-1 | load_tensors: layer 20 assigned to device CPU llama-1 | load_tensors: layer 21 assigned to device CPU llama-1 | load_tensors: layer 22 assigned to device CPU llama-1 | load_tensors: layer 23 assigned to device CPU llama-1 | load_tensors: layer 24 assigned to device CPU llama-1 | load_tensors: layer 25 assigned to device CPU llama-1 | load_tensors: layer 26 assigned to device CPU llama-1 | load_tensors: layer 27 assigned to device CPU llama-1 | load_tensors: layer 28 assigned to device CPU llama-1 | load_tensors: layer 29 assigned to device CPU llama-1 | load_tensors: layer 30 assigned to device CPU llama-1 | load_tensors: layer 31 assigned to device CPU llama-1 | load_tensors: layer 32 assigned to device CPU llama-1 | load_tensors: layer 33 assigned to device CPU llama-1 | load_tensors: layer 34 assigned to device CPU llama-1 | load_tensors: layer 35 assigned to device CPU llama-1 | load_tensors: layer 36 assigned to device CPU llama-1 | load_tensors: layer 37 assigned to device CPU llama-1 | load_tensors: layer 38 assigned to device CPU llama-1 | load_tensors: layer 39 assigned to device CPU llama-1 | load_tensors: layer 40 assigned to device CPU llama-1 | load_tensors: layer 41 assigned to device CPU llama-1 | load_tensors: layer 42 assigned to device CPU llama-1 | load_tensors: layer 43 assigned to device CPU llama-1 | load_tensors: layer 44 assigned to device CPU llama-1 | load_tensors: layer 45 assigned to device CPU llama-1 | load_tensors: layer 46 assigned to device CPU llama-1 | load_tensors: layer 47 assigned to device CPU llama-1 | load_tensors: layer 48 assigned to device CPU llama-1 | load_tensors: layer 49 assigned to device CUDA0 llama-1 | load_tensors: layer 50 assigned to device CUDA0 llama-1 | load_tensors: 
layer 51 assigned to device CUDA0 llama-1 | load_tensors: layer 52 assigned to device CUDA0 llama-1 | load_tensors: layer 53 assigned to device CUDA0 llama-1 | load_tensors: layer 54 assigned to device CUDA0 llama-1 | load_tensors: layer 55 assigned to device CUDA1 llama-1 | load_tensors: layer 56 assigned to device CUDA1 llama-1 | load_tensors: layer 57 assigned to device CUDA1 llama-1 | load_tensors: layer 58 assigned to device CUDA1 llama-1 | load_tensors: layer 59 assigned to device CUDA1 llama-1 | load_tensors: layer 60 assigned to device CUDA1 llama-1 | load_tensors: layer 61 assigned to device CPU llama-1 | load_tensors: tensor 'token_embd.weight' (q4_K) (and 820 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead llama-1 | load_tensors: offloading 12 repeating layers to GPU llama-1 | load_tensors: offloaded 12/62 layers to GPU llama-1 | load_tensors: CUDA0 model buffer size = 21644.37 MiB llama-1 | load_tensors: CUDA1 model buffer size = 21644.37 MiB llama-1 | load_tensors: CPU_Mapped model buffer size = 42690.54 MiB llama-1 | load_tensors: CPU_Mapped model buffer size = 42108.14 MiB llama-1 | load_tensors: CPU_Mapped model buffer size = 42049.55 MiB llama-1 | load_tensors: CPU_Mapped model buffer size = 40861.93 MiB llama-1 | llama_init_from_model: n_seq_max = 1 llama-1 | llama_init_from_model: n_ctx = 512 llama-1 | llama_init_from_model: n_ctx_per_seq = 512 llama-1 | llama_init_from_model: n_batch = 512 llama-1 | llama_init_from_model: n_ubatch = 512 llama-1 | llama_init_from_model: flash_attn = 0 llama-1 | llama_init_from_model: freq_base = 10000.0 llama-1 | llama_init_from_model: freq_scale = 0.025 llama-1 | llama_init_from_model: n_ctx_per_seq (512) < n_ctx_train (163840) -- the full capacity of the model will not be utilized llama-1 | llama_kv_cache_init: kv_size = 512, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0 llama-1 | llama_kv_cache_init: layer 0: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 1: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 2: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 3: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 4: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 5: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 6: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 7: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 8: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 9: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 10: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 11: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 12: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 13: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 14: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 15: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 16: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 17: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 18: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 19: 
n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 20: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 21: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 22: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 23: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 24: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 25: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 26: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 27: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 28: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 29: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 30: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 31: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 32: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 33: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 34: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 35: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 36: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 37: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 38: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 39: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 40: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 41: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 42: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 43: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 44: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 45: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 46: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 47: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 48: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 49: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 50: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 51: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 52: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 53: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 54: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 55: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 56: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 57: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 58: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 59: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: layer 60: n_embd_k_gqa = 24576, n_embd_v_gqa = 16384 llama-1 | llama_kv_cache_init: CUDA0 KV buffer size = 240.00 MiB 
llama-1 | llama_kv_cache_init: CUDA1 KV buffer size = 240.00 MiB llama-1 | llama_kv_cache_init: CPU KV buffer size = 1960.00 MiB llama-1 | llama_init_from_model: KV self size = 2440.00 MiB, K (f16): 1464.00 MiB, V (f16): 976.00 MiB llama-1 | llama_init_from_model: CPU output buffer size = 0.49 MiB llama-1 | llama_init_from_model: CUDA0 compute buffer size = 1398.75 MiB llama-1 | llama_init_from_model: CUDA1 compute buffer size = 283.00 MiB llama-1 | llama_init_from_model: CUDA_Host compute buffer size = 81.01 MiB llama-1 | llama_init_from_model: graph nodes = 5025 llama-1 | llama_init_from_model: graph splits = 921 (with bs=512), 4 (with bs=1) llama-1 | CUDA : ARCHS = 520,610,700,750 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | `
These are my logs (a summary). While it is running, htop shows the CPU fully loaded, while nvtop shows the VRAM allocated but the GPUs essentially idle.
Results:
```
llama_perf_context_print:        load time =   37126.23 ms
llama_perf_context_print: prompt eval time =   37126.15 ms /    21 tokens ( 1767.91 ms per token,     0.57 tokens per second)
llama_perf_context_print:        eval time =   33660.92 ms /    35 runs   (  961.74 ms per token,     1.04 tokens per second)
llama_perf_context_print:       total time =   70814.69 ms /    56 tokens
```
Same issue. Have you found a solution?
> Same issue. Have you found a solution?
Trying to tune it, I got it working yesterday, so that's progress. The next step is making it run well on my hardware.
Any responses? Is the VRAM the bottleneck?
try this:
```
pip uninstall -y llama-cpp-python
FORCE_CMAKE="1" CMAKE_ARGS="-DGGML_CUDA=on" pip install --upgrade --no-cache-dir --force-reinstall -v --prefer-binary llama-cpp-python
```
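After the reinstall, it is worth sanity-checking that the wheel was really built with CUDA before loading a model. Something along these lines should work, though the exact helper functions can differ between llama-cpp-python versions:

```python
import llama_cpp

# True only if the library was compiled with GPU offload support (e.g. CUDA).
print(llama_cpp.llama_supports_gpu_offload())

# The system-info string should list the enabled backends and features.
print(llama_cpp.llama_print_system_info().decode("utf-8", errors="replace"))
```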
Same issue with Qwen 72B.
```python
model_config = {
    "path": Path("/home/silvacarl/Desktop/models/llama-cmd-claude-q5_K_M.gguf").absolute(),
    "n_gpu_layers": -1,
    "n_ctx": 2048,
    "n_batch": 512,
    "chat_format": "llama-2",
    "verbose": False,
}
```
Set `"n_gpu_layers": -1`.
I will give that model a try, since R1's 671B parameters are too big for my machine. If I set n_gpu_layers to -1 on this model it will run out of VRAM, since I don't have enough to fit that many layers.
> try this:
> ```
> pip uninstall -y llama-cpp-python
> FORCE_CMAKE="1" CMAKE_ARGS="-DGGML_CUDA=on" pip install --upgrade --no-cache-dir --force-reinstall -v --prefer-binary llama-cpp-python
> ```
I will give it a try before switching to llama-cmd-claude-q5_K_M as you suggested, when I get my hands back on the project this evening.