CUDA error 700 : an illegal memory access was encountered
RTX 3090, Windows 11, CUDA 12.3.
Same result with WSL2 or native Windows.
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6
// snip all the tensor stuff
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 100000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q8_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 7.17 GiB (8.50 BPW)
llm_load_print_meta: general.name = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
This looks like an error coming from llama.cpp itself, rather than LLamaSharp. Have you tried this model with llama.cpp directly to confirm if you get the same error?
I have compiled llama.cpp with CUDA support and it works. I've tried it with a few different 7b models that work with llama.cpp but give this error with LlamaSharp. I've tried sending the same prompts to llama.cpp and that also works. And to make matters more confusing, it started working for a bit, then it started failing again.
Could you get a stack trace from the exception? That'll tell us what C# code was running when it crashed.
I'm getting this too, and it hard-crashes the host app even with try/catch everywhere. In my case it's on Apple Silicon.
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 4096.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/Users/jameshancock/Repos/TheDailyFactum/Server/Tools/Chat/bin/Debug/net8.0/runtimes/osx-arm64/native/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 10922.67 MiB
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: compute buffer total size = 2139.07 MiB
llama_new_context_with_model: max tensor size = 102.54 MiB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 4893.70 MiB, ( 4895.08 / 10922.67)
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 4096.02 MiB, ( 8991.09 / 10922.67)
ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 2136.02 MiB, (11127.11 / 10922.67)
ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /Users/runner/work/LLamaSharp/LLamaSharp/ggml-metal.m:1611: false
The program '[26619] Chat.dll' has exited with code 0 (0x0).
It should have fallen back automatically to CPU and swapped like crazy.
command buffer 0 failed with status 5 seems to indicate an out-of-memory error (ref: https://github.com/ggerganov/llama.cpp/issues/2048).
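For context, the log above shows ~4.9 GiB of weights plus a 4 GiB KV cache (n_ctx = 32768) against a ~10.9 GiB recommended working set, so an allocation failure isn't surprising. A minimal sketch of dialling that down from the C# side, assuming the ModelParams / LLamaWeights surface of a recent LLamaSharp release (the model path is a placeholder):

```csharp
using LLama;
using LLama.Common;

// Sketch only: assumes the ModelParams / LLamaWeights API of a recent
// LLamaSharp release; the model path is a placeholder.
var parameters = new ModelParams("path/to/model-q8_0.gguf")
{
    // 4k context instead of 32k cuts the KV cache from ~4 GiB to ~0.5 GiB
    ContextSize = 4096,
    // offload fewer layers (or 0) if weights + KV cache still don't fit on the GPU
    GpuLayerCount = 24,
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
```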
Right. But the actual issue here is that llama.cpp errors crash any LLamaSharp-based .NET application to the desktop. We can't handle these errors.
And beyond the fact that they can't be handled, LLamaSharp often can't fall back from GPU to CPU when memory errors like this occur, which other systems such as LM Desktop manage just fine.
The result is a doubly brittle system that is not deployable outside of very tightly controlled environments.
Unfortunately I don't think there's any way we can handle a GGML_ASSERT. It's defined here to call abort(), which is about as fatal as it gets!
According to Microsoft's docs, the best way to work around abort() is to run the code in a separate process, spun up from C#, before calling into the C++ library.
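For reference, a rough parent-side sketch of that pattern (the "LlmWorker" executable name, its "--cpu-only" flag, and the stdin/stdout protocol are all invented for illustration): the inference runs in a child process, so an abort() inside llama.cpp only takes down that child, and the caller can retry, e.g. on CPU.

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

// Sketch only: "LlmWorker" is a hypothetical companion executable that hosts
// LLamaSharp; "--cpu-only" is a made-up flag it could honour. The point is that
// a GGML_ASSERT / abort() inside llama.cpp kills only the worker, not this process.
static async Task<string?> RunWorkerAsync(string prompt, bool cpuOnly)
{
    var psi = new ProcessStartInfo
    {
        FileName = "LlmWorker",
        Arguments = cpuOnly ? "--cpu-only" : "",
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        UseShellExecute = false,
    };

    using var worker = Process.Start(psi)!;
    await worker.StandardInput.WriteLineAsync(prompt);
    worker.StandardInput.Close();

    string output = await worker.StandardOutput.ReadToEndAsync();
    await worker.WaitForExitAsync();

    // An abort() shows up here as a non-zero exit code.
    return worker.ExitCode == 0 ? output : null;
}

// Try the GPU-enabled worker first; fall back to a CPU-only worker if it crashed.
var reply = await RunWorkerAsync("Hello", cpuOnly: false)
         ?? await RunWorkerAsync("Hello", cpuOnly: true);
```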
Yep that would be the only way to handle it (an abort() just destroys the process, with no way to recover).
That's not something LLamaSharp does internally at the moment (and personally I would say we're unlikely to, remaining just a wrapper around llama.cpp).
Instead, imo, the two ways to handle this would be at a higher level (load LLamaSharp in a separate process and interact with it, as sketched below) and at a lower level (contact the llama.cpp team and ask them to use a recoverable kind of error detection where possible).
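To make the higher-level option concrete, here's a rough sketch of what such a worker process could look like, paired with the parent sketch above. Again, the "--cpu-only" flag is invented, and the ModelParams / LLamaWeights / InteractiveExecutor usage assumes a recent LLamaSharp release:

```csharp
using System;
using System.Linq;
using LLama;
using LLama.Common;

// Hypothetical "LlmWorker" program: all LLamaSharp / llama.cpp work lives here,
// so an abort() kills only this process. The parent inspects the exit code and
// decides whether to retry (e.g. with --cpu-only).
bool cpuOnly = args.Contains("--cpu-only");

var parameters = new ModelParams("path/to/model-q8_0.gguf")
{
    ContextSize = 4096,
    GpuLayerCount = cpuOnly ? 0 : 32, // 0 = no GPU offload at all
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Read the prompt from stdin, stream the reply to stdout, then exit.
string prompt = Console.In.ReadToEnd();
await foreach (var token in executor.InferAsync(prompt, new InferenceParams { MaxTokens = 256 }))
{
    Console.Write(token);
}
```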
Would it not make sense for LLamaSharp as a project to request this? It would also benefit every other language consuming llama.cpp (and would help their own server).
I can ask if you'd prefer not to, but LLamaSharp doesn't have any special pull in the llama.cpp project. To be honest at the moment I suspect any such request will be largely ignored (unless it's accompanied by PRs to implement better error handling).
Could you do so? This really is killing us because it doesn't allow us to fall back to not using the GPU when this occurs.
I've opened up https://github.com/ggerganov/llama.cpp/issues/4385
Although I will say I wouldn't expect this to change quickly, if at all! It would be a large change in both LLamaSharp and llama.cpp! If this is an issue you're hitting now, you'll want to split your usage of LLamaSharp off into a separate process.
Some interesting discussion related to error handling in llama.cpp here: https://github.com/ggerganov/ggml/pull/701