
Crash on an `endbr64` instruction.

Open RnMss opened this issue 2 years ago • 8 comments

My build crashes during inference with a model, with "Illegal instruction". I debugged it and it seems to crash on an `endbr64` instruction. I think my CPU doesn't support that instruction set. Is there a build option to turn it off?
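For anyone hitting the same thing, here is a quick way to see which SIMD extensions the host CPU actually reports (assuming Linux; the flag names are as they appear in `/proc/cpuinfo`, matching the features ggml's `System info` line checks):

```shell
# Check which of the SIMD feature flags ggml cares about are
# advertised by this CPU (Linux only; reads /proc/cpuinfo)
for flag in avx avx2 f16c fma; do
    if grep -qw "$flag" /proc/cpuinfo; then
        echo "$flag: supported"
    else
        echo "$flag: NOT supported"
    fi
done
```

If any flag the library was compiled for shows "NOT supported", an illegal-instruction crash is expected on the first use of that extension.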

Version: Master, commit e84c446d9533dabef2d8d60735d5924db63362ff

Command to reproduce: `python rwkv/chat_with_bot.py ../models/xxxxxxx.bin`

It crashed with "Illegal instruction".

I debugged the program:

> gdb python 
(gdb) handle SIGILL stop
(gdb) run rwkv/chat_with_bot.py ../models/xxxx.bin
...
[New Thread 0x7fff6fa49640 (LWP 738136)]
Loading 20B tokenizer
System info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 
Loading RWKV model

Thread 1 "python" received signal SIGILL, Illegal instruction.
0x00007fffde693135 in ggml_init () from /*****/rwkv.cpp/librwkv.so
(gdb) disassemble
Dump of assembler code for function ggml_init:
   0x00007fffde692fd0 <+0>:	endbr64 
   0x00007fffde692fd4 <+4>:	push   %r15
   0x00007fffde692fd6 <+6>:	mov    $0x1,%eax
   0x00007fffde692fdb <+11>:	push   %r14
...

RnMss avatar Apr 14 '23 13:04 RnMss

Hi! Please try to build and run llama.cpp and see if it works.

If it crashes too with similar error, report the problem with llama.cpp to their repo. They would fix it quicker, since their repo is more popular, and then I can port the fix here.

If it does not crash, we would need to compare the code of llama.cpp and rwkv.cpp and guess what can cause the issue.

saharNooby avatar Apr 14 '23 14:04 saharNooby

I tried llama.cpp, and it worked without a crash. Tested on models: opt-1.3b and Chinese-Alpaca-LoRA-13B. llama.cpp version: master-53dbba7.

RnMss avatar Apr 16 '23 05:04 RnMss

I took a look at llama.cpp's version of ggml. Unfortunately, my repo and theirs have now diverged too much for any comparison to make sense. Sorry for asking you to test llama.cpp; I'll stop asking users to do that from now on.

As for the issue, I don't have any ideas on how to fix it.

saharNooby avatar Apr 16 '23 06:04 saharNooby

I tried adding the compile flag -fcf-protection=none, which is supposed to stop the compiler from emitting CET instructions like endbr64, but it does not help.
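For the record, this is roughly how I passed the flag, without editing the build files. A sketch assuming the standard CMake out-of-source build; the flag only affects code the compiler emits, so it won't help if the illegal instruction comes from something else (e.g. an AVX instruction):

```shell
# Sketch: rebuild with CET instrumentation disabled by injecting
# -fcf-protection=none into both C and C++ compile flags
cmake -B build \
      -DCMAKE_C_FLAGS="-fcf-protection=none" \
      -DCMAKE_CXX_FLAGS="-fcf-protection=none"
cmake --build build --config Release
```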

It doesn't make sense. I skimmed the code but didn't see anything close to that. The disassembly looks real, not like random data being executed. I'm dooooomed.

RnMss avatar Apr 17 '23 15:04 RnMss

@RnMss I've updated ggml to the latest version. Please try again, and don't forget to update the git submodules (or better, clone from scratch: `git clone --recursive https://github.com/saharNooby/rwkv.cpp.git`).

saharNooby avatar Apr 17 '23 15:04 saharNooby

It still does not work on my CPU. I'll try on Windows later.

Model Tested: https://huggingface.co/BlinkDL/rwkv-4-raven/blob/main/RWKV-4-Raven-14B-v8-Eng87%25-Chn10%25-Jpn1%25-Other2%25-20230412-ctx4096.pth

RnMss avatar Apr 17 '23 16:04 RnMss

Got the same problem in the Docker image `nvcr.io/nvidia/pytorch:23.05-py3`, tokenizers-0.13.3.

EricLeeaaaaa avatar Jul 23 '23 12:07 EricLeeaaaaa

Try recompiling the repo with the AVX instruction flag disabled in CMakeLists.txt, @RnMss. This step worked for me.
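Building on this: the instruction-set options can usually be toggled from the command line instead of editing CMakeLists.txt. The option names below are an assumption (modeled on llama.cpp's `LLAMA_AVX*` options); check the `option()` entries in rwkv.cpp's CMakeLists.txt for the exact spelling:

```shell
# Hypothetical option names -- verify against option() entries
# in CMakeLists.txt before running
cmake -B build -DRWKV_AVX=OFF -DRWKV_AVX2=OFF -DRWKV_FMA=OFF
cmake --build build --config Release
```

Disabling AVX makes inference noticeably slower, but it lets the binary run on CPUs that lack those extensions.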

izzatzr avatar Oct 09 '23 11:10 izzatzr