
Cannot work on Mac M1

Open ChengChen1113 opened this issue 9 months ago • 2 comments

error information:

ggml/src/ggml.c:21302: GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT) failed
Error occurred while running command: Command '['3rdparty/llama.cpp/build/bin/llama-cli', '-m', 'models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf', '-n', '128', '-t', '2', '-p', 'You are a helpful assistant', '-ngl', '0', '-c', '2048', '--temp', '0.8', '-b', '1', '-cnv']' died with <Signals.SIGABRT: 6>.
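For context: this assert fires while loading the GGUF file, when a tensor's type id falls outside the range of types the binary was compiled with. A likely cause is either a truncated/corrupt model download, or a quantization type (here `i2_s`) that the particular `llama-cli` build does not recognize. A minimal sketch to rule out a corrupt download, assuming the standard GGUF header layout (4-byte magic, uint32 version, uint64 tensor count, uint64 metadata KV count) — the `read_gguf_header` helper is hypothetical, not part of bitnet.cpp:

```python
import struct

def read_gguf_header(path):
    """Read the magic, version, and counts from a GGUF file header.

    A valid GGUF file starts with the 4-byte magic b'GGUF', then a
    little-endian uint32 version, then two uint64 counts. A wrong magic
    means the download is corrupt or not GGUF at all; if the header is
    fine, the assert failure instead points at a tensor type the binary
    was not built to handle.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        version, = struct.unpack("<I", f.read(4))
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    return magic, version, tensor_count, kv_count
```

If the magic comes back as something other than `b'GGUF'`, re-download the model; otherwise the mismatch is between the file's quant type and the build.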

ChengChen1113 avatar Apr 21 '25 04:04 ChengChen1113

I have the same device and it works fine for me!

Could you be a bit more specific? What commands have you run so far? In my experience, most Mac users have problems with CMake, but it should be fairly simple here.

Simar-malhotra09 avatar Apr 21 '25 22:04 Simar-malhotra09

Same issue here. I'm on Windows, using the latest bitnet.cpp and getting the exact same error when running inference:

python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv -t 1
warning: not compiled with GPU offload support, --gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 3639 (20f1789d)
main: built with Clang 19.1.5 for x64
main: seed = 1753491527
D:\Trabajo\BitNet\3rdparty\llama.cpp\ggml\src\ggml.c:21302: GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT) failed
Error occurred while running command: Command '['build\bin\Release\llama-cli.exe', '-m', 'models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf', '-n', '128', '-t', '1', '-p', 'You are a helpful assistant', '-ngl', '0', '-c', '2048', '--temp', '0.8', '-b', '1', '-cnv']' returned non-zero exit status 3221226505.
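Side note on the Windows exit status: converting it to hex shows it is the same abort as on the Mac, just surfaced differently. NTSTATUS 0xC0000409 is how an assert/abort commonly appears on recent MSVC CRTs, rather than the POSIX SIGABRT seen above:

```python
# Convert the Windows exit status from the log to its NTSTATUS hex form.
status = 3221226505
print(hex(status))  # 0xc0000409
```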

I followed all the Windows setup instructions:

  • Compiled inside the VS2022 Developer Command Prompt
  • Used conda and installed all Python dependencies
  • Downloaded the model via huggingface-cli from microsoft/BitNet-b1.58-2B-4T-gguf
  • Used the bundled llama.cpp inside 3rdparty

Please let me know if there's a solution.

JavierNancoB avatar Jul 26 '25 01:07 JavierNancoB