llama_cpp/lib/libllama.so: undefined symbol: llama_kv_cache_view_init
# Prerequisites

Just built with Python 3.12 in a fresh .venv.
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior

`from llama_cpp import Llama` (success!)
# Current Behavior

```
from llama_cpp import Llama
Traceback (most recent call last):
  File "
```

(full traceback below)
# Environment and Context

```
$ sysinfo
CPU: quad core Intel Core i7-2860QM (-MT MCP-) speed/min/max: 912/800/3600 MHz
Kernel: 6.14.8-300.fc42.x86_64 x86_64  Up: 1d 12h 46m
Mem: 5.05/31.29 GiB (16.1%)  Storage: 1.86 TiB (30.5% used)  Procs: 376
Shell: Bash  inxi: 3.3.38
Graphics:
  Device-1: NVIDIA GM204GLM [Quadro M3000M] driver: nvidia v: 570.153.02
  Display: x11 server: X.Org v: 21.1.16 with: Xwayland v: 24.1.6
    driver: X: loaded: nvidia gpu: nvidia,nvidia-nvswitch
    resolution: 1: 1920x1080~60Hz 2: 1920x1080~60Hz 3: 1366x768~60Hz
  API: OpenGL v: 4.6.0 vendor: nvidia v: 570.153.02
    renderer: Quadro M3000M/PCIe/SSE2
  API: EGL Message: EGL data requires eglinfo. Check --recommends.
  Info: Tools: api: glxinfo de: kscreen-doctor gpu: nvidia-settings,nvidia-smi
    wl: kanshi,wlr-randr x11: xdriinfo, xdpyinfo, xprop, xrandr
```
- Operating System, e.g. for Linux: Fedora 42
```
$ uname -a
Linux k 6.14.8-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Thu May 22 19:26:02 UTC 2025 x86_64 GNU/Linux
```
- SDK version, e.g. for Linux:
```
$ python3 --version
Python 3.12.10

$ make --version
GNU Make 4.4.1

$ g++ --version
g++-13 (GCC) 13.3.1 20240611 (Red Hat 13.3.1-2)
```
# Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
# Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
```shell
git pull
git submodule update --remote vendor/llama.cpp
CC=gcc-13 CXX=g++-13 FORCE_CMAKE=1 CMAKE_BUILD_PARALLEL_LEVEL=7 \
CMAKE_ARGS="-DGGML_CUDA=on \
  -DCMAKE_CUDA_FLAGS_RELEASE=-Wno-deprecated-gpu-targets \
  -DLLAVA_BUILD=OFF" \
pip install .[server] --upgrade --force-reinstall --no-cache-dir
```
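A quick sanity check after a build like this is to probe the bundled shared library for the symbol the bindings expect. A minimal sketch, assuming the library sits at `llama_cpp/lib/libllama.so` inside the checkout (that path is taken from the traceback, not guaranteed for every layout):

```python
# Minimal sketch: ask ctypes for the symbol directly. A missing export
# produces the same AttributeError that `import llama_cpp` dies with.
import ctypes

LIB = "llama_cpp/lib/libllama.so"  # assumed path inside the git checkout

lib = ctypes.CDLL(LIB)
try:
    lib.llama_kv_cache_view_init  # ctypes resolves symbols on attribute access
    print("symbol present")
except AttributeError:
    print("symbol missing: bindings are out of sync with libllama.so")
```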
Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.
Try the following:

1. `git clone https://github.com/abetlen/llama-cpp-python`
2. `cd llama-cpp-python`
3. `rm -rf _skbuild/` # delete any old builds
4. `python -m pip install .`
5. `cd ./vendor/llama.cpp`
6. Follow llama.cpp's instructions to `cmake` llama.cpp
7. Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp
llama.cpp itself runs fine:

```
llama-cli -ngl 20 -m deepseek-r1-0528-qwen3-8b-q2_k.gguf -i Hi there
My name is deepseek-r1. How can I help you?
```
I also tried switching to uv and pulling everything fresh.
```
$ uv pip uninstall llama-cpp-python
$ uv pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu126
Using Python 3.12.10 environment at: /home/k/Downloads/src/chatterbox/.venv
Resolved 6 packages in 11.58s
Built llama-cpp-python==0.3.9
Prepared 1 package in 1m 15s
Installed 1 package in 10ms
 + llama-cpp-python==0.3.9
```
```
(chatterbox) k@k:~/Downloads/src/llama-cpp-python$ python
Python 3.12.10 (main, May 9 2025, 00:00:00) [GCC 15.1.1 20250425 (Red Hat 15.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/k/Downloads/src/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/k/Downloads/src/llama-cpp-python/llama_cpp/llama_cpp.py", line 1824, in <module>
    @ctypes_function(
     ^^^^^^^^^^^^^^^^
  File "/home/k/Downloads/src/llama-cpp-python/llama_cpp/_ctypes_extensions.py", line 113, in decorator
    func = getattr(lib, name)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 392, in __getattr__
    func = self.__getitem__(name)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 397, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: /home/k/Downloads/src/llama-cpp-python/llama_cpp/lib/libllama.so: undefined symbol: llama_kv_cache_view_init
```
So even after installing the prebuilt wheel, Python is still importing llama_cpp (and its stale libllama.so) from the git checkout, because I launched python from inside the repo directory.
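A quick way to confirm this kind of shadowing from the failing session (a generic sketch, nothing llama-cpp-python specific):

```python
# If the reported origin points into the git checkout instead of the venv's
# site-packages, the local source tree is shadowing the installed wheel.
import importlib.util

spec = importlib.util.find_spec("llama_cpp")
print(spec.origin)
```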
Solution? Rename the llama-cpp-python git directory and run the same commands yet again.
```shell
mv llama-cpp-python llamacp
uv pip uninstall llama-cpp-python
CMAKE_ARGS="-DGGML_CUDA=on" uv pip install llama-cpp-python[server] --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu126 --upgrade --force-reinstall --no-cache-dir
```
Works...
The git version is still bugged. I re-cloned everything from scratch.
I got it to import by commenting out the binding lines that were raising ctypes errors. That's far from a fix, though; I still need to find where the exports moved.
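To find where the exports went, one option is to dump the library's dynamic symbol table and grep for the KV-cache names (a sketch using binutils' `nm`; the library path is again an assumption):

```python
# List the llama_kv* symbols libllama.so actually exports so they can be
# compared against the names the Python bindings still reference.
import subprocess

LIB = "llama_cpp/lib/libllama.so"  # assumed path to the rebuilt library

symbols = subprocess.run(
    ["nm", "-D", "--defined-only", LIB],
    capture_output=True, text=True, check=True,
).stdout
for line in symbols.splitlines():
    if "llama_kv" in line:
        print(line)
```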
```shell
uv venv
source .venv/bin/activate
CC=gcc-13 CXX=g++-13 FORCE_CMAKE=1 CMAKE_BUILD_PARALLEL_LEVEL=7 CMAKE_ARGS="-DGGML_CUDA=on \
  -DCMAKE_CUDA_FLAGS_RELEASE=-Wno-deprecated-gpu-targets \
  -DLLAVA_BUILD=OFF" uv pip install .[server]
```
Then edit /home/k/Downloads/src/llamacp/llama_cpp/llama_cpp.py at line 1824 and comment out the offending lines.
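For anyone trying the same workaround, the idea looks roughly like this (a hedged sketch, not the project's actual code: the real `ctypes_function` helper lives in `_ctypes_extensions.py` and its signature may differ):

```python
import ctypes

lib = ctypes.CDLL("llama_cpp/lib/libllama.so")  # assumed library handle

def ctypes_function(name, argtypes, restype):
    """Bind a C symbol if it exists; skip it instead of crashing the import."""
    def decorator(f):
        try:
            func = getattr(lib, name)  # the getattr that raises AttributeError today
        except AttributeError:
            return f  # symbol removed upstream: leave the Python stub unbound
        func.argtypes = argtypes
        func.restype = restype
        return func
    return decorator
```

That way a removed symbol degrades to a dead stub at call time instead of killing `import llama_cpp` outright.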
@opsec-ai Hello, maybe it's related to this commit? https://github.com/ggml-org/llama.cpp/commit/a4090d1174aed22dde5cacce2a4c27656b987a2f https://github.com/ggml-org/llama.cpp/pull/13653
It looks like the llama_kv_cache_view bindings need to be removed here too.