could not find llama.dll
I cannot import Llama:
from llama_cpp import Llama
This results in:
{
"name": "RuntimeError",
"message": "Failed to load shared library 'c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama.dll': Could not find module 'c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama.dll' (or one of its dependencies). Try using the full path with constructor syntax.",
"stack": "---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
File c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama_cpp.py:78, in _load_shared_library(lib_base_name)
77 try:
---> 78 return ctypes.CDLL(str(_lib_path), **cdll_args)
79 except Exception as e:
File ~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\ctypes\\__init__.py:374, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode)
373 if handle is None:
--> 374 self._handle = _dlopen(self._name, mode)
375 else:
FileNotFoundError: Could not find module 'c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 from llama_cpp import Llama
File c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\__init__.py:1
----> 1 from .llama_cpp import *
2 from .llama import *
4 __version__ = \"0.2.38\"
File c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama_cpp.py:91
88 _lib_base_name = \"llama\"
90 # Load the library
---> 91 _lib = _load_shared_library(_lib_base_name)
93 # Misc
94 c_float_p = POINTER(c_float)
File c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama_cpp.py:80, in _load_shared_library(lib_base_name)
78 return ctypes.CDLL(str(_lib_path), **cdll_args)
79 except Exception as e:
---> 80 raise RuntimeError(f\"Failed to load shared library '{_lib_path}': {e}\")
82 raise FileNotFoundError(
83 f\"Shared library with base name '{lib_base_name}' not found\"
84 )
RuntimeError: Failed to load shared library 'c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama.dll': Could not find module 'c:\\Users\\user\\Documents\\llamacpp\\llama-cpp-python\\llama_cpp\\llama.dll' (or one of its dependencies). Try using the full path with constructor syntax."
}
The installation process was standard, on Windows 10 with CUDA 12.1:
set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
pip install llama-cpp-python
The solutions from other issues do not work for me (setting winmode=0 and/or adding the DLL directory via os.add_dll_directory).
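For context, this is roughly what those two workarounds look like (a sketch only; the path is taken from the traceback above, adjust as needed):

import os
import ctypes

lib_dir = r"c:\Users\user\Documents\llamacpp\llama-cpp-python\llama_cpp"

# Workaround 1: make the DLL's folder searchable for it and its dependencies (Python 3.8+ on Windows).
os.add_dll_directory(lib_dir)

# Workaround 2: load the DLL directly with the legacy search behaviour (winmode=0).
ctypes.CDLL(os.path.join(lib_dir, "llama.dll"), winmode=0)

from llama_cpp import Llama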
RuntimeError: ... Could not find module ... (or one of its dependencies).
I hit the same problem, and setting an explicit environment variable for nvcc fixed it for me.
Try this:
$env:CMAKE_ARGS="-DLLAMA_CUBLAS=on"
$env:CUDACXX="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc.exe"
python -m pip install llama-cpp-python --prefer-binary --no-cache-dir --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
Same problem here. Python 12.3.4, CUDA 12.4/12.5, with the CUDA_PATH environment variable set to the CUDA installation. Commenting out cdll_args["winmode"] = ctypes.RTLD_GLOBAL in llama_cpp.py fixed it for me.
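For reference, this is roughly what the loader in llama_cpp.py does (a simplified sketch based on the traceback above, not the exact upstream source), with the line in question commented out:

import ctypes
import sys

def _load_shared_library(_lib_path):
    cdll_args = dict()
    if sys.platform == "win32":
        # cdll_args["winmode"] = ctypes.RTLD_GLOBAL  # commenting this out restores the default DLL search behaviour
        pass
    try:
        return ctypes.CDLL(str(_lib_path), **cdll_args)
    except Exception as e:
        raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")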
This solved it for me as well. It seems to be the same error as reported in https://stackoverflow.com/questions/59330863/cant-import-dll-module-in-python
I had the same issue; I solved it by installing the NVIDIA CUDA Toolkit and restarting: https://developer.nvidia.com/cuda-downloads
The issue, it seems, is that llama.dll depends on the CUDA Toolkit DLLs, which Windows can't find. This is why the error says "(or one of its dependencies)".
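If the toolkit is installed but the DLLs still aren't found, a workaround along the same lines (untested here; the path is an example for CUDA 12.1 and should match the installed version) is to register the CUDA bin directory before importing:

import os

# Point Windows at the CUDA runtime DLLs that llama.dll depends on.
cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin"
if os.path.isdir(cuda_bin):
    os.add_dll_directory(cuda_bin)

from llama_cpp import Llama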