install target fails for llava
## Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
## Expected Behavior
I was building from source, specifically version 0.2.76. `make build` should have built all of the required components.
## Current Behavior
The install step fails at the llava example, apparently due to a path mismatch (I am not sure exactly what is wrong where, though):
```
-- Installing: /home/bargo/projects/rocm-setup/llama-cpp-python/llama_cpp/libllama.so
CMake Error at /tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/cmake_install.cmake:46 (file):
  file INSTALL cannot find
  "/tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/libllava.so": No
  such file or directory.
Call Stack (most recent call first):
  /tmp/tmpigzjrup0/build/cmake_install.cmake:128 (include)

*** CMake install failed
error: subprocess-exited-with-error

× Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
```
## Environment and Context
Since the build completed successfully for both llama.cpp and the bindings, and only installing them failed, I believe the environment should not matter much (but I am running ROCm 6.1.1, also built from source).
- Physical (or virtual) hardware you are using, e.g. for Linux:

Physical Linux machine.

```
$ lscpu
AMD Ryzen 2700X
```

- Operating System, e.g. for Linux:

Ubuntu 22.04 with a kernel 6.4 built from source.

```
$ uname -a
Linux bargos 6.4.0bargos #9 SMP PREEMPT_DYNAMIC Sun May 19 17:44:55 CEST 2024 x86_64 GNU/Linux
```

- SDK version, e.g. for Linux:

```
$ python3 --version
$ make --version
$ g++ --version
```
## Failure Information (for bugs)
The llava shared library that fails to be found has actually already been built, and it is located in two places:

```
llama-cpp-python$ find . -name libllava.so
./build/vendor/llama.cpp/examples/llava/libllava.so
./llama_cpp/libllava.so
```
## Steps to Reproduce
1. `git clone`
2. `pip3 install .` OR `CMAKE_ARGS="-D LLAMA_HIPBLAS=ON -D CMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -D CMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -D CMAKE_PREFIX_PATH=/opt/rocm" make build -j 8`
To mitigate it right now, I am simply skipping the llava build by passing `OFF` for:

```cmake
option(LLAVA_BUILD "Build llava shared library and install alongside python package" ON)
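```

For concreteness, a minimal sketch of that mitigation (the `-DLLAVA_BUILD=OFF` flag toggles the `option()` above; the remaining flags mirror my ROCm invocation from the reproduce step):

```bash
# Sketch: disable the llava example so its broken install rule is never generated.
CMAKE_ARGS="-DLLAVA_BUILD=OFF \
  -DLLAMA_HIPBLAS=ON \
  -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
  -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ \
  -DCMAKE_PREFIX_PATH=/opt/rocm" \
pip3 install .
```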
I also get this error on Ubuntu 22.04 with AMD:
```
-- Configuring done (2.0s)
CMake Error in vendor/llama.cpp/examples/llava/CMakeLists.txt:
  HIP_ARCHITECTURES is empty for target "llava_shared".

-- Generating done (0.0s)
CMake Generate step failed.  Build files cannot be regenerated correctly.

*** CMake configuration failed
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
```
Me too. I get the same error.
I have a workaround for this issue (a shell sketch of the steps follows below):

1. Download the source code of a previous llama-cpp-python release (I used 0.2.71) and unzip it.
2. Download the source code of a previous llama.cpp version (I used b2800) and unzip it into the `vendor` folder inside the llama-cpp-python folder, making sure to replace the existing llama.cpp folder.
3. Install with `pip3 install .` OR `CMAKE_ARGS="-D LLAMA_HIPBLAS=ON -D CMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -D CMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -D CMAKE_PREFIX_PATH=/opt/rocm" make build -j 8`.

I am pretty sure there are better combinations than mine (llama-cpp-python 0.2.71 and llama.cpp b2800), but at least it works right now.

Note: you will probably need to install additional libraries depending on your system.
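A rough shell sketch of those three steps, assuming the standard GitHub tag-archive URL scheme (the tag names `v0.2.71` and `b2800` are the versions mentioned above; adjust to taste):

```bash
# Sketch: pin llama-cpp-python 0.2.71 with llama.cpp b2800 vendored in.
curl -LO https://github.com/abetlen/llama-cpp-python/archive/refs/tags/v0.2.71.tar.gz
tar xf v0.2.71.tar.gz
curl -LO https://github.com/ggerganov/llama.cpp/archive/refs/tags/b2800.tar.gz
tar xf b2800.tar.gz

cd llama-cpp-python-0.2.71
rm -rf vendor/llama.cpp              # replace the existing submodule checkout
mv ../llama.cpp-b2800 vendor/llama.cpp

pip3 install .
```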
I'm observing the same error. I'm using AMD/ROCm too.
Same error here. Ubuntu 22.04, AMD/ROCm: it fails when installing the llava_shared library (libllava.so):

```
Installing: /home/[]rojects/rocm-setup/llama-cpp-python/llama_cpp/libllama.so
"/tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/libllava.so": No such file or directory.
```
Concretely, these `install()` rules will fail:

```cmake
install(
    TARGETS llava_shared
    LIBRARY DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
    RUNTIME DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
    ARCHIVE DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
    FRAMEWORK DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
    RESOURCE DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
)
# Temporary fix for https://github.com/scikit-build/scikit-build-core/issues/374
install(
    TARGETS llava_shared
    LIBRARY DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
    RUNTIME DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
    ARCHIVE DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
    FRAMEWORK DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
    RESOURCE DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
)
```
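If you want to confirm which rule trips, one way (a sketch; the temp directory name changes on every run) is to grep the generated install script in the build tree:

```bash
# Sketch: locate the failing file(INSTALL ...) entry for libllava.so in the
# generated install script; the /tmp build path is recreated per pip invocation.
grep -n "libllava.so" /tmp/tmp*/build/vendor/llama.cpp/examples/llava/cmake_install.cmake
```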
It will work if I remove LLaVA support with `-DLLAVA_BUILD=off`. As for the missing `HIP_ARCHITECTURES`, try `-DCMAKE_HIP_ARCHITECTURES=gfx1100`.
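A minimal sketch of that second suggestion. `gfx1100` is an RDNA3 target and only an example; substitute the architecture that matches your GPU (e.g. `gfx900`, `gfx1030`):

```bash
# Sketch: give HIP an explicit GPU architecture so HIP_ARCHITECTURES is not empty.
CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DCMAKE_HIP_ARCHITECTURES=gfx1100" pip3 install .
```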
Hi, I am having a similar issue:

```bash
cd llama-cpp-python
git pull --recurse-submodules -v
git clean -x -n -f
cmake -B /pkgs/build/llama-cpp-python -DCMAKE_INSTALL_PREFIX=/pkgs/llama-cpp-python -DLLAMA_CUDA=on
cmake --build /pkgs/build/llama-cpp-python --config Release -v
cmake --install /pkgs/build/llama-cpp-python --prefix /pkgs/llama-cpp-python
```
The last step (the install step) fails with:

```
CMake Error at /pkgs/build/llama-cpp-python/cmake_install.cmake:65 (file):
  file cannot create directory: /llama_cpp.  Maybe need administrative
  privileges.
```
I traced it down to `${SKBUILD_PLATLIB_DIR}/llama_cpp`: `SKBUILD_PLATLIB_DIR` is normally set by scikit-build-core, so it is empty when CMake is invoked directly, and the destination collapses to `/llama_cpp`.
So what I'm trying to do is just compile the .so bits for llama-cpp-python (see #1533), but it looks like I'm missing something here.
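One possible workaround (untested; a sketch under the assumption that any non-empty directory works as the destination) is to define the variable scikit-build-core would normally provide when configuring:

```bash
# Sketch: supply SKBUILD_PLATLIB_DIR manually so the install() destinations
# do not collapse to /llama_cpp. The staging path here is an arbitrary choice.
cmake -B /pkgs/build/llama-cpp-python \
      -DCMAKE_INSTALL_PREFIX=/pkgs/llama-cpp-python \
      -DSKBUILD_PLATLIB_DIR=/pkgs/llama-cpp-python/lib \
      -DLLAMA_CUDA=on
cmake --build /pkgs/build/llama-cpp-python --config Release
cmake --install /pkgs/build/llama-cpp-python --prefix /pkgs/llama-cpp-python
```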
I'd like to point out one thing, though: the standalone llama.cpp repo at the current master branch (45c0e2e4c1268c2d7c8c45536f15e3c9a731ecdc) builds just fine with this command (copy-pasted from the llama.cpp build instructions) and also produces the llava binaries/libraries:

```bash
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" cmake -S . -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx900 -DCMAKE_BUILD_TYPE=Release && cmake --build build --config Release -- -j 16
```
I tried updating vendor/llama.cpp in llama-cpp-python to that revision, but it did not help. Explicitly disabling llava (which I did not need anyway) made llama-cpp-python compile and produce a .whl:
```bash
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" CMAKE_ARGS="-DLLAVA_BUILD=OFF -DLLAMA_HIPBLAS=on -DAMDGPU_TARGETS=gfx900" pip wheel .
```
I'm on Linux / ROCm 6.x
I updated the ROCm drivers to version 6.2. After that, llama-cpp-python 0.2.90 installed without problems.
@HardAndHeavy thanks for confirming that the new release already has this problem fixed; let's wait for another confirmation and then we'll close this ticket for good :)
I also had no issues anymore when installing llama-cpp-python 0.3.1.