
Compile bug: cuda backend compile error

Open · lizhenneng opened this issue 10 months ago · 0 comments

Git commit

1682e39aa5bb1699fae3f760450be2e76d35a6a1

Operating systems

Linux

GGML backends

CUDA

Problem description & steps to reproduce

Configuring with CUDA enabled fails because CMake cannot identify the CUDA compiler. CMake reports:

> Tell CMake where to find the compiler by setting either the environment variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full path to the compiler, or to the compiler name if it is in the PATH.

First Bad Commit

No response

Compile command

cmake ../ -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug -DLLAMA_CURL=OFF

Relevant log output

cmake ../ -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug -DLLAMA_CURL=OFF
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- CUDA Toolkit found
-- Using CUDA architectures: native
-- The CUDA compiler identification is unknown


CMake Error at ggml/src/ggml-cuda/CMakeLists.txt:26 (enable_language):
  No CMAKE_CUDA_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
  path to the compiler, or to the compiler name if it is in the PATH.
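The error message itself points at the usual workaround: tell CMake explicitly where `nvcc` lives. A minimal sketch, assuming the CUDA toolkit is installed under `/usr/local/cuda` (adjust the path to your actual installation):

```shell
# First confirm nvcc is installed and reachable:
nvcc --version || echo "nvcc not found on PATH"

# Option A: export CUDACXX before configuring.
export CUDACXX=/usr/local/cuda/bin/nvcc
cmake ../ -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug -DLLAMA_CURL=OFF

# Option B: pass the compiler path directly as a CMake cache entry.
cmake ../ -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Debug -DLLAMA_CURL=OFF \
      -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
```

If `nvcc` is missing entirely, the toolkit itself is not installed (or only the driver is), in which case installing the CUDA toolkit is the actual fix rather than any CMake flag.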

lizhenneng · Apr 11 '25 10:04