dw5189
I've just successfully installed it! Here's the information for your reference.

PowerShell:

$env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"
$env:CMAKE_GENERATOR_PLATFORM="x64"
$env:FORCE_CMAKE="1"
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade
...
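The PowerShell lines above can be sketched as a small Python helper that assembles the same build environment before invoking pip. The CUDA path and the architecture "89" (compute capability 8.9, i.e. Ada / RTX 40-series cards) are taken from the post; both are assumptions you should adjust to your own toolkit version and GPU.

```python
import os

# Sketch of the PowerShell environment setup above, assuming CUDA v12.6
# and an Ada-class (compute capability 8.9) GPU -- adjust both as needed.
def cuda_build_env(cuda_root="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6",
                   arch="89"):
    env = dict(os.environ)
    env["CUDA_TOOLKIT_ROOT_DIR"] = cuda_root
    env["CMAKE_GENERATOR_PLATFORM"] = "x64"
    env["FORCE_CMAKE"] = "1"
    env["CMAKE_ARGS"] = f"-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES={arch}"
    return env

env = cuda_build_env()
print(env["CMAKE_ARGS"])
# The install step itself is unchanged; run it with this environment, e.g.:
#   pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade
```

This is just a convenience wrapper: `FORCE_CMAKE=1` forces a source build, and `CMAKE_ARGS` is how llama-cpp-python's build passes flags through to CMake.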
(base) PS C:\WINDOWS\system32> conda activate CUDA124-py312
(CUDA124-py312) PS C:\WINDOWS\system32> $env:CUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4"
(CUDA124-py312) PS C:\WINDOWS\system32> $env:CMAKE_GENERATOR_PLATFORM="x64"
(CUDA124-py312) PS C:\WINDOWS\system32> $env:FORCE_CMAKE="1"
(CUDA124-py312) PS C:\WINDOWS\system32> $env:CMAKE_ARGS="-DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=89"
(CUDA124-py312) PS C:\WINDOWS\system32> ...
(CUDA125-py311) D:\software\llama-cpp-python>set CMAKE_CXX_COMPILER="C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.43.34808\bin\Hostx64\x64\cl.exe"
(CUDA125-py311) D:\software\llama-cpp-python>set FORCE_CMAKE=1 && set CMAKE_ARGS=-DGGML_CUDA=on && pip install --upgrade --no-cache-dir --force-reinstall -v --prefer-binary llama-cpp-python
Using pip 25.0 from D:\software\minicondapy311\envs\CUDA125-py311\Lib\site-packages\pip (python 3.11)
Looking in...
(CUDA125-py312) PS E:\llama-cpp-python\build> cmake -G "Visual Studio 17 2022" -A x64 `
>> -DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.5" `
>> -DCMAKE_CUDA_COMPILER="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.5/bin/nvcc.exe" `
>> -DCMAKE_CUDA_ARCHITECTURES="89" `
>> ...
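For anyone scripting that manual configure step instead of typing it with backtick continuations, the visible part can be expressed as an argument list (a sketch only: the transcript is truncated, so any later -D flags from the original command are not reproduced here).

```python
# Hedged reconstruction of the configure command above as an argument list;
# the original transcript is truncated, so further flags may be missing.
cuda_root = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.5"

cmake_cmd = [
    "cmake", "-G", "Visual Studio 17 2022", "-A", "x64",
    f"-DCUDA_TOOLKIT_ROOT_DIR={cuda_root}",
    f"-DCMAKE_CUDA_COMPILER={cuda_root}/bin/nvcc.exe",
    "-DCMAKE_CUDA_ARCHITECTURES=89",
]
print(" ".join(cmake_cmd))
# To actually run it: subprocess.run(cmake_cmd, check=True)
```

Passing the command as a list avoids the quoting headaches around paths with spaces ("Program Files") that the backtick-continued PowerShell form has to work around.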