Failure to install llama-cpp-python
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
I am trying to install "llama-cpp-python" on my server.
Current Behavior
An exception occurred, causing the install to fail as follows.
Failure Logs
Defaulting to user installation because normal site-packages is not writeable
Collecting llama-cpp-python
Using cached llama_cpp_python-0.2.6.tar.gz (1.6 MB)
ERROR: Exception:
Traceback (most recent call last):
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 186, in _main
status = self.run(options, args)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 331, in run
resolver.resolve(requirement_set)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/legacy_resolve.py", line 177, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/legacy_resolve.py", line 282, in _get_abstract_dist_for
abstract_dist = self.preparer.prepare_linked_requirement(req)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 515, in prepare_linked_requirement
abstract_dist = _get_prepared_distribution(
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 95, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(finder, build_isolation)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/distributions/sdist.py", line 33, in prepare_distribution_metadata
self.req.load_pyproject_toml()
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/req/req_install.py", line 512, in load_pyproject_toml
pyproject_toml_data = load_pyproject_toml(
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_internal/pyproject.py", line 75, in load_pyproject_toml
pp_toml = pytoml.load(f)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_vendor/pytoml/parser.py", line 11, in load
return loads(fin.read(), translate=translate, object_pairs_hook=object_pairs_hook, filename=getattr(fin, 'name', repr(fin)))
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_vendor/pytoml/parser.py", line 24, in loads
ast = _p_toml(src, object_pairs_hook=object_pairs_hook)
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_vendor/pytoml/parser.py", line 341, in _p_toml
s.expect_eof()
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_vendor/pytoml/parser.py", line 123, in expect_eof
return self._expect(self.consume_eof())
File "/panfs/roc/msisoft/anaconda/miniconda3_4.8.3-jupyter/lib/python3.8/site-packages/pip/_vendor/pytoml/parser.py", line 163, in _expect
raise TomlError('msg', self._pos[0], self._pos[1], self._filename)
pip._vendor.pytoml.core.TomlError: /tmp/pip-install-eq1pl5tb/llama-cpp-python/pyproject.toml(55, 1): msg
I have no idea what this issue is or how to fix it.
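From searching around, this first traceback appears to come from pytoml, the TOML parser vendored by old pip releases, which cannot parse the newer TOML syntax in llama-cpp-python's pyproject.toml; recent pip vendors a different parser. A minimal sketch of a check, assuming the user site is writable and that upgrading pip is allowed on this cluster (the `pip_too_old` helper name and the version-21 cutoff are my guesses, not something from the logs):

```shell
# Hypothetical helper: treat any pip older than 21.x as unable to parse
# modern pyproject.toml files (old pip vendored pytoml, which raised the
# TomlError above; newer pip vendors tomli instead).
pip_too_old() {
    major=${1%%.*}            # "19.2.3" -> "19"
    [ "$major" -lt 21 ]
}

version=$(python3 -m pip --version 2>/dev/null | awk '{print $2}')
if pip_too_old "${version:-0}"; then
    echo "upgrade first: python3 -m pip install --user --upgrade pip"
else
    echo "pip ${version} should parse pyproject.toml; retry the install"
fi
```

If the upgrade is allowed, retrying `pip install --user llama-cpp-python` afterwards should at least get past the TOML parse error.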
Environment and Context:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7763 64-Core Processor
Stepping: 1
CPU MHz: 2445.356
BogoMIPS: 4890.71
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
$ uname -a
Linux agc10 3.10.0-1160.95.1.el7.x86_64 #1 SMP Mon Jul 24 13:59:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
SDKs:
$ python3 --version
Python 3.8.3
$ make --version
GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ g++ --version
g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Thank you!
I loaded several different cmake modules, including 3.21.3 and 3.26.3, and it still failed. Error message below:
% pip install --user llama-cpp-python
Collecting llama-cpp-python
Using cached llama_cpp_python-0.2.7.tar.gz (1.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.20.0 in /common/software/install/migrated/anaconda/python3-2023.03-libmamba/lib/python3.10/site-packages (from llama-cpp-python) (1.23.5)
Requirement already satisfied: typing-extensions>=4.5.0 in ./.local/lib/python3.10/site-packages (from llama-cpp-python) (4.8.0)
Requirement already satisfied: diskcache>=5.6.1 in ./.local/lib/python3.10/site-packages (from llama-cpp-python) (5.6.3)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [235 lines of output]
*** scikit-build-core 0.5.1 using CMake 3.21.3 (wheel)
*** Configuring CMake...
2023-09-27 13:43:14,596 - scikit_build_core - WARNING - libdir/ldlibrary: /common/software/install/migrated/anaconda/python3-2023.03-libmamba/lib/libpython3.10.a is not a real file!
2023-09-27 13:43:14,596 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/common/software/install/migrated/anaconda/python3-2023.03-libmamba/lib, ldlibrary=libpython3.10.a, multiarch=x86_64-linux-gnu, masd=None
loading initial cache file /tmp/tmp5wuuv766/build/CMakeInit.txt
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /bin/git (found version "1.8.3.1")
fatal: Not a git repository (or any parent up to mount point /tmp)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: Not a git repository (or any parent up to mount point /tmp)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
CMake Warning at vendor/llama.cpp/CMakeLists.txt:127 (message):
Git repository not found; to enable automatic generation of build info,
make sure Git is installed and the project is a Git repository.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/tmp5wuuv766/build
*** Building project with Ninja...
[1/10] /bin/cc -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -mf16c -mfma -mavx -mavx2 -pthread -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/ggml-alloc.c
[2/10] /bin/cc -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -mf16c -mfma -mavx -mavx2 -pthread -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/ggml.c
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
/bin/cc -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -mf16c -mfma -mavx -mavx2 -pthread -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/ggml.c
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/ggml.c:101:23: fatal error: stdatomic.h: No such file or directory
#include <stdatomic.h>
^
compilation terminated.
[3/10] /bin/c++ -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/. -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/console.cpp
[4/10] /bin/c++ -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/. -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp
FAILED: vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o
/bin/c++ -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/. -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp: In function 'void dump_string_yaml_multiline(FILE*, const char*, const char*)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1096:72: error: no matching function for call to 'regex_replace(std::string&, std::regex, const char [3])'
data_str = std::regex_replace(data_str, std::regex("\n"), "\\n");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1096:72: note: candidates are:
In file included from /usr/include/c++/4.8.2/regex:62:0,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:13:
/usr/include/c++/4.8.2/bits/regex.h:2162:5: note: template<class _Out_iter, class _Bi_iter, class _Rx_traits, class _Ch_type> _Out_iter std::regex_replace(_Out_iter, _Bi_iter, _Bi_iter, const std::basic_regex<_Ch_type, _Rx_traits>&, const std::basic_string<_Ch_type>&, std::regex_constants::match_flag_type)
regex_replace(_Out_iter __out, _Bi_iter __first, _Bi_iter __last,
^
/usr/include/c++/4.8.2/bits/regex.h:2162:5: note: template argument deduction/substitution failed:
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1096:72: note: deduced conflicting types for parameter '_Bi_iter' ('std::basic_regex<char>' and 'const char*')
data_str = std::regex_replace(data_str, std::regex("\n"), "\\n");
^
In file included from /usr/include/c++/4.8.2/regex:62:0,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:13:
/usr/include/c++/4.8.2/bits/regex.h:2182:5: note: template<class _Rx_traits, class _Ch_type> std::basic_string<_Ch_type> std::regex_replace(const std::basic_string<_Ch_type>&, const std::basic_regex<_Ch_type, _Rx_traits>&, const std::basic_string<_Ch_type>&, std::regex_constants::match_flag_type)
regex_replace(const basic_string<_Ch_type>& __s,
^
/usr/include/c++/4.8.2/bits/regex.h:2182:5: note: template argument deduction/substitution failed:
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1096:72: note: mismatched types 'const std::basic_string<_Ch_type>' and 'const char [3]'
data_str = std::regex_replace(data_str, std::regex("\n"), "\\n");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1097:73: error: no matching function for call to 'regex_replace(std::string&, std::regex, const char [3])'
data_str = std::regex_replace(data_str, std::regex("\""), "\\\"");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1097:73: note: candidates are:
In file included from /usr/include/c++/4.8.2/regex:62:0,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:13:
/usr/include/c++/4.8.2/bits/regex.h:2162:5: note: template<class _Out_iter, class _Bi_iter, class _Rx_traits, class _Ch_type> _Out_iter std::regex_replace(_Out_iter, _Bi_iter, _Bi_iter, const std::basic_regex<_Ch_type, _Rx_traits>&, const std::basic_string<_Ch_type>&, std::regex_constants::match_flag_type)
regex_replace(_Out_iter __out, _Bi_iter __first, _Bi_iter __last,
^
/usr/include/c++/4.8.2/bits/regex.h:2162:5: note: template argument deduction/substitution failed:
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1097:73: note: deduced conflicting types for parameter '_Bi_iter' ('std::basic_regex<char>' and 'const char*')
data_str = std::regex_replace(data_str, std::regex("\""), "\\\"");
^
In file included from /usr/include/c++/4.8.2/regex:62:0,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:13:
/usr/include/c++/4.8.2/bits/regex.h:2182:5: note: template<class _Rx_traits, class _Ch_type> std::basic_string<_Ch_type> std::regex_replace(const std::basic_string<_Ch_type>&, const std::basic_regex<_Ch_type, _Rx_traits>&, const std::basic_string<_Ch_type>&, std::regex_constants::match_flag_type)
regex_replace(const basic_string<_Ch_type>& __s,
^
/usr/include/c++/4.8.2/bits/regex.h:2182:5: note: template argument deduction/substitution failed:
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/common.cpp:1097:73: note: mismatched types 'const std::basic_string<_Ch_type>' and 'const char [3]'
data_str = std::regex_replace(data_str, std::regex("\""), "\\\"");
^
At global scope:
cc1plus: warning: unrecognized command line option "-Wno-format-truncation" [enabled by default]
[5/10] /bin/c++ -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/. -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/common/grammar-parser.cpp
[6/10] /bin/cc -DGGML_USE_K_QUANTS -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -mf16c -mfma -mavx -mavx2 -pthread -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/k_quants.c
[7/10] /bin/c++ -DGGML_USE_K_QUANTS -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -pthread -std=gnu++11 -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp
FAILED: vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
/bin/c++ -DGGML_USE_K_QUANTS -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wmissing-declarations -Wno-unused-function -Wno-multichar -Wno-format-truncation -Wno-array-bounds -mf16c -mfma -mavx -mavx2 -pthread -std=gnu++11 -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:872:10: warning: unused parameter 'addr' [-Wunused-parameter]
bool raw_lock(const void * addr, size_t len) const {
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:872:10: warning: unused parameter 'len' [-Wunused-parameter]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:877:17: warning: unused parameter 'addr' [-Wunused-parameter]
static void raw_unlock(const void * addr, size_t len) {}
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:877:17: warning: unused parameter 'len' [-Wunused-parameter]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_vocab' [-Wmissing-field-initializers]
llama_hparams hparams = {};
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_ctx_train' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_ctx' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_embd' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_head' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_head_kv' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_layer' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_rot' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::n_ff' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::f_norm_eps' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::f_norm_rms_eps' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::rope_freq_base' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:1078:30: warning: missing initializer for member 'llama_hparams::rope_freq_scale' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:3910:15: error: 'is_trivially_copyable' is not a member of 'std'
static_assert(std::is_trivially_copyable<llm_symbol>::value, "llm_symbol is not trivially copyable");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:3910:52: error: expected primary-expression before '>' token
static_assert(std::is_trivially_copyable<llm_symbol>::value, "llm_symbol is not trivially copyable");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:3910:53: error: '::value' has not been declared
static_assert(std::is_trivially_copyable<llm_symbol>::value, "llm_symbol is not trivially copyable");
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In function 'llama_grammar* llama_grammar_init(const llama_grammar_element**, size_t, size_t)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:4648:75: warning: missing initializer for member 'llama_partial_utf8::value' [-Wmissing-field-initializers]
return new llama_grammar{ std::move(vec_rules), std::move(stacks), {} };
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:4648:75: warning: missing initializer for member 'llama_partial_utf8::n_remain' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In function 'void llama_model_quantize_internal(const string&, const string&, const llama_model_quantize_params*)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:5805:53: warning: missing initializer for member 'std::array<long int, 16ul>::_M_elems' [-Wmissing-field-initializers]
std::array<int64_t, 1 << 4> hist_cur = {};
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In lambda function:
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:5816:63: warning: missing initializer for member 'std::array<long int, 16ul>::_M_elems' [-Wmissing-field-initializers]
std::array<int64_t, 1 << 4> local_hist = {};
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In function 'int llama_apply_lora_from_file_internal(const llama_model&, const char*, const char*, int)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:5909:57: error: use of deleted function 'std::basic_ifstream<char>::basic_ifstream(const std::basic_ifstream<char>&)'
auto fin = std::ifstream(path_lora, std::ios::binary);
^
In file included from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:0:
/usr/include/c++/4.8.2/fstream:427:11: note: 'std::basic_ifstream<char>::basic_ifstream(const std::basic_ifstream<char>&)' is implicitly deleted because the default definition would be ill-formed:
class basic_ifstream : public basic_istream<_CharT, _Traits>
^
/usr/include/c++/4.8.2/fstream:427:11: error: use of deleted function 'std::basic_istream<char>::basic_istream(const std::basic_istream<char>&)'
In file included from /usr/include/c++/4.8.2/fstream:38:0,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:
/usr/include/c++/4.8.2/istream:58:11: note: 'std::basic_istream<char>::basic_istream(const std::basic_istream<char>&)' is implicitly deleted because the default definition would be ill-formed:
class basic_istream : virtual public basic_ios<_CharT, _Traits>
^
/usr/include/c++/4.8.2/istream:58:11: error: use of deleted function 'std::basic_ios<char>::basic_ios(const std::basic_ios<char>&)'
In file included from /usr/include/c++/4.8.2/ios:44:0,
from /usr/include/c++/4.8.2/istream:38,
from /usr/include/c++/4.8.2/fstream:38,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:
/usr/include/c++/4.8.2/bits/basic_ios.h:66:11: note: 'std::basic_ios<char>::basic_ios(const std::basic_ios<char>&)' is implicitly deleted because the default definition would be ill-formed:
class basic_ios : public ios_base
^
In file included from /usr/include/c++/4.8.2/ios:42:0,
from /usr/include/c++/4.8.2/istream:38,
from /usr/include/c++/4.8.2/fstream:38,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:
/usr/include/c++/4.8.2/bits/ios_base.h:786:5: error: 'std::ios_base::ios_base(const std::ios_base&)' is private
ios_base(const ios_base&);
^
In file included from /usr/include/c++/4.8.2/ios:44:0,
from /usr/include/c++/4.8.2/istream:38,
from /usr/include/c++/4.8.2/fstream:38,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:
/usr/include/c++/4.8.2/bits/basic_ios.h:66:11: error: within this context
class basic_ios : public ios_base
^
In file included from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:0:
/usr/include/c++/4.8.2/fstream:427:11: error: use of deleted function 'std::basic_ios<char>::basic_ios(const std::basic_ios<char>&)'
class basic_ifstream : public basic_istream<_CharT, _Traits>
^
/usr/include/c++/4.8.2/fstream:427:11: error: use of deleted function 'std::basic_filebuf<char>::basic_filebuf(const std::basic_filebuf<char>&)'
/usr/include/c++/4.8.2/fstream:72:11: note: 'std::basic_filebuf<char>::basic_filebuf(const std::basic_filebuf<char>&)' is implicitly deleted because the default definition would be ill-formed:
class basic_filebuf : public basic_streambuf<_CharT, _Traits>
^
In file included from /usr/include/c++/4.8.2/ios:43:0,
from /usr/include/c++/4.8.2/istream:38,
from /usr/include/c++/4.8.2/fstream:38,
from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:
/usr/include/c++/4.8.2/streambuf:802:7: error: 'std::basic_streambuf<_CharT, _Traits>::basic_streambuf(const std::basic_streambuf<_CharT, _Traits>&) [with _CharT = char; _Traits = std::char_traits<char>]' is private
basic_streambuf(const basic_streambuf& __sb)
^
In file included from /tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:63:0:
/usr/include/c++/4.8.2/fstream:72:11: error: within this context
class basic_filebuf : public basic_streambuf<_CharT, _Traits>
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In function 'void llama_copy_state_data_internal(llama_context*, llama_data_context*)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::n_nodes' [-Wmissing-field-initializers]
ggml_cgraph gf{};
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::n_leafs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::nodes' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::grads' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::leafs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::visited_hash_table' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::perf_runs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::perf_cycles' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6721:28: warning: missing initializer for member 'ggml_cgraph::perf_time_us' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp: In function 'size_t llama_set_state_data(llama_context*, uint8_t*)':
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::n_nodes' [-Wmissing-field-initializers]
ggml_cgraph gf{};
^
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::n_leafs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::nodes' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::grads' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::leafs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::visited_hash_table' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::perf_runs' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::perf_cycles' [-Wmissing-field-initializers]
/tmp/pip-install-ht2u5tvg/llama-cpp-python_3893f8c071b94fedb9f96176d3fecde8/vendor/llama.cpp/llama.cpp:6831:28: warning: missing initializer for member 'ggml_cgraph::perf_time_us' [-Wmissing-field-initializers]
At global scope:
cc1plus: warning: unrecognized command line option "-Wno-format-truncation" [enabled by default]
ninja: build stopped: subcommand failed.
*** CMake build failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
@serendipitYang I am getting the same results, did you figure out a workaround?
Thanks for the notes; unfortunately, I have not figured out a solution yet.
I have the same error when trying to upgrade llama-cpp-python. Any help?
Same issue. Any help?
For me, the root cause was that `<stdatomic.h>` requires at least GCC 4.9, and I had GCC 4.8. After updating GCC, the install worked fine. Please check this: https://github.com/ggerganov/llama.cpp/issues/552
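A quick way to check whether your default toolchain is the problem (this is just a sketch for checking the version string; the `4.9` threshold comes from the linked llama.cpp issue):

```shell
# Print the default g++ version and fail early if it is older than GCC 4.9,
# which llama.cpp's C++11 / <stdatomic.h> usage requires.
v=$(g++ -dumpversion)
major=$(echo "$v" | cut -d. -f1)
minor=$(echo "$v" | cut -d. -f2)
# Note: modern GCC may print just the major number (e.g. "12"); in that
# case the major check alone decides.
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 9 ]; }; then
    echo "g++ $v is new enough"
else
    echo "g++ $v is too old; install GCC >= 4.9 and set CC/CXX"
fi
```

If it is too old, installing a newer toolchain and then rebuilding with something like `CC=gcc-9 CXX=g++-9 pip install --no-cache-dir --force-reinstall llama-cpp-python` should pick it up (`gcc-9`/`g++-9` are example names; use whatever newer compiler your system provides).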
I'm using a newer gcc version than 4.8, but I still have the same issue.
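If a newer GCC is installed but the build still fails identically, the build may be picking up the old system compiler rather than the new one. Since llama-cpp-python's build reads `CMAKE_ARGS`, forcing the compiler explicitly can rule that out (a sketch; the compiler paths below are examples, substitute the actual paths from `which gcc`/`which g++` for your newer install):

```shell
# Tell CMake exactly which toolchain to use instead of the first
# gcc/g++ found on PATH (adjust the paths to your newer install)
CMAKE_ARGS="-DCMAKE_C_COMPILER=/usr/local/bin/gcc -DCMAKE_CXX_COMPILER=/usr/local/bin/g++" \
  pip install --no-cache-dir --force-reinstall llama-cpp-python
```

The `--no-cache-dir` flag matters here: pip may otherwise reuse the previously failed build environment instead of reconfiguring with the new compiler settings.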