AssertionError: Torch not compiled with CUDA enabled
As requested, here is the error I get after installing localGPT:
PS C:\localGPT> python ingest.py
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy.libs\libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll
C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
warnings.warn("loaded more than 1 DLL from .libs:"
Loading documents from C:\localGPT/SOURCE_DOCUMENTS
Loaded 2 documents from C:\localGPT/SOURCE_DOCUMENTS
Split into 1536 chunks of text
load INSTRUCTOR_Transformer
max_seq_length 512
Using embedded DuckDB with persistence: data will be stored in: C:\localGPT
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\localGPT\ingest.py:52                                                                         │
│                                                                                                  │
│ C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module. │
│ py:820 in _apply                                                                                 │
│                                                                                                  │
│   817 │ │ │ # track autograd history of `param_applied`, so we have to use                      │
│ 818 │ │ │ # with torch.no_grad(): │
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module. │
│ py:1143 in convert │
│ │
│ 1140 │ │ │ if convert_to_format is not None and t.dim() in (4, 5): │
│ 1141 │ │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() els │
│ 1142 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ ❱ 1143 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 1144 │ │ │
│ 1145 │ │ return self.apply(convert) │
│ 1146 │
│ │
│ C:\Users\Name\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py:2 │
│ 39 in _lazy_init                                                                                 │
│ │
│ 236 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 237 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 238 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 239 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 240 │ │ if _cudart is None: │
│ 241 │ │ │ raise AssertionError( │
│ 242 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
I had the same problem and fixed it with https://github.com/PromtEngineer/localGPT/issues/10#issuecomment-1567481140
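For anyone else hitting this: the assertion fires because the installed torch wheel is a CPU-only build, so any attempt to move the embedding model to `"cuda"` raises inside `torch.cuda._lazy_init`. Besides reinstalling a CUDA-enabled wheel (the fix in the linked comment), you can sidestep the crash by choosing the device defensively. A minimal sketch, with a hypothetical helper `pick_device` that is not part of localGPT:

```python
def pick_device() -> str:
    """Return "cuda" only when the installed torch build can actually use it.

    torch.cuda.is_available() returns False on CPU-only wheels instead of
    raising, so gating on it avoids the AssertionError seen above.
    """
    try:
        import torch  # imported lazily so the helper also works if torch is absent

        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"


# The chosen device string can then be passed wherever ingest.py selects a
# device, e.g. HuggingFaceInstructEmbeddings(model_kwargs={"device": pick_device()}).
print(pick_device())
```

On a machine with the CPU-only wheel (or no torch at all) this prints `cpu` rather than crashing; on a working CUDA install it prints `cuda`.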