Priestru

Results: 34 comments by Priestru

> When trying to load vicuna models 4_2 and 4_3 from [eachadea's repo](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1) we're getting "unrecognized tensor type 4" and "... type 5" errors. Possibly requires updating this repo to...

I do have llama.cpp separately and it does load this very same model, but the webui for some reason doesn't anymore. From llama.cpp:

llama_model_load: loading tensors from 'E:\LLaMA\oobabooga-windows\text-generation-webui\models\ggml-vicuna-13b-4bit-rev1\ggml-vicuna-13b-4bit-rev1.bin'
llama_model_load: model...

I tried `git pull`, reinstalling, and `pip install llama-cpp-python`. Nothing helps.

E:\LLaMA\oobabooga-windows\text-generation-webui\modules>pip install llama-cpp-python
Requirement already satisfied: llama-cpp-python in c:\python311\lib\site-packages (0.1.25)
Requirement already satisfied: typing-extensions>=4.5.0 in c:\python311\lib\site-packages (from llama-cpp-python) (4.5.0)

Yet it doesn't work. How do I make the GUI notice llama_cpp?
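One way to narrow this down is to ask the same interpreter whether it can see the package at all; a minimal sketch (the module name `llama_cpp` is the top-level module that llama-cpp-python installs, and this check is a generic diagnostic, not part of the webui):

```python
import importlib.util

def module_visible(name: str) -> bool:
    """Return True if the current interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

# The webui only enables llama.cpp support if `llama_cpp` imports cleanly,
# so running this check with the same Python the webui uses tells you
# whether the problem is the install or the code path.
print(module_visible("llama_cpp"))
```

If this prints `False` from the webui's own Python, the package landed in a different interpreter's site-packages.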

Downgraded my llama-cpp-python to 0.1.23 as per the instructions, but nothing changed:

E:\LLaMA\oobabooga-windows>pip install llama-cpp-python==0.1.23
Requirement already satisfied: llama-cpp-python==0.1.23 in c:\python311\lib\site-packages (0.1.23)
Requirement already satisfied: typing-extensions>=4.5.0 in c:\python311\lib\site-packages (from llama-cpp-python==0.1.23) (4.5.0)
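When pip says "Requirement already satisfied" but the app still misbehaves, it helps to confirm which version the interpreter itself reports; a minimal sketch using the standard library (the distribution name `llama-cpp-python` is taken from the pip output above):

```python
from importlib import metadata

def installed_version(dist: str):
    """Return the installed version string of a distribution, or None."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

# pip reported 0.1.23, but pip and the webui may be bound to different
# interpreters; asking from Python confirms what this interpreter sees.
print(installed_version("llama-cpp-python"))
```

A mismatch between this output and pip's report is a strong sign of two Python installs fighting each other.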

Launched with arguments that directly mention the model, `call python server.py --auto-devices --chat --threads 8 --model ggml-vicuna-13b-4bit-rev1`, and got the same error as before. I even tried rebooting, just in case...

Okay, I reverted part of the commit in modules/models.py back to `from modules.llamacpp_model import LlamaCppModel` and now it loads and works. (It works very badly, so I really hope...
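The revert amounts to restoring that import at the top of modules/models.py. A softer variant, sketched below as an assumption rather than the actual webui code, guards the import so the app degrades gracefully instead of crashing when llama-cpp-python is absent:

```python
# Hypothetical sketch of a guarded import; the real modules/models.py
# layout is an assumption. Only the import name comes from the comment above.
try:
    from modules.llamacpp_model import LlamaCppModel  # line restored by the revert
    HAVE_LLAMACPP = True
except ImportError:
    LlamaCppModel = None
    HAVE_LLAMACPP = False

# Downstream code can branch on the flag instead of failing at import time.
print(HAVE_LLAMACPP)
```

With a guard like this, a missing compiler on Windows would disable the llama.cpp loader rather than break the whole webui.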

Make sure that you have Python 3.10 on Windows; otherwise it's one trouble after another. Also, installing the requirements doesn't help. :C
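Since the pip output above shows packages landing in a Python 3.11 site-packages, one way to confirm which interpreter is actually answering is to ask it directly; a minimal sketch:

```python
import sys

# The one-click installer targets Python 3.10; packages installed into a
# different interpreter (e.g. a system-wide 3.11) are invisible to the webui.
print(sys.executable)
print(f"{sys.version_info.major}.{sys.version_info.minor}")
```

If the executable path points at `c:\python311\...` while the webui runs its own bundled Python, that explains why installs "succeed" yet change nothing.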

Until ooba fixes it, we can help ourselves by reverting this commit to models.py: https://github.com/oobabooga/text-generation-webui/commit/03cb44fc8ca24c6458dbd25c31acd1e8cbdfcde2

> This error is caused by the temporary removal of llama-cpp-python as it requires a local compiler to compile it, which causes issues with the Windows one-click-installer. > > We...