TechInnovate01
> > > The `/localGPTUI/templates/home.html` file, when loaded directly, has all the options you would expect to see. None of the menus are being rendered; you have...
I also have a high-end GPU and wish to run this in a production environment; however, it's working only on CPU and memory, and not a single % of GPU is being...
> none is not an allowed value (type=type_error.none.not_allowed)

First of all, the solution for your error is to run the command below: `CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir` if...
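For convenience, the rebuild from that command can be run as below. The version pin `0.1.83` comes from the comment above; note that newer llama-cpp-python releases replaced the `-DLLAMA_CUBLAS=on` flag with `-DGGML_CUDA=on`, so adjust the flag to match your version:

```shell
# Rebuild llama-cpp-python from source with cuBLAS (CUDA) support so that
# GGUF inference can be offloaded to the GPU. Requires the CUDA toolkit
# (nvcc) to be installed and on PATH.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install llama-cpp-python==0.1.83 --no-cache-dir
```

The `--no-cache-dir` flag matters: without it, pip may reuse a previously built CPU-only wheel instead of recompiling.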
> hi, what did you change in the run_localgpt.py or ingest.py to get it to work with the 70B model? many thanks

You have to select the desired models in **`constants.py`**...
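In localGPT, the model is selected by editing the `MODEL_ID` / `MODEL_BASENAME` pair in `constants.py`. A minimal sketch of what such a selection looks like; the specific repo and file names below are illustrative examples, not a recommendation:

```python
# constants.py (excerpt) -- pick the model to load.
# MODEL_ID is a Hugging Face repo id; MODEL_BASENAME is the quantized
# GGUF file within that repo. Both values here are hypothetical examples.
MODEL_ID = "TheBloke/Llama-2-70B-Chat-GGUF"
MODEL_BASENAME = "llama-2-70b-chat.Q4_K_M.gguf"
```

Only one `MODEL_ID`/`MODEL_BASENAME` pair should be active at a time; comment out the others.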
> Hi all,
>
> My situation is that when I run the code from the terminal, it goes very well on the GPU...
Just for clarity: GGUF models are quantized models and are meant to run on CPU and memory. If you want to run the model on the GPU, you must select...
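Whether a GGUF model stays on CPU or is offloaded to the GPU is controlled, in llama-cpp-python, by the `n_gpu_layers` parameter (`0` keeps everything on CPU; `-1` offloads all layers, which requires a cuBLAS-enabled build). A small helper sketching that decision, with the (hypothetical) model path shown only in a comment:

```python
def choose_n_gpu_layers(have_cuda_build: bool) -> int:
    """Pick an n_gpu_layers value for llama-cpp-python.

    GGUF files are just quantized weights; whether they run on CPU or GPU
    depends on how llama-cpp-python was compiled and on this parameter.
    """
    # -1 = offload every layer to the GPU; 0 = pure CPU inference.
    return -1 if have_cuda_build else 0


# Usage with llama-cpp-python (needs a local GGUF file; path is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf",
#             n_gpu_layers=choose_n_gpu_layers(have_cuda_build=True))
```

If the package was installed as a CPU-only wheel, setting `n_gpu_layers` has no effect; the cuBLAS rebuild mentioned earlier in the thread is required first.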
> Hey,
>
> I have run ingest.py and it says that my provided PDFs are ingested (no errors). However, if I ask my model something about these documents...
Same error after using the updated API and the latest code.