gpt4all.cpp
Locally run an Assistant-Tuned Chat-Style LLM
gpt4all.cpp issues (6 results)
The instruction prompt and response (prompt_inp and response_inp) frequently leak into the model's output. This code prevents that from happening.
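The issue does not include the patch itself, but a minimal sketch of the idea might look like the following. Everything here is a hypothetical stand-in: token_to_str, the token values, and n_prompt are illustrative, and prompt_inp/response_inp are represented only by the count of prompt tokens. The point is simply to echo only tokens generated past the prompt boundary, so the instruction template never reaches the chat output.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for the model's detokenizer (hypothetical; the real project
// maps token ids back to text via its vocabulary).
static std::string token_to_str(int token) {
    return "tok" + std::to_string(token) + " ";
}

int main() {
    // All tokens processed so far: prompt tokens first (covering both
    // prompt_inp and response_inp), then the tokens the model generated.
    std::vector<int> tokens = {101, 102, 103, 7, 8, 9};
    size_t n_prompt = 3; // number of tokens belonging to the prompt

    // Only print tokens past the prompt boundary, so the instruction
    // prompt and response template are never echoed to the user.
    for (size_t i = n_prompt; i < tokens.size(); ++i) {
        std::fputs(token_to_str(tokens[i]).c_str(), stdout);
    }
    std::fputc('\n', stdout);
    return 0;
}
```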
Actually checking for its existence before setting it fixes this crash:

```
% ./chat
main: seed = 1680128377
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
Illegal instruction: 4
```
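A minimal sketch of such a guard, assuming a std::filesystem-based check (my choice, not necessarily the repo's): the default path comes from the log above, and the hand-off to the actual model loader is only hinted at in a comment.

```cpp
#include <cstdio>
#include <filesystem>
#include <string>

int main(int argc, char **argv) {
    const std::string model_path =
        argc > 1 ? argv[1] : "gpt4all-lora-quantized.bin";

    // Fail early with a readable error instead of crashing inside the loader.
    if (!std::filesystem::exists(model_path)) {
        std::fprintf(stderr, "error: model file '%s' does not exist\n",
                     model_path.c_str());
        return 1;
    }

    std::printf("loading model from '%s' - please wait ...\n",
                model_path.c_str());
    // ... hand off to the real model loader here ...
    return 0;
}
```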
This is probably junk.