Carl Kenner
I don't know how much it matters, but it presumably helps at least a little, so we should do it correctly by default.
> This PR implements the exact prompts [a777c05](https://github.com/oobabooga/text-generation-webui/commit/a777c058aff905e142700b912cd8e3ddbd3fe8e1)

It's wrong. That's not what the context string is supposed to be.

> If someone can find an improvement to the templates,...
> Seems to work fine for me as is

Then please provide the steps you used to make it work.
I added support for this in my pull request #1395. Let me know if there's anything else we need to do beyond what's already in it.
Yeah, the quality of this model isn't great. But I don't think that's our fault (if you're using my pull request). I added presets so you can run the model...
I think you are using a later version of the one-click installer than other people.

* `start_windows` now both installs and runs it.
* miniconda is now used instead of...
I told you: use `start_windows.bat`.
> got it just worked now I am using a machine with 6 GBs of VRAM would that be enough? Also is there a way to add models I'm trying...
```
Gradio HTTP request redirected to localhost :)
Loading llava-13b-4bit-128g...
Could not find the quantized model in .pt or .safetensors format, exiting...
Done!
Press any key to continue . ...
Using `wojtab_llava-13b-v0-4bit-128g` instead of `llava-13b-4bit-128g`, I'm now getting this error:

```
Gradio HTTP request redirected to localhost :)
Loading wojtab_llava-13b-v0-4bit-128g...
Found the following quantized model: models\wojtab_llava-13b-v0-4bit-128g\llava-13b-v0-4bit-128g.safetensors
Traceback (most recent call...
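For anyone hitting the same "Could not find the quantized model" message: the folder name under `models/` has to contain a `.pt` or `.safetensors` file for the loader to find anything. The sketch below is my own illustration of that search logic, not the webui's actual loader code; the function name and arguments are hypothetical.

```python
from pathlib import Path

def find_quantized_checkpoint(models_dir: str, model_name: str):
    """Roughly how the GPTQ loader searches for quantized weights:
    it looks inside models/<model_name>/ for a .safetensors or .pt file.
    Illustrative sketch only -- not the webui's real implementation."""
    model_dir = Path(models_dir) / model_name
    for pattern in ("*.safetensors", "*.pt"):
        matches = sorted(model_dir.glob(pattern))
        if matches:
            return matches[0]
    # Nothing found: this is the case that produces the
    # "Could not find the quantized model" error above.
    return None
```

So pointing the webui at `llava-13b-4bit-128g` fails simply because no folder of that name holds a checkpoint, while `wojtab_llava-13b-v0-4bit-128g` does.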