CrazyKrow
Yeah, I think this could be great. I'm using the Alpaca LoRA with the chat interface, and to make it work I set the Bot name to "### Response" and...
I get the same error. I did a fresh install and still get the same error, even after your last commit:
```
Loading settings from settings.json...
Loading llama-13b-hf...
===================================BUG...
```
Try this instead: `python server.py --load-in-4bit --model llama-7b-hf`. You are using `python server.py --load-in-4bit --model llama-7b`, and your model folder is not named that way.
You have to rename "LLaMATokenizer" to "LlamaTokenizer" in the tokenizer_config.json in your model folder.
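That rename can also be done with a short script instead of editing the file by hand. This is only a sketch: the `tokenizer_config.json` path you pass in depends on where your model folder lives, and the `fix_tokenizer_class` helper name is made up for illustration.

```python
import json
from pathlib import Path

def fix_tokenizer_class(config_path):
    """Rename the old 'LLaMATokenizer' class to the 'LlamaTokenizer'
    casing that newer Transformers releases expect."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2))
    return config["tokenizer_class"]
```

You would call it with something like `fix_tokenizer_class("models/llama-7b-hf/tokenizer_config.json")`, adjusting the path to your own model folder.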
Go to the Windows terminal and type `ipconfig`, copy the IPv4 address you see there, paste it into your browser, append `:7860`, and hit Enter. For...
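The same lookup can be sketched in Python rather than reading `ipconfig` output by hand. Assumptions here: the web UI is listening on Gradio's default port 7860, and the `8.8.8.8` address is never actually contacted, it only forces the OS to pick the outgoing LAN interface.

```python
import socket

def local_webui_url(port=7860):
    """Best-effort discovery of this machine's LAN IPv4 address
    (the same value ipconfig shows), then build the web UI URL."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket sends no packets; it just selects
        # the interface whose address we then read back.
        s.connect(("8.8.8.8", 80))
        ip = s.getsockname()[0]
    except OSError:
        ip = "127.0.0.1"  # fall back to localhost if offline
    finally:
        s.close()
    return f"http://{ip}:{port}"
```

Opening the returned URL from another device on the same network should reach the UI, provided the server was started listening on the LAN interface.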
"Transformers bump" commit ruins gpt4-x-alpaca if using an RTX3090: model loads, but talks gibberish
I did get it working after changing the tokenizer files, but now, after responding correctly to the prompt, it keeps generating random text. With that many problems, wouldn't it be...
Got it. Is there a way to stop the model from generating random text after it has finished responding to the prompt? Because that is the only problem I'm having...
I'm having the same issue: image generation sometimes gets stuck at 100%, and I have to restart the webui every time that happens.
This didn't fix the issue for me; I have the same problem as Dasor92. All extensions still show up as "latest" even though they have updates available when I check...