Priestru

7 issues by Priestru

### Describe the bug

```
Traceback (most recent call last):
  File "E:\LLaMA\oobabooga-windows\text-generation-webui\server.py", line 308, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\LLaMA\oobabooga-windows\text-generation-webui\modules\models.py", line 106, in load_model
    from modules.llamacpp_model_alternative import LlamaCppModel
  File "E:\LLaMA\oobabooga-windows\text-generation-webui\modules\llamacpp_model_alternative.py", ...
```

bug

Currently it's indicated by an orange line at the top that usually isn't seen at all. You have to scroll to the top to check it; can we move it to...

enhancement

Also, after I updated it manually, I got the following:

```
  File "server.py", line 914, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/mnt/e/LLaMA/Ubuntu/text-generation-webui/modules/models.py", line 138, in load_model
    model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*ggml*.bin'))[0]
```
...

enhancement
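The `IndexError` in that traceback comes from indexing an empty list when no `*ggml*.bin` file matches the glob. A minimal sketch of a lookup that fails with a clearer message instead; the helper name and error text are hypothetical, not the project's actual code:

```python
from pathlib import Path

def find_ggml_model(model_dir: str, model_name: str) -> Path:
    """Return the first *ggml*.bin file under model_dir/model_name,
    raising a descriptive error instead of a bare IndexError."""
    model_path = Path(model_dir) / model_name
    candidates = sorted(model_path.glob('*ggml*.bin'))
    if not candidates:
        raise FileNotFoundError(
            f"No *ggml*.bin file found in {model_path}; "
            "check that the model was downloaded and named correctly."
        )
    return candidates[0]
```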

We may need an option to put the immovable part of the prompt (such as character/user descriptions and dialogue examples) at the very beginning of the prompt and never change it at...
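A minimal sketch of the idea, assuming a simple string-based history; every name here is hypothetical, and real code would count tokens rather than characters:

```python
def build_prompt(fixed_prefix: str, history: list[str], max_chars: int) -> str:
    """Keep the character/user description block byte-identical at the start
    and drop the oldest history entries first when the prompt gets too long."""
    budget = max_chars - len(fixed_prefix)
    kept: list[str] = []
    used = 0
    # Walk the history from newest to oldest, keeping what still fits.
    for message in reversed(history):
        if used + len(message) > budget:
            break
        kept.append(message)
        used += len(message)
    return fixed_prefix + ''.join(reversed(kept))
```

Because the prefix never moves or changes, the tokens for it stay identical across generations, which is what lets a prompt cache reuse them.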

https://github.com/magic-research/magic-animate/assets/108554892/590e4986-1859-4008-be58-9e82e97bf70f This was generated with 8 steps, guidance 4, and an LCM-trained checkpoint. It looks almost like a usual 25-step generation. It should work a bit better if we use LCM...
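For reference, a minimal sketch of LCM sampling with diffusers, using the LCM-LoRA route on a plain text-to-image pipeline rather than MagicAnimate itself; the model IDs and prompt are just examples:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load an LCM-distilled LoRA so that
# a handful of steps with low guidance is enough.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "a portrait photo",
    num_inference_steps=8,   # the 8 steps mentioned above
    guidance_scale=4,        # the guidance value mentioned above
).images[0]
```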

I guess if one incorporates PULID into the workflow, the face won't change so much anymore.

Any chance to make it compatible?