araleza
Hey, thanks for providing this super-resolution network, it's produced some great output for me. I do have an issue to report, though. As well as upsampling the image, the...
### Describe the bug
Hi, I `git clone`d a fresh checkout of webui, and then ran `./start_linux.sh`. It installed some stuff, but then failed with: ``` ******************************************************************* * WARNING: You...
So I have a GPTQ llama model I downloaded (from TheBloke), and it's already 4 bit quantized. I have to pass in False for the load_in_4bit parameter of: ``` model,...
Looking at the DoRA paper, it seemed to get impressive results. From: https://arxiv.org/pdf/2402.09353 There even seemed to be some indications that QDoRA could outperform full fine-tuning. Is...
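For context, the core idea in the DoRA paper is to decompose a weight update into a magnitude part and a direction part: the merged weight is W' = m · (W0 + BA) / ||W0 + BA||_c, where B and A are the usual low-rank LoRA factors and ||·||_c is a column-wise norm. A minimal NumPy sketch of that reparameterization (shapes and names here are illustrative, not taken from any particular implementation):

```python
import numpy as np

def dora_merge(W0: np.ndarray, A: np.ndarray, B: np.ndarray,
               m: np.ndarray) -> np.ndarray:
    """Merge a DoRA-style update into a base weight matrix.

    W0: base weight, shape (d, k)
    B @ A: low-rank update, shapes (d, r) @ (r, k)
    m: learned magnitude vector, shape (1, k)

    Returns m * (W0 + B @ A) / ||W0 + B @ A||_c, where the norm is
    taken column-wise, so each output column has magnitude m[0, j].
    """
    V = W0 + B @ A                                   # directional component
    col_norms = np.linalg.norm(V, axis=0, keepdims=True)  # ||.||_c per column
    return m * (V / col_norms)
```

The point of the decomposition is that the magnitude `m` is trained separately from the low-rank direction update, which the paper argues lets the adapter's learning dynamics track full fine-tuning more closely than plain LoRA.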
If you finetune SDXL base with: ``` --train_text_encoder --learning_rate_te1 1e-10 --learning_rate_te2 1e-10 --fused_backward_pass ``` Then it will train fine. But if you stop training and restart by training from the...
I've recently switched over to doing Flux full fine tuning instead of LoRA training. But I've found that sample image generation while training is very slow. I'm using `--blocks_to_swap 35`...
This is a new feature, currently only for Flux LoRA training (although it could later be extended to full fine-tuning as well). It analyses the latents of training images,...
So I just discovered this Flux (SD3 branch) parameter: ``` --timestep_sampling flux_shift ``` Previously I'd been using ``` --timestep_sampling shift ``` due to: 1) The README.md having `--timestep_sampling shift` in...
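For context on the difference between the two options: with a static `shift`, a fixed factor is applied to every sampled timestep, while `flux_shift` derives the factor from the latent sequence length, i.e. from the training resolution. A minimal sketch of the shift math as used in flow-matching models like SD3/Flux (the constants and function names are assumptions for illustration, not sd-scripts' exact code):

```python
import math

def time_shift(t: float, shift: float) -> float:
    """Map a uniform timestep t in (0, 1] through the flow-matching shift:
    t' = shift * t / (1 + (shift - 1) * t).
    shift > 1 pushes sampling toward the high-noise end; shift = 1 is identity,
    and the endpoints t = 0 and t = 1 are preserved."""
    return shift * t / (1.0 + (shift - 1.0) * t)

def flux_dynamic_shift(image_seq_len: int,
                       base_len: int = 256, max_len: int = 4096,
                       base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    """Resolution-dependent shift factor: interpolate mu linearly in the
    latent sequence length, then exponentiate. The default constants here
    are the commonly published Flux values; treat them as assumptions."""
    m = (max_shift - base_shift) / (max_len - base_len)
    mu = m * image_seq_len + base_shift - m * base_len
    return math.exp(mu)
```

So the practical difference is that under `flux_shift`, larger training resolutions (longer latent sequences) get a larger shift, concentrating more training timesteps at high noise, whereas a static `shift` uses one factor regardless of resolution.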
While measuring sd-scripts' LoRA key lengths with Tensorboard, I noticed that from time to time there were big jumps. The jumps seem to correspond to the noise timestep value for...
### Name and Version
I'm using the current latest llama-server.

### Operating systems
Linux

### Which llama.cpp modules do you know to be affected?
llama-server

### Command line ```shell llama.cpp/build/bin/llama-server...