Results: 15 comments of jiaxiangc

> @jiaxiangc With lora rank=8, I get around 0.5M trainable parameters which seems to be consistent with the information from the paper. I've followed your suggestions from [here](https://github.com/huggingface/diffusers/issues/7243#issuecomment-1990234653) and believe...

@a-r-r-o-w Maybe you can enable group norm training (see the sketch below). Fine-tune on a general model, such as SD1.5. Make sure your dataset resolutions are correct. Best.
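To illustrate what I mean by enabling group norm, here is a minimal sketch for a diffusers SD1.5 UNet; the model id and the freeze/unfreeze pattern are assumptions for illustration, not the exact training script:

```python
import torch
from diffusers import UNet2DConditionModel

# Load the SD1.5 UNet (assumed base model for this sketch).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Freeze everything first, as in a usual LoRA setup.
unet.requires_grad_(False)

# Re-enable gradients for the GroupNorm layers so they are trained
# together with the LoRA weights.
for module in unet.modules():
    if isinstance(module, torch.nn.GroupNorm):
        for param in module.parameters():
            param.requires_grad_(True)

# Pass only the trainable parameters (LoRA + group norm) to the optimizer.
trainable_params = [p for p in unet.parameters() if p.requires_grad]
```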

@rootonchair @a-r-r-o-w @PacificG Thanks for your attention. For inference, we have provided a Hugging Face demo and a Replicate demo that you can use. We will support ComfyUI as well. For training, it actually is easy...

@a-r-r-o-w Taking SD1.5 as an example, we initially experimented with group norm and LoRA training enabled at resolutions from 128 to 1024. We found that at resolutions

@sczhou @pq-yang Same questions.

I tried to train a LoRA with PEFT, and the training results are correct. But I cannot load the LoRA checkpoint correctly.
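For context, this is roughly how I am trying to load it; the paths are placeholders, and the two options assume either a diffusers-format LoRA file or a directory saved with PEFT's `save_pretrained`:

```python
from diffusers import StableDiffusionPipeline

# Load the base pipeline (SD1.5 assumed here).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Option 1: diffusers-format LoRA weights
# (e.g. a directory containing pytorch_lora_weights.safetensors).
pipe.load_lora_weights("path/to/lora_dir")

# Option 2: attach a PEFT checkpoint directly to the UNet instead.
# from peft import PeftModel
# pipe.unet = PeftModel.from_pretrained(pipe.unet, "path/to/peft_lora")
```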

I also want to know the video resolution, thanks.

Another reason it is slow is the large resolution, which drives up the time complexity of attention. That said, I'm not sure whether model size or resolution contributes more~
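As a rough back-of-the-envelope (assuming SD-style 8x latent downsampling and full self-attention over all latent tokens):

```python
# Token count grows with the square of resolution, and full self-attention
# cost grows with the square of the token count.
for res in (512, 1024, 2048):
    tokens = (res // 8) ** 2  # latent tokens, e.g. 64*64 = 4096 at 512px
    print(f"{res}px -> {tokens} tokens, attention cost ~ {tokens ** 2:,}")
```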

@ziyannchen Does tiled sample mean super-resolving the image tile by tile? For example, if an image is 1024x1024 and tile_stride=256, would the image be split into four tiles, each tile super-resolved separately, and then the results stitched back together?
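For reference, this is the kind of tiling I have in mind; the function, tile size, and upscaling factor are illustrative assumptions on my side, not the actual implementation:

```python
import torch

def tiled_sr(image, sr_fn, tile_size=512, tile_stride=256, scale=4):
    """Super-resolve `image` (C, H, W) tile by tile and stitch the results.

    Overlapping regions (stride < size) are averaged. `sr_fn` is assumed to
    upscale a (C, t, t) tile to (C, t*scale, t*scale). Assumes H and W are
    covered exactly by the tiling grid.
    """
    _, h, w = image.shape
    out = torch.zeros(image.shape[0], h * scale, w * scale)
    weight = torch.zeros_like(out)

    for top in range(0, h - tile_size + 1, tile_stride):
        for left in range(0, w - tile_size + 1, tile_stride):
            tile = image[:, top:top + tile_size, left:left + tile_size]
            sr_tile = sr_fn(tile)
            ys, xs = top * scale, left * scale
            out[:, ys:ys + sr_tile.shape[1], xs:xs + sr_tile.shape[2]] += sr_tile
            weight[:, ys:ys + sr_tile.shape[1], xs:xs + sr_tile.shape[2]] += 1

    # Average the overlapping regions; clamp avoids division by zero
    # for any pixels the grid did not cover.
    return out / weight.clamp(min=1)
```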

@0x3f3f3f3fun Is there any model structure difference between LAControlNet and ControlNet? Can you convert LAControlNet to diffusers?