RuoyiDu

Results 8 comments of RuoyiDu

Hi @davizca, please try to set `view_batch_size` to 16. It should work for 3090 and will make inference faster.
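For context, a minimal sketch of what `view_batch_size` controls (this is a hypothetical helper, not DemoFusion's actual code): the pipeline denoises many overlapping local views each step, and batching them amortizes per-call overhead on the GPU.

```python
import numpy as np

def denoise_views(views, step_fn, view_batch_size=16):
    """Hypothetical sketch: run the denoising step over local views in
    batches of `view_batch_size` instead of one call per view."""
    outs = []
    for i in range(0, len(views), view_batch_size):
        batch = np.stack(views[i:i + view_batch_size])  # (B, H, W)
        outs.extend(step_fn(batch))                     # split back into views
    return outs
```

A larger `view_batch_size` means fewer, bigger kernel launches, which is why it speeds things up on a 24 GB card like the 3090 as long as the batch still fits in memory.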

Hi @davizca, this is very strange now. Are you running on a laptop with RTX3090? The power of the GPU also affects the inference time -- I'm using the RTX3090...

Hi @davizca, on my server, it takes about 80s under full load. I'll try to optimise the speed of the decoding. But it looks like there are other reasons here...

Hi @siraxe @davizca. Can you guys try to generate at 2048x2048 and set `multi_decoder=False`? For generating 2048x2048 images on 3090, we don't need the tiled decoder. Then we can see...
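To illustrate why `multi_decoder=False` is a useful test, here is a toy sketch of the trade-off (an illustrative helper under assumed shapes, not the repo's actual VAE code): tiled decoding swaps one full-size decode for many small overlapping ones, which saves memory but adds per-tile overhead.

```python
import numpy as np

def decode(latent, decode_fn, multi_decoder=True, tile=64, overlap=16):
    """Sketch of the multi_decoder switch on a 2-D latent:
    False -> one full-size decode; True -> overlapping tiled decodes
    that are averaged back together."""
    if not multi_decoder:
        return decode_fn(latent)            # single call, higher peak memory
    h, w = latent.shape
    out = np.zeros((h, w), dtype=float)
    weight = np.zeros((h, w), dtype=float)
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = latent[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += decode_fn(patch)
            weight[y:y + tile, x:x + tile] += 1
    return out / weight                     # blend the overlaps
```

If a single decode of a 2048x2048 image already fits on the 3090, the tiled path only adds overhead, so comparing the two settings isolates whether the decoder is the bottleneck.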

Thanks @siraxe! But it's still much slower than on my machine... It seems the decoder is quite slow on your PC, which makes it ridiculously slow when using the tiled decoder....

Hi guys @davizca @siraxe @Yggdrasil-Engineering, I found a small mistake at line #607: `pad_size = self.unet.config.sample_size // 4 * 3` should be `pad_size = self.unet.config.sample_size // 8 * 3`. This...
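The arithmetic behind the fix can be checked directly. Assuming the SDXL UNet default of `sample_size = 128` (1024 px divided by the VAE scale factor of 8), the two divisors give very different padding in latent pixels:

```python
class UNetConfig:
    # SDXL default: 1024-px training resolution / VAE scale factor 8
    sample_size = 128

# original line #607 (too much padding)
pad_buggy = UNetConfig.sample_size // 4 * 3   # 128 // 4 * 3 = 96
# corrected line (intended padding)
pad_fixed = UNetConfig.sample_size // 8 * 3   # 128 // 8 * 3 = 48
```

So the buggy version pads twice as much as intended, which inflates the area that has to be processed.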

Hi @chenpipi0807, thanks for your interest. As I mentioned in many places, DemoFusion is proposed for high-resolution generation. And a potential application is that people can use a real image as...

> Hi, I was trying to run the repository but facing some difficulty in model training. Can the authors share the ViT-B-16 model weight files after they fine-tuned it for...