rohitanshu
@MarineBirch723 It throws that error once at the start and then about 8 more times in a row before training finally starts. Did you wait until then?
@user83922 Yes, I have the data loader workers set to 8, so that's the reason. Good to know, thanks! @bmaltais What do you set it to? 1 or 0? Or are you...
@bmaltais "ValueError: persistent_workers option needs num_workers > 0" So I had to set it to 1.
@RandomGitUser321 Does it offer any speed or memory benefits for you?
@RandomGitUser321 Thanks for sharing your experience. I was thinking of installing Triton on Windows, but I see it's not useful. Maybe it only helps on Linux.
@maray29 Have you tried applying a VAE separately?
@specblades I had the same error. I solved it by removing `.int()` from line 791 of `train_tools.py`, which was giving the error. I don't know if it has any other...
@Haoming02 What model are you training on? Also, after commenting out that line, did your LoRA turn out okay in the end?
Same here: training on SDXL gives almost non-existent results, even at higher learning rates. With normal training (with images), SDXL never has this issue. I wonder what's going on! Maybe...
It seems you've checked `split mode` but didn't set `train blocks` to `single`.