kmf123kmf
Same issue here with a 10GB 3080. I can confirm that disabling cross attention optimizations fixes the issue. Oddly, this only seems to affect training on the v1.5 model. Training an embedding on v2.1...
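For anyone who wants to try the same workaround, here is a minimal sketch of how cross attention optimization is usually turned off via the launch arguments in `webui-user.bat` (or `webui-user.sh` with `export` on Linux). This assumes the `--disable-opt-split-attention` flag is still present in your webui version; newer builds also expose the same thing as a "Cross attention optimization" dropdown under Settings > Optimizations.

```bat
rem Sketch only: forces the webui not to pick a cross attention optimization.
rem Drop --xformers from your existing args as well, since it re-enables one.
set COMMANDLINE_ARGS=--disable-opt-split-attention
```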
The video posted above is indeed a sort of "fix". It downgrades torch/cuda and xformers to previously working versions. I have no idea what the potential consequences might be in...
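For reference, the launcher reads `TORCH_COMMAND` and `XFORMERS_PACKAGE` from `webui-user`, so pinning versions there is the usual way to do such a downgrade without editing the install scripts. The sketch below uses one older torch/xformers pairing purely as an example; it is not necessarily the combination from the video, so substitute whatever versions it recommends.

```bat
rem Hedged example of pinning an older torch/cu117 + xformers combination.
rem The version numbers here are illustrative, not taken from the video.
set TORCH_COMMAND=pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
set XFORMERS_PACKAGE=xformers==0.0.16
set COMMANDLINE_ARGS=--xformers --reinstall-torch --reinstall-xformers
```

After the downgrade completes, `--reinstall-torch` and `--reinstall-xformers` can be removed again so the launcher doesn't reinstall on every start.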
> There was a new discovery here. In previous tests, I had set the PWM Output mode for ch1 to ch4 to 333Hz, but when I changed it to the...
Not trying to be annoying, but is there anything new to report on this?
> This is somewhat related to #1858

After reading it, implementing #1858 would be my preferred fix, if possible.