firestar222

3 comments by firestar222

Same for me, and "Upcast cross attention layer to float32" does not work for me either. I'm on an M2 Max.

I am on a Mac M2 Max with 64 GB of RAM and have the same issue. --force-upcast-attention doesn't do anything to help. Comfy has always been very reliable and stable on my machine,...
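For context, a rough sketch of what the "upcast attention to float32" option / --force-upcast-attention flag is meant to do (this is an illustrative example, not ComfyUI's actual implementation): the attention math is run in float32 even when the model weights are float16, which works around precision problems on some backends.

```python
import torch
import torch.nn.functional as F

def attention_upcast_fp32(q, k, v):
    # Illustrative sketch: compute attention in float32 even if the
    # inputs are float16, then cast the result back to the original dtype.
    orig_dtype = q.dtype
    out = F.scaled_dot_product_attention(q.float(), k.float(), v.float())
    return out.to(orig_dtype)
```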

Update: I had luck updating torch to 2.3; now everything seems to be working again.
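If anyone else wants to try the same fix, a quick way to confirm which torch version is installed and that the Apple-silicon (MPS) backend is usable (the exact upgrade command depends on how you installed ComfyUI's environment):

```python
import torch

print(torch.__version__)                  # expect something like 2.3.x after upgrading
print(torch.backends.mps.is_available())  # True means PyTorch can use the M2 GPU via MPS
```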