bigeye-studios
> Same error when using flux1-dev-fp8 with fp8_e4m3fn, but if you change the weight dtype to "default" (fp16), it works.

This worked for me. Thank you!
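For anyone else hitting this on Apple Silicon: the root cause is that PyTorch's MPS backend has no Float8_e4m3fn support, so fp8 weights have to be cast to a supported dtype (such as fp16) before they reach the device, which is effectively what the "default" weight dtype setting does. A minimal sketch of the idea (the function name is a placeholder, not ComfyUI's actual loader):

```python
import torch

def to_mps_safe(state_dict):
    """Cast any fp8 weights to fp16 so they can be moved to the MPS device."""
    out = {}
    for name, tensor in state_dict.items():
        if tensor.dtype == torch.float8_e4m3fn:
            # MPS cannot hold float8 tensors; do the cast on CPU first.
            tensor = tensor.to(torch.float16)
        out[name] = tensor.to("mps")
    return out
```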
Did anyone find a solution for this? `SamplerCustomAdvanced: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.` I'm on a MacBook Pro...
I'm trying to run ComfyUI with Flux and am getting a similar BFloat16 error. Would updating PyTorch solve the issue?
I upgraded PyTorch and things were working; however, now I'm getting this message: `SamplerCustomAdvanced: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that...`
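If upgrading PyTorch alone doesn't fix it, you can confirm the problem is the fp8 dtype rather than your install with a quick check like this (a hedged sketch; run it in the same Python environment ComfyUI uses):

```python
import torch

print(torch.__version__)
print(torch.backends.mps.is_available())  # should be True on Apple Silicon

x = torch.zeros(2, dtype=torch.float8_e4m3fn)
try:
    x.to("mps")  # expected to fail: MPS has no Float8_e4m3fn support
except Exception as e:
    print(e)

# Casting to fp16 first is the workaround from the comment above.
print(x.to(torch.float16).to("mps").dtype)  # torch.float16
```

Inside ComfyUI itself, setting the weight dtype to "default" (as above) avoids the fp8 path; if your build supports the `--force-fp16` launch flag, that forces fp16 weights globally.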