Boyan Yanevsky

11 comments by Boyan Yanevsky

The installer should have included a selector for the RTX 5000 series with PyTorch 2.8.0 / CUDA 12.8 by now. I guess I'm going back to the manual installation way with...

The only way I could get it to work was by doing a clean manual install, then PyTorch 2.8.0 over it and the latest bitsandbytes from the GH repo. After that I had to...

> Cannot install bitsandbytes. I do this: `pip install --force-reinstall --no-cache-dir bitsandbytes`, then a lot gets uninstalled, downloaded, and reinstalled. But then? ERROR: pip's dependency resolver does not currently take...

@Blackjack356 I haven't checked with the latest Comfy updates, but before that I had to mute the Comfy function that disables torch compile in the code. This function seems to...
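
For reference, a minimal sketch of that muting, assuming it runs before ComfyUI's modules are imported (the stand-in name is mine, not Comfy's):

```
import torch

# Sketch: replace torch.compiler.disable with a pass-through so functions
# Comfy wraps with it stay visible to torch.compile. The real decorator
# is used both as @torch.compiler.disable and @torch.compiler.disable(),
# so the stand-in handles both call styles.
def _passthrough_disable(fn=None, recursive=True):
    if fn is None:
        return lambda f: f  # factory style: @torch.compiler.disable(...)
    return fn               # direct style: @torch.compiler.disable

torch.compiler.disable = _passthrough_disable
```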

@jovan2009 I was also using the FP16 on 64GB RAM with the **--cache-none** startup argument. This is the only way it could run the models in sequence and not...

This model would be perfect for an NVFP4 or INT4 variant, yes indeed :) Hopefully we'll see Wan2.2 soon as well!

Is there more information about this? Has anyone managed to fix the black output? I'm using SageAttention with the `--use-sage-attention` startup argument. Previously, in Wan2.1, I would get a black image...

@StrongerXi Here is what I tried, as per your recommendation:
```
$ export TORCHINDUCTOR_EMULATE_PRECISION_CASTS=1
$ echo $TORCHINDUCTOR_EMULATE_PRECISION_CASTS
1
```
Then I started Comfy with: `$ python3 main.py --use-sage-attention`...
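
If exporting in the shell is inconvenient, the same flag can also be set in-process; a minimal sketch, assuming nothing has imported torch earlier in the run (the variable must be set before torch is loaded to take effect):

```
import os

# Equivalent to the shell export above, but scoped to this process.
# Must happen before torch is imported for the flag to be picked up.
os.environ["TORCHINDUCTOR_EMULATE_PRECISION_CASTS"] = "1"

import torch  # imported only after the variable is set
```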

Works well now. Compiled it from the stable_abi3 branch of the @woct0rdho repo and silenced the annoying torch.compiler.disable() function. The good old torch compile functions are back, and memory utilization is gold...

> Do you mind sharing a link to the branch?

@StrongerXi Sure. It's this one: https://github.com/woct0rdho/SageAttention.git (the abi3_stable branch, for PyTorch >= 2.9). Also, the **torch.compiler.disable()** function seems to...
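
For anyone verifying a local build of that branch, a minimal smoke test could look like this (my own sketch: it assumes a CUDA GPU and the public `sageattn` entry point; the shapes are arbitrary):

```
import torch
from sageattention import sageattn

# Arbitrary q/k/v in the default (batch, heads, seq_len, head_dim) layout,
# fp16 on CUDA, which SageAttention expects.
q = torch.randn(1, 8, 128, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 128, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 128, 64, dtype=torch.float16, device="cuda")

out = sageattn(q, k, v, is_causal=False)
print(out.shape)  # expect torch.Size([1, 8, 128, 64])
```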