0xJpr
Sorry if this is a dumb question, but is --async-offload a replacement for --cuda-malloc? I noticed that when I use both --cuda-malloc and --async-offload at the same time, the Comfy...
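The visible part of the thread doesn't answer this directly, but for context, here is a minimal sketch of what asynchronous weight offload generally looks like in plain PyTorch: copies to pinned host memory on a side CUDA stream so they can overlap with compute. This is illustrative only, not ComfyUI's actual `--async-offload` implementation, and every name below is made up.

```python
import torch

# Illustrative sketch of asynchronous offload: copy weights to pinned CPU
# memory on a separate CUDA stream so the transfer can overlap with compute
# on the default stream. NOT ComfyUI's actual --async-offload code.

offload_stream = torch.cuda.Stream()

def async_offload(module: torch.nn.Module):
    with torch.cuda.stream(offload_stream):
        for p in module.parameters():
            # a pinned destination is required for the copy to be truly async
            cpu_buf = torch.empty(p.shape, dtype=p.dtype, device="cpu", pin_memory=True)
            cpu_buf.copy_(p.detach(), non_blocking=True)
            # keep the GPU tensor's memory reserved until offload_stream catches up
            p.data.record_stream(offload_stream)
            p.data = cpu_buf

def wait_for_offload():
    # block the host until the pending copies have actually finished
    offload_stream.synchronize()
```

The point of the extra stream is that the GPU-to-CPU copies can run while the default stream keeps computing; without pinned memory the copy effectively becomes synchronous.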
> > works well, thank you very much
>
> Can you tell me how to implement this, please? I thought I would be able to, but if I run the...
> > > The latest update should make torch.compile less necessary; I managed to reduce peak VRAM usage a lot, so if you still experience issues with torch.compile, try running without...
> Block swap doesn't affect anything for the VAE as long as you have force_offload enabled on the sampler to fully offload the model before decode. I've not noticed...
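To make the "fully offload before decode" point concrete, here is a rough sketch of the idea in plain PyTorch; `force_offload` in the actual nodes may do more than this, and the function and argument names are placeholders.

```python
import torch

# Rough illustration: push the sampler's model to system RAM and release
# cached VRAM so the VAE has room to decode. Placeholder names only.

def offload_then_decode(diffusion_model, vae, latents):
    diffusion_model.to("cpu")      # free the transformer's VRAM
    torch.cuda.empty_cache()       # hand cached blocks back to the allocator
    vae.to("cuda")
    with torch.no_grad():
        return vae.decode(latents.to("cuda"))
```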
> If the VAE works on its own, but fails after the sampler, then something isn't offloading properly... which workflow/model is used in those cases?

I'm using Wan 2.2 I2V 14B...
> Ok I found one thing that may contribute to that: LoRAs weren't being fully offloaded with force_offload since I moved them to use block swap, so depending on your...
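A quick way to check for the situation described above (weights left behind after offload) is to list any parameters still resident on the GPU. This diagnostic is purely illustrative and not part of the wrapper's code.

```python
import torch

# After offloading, report any parameters (LoRA weights included) that are
# still sitting in VRAM. Illustrative helper only.

def report_resident_params(model: torch.nn.Module):
    leftover = [(name, tuple(p.shape)) for name, p in model.named_parameters()
                if p.device.type == "cuda"]
    for name, shape in leftover:
        print(f"still on GPU: {name} {shape}")
    return leftover
```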
Adjust it here: you can choose to save every n steps and set how many files are kept before older ones are removed.
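As a rough illustration of what "save every n steps, keep the last few files" means in practice, here is a minimal sketch of the rotation logic; the directory, filename pattern, and defaults are placeholders, not the extension's actual settings.

```python
from pathlib import Path

# Minimal sketch of "save every n steps, keep only the last few files".
# Directory, filename pattern, and defaults are placeholders.

def save_rotating(step: int, data: bytes, out_dir: str = "checkpoints",
                  save_every: int = 100, keep_last: int = 3):
    if step % save_every != 0:
        return
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / f"step_{step:08d}.bin").write_bytes(data)
    # prune older saves so only the newest `keep_last` remain
    files = sorted(out.glob("step_*.bin"))
    for old in files[:-keep_last]:
        old.unlink()
```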
> Share the error?

I got the same error, but I'm running the i2v workflow.

```sh
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
!!! Exception during processing !!! unsupported operand type(s)...
```
It was working fine; I've been using the --fast args and the city96 gguf node the whole time, but after the recent Comfy updates it broke. I didn't change any dependencies...