New SOTA distillation method
There is a new SOTA distillation method. It is already available for Wan2.1 here: https://huggingface.co/worstcoder/rcm-Wan/tree/main. Can we get it for Wan2.2?
@MeiYi-dev are the LightX2V LoRAs obtained via rCM?
There are multiple distillation methods for Wan2.1 on HuggingFace.
1. CausVid: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid
2. Self-Forcing: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v
3. DCM: https://huggingface.co/cszy98/DCM
4. AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B
5. rCM: https://huggingface.co/worstcoder/rcm-Wan
6. FastWan (unknown method): https://huggingface.co/FastVideo/FastWan2.1-T2V-14B-Diffusers
7. Wan Lightning (unknown method): https://huggingface.co/lightx2v/Wan2.2-Lightning
8. Infinite Forcing: https://huggingface.co/SOTAMak1r/Infinite-Forcing
9. Self-Forcing++: no weights released
10. Rolling Forcing: no weights released
There might be some methods missing, but rCM seems to be the best if you want good motion generation. For per-frame quality, one of the Self-Forcing variants might be better.
There is also a repo on HF that collects all the distillation LoRAs along with other Wan assets: https://huggingface.co/Kijai/WanVideo_comfy/tree/main
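All of these distills sample in very few steps, which in practice means pairing a small `num_inference_steps` with a flow-matching timestep shift. A minimal sketch of that shifted sigma schedule (the helper name and the `shift=5.0` default are my assumptions for illustration, not taken from any of the repos above):

```python
def shifted_sigmas(num_steps: int = 4, shift: float = 5.0) -> list[float]:
    """Flow-matching sigma schedule with a timestep shift.

    Evenly spaced sigmas from 1.0 down to 0.0 are warped toward the
    high-noise end by sigma' = shift * sigma / (1 + (shift - 1) * sigma),
    which is where few-step distilled samplers spend most of their budget.
    """
    sigmas = [1.0 - i / num_steps for i in range(num_steps + 1)]
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]

# 4-step schedule: 5 boundaries from pure noise (1.0) to clean (0.0)
print(shifted_sigmas(4, 5.0))  # [1.0, 0.9375, 0.833..., 0.625, 0.0]
```

In diffusers, I believe the equivalent knob is the `flow_shift` argument when reconfiguring the scheduler (e.g. `UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=5.0)`), but check the Wan pipeline docs for the exact name.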
@MeiYi-dev FastWan is distilled via Distribution Matching Distillation (DMD2)
I see that Kandinsky 5 Pro is going to get a 16-step distilled version. Can we also get a 1-4 step version, since Wan already has 4-step distills? Phased-DMD (https://arxiv.org/abs/2510.27684) looks very good. If we get a 2-4 step distilled version of K5, the community will adopt it quickly, since K5 quality is very good and much better than Wan2.2.
Flash-DMD (https://arxiv.org/abs/2511.20549) also seems to train very fast if compute is limited.