Does FastSeq support video generation models such as Latte?
Hi, thank you for making this open source.
I've noticed that parameters such as 'sequence_parallel_size' and 'sequence_parallel_group' only appear in 'DiT' modules (such as 'DistAttn') but not in 'Latte' modules. Does this mean that FastSeq supports only image generation but not video generation? If so, could you explain why?
Thanks!!
Another question: why are flashattn and layernorm_kernel disabled during sampling? (https://github.com/NUS-HPC-AI-Lab/OpenDiT/blob/c15d82b738d0efb7f8f9e79c2f5277cbb417c8e2/sample.py#L70) Looking forward to your reply. Thanks in advance.
Hi! We are still working on adapting FastSeq to the Latte model and will release it in the future. You can manually set enable_flashattn to True when sampling; it simply defaults to False. We will polish this.
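A minimal sketch of that override (the import path and exact constructor signature here are assumptions, not the actual sample.py code; check the model construction around sample.py#L70 in your checkout for the real names):

```python
# Sketch only: assumes the DiT constructor used in sample.py accepts
# enable_flashattn / enable_layernorm_kernel keyword arguments, as discussed above.
import torch
from opendit.models.dit import DiT_models  # hypothetical import path, verify in the repo

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_size = 256 // 8  # image_size divided by the VAE downsampling factor

model = DiT_models["DiT-XL/2"](
    input_size=latent_size,
    num_classes=1000,
    enable_flashattn=True,          # override the sampling default of False
    enable_layernorm_kernel=False,  # left off here; enable if the fused kernel is available
).to(device)
```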
Thanks for your fast reply. I have another question: what is the difference between the sequence_parallel_type options 'longseq' and 'ulysses'?
DSP is supported now.