philipwan
@jcwchen Hi! Due to this known issue: https://github.com/onnx/onnx/issues/4367, and following your suggestion, I made a pull request. The cherry-pick process went without problems; please review this PR. Thank...
I built v2.4.1 successfully, but building v2.5.0/v2.5.1/v2.5.2/v2.5.3 failed. I'm quite confused =.=
> Hi! I also failed to build v2.5 [about a year ago](https://github.com/tensorflow/tensorflow/issues/49209), and I think it might be a bug in this version... But I saw that you...
> You may give it a try. Thank you for your reply, I will try it now!
> How did you solve it? Cheers! I didn't solve it, but I have a workaround: matching a custom graph pattern rather than the **xformers::efficient_attention_forward_cutlass** op.
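The workaround mentioned above (matching a custom pattern instead of a single opaque custom op) can be illustrated with a minimal, self-contained sketch. This is not the actual code used; the `Node` class and the `("MatMul", "Softmax", "MatMul")` pattern are illustrative assumptions standing in for a real graph representation (e.g. ONNX `NodeProto` lists), where attention often decomposes into such a subgraph:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a graph node (e.g. an ONNX NodeProto)."""
    op_type: str
    name: str = ""
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def find_op_pattern(nodes, pattern=("MatMul", "Softmax", "MatMul")):
    """Scan a topologically ordered node list for a consecutive
    run of op_types matching `pattern`; return each matching slice.

    A real matcher would also check dataflow between the nodes,
    not just adjacency in the list.
    """
    hits = []
    types = [n.op_type for n in nodes]
    for i in range(len(types) - len(pattern) + 1):
        if tuple(types[i:i + len(pattern)]) == pattern:
            hits.append(nodes[i:i + len(pattern)])
    return hits

# Usage: one attention-like subgraph among other ops.
graph = [
    Node("Add"),
    Node("MatMul", name="qk"),
    Node("Softmax", name="attn"),
    Node("MatMul", name="av"),
    Node("Relu"),
]
matches = find_op_pattern(graph)
print(len(matches))  # one matching subgraph found
```

Matching the decomposed pattern avoids depending on the fused custom op being registered in the target runtime, which is typically why this kind of workaround is preferred when exporting models that use xformers kernels.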
> yeah! our pab can also be applied to 3d attention models like cogvideo and open_sora_plan v1.2. will support soon! Wow! I am looking forward to the 3D attention PAB release. Thank...
> yeah! our pab can also be applied to 3d attention models like cogvideo and open_sora_plan v1.2. will support soon! I see the PAB strategy supports open-sora-plan 1.2 / CogVideoX, how...
> cogvideox is supported but we haven't supported open sora plan v1.2 yet. the result is good
> We are looking into this and will definitely support video generation models like Mochi and CogVideoX. Stay tuned. Wow~ Good news!!! I notice there are too many new operators in...
> I think we just need to quantize the model with [deepcompressor](https://github.com/mit-han-lab/deepcompressor) and then convert it to nunchaku format. Yep, we have all the operators in the C++ files. We...