[QST] A100 double-precision Tensor Cores?
Hi,
is there any support for double-precision (DP) Tensor Cores in CUTLASS, either available now or planned?
Thanks in advance.
Yes.
As an example, you can modify example 14_ampere_tf32_tensorop_gemm to use double precision.
To do so, change ElementAccumulator, ElementInputA, and ElementInputB on these lines to type double, and change ShapeMMAOp here to 8x8x4 (see the sketch below).
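For reference, here is a minimal sketch of what the type aliases from example 14 might look like after that change. Only the double element types and the 8x8x4 instruction shape come from the suggestion above; the threadblock/warp tile shapes, epilogue vector width, and stage count below are illustrative assumptions and may need tuning for your CUTLASS version and problem size.

```cpp
// Sketch: type aliases from examples/14_ampere_tf32_tensorop_gemm adapted to FP64.
// Only the double element types and the 8x8x4 instruction shape follow the advice above;
// tile shapes, epilogue vector width, and stage count are assumptions for illustration.
#include "cutlass/cutlass.h"
#include "cutlass/gemm/device/gemm.h"
#include "cutlass/epilogue/thread/linear_combination.h"

using ElementAccumulator     = double;   // was float in example 14
using ElementComputeEpilogue = ElementAccumulator;
using ElementInputA          = double;   // was cutlass::tfloat32_t
using ElementInputB          = double;   // was cutlass::tfloat32_t
using ElementOutput          = double;

using LayoutInputA = cutlass::layout::RowMajor;
using LayoutInputB = cutlass::layout::ColumnMajor;
using LayoutOutput = cutlass::layout::RowMajor;

using MMAOp  = cutlass::arch::OpClassTensorOp;   // keep the Tensor Core op class
using SmArch = cutlass::arch::Sm80;              // A100

// Assumed tile sizes, smaller than the TF32 defaults because FP64 operands are wider.
using ShapeMMAThreadBlock = cutlass::gemm::GemmShape<64, 64, 16>;
using ShapeMMAWarp        = cutlass::gemm::GemmShape<32, 32, 16>;
using ShapeMMAOp          = cutlass::gemm::GemmShape<8, 8, 4>;   // FP64 DMMA instruction shape

using SwizzleThreadBlock = cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<>;

// Vector width of 1 is a conservative assumption for the FP64 epilogue.
using EpilogueOp = cutlass::epilogue::thread::LinearCombination<
    ElementOutput, 1, ElementAccumulator, ElementComputeEpilogue>;

constexpr int NumStages = 3;   // assumed; example 14 uses a different value for TF32

using Gemm = cutlass::gemm::device::Gemm<
    ElementInputA, LayoutInputA,
    ElementInputB, LayoutInputB,
    ElementOutput, LayoutOutput,
    ElementAccumulator,
    MMAOp, SmArch,
    ShapeMMAThreadBlock, ShapeMMAWarp, ShapeMMAOp,
    EpilogueOp, SwizzleThreadBlock, NumStages>;
```

The rest of the host code in example 14 (tensor allocation, argument setup, and launching the Gemm operator) is written in terms of these aliases, so it should largely carry over unchanged.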
We have many f64 or complex f64 unit tests here: https://github.com/NVIDIA/cutlass/tree/master/test/unit/gemm/device
Thanks a lot to both of you, I have everything I need.