wenchenvincent
> @wenchenvincent please sign CLA so we can proceed. Thanks.

Thanks Jeff! I have just signed the CLA.
Thank you for the answer! Is there an option we can set to represent the numbers in double precision?
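For anyone finding this later, here is a minimal sketch of one way to get double precision, assuming the question is about JAX's default 32-bit behavior (JAX keeps values in 32 bits unless 64-bit mode is explicitly enabled):

```python
# Sketch only: assumes the question refers to JAX, which silently keeps
# floating-point values in float32 unless x64 mode is turned on.
import jax
import jax.numpy as jnp

# Enable 64-bit types; this should be set at startup, before running any ops.
jax.config.update("jax_enable_x64", True)

x = jnp.array([1.0, 2.0, 3.0], dtype=jnp.float64)
print(x.dtype)  # float64 with x64 enabled; it would be float32 otherwise
```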
I think this commit added fp8 fused attention: https://github.com/NVIDIA/TransformerEngine/commit/989a53a06478a4223ffb2fc2fc92b5febcf9d8c1#diff-236e240f7f5506de96cfd5f61c77c7142905dabada33f6f0c68094724dbfb9b4
@levskaya I noticed that you have reviewed several PRs regarding fp8. Could you take a look at this one?
@levskaya Could you kindly serve as the reviewer for this PR?
@levskaya Thanks for the review. I have updated the PR to address the concerns. Could you take a look at the updates?
> Thanks for the fixes! We may need to do some tiny rebasing of simple things as the codebase just migrated to a python minver of 3.10.

Thanks! Do you...
> Yes to tip as of today should have the 3.10 minver updates. Also, I'm seeing this failure in the tests:
>
> ```
> FAILED tests/linen/linen_test.py::Fp8Test::test_fp8_meta_dtype0 - TypeError: missing...
> ```