Add support for torch FP8 dtypes
## Before submitting
- [x] Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
- [x] Did you read the contributor guideline, Pull Request section?
- [ ] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
## What does this PR do?
This PR fixes #254 and adds native Thunder support for the following dtypes:
- `torch.float8_e5m2`
- `torch.float8_e5m2fnuz`
- `torch.float8_e4m3fn`
- `torch.float8_e4m3fnuz`
Since float8 comes in four different variants, I added a variant mechanism to Thunder dtypes so that we can differentiate between them.
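For reviewers, a minimal sketch of the idea (the `Float8` class and `to_thunder_dtype` helper below are illustrative, not Thunder's actual internals): a single base float8 dtype carries a variant tag that distinguishes the four torch representations.

```python
import torch

# Map each torch float8 dtype to a variant tag (requires a torch version
# that ships the float8 dtypes). Names here are hypothetical.
_FP8_VARIANTS = {
    torch.float8_e5m2: "e5m2",
    torch.float8_e5m2fnuz: "e5m2fnuz",
    torch.float8_e4m3fn: "e4m3fn",
    torch.float8_e4m3fnuz: "e4m3fnuz",
}

class Float8:
    """Illustrative base dtype carrying a variant tag."""
    def __init__(self, variant: str):
        self.variant = variant

    def __repr__(self):
        return f"float8_{self.variant}"

def to_thunder_dtype(torch_dtype: torch.dtype) -> Float8:
    # Look up the variant for a given torch float8 dtype.
    return Float8(_FP8_VARIANTS[torch_dtype])

print(to_thunder_dtype(torch.float8_e4m3fn))  # float8_e4m3fn
```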
This PR also adds the option to create fp8 test tensors with `make_tensor` so that we can start testing fp8 operations. Running the existing operator tests makes it evident that torch's support for these dtypes is still sparse: the majority of tests fail with "not implemented" runtime errors. Given that, I decided to skip operator testing for all the fp8 dtypes.
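As a rough illustration of the approach (not the PR's exact `make_tensor` change; `make_fp8_tensor` is a hypothetical stand-in), fp8 test tensors can be produced by sampling in float32 and downcasting, since random generation is not implemented directly for float8:

```python
import torch

# Hypothetical stand-in for the make_tensor change: sample in float32,
# then downcast to the requested float8 dtype.
def make_fp8_tensor(shape, dtype=torch.float8_e4m3fn, device="cpu", low=-1.0, high=1.0):
    base = torch.empty(shape, dtype=torch.float32, device=device).uniform_(low, high)
    return base.to(dtype)

t = make_fp8_tensor((4, 4))
try:
    torch.sin(t)  # many ops still raise "not implemented" for float8
except (RuntimeError, NotImplementedError) as err:
    print(f"would skip: {err}")
```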
Furthermore, I updated the type promotion table; please take a look and don't hesitate to comment if you think some promotions are misplaced.
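One way to sanity-check entries against torch's own behavior is to probe `torch.promote_types` (results vary by torch version, and promotion between two different float8 variants is generally an error); Thunder's table may intentionally differ from this:

```python
import torch

# Probe torch's promotion rules involving an fp8 dtype; wrapped in
# try/except because some pairings raise rather than promote.
for other in (torch.float16, torch.bfloat16, torch.float32, torch.float8_e5m2):
    try:
        print(other, "->", torch.promote_types(torch.float8_e4m3fn, other))
    except RuntimeError as err:
        print(other, "-> error:", err)
```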
## Did you have fun?
Oh yes!