Add dst.dtype information to the copy_ method of quantized tensors.
Description
Fixes a bug that causes precision issues in mixed-precision training.
The current implementation of the copy_ method in the QuantizedTensor class does not pass the dst.dtype information when src is a QuantizedTensor and dst is not. This can cause precision loss under certain circumstances, for instance when all of the following hold:
- the main-stream precision is bfloat16,
- the model is initialized in FP8 format,
- master weights in the optimizer are kept at high precision, i.e., float32,
- training is resumed from a checkpoint but the optimizer states are not loaded/provided, so the master weights must be initialized from the model weights.
Under these and similar conditions, the model weights are dequantized to bfloat16 (the dtype recorded in the quantizer object) and then copied into the float32 master weights, so the trailing 16 bits of information are lost.
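The lost-bits effect can be illustrated without Transformer Engine. A bfloat16 value shares float32's sign and exponent bits but keeps only the top 7 mantissa bits, so converting float32 → bfloat16 discards the low 16 bits of the representation. The helper below is a hypothetical illustration (not TE code) that simulates this by truncation:

```python
import struct

def to_bfloat16_bits(x: float) -> float:
    """Simulate float32 -> bfloat16 conversion by truncating the low
    16 bits of the IEEE-754 float32 bit pattern (bfloat16 keeps the
    sign, the 8 exponent bits, and only the top 7 mantissa bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

value = 1.0009765625        # 1 + 2**-10: representable in float32, not in bfloat16
print(to_bfloat16_bits(value))  # -> 1.0, the low mantissa bits are gone
print(to_bfloat16_bits(1.5))    # -> 1.5, values with short mantissas survive
```

This is exactly the information that a bfloat16 round-trip destroys before the copy into the float32 master weight, even though the destination dtype could have held it.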
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or a new content)
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactoring
Changes
Please list the changes introduced in this PR:
- Passes dst.dtype to the dequantize() call inside the copy_ method.
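The change above can be sketched as follows. This is a minimal stand-in, not the actual TE implementation: the class, its storage, and the copy helper are assumptions made for illustration; only the copy_/dequantize names come from the PR.

```python
from typing import Optional
import torch

class QuantizedTensorSketch:
    """Hypothetical stand-in for a quantized tensor (plain storage)."""
    def __init__(self, data: torch.Tensor, dequant_dtype: torch.dtype):
        self._data = data                    # stand-in for quantized storage
        self._dequant_dtype = dequant_dtype  # dtype recorded in the quantizer

    def dequantize(self, *, dtype: Optional[torch.dtype] = None) -> torch.Tensor:
        # Without an explicit dtype, fall back to the recorded (e.g. bfloat16)
        # dtype -- this fallback is what caused the precision loss.
        return self._data.to(dtype if dtype is not None else self._dequant_dtype)

def copy_into(dst: torch.Tensor, src: QuantizedTensorSketch) -> torch.Tensor:
    # The fix: forward dst.dtype so dequantization targets float32 directly
    # instead of round-tripping through the lower-precision recorded dtype.
    return dst.copy_(src.dequantize(dtype=dst.dtype))
```

With a value such as 1 + 2**-10 (representable in float32 but rounded away in bfloat16), the fixed path preserves the full float32 value, while dequantizing without a dtype argument and then copying yields 1.0.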
Checklist:
- [x] I have read and followed the contributing guidelines
- [ ] The functionality is complete
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
/te-ci pytorch