resnet50-quantization
quantizing other than int8
Hi, thank you for the notebook!
I was wondering if you ever tried quantization-aware training (QAT) to convert the model from 32-bit floating point (fp32) to 16-bit floating point (fp16)?
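For context, a minimal sketch of an fp32 → fp16 conversion in PyTorch (using a small stand-in model rather than ResNet-50 for brevity); note that `model.half()` is a simple post-training cast, not QAT:

```python
import torch
import torch.nn as nn

# Small stand-in model; the same call works on torchvision's resnet50.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# Cast all parameters and buffers from fp32 to fp16.
model_fp16 = model.half()

# Parameters are now half precision.
print(next(model_fp16.parameters()).dtype)  # torch.float16

# Inputs must be cast to fp16 as well before inference.
x = torch.randn(1, 8).half()
```

This is only the post-training route; QAT in PyTorch (e.g. `torch.ao.quantization`) is typically aimed at int8 rather than fp16, which may be why most examples target int8.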