
Quantize_first_last_layer

Open · mmmiiinnnggg opened this issue 3 years ago · 1 comment

Hi! I noticed that in your code, you set bits_weights=8 and bits_activations=32 for the first layer by default, which does not match what is claimed in your paper: "For the first and last layers of all quantized models, we quantize both weights and activations to 8-bit." I also see an accuracy drop if I change bits_activations to 8 for the first layer. Could you please explain the reason? Thanks!

mmmiiinnnggg avatar Jul 01 '22 08:07 mmmiiinnnggg

We do not apply quantization to the input images since they have already been quantized to 8-bit during image preprocessing.
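To illustrate the point (this is a hypothetical sketch, not the repository's actual quantizer): an image pixel is stored as an 8-bit integer, so its normalized value already lies exactly on a 256-level uniform grid, and applying an 8-bit uniform quantizer to it reproduces the same value. The `uniform_quantize` helper below is an assumed generic uniform quantizer for demonstration only.

```python
# Hypothetical uniform quantizer, assuming inputs normalized to [x_min, x_max].
def uniform_quantize(x, bits, x_min=0.0, x_max=1.0):
    """Round x onto a uniform grid with 2**bits levels over [x_min, x_max]."""
    levels = 2 ** bits - 1
    scale = (x_max - x_min) / levels
    q = round((x - x_min) / scale)       # nearest grid index
    return q * scale + x_min             # back to real-valued scale

# A pixel from an 8-bit image, normalized to [0, 1].
pixel = 137 / 255.0

# 8-bit re-quantization is (numerically) a no-op on an 8-bit input,
# which is why quantizing the first layer's input activations is redundant.
assert abs(uniform_quantize(pixel, bits=8) - pixel) < 1e-12
```

With fewer bits (e.g. `bits=4`) the same call would introduce real rounding error, which is consistent with the accuracy drop observed when the first layer's activation bit-width is lowered.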

liujingcs avatar Jan 26 '23 08:01 liujingcs