SAQ
Quantize_first_last_layer
Hi! I noticed that in your code you set bits_weights=8 and bits_activations=32 by default for the first layer, which does not match the claim in your paper: "For the first and last layers of all quantized models, we quantize both weights and activations to 8-bit." I also see an accuracy drop when I set bits_activations to 8 for the first layer. Could you please explain the reason? Thanks!
We do not apply quantization to the input images since they have been quantized to 8-bit during image preprocessing.
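A minimal sketch of this point (not from the repo; the array names and the uniform-quantizer used here are illustrative assumptions): an image loaded from disk is uint8, so after the usual divide-by-255 normalization its values already sit on a 256-level grid, and applying an 8-bit uniform quantizer to the first layer's input changes nothing.

```python
import numpy as np

# Hypothetical example: a uint8 image as produced by typical preprocessing.
img_uint8 = np.random.randint(0, 256, size=(3, 32, 32), dtype=np.uint8)
x = img_uint8.astype(np.float32) / 255.0  # standard normalization to [0, 1]

# Simulate 8-bit uniform quantization of activations in [0, 1].
levels = 2**8 - 1
x_q = np.round(x * levels) / levels

# The quantized tensor equals the original tensor: the input activations
# were already 8-bit quantized, so re-quantizing them is a no-op.
print(np.allclose(x, x_q))  # True
```

This is consistent with leaving bits_activations=32 for the first layer: the extra quantization step on the input would be redundant rather than harmful.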