APoT_Quantization

PyTorch implementation of APoT quantization (ICLR 2020)

18 issues

Hello, I have read your paper and your code. I found that in the code, Build_power_value matches the description in the paper. However, the weight quantization function passes (bitwidth - 1) as B...
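A minimal sketch of one plausible reading of the (bitwidth - 1) question, assuming the missing bit encodes the weight's sign so the magnitude grid only needs b - 1 bits. `build_magnitude_levels` and `quantize_weight` below are hypothetical stand-ins, not the repo's Build_power_value, and the learned clipping threshold from the paper is omitted for brevity:

```python
import torch

def build_magnitude_levels(bits: int) -> torch.Tensor:
    # Hypothetical: plain powers-of-two magnitudes, 2**bits levels incl. zero.
    levels = [0.0] + [2.0 ** (-i) for i in range(2 ** bits - 1)]
    return torch.tensor(sorted(levels))

def quantize_weight(w: torch.Tensor, bits: int) -> torch.Tensor:
    # Weights are signed: 1 sign bit + (bits - 1) magnitude bits,
    # which would explain passing (bitwidth - 1) as B.
    levels = build_magnitude_levels(bits - 1)
    sign = w.sign()
    mag = w.abs().clamp(max=1.0)            # clipping to 1.0 is a simplification
    # Snap each |w| to the nearest magnitude level.
    idx = torch.argmin((mag.unsqueeze(-1) - levels).abs(), dim=-1)
    return sign * levels[idx]
```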

Hi~ I am a little puzzled about a difference between the paper and the code in quan_layer.py. In the paper, when B = 4, p_0 ∈ {0, 2^0, 2^-2, 2^-4} (shown in the example in Sec. 2.2), but in the code...
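For reference, a hedged sketch of the additive level construction from the paper's Sec. 2.2 example: with b = 4, k = 2 there are n = b/k = 2 PoT terms, with p_0 ∈ {0, 2^0, 2^-2, 2^-4} and p_1 ∈ {0, 2^-1, 2^-3, 2^-5}, and every quantization level is one choice per term, summed. This mirrors the idea behind Build_power_value but is not the repo's exact code:

```python
from itertools import product

def apot_levels(b: int = 4, k: int = 2) -> list:
    n = b // k  # number of additive PoT terms
    term_sets = []
    for i in range(n):
        # each term: 0 or 2^-(i + j*n) for j = 0 .. 2^k - 2
        term_sets.append([0.0] + [2.0 ** -(i + j * n) for j in range(2 ** k - 1)])
    # every level is one choice per term, summed, then deduplicated
    return sorted({sum(c) for c in product(*term_sets)})

print(apot_levels())  # 16 distinct levels for b=4, k=2 (before normalization)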

Dear yhhli, could you tell me the hyper-parameters you used for mobilenet_v2 training?

I use the official MobileNetV2 from torchvision.models. Are there any special tricks to train mobilenet_v2?

How did you calculate the number of MACs? Please share the code.
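A minimal sketch of one common way to count MACs (not necessarily the authors' script): for a convolution, multiply-accumulates = output elements × kernel volume × input channels per group; for a linear layer, in_features × out_features. The ResNet-18 numbers in the example are illustrative:

```python
def conv2d_macs(c_in, c_out, k_h, k_w, h_out, w_out, groups=1):
    # each output element accumulates over one kernel window per input channel
    return c_out * h_out * w_out * (c_in // groups) * k_h * k_w

def linear_macs(in_features, out_features):
    return in_features * out_features

# e.g. the first conv of ResNet-18 on ImageNet: 64 filters, 7x7, stride 2
print(conv2d_macs(3, 64, 7, 7, 112, 112))  # 118,013,952 MACs
```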

I use ImageNet to train APoT, and it's really time-consuming: it seems one epoch needs 1 day (8 V100s). Is anything wrong?

Hi, do you have a specific design for the MUL (multiplication) unit for APoT quantization? We know that uniform (INT) quantization and PoT quantization are hardware-friendly. Assume that: R...
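A hedged sketch of why APoT remains shift-friendly: a weight that is a sum of powers of two multiplies an integer activation using only barrel shifts and adds, one shift per PoT term. The exponent encoding below is illustrative, not a reference MUL design:

```python
def apot_multiply(x: int, exponents: list) -> int:
    # w = sum(2**(-e) for e in exponents); compute x * w scaled by 2**max_e
    # so everything stays in integer arithmetic.
    max_e = max(exponents)
    acc = 0
    for e in exponents:
        acc += x << (max_e - e)  # each PoT term costs a single shift-and-add
    return acc                   # caller rescales by 2**-max_e

# x * (2^-1 + 2^-3) with x = 20: (20 << 2) + (20 << 0) = 100 -> 100 / 2**3 = 12.5
print(apot_multiply(20, [1, 3]))
```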

Hello! I found that without weight normalization, the network ceases to learn and the loss becomes NaN. Could you please explain why this happens and how it...
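For context, a minimal sketch of the standardize-then-quantize weight normalization described in the paper. One plausible explanation for the NaN (an assumption, not a confirmed diagnosis) is that without it, weight magnitudes drift outside the fixed quantization range, so most weights clip and the straight-through gradients degenerate:

```python
import torch

def normalize_weight(w: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # standardize the weight tensor before applying the quantizer, so its
    # scale matches the fixed level grid regardless of how training drifts
    return (w - w.mean()) / (w.std() + eps)
```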

Hello, I am doing quantization work now and tried to reproduce your results, but unfortunately they do not match. The full-precision accuracy reported in your paper is...