
Results 14 comments of liujingcs

Thanks for the great work! I also have the same question. Can you please provide the details of the final search space?

Hi, I have faced the same issue. Have you solved this issue?

Thanks for your response. I use the CelebA dataset. I have some questions. 1. How should I set the learning rate? I found that a large learning rate results in NaN. 2....

Hi, we have released the code and pre-trained models. 1. For the corresponding instructions, you might refer to https://github.com/MonashAI/QTool/blob/master/doc/detectron2.md. 2. For the quantization function, you might refer to https://github.com/aim-uofa/model-quantization/blob/74115eaf33668207124254f2b2145209f7ab70fe/models/quant.py#L535. 3....

1. Yes. We have modified some code based on detectron2. 2. We have included LSQ at line 535 of the corresponding link. 3. Yes. Given n levels, we create n...

We do not apply quantization to the input images since they have been quantized to 8-bit during image preprocessing.
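A minimal illustration of this point (not the authors' code): images arrive as uint8 tensors, so they already lie on a 256-level (8-bit) grid, and the usual divide-by-255 normalization only rescales that grid.

```python
# Raw image pixels are stored as uint8, i.e. already quantized to 8-bit
# (256 levels) before the network ever sees them.
pixels = [0, 17, 128, 255]           # example uint8 intensities

# Typical preprocessing divides by 255; every value remains of the form
# k/255 for an integer k, so no extra input quantizer is needed.
normalized = [p / 255.0 for p in pixels]
print(normalized)
```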

This is used to set the bitwidth of the weights and activations for each layer.
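A hypothetical sketch of what such a per-layer setting could look like; the layer names and dictionary format below are illustrative only, not QTool's actual config schema.

```python
# Hypothetical per-layer bitwidth table (illustrative, not QTool's format).
# First and last layers are often kept at higher precision.
bitwidths = {
    "conv1":  {"weight": 8, "activation": 8},
    "layer1": {"weight": 4, "activation": 4},
    "layer2": {"weight": 2, "activation": 2},
    "fc":     {"weight": 8, "activation": 8},
}

for name, bits in bitwidths.items():
    print(f"{name}: W{bits['weight']}/A{bits['activation']}")
```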

The exact location of LIQ_wn_qsam is here: https://github.com/ziplab/SAQ/blob/main/models/LIQ_wn_qsam.py

Currently the quantized model is still stored in float32, so the model size does not change. To reduce the model size, the model needs to be converted to a low-bit format for storage.
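As a rough sketch of what low-bit storage means (my own illustration, not the repository's export code): each 2-bit weight index can be packed four-per-byte instead of occupying a 4-byte float32, a 16x reduction.

```python
def pack_2bit(codes):
    """Pack a list of 2-bit integer codes (0..3) into bytes, 4 codes per byte."""
    out = bytearray()
    for i in range(0, len(codes), 4):
        byte = 0
        for j, c in enumerate(codes[i:i + 4]):
            byte |= (c & 0b11) << (2 * j)   # place each code in its 2-bit slot
        out.append(byte)
    return bytes(out)

codes = [0, 1, 2, 3, 3, 2, 1, 0]    # eight 2-bit weight indices
packed = pack_2bit(codes)
print(len(packed))                   # 2 bytes, versus 8 * 4 = 32 bytes in float32
```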

Yes for the weights; the activations are only low-bit after quantization. For example, if the weight range is [0, 1], then after quantizing to 2-bit, the quantized values can only be {0, 1/3, 2/3, 1}.
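The {0, 1/3, 2/3, 1} example above corresponds to plain uniform quantization, which can be sketched as follows (a minimal illustration, not the repository's quantizer):

```python
def quantize_uniform(w, bits):
    """Uniformly quantize w in [0, 1] onto a grid of 2**bits levels."""
    levels = 2 ** bits - 1           # number of intervals (3 for 2-bit)
    return round(w * levels) / levels

# With bits=2, any w in [0, 1] is mapped onto {0, 1/3, 2/3, 1}.
print([quantize_uniform(w, 2) for w in (0.0, 0.2, 0.5, 0.9)])
```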