DSQ
PyTorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks"
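For context on the issues below: DSQ replaces the hard rounding step of uniform quantization with a tanh-based soft function inside each quantization interval. Here is a minimal scalar sketch of that function in plain Python; the function name, argument names, and the default `alpha` are illustrative choices, not taken from this repo's code:

```python
import math

def dsq(x, l, u, bits, alpha=0.2):
    """Soft-quantize scalar x into the clipping range [l, u].

    A sketch of the tanh-based soft quantizer from the DSQ paper;
    smaller alpha makes the curve sharper (closer to hard rounding).
    """
    n = 2 ** bits - 1                        # number of intervals
    delta = (u - l) / n                      # interval width
    x = min(max(x, l), u)                    # clip to the quantization range
    i = min(int((x - l) / delta), n - 1)     # index of the interval containing x
    m_i = l + (i + 0.5) * delta              # interval midpoint
    # sharpness k derived from alpha: tanh(0.5*k*delta) = 1 - alpha
    k = math.log(2.0 / alpha - 1.0) / delta
    s = 1.0 / math.tanh(0.5 * k * delta)     # scale so phi(+-delta/2) = +-1
    phi = s * math.tanh(k * (x - m_i))       # soft value in [-1, 1]
    return m_i + 0.5 * delta * phi           # dequantized soft output
```

As `alpha` approaches 0 the curve approaches hard rounding to the quantization levels; the real DSQConv layer applies this idea elementwise to weight and activation tensors.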
Hi. In the code, the function "set_quanbit()" is used to set the quantization bit-width for the weights. However, "set_quanbit()" only modifies the attribute "num_bit", while the attribute "bit_range" is left unchanged....
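If the two attributes are meant to stay consistent, one fix is to derive `bit_range` inside the setter. A minimal sketch (not the real DSQConv class), assuming `bit_range = 2**num_bit - 1` as is conventional for unsigned quantization:

```python
class DSQConvSketch:
    """Illustrative stand-in for DSQConv, showing only the bit-width logic."""

    def __init__(self, num_bit=8):
        self.set_quanbit(num_bit)

    def set_quanbit(self, num_bit):
        # Update both attributes together so they can never diverge.
        self.num_bit = num_bit
        self.bit_range = 2 ** num_bit - 1   # e.g. 255 for 8-bit
```

With this arrangement, changing the bit-width after construction (e.g. from 8 to 4 bits) keeps the quantization range in sync automatically.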
Hello. First of all, congratulations on your work. I would like to reproduce it, but I am facing a strange problem. When trying to use your DSQConv layer, I...
When I use distributed training with init_method set to tcp
@ricky40403 Thanks for your hard and great work. I copied DSQConv.py into my project and used DSQConv instead of nn.Conv2d, except for the first and last layers. However, during...
Hi, impressive work! I want to reproduce the results of this implementation. Could you share the trained quantization model files (model_best.pth.tar)? Thanks!
Many thanks for your implementation. I encountered "RuntimeError: derivative for floor_divide is not implemented" when trying to transform a conv layer to DSQ conv, and...
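That RuntimeError typically appears when `//` (which dispatches to floor_divide) is applied to a tensor that requires grad, on PyTorch versions where floor_divide has no registered derivative. A common workaround is to rewrite the operation as `floor(x / d)`, since `floor` does have a defined gradient (zero almost everywhere). A plain-Python check that the rewrite is numerically equivalent to floor division:

```python
import math

def floor_div(x, d):
    # Equivalent form of x // d: in PyTorch this would be
    # (x / d).floor() or torch.div(x, d, rounding_mode='floor'),
    # both of which avoid the floor_divide derivative error.
    return math.floor(x / d)
```

If the rounding itself must not affect gradients at all, the detach/straight-through pattern `x + (floor_div(x, d) - x).detach()` is another common option in quantization code.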
error
from PyTransformer.transformers.quantize import QConv2d, QuantConv2d, QLinear, ReLUQuant ImportError: cannot import name 'QuantConv2d'. The 'QuantConv2d' function is not in that file. Is there a problem with the code? Thanks
Hi, thank you for your implementation! I use the CIFAR10 dataset for quick training and evaluation with the 8-bit setting, but I only get no more than 80% Top-1 accuracy with/without...
@ricky40403 Hey, thanks for your great work. While reading the paper and your code, I came up with three questions: 1. Can I set quan_bit to 2 or 4? 2. I...