
The PyTorch implementation of Learned Step Size Quantization (LSQ) from ICLR 2020 (unofficial)

7 LSQuantization issues

Once we have trained the quantized model, how do we deploy it?

As written in the README, the results on ImageNet do not match the paper. Can you tell me how large the accuracy gap is?

How can I use kernel-wise quantization (Qmodes)? Does it work in your code?

In the FunLSQ class, you compute grad_alpha with the following code: `grad_alpha = ( (indicate_small * Qn + indicate_big * Qp + indicate_middle * (-q_w + q_w.round())) * grad_weight * g).sum().unsqueeze(dim=0)` My questions are: 1. What is the purpose of calling sum() here? From my reading of the paper, sum() does not seem necessary. 2. Does this method also work for per-channel quantization?
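The sum() is needed because alpha is a single scalar step size for the whole layer, so the chain rule sums every element's contribution into one gradient. A minimal sketch of that straight-through gradient, using the same names as the snippet above (this is an illustration of the formula in the issue, not the repo's actual code, and assumes layer-wise quantization with one scalar alpha):

```python
import torch

def lsq_grad_alpha(w, alpha, Qn, Qp, grad_weight, g):
    # Scaled weights and the three clamp regions of the LSQ quantizer.
    q_w = w / alpha
    indicate_small = (q_w < Qn).float()   # clipped at the lower bound
    indicate_big = (q_w > Qp).float()     # clipped at the upper bound
    indicate_middle = 1.0 - indicate_small - indicate_big

    # Per-element derivative of the quantizer output w.r.t. alpha (STE):
    # Qn / Qp in the clipped regions, round(q_w) - q_w in the middle.
    per_elem = (indicate_small * Qn
                + indicate_big * Qp
                + indicate_middle * (-q_w + q_w.round()))

    # alpha is one scalar for the whole layer, so the chain rule sums
    # all element-wise contributions -- this is why .sum() appears.
    return (per_elem * grad_weight * g).sum().unsqueeze(dim=0)
```

For per-channel quantization, alpha would be a vector with one entry per output channel, and the reduction would instead sum over all dimensions except the channel dimension (e.g. `.sum(dim=(1, 2, 3))` for a conv weight), producing one gradient per channel.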

Hello hustzxd! Your repo is very helpful for my current work, but it confuses me that the class _Conv2dQ calls an undefined function get_default_kwargs_q(). I can't find it...

Hello, what are the hyperparameters for training VGG-Small on CIFAR-10?

In my case, the following code seems to consume heavy CPU usage (>1000%) during the backward pass in FunLSQ:

```python
indicate_middle = torch.ones(indicate_small.shape).to(indicate_small.device) - indicate_small - indicate_big
```
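One plausible contributor to the overhead is that `torch.ones(...)` allocates a full tensor on the CPU and `.to(...)` then copies it to the target device on every backward pass. A sketch of an equivalent computation that avoids both the allocation and the copy, assuming indicate_small and indicate_big are 0/1 float masks of the same shape:

```python
import torch

def middle_mask(indicate_small: torch.Tensor, indicate_big: torch.Tensor) -> torch.Tensor:
    # Broadcasting the Python scalar 1.0 yields the same mask as
    # torch.ones(shape).to(device) - indicate_small - indicate_big,
    # without materializing and transferring a ones tensor.
    return 1.0 - indicate_small - indicate_big
```

The result lives on the same device as the input masks, so no explicit `.to(device)` is needed.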