3 issues of 大Y杨
In 'HAWQ-main/tvm_benchmark/hawq_utils_resnet50.py', we pack 8 'int4' numbers into 1 'int32' number, which is how the int4 speedup is obtained. Could we similarly pack 16 'int2' numbers into 1 'int32' to get an int2 speedup?
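The arithmetic works out: 16 values of 2 bits each fill a 32-bit word exactly, just as 8 values of 4 bits do. A minimal sketch of such packing is below; the function names and layout (value i stored at bit offset 2*i) are illustrative assumptions, not taken from the HAWQ codebase:

```python
# Hypothetical sketch: pack 16 unsigned 2-bit values into one 32-bit word,
# mirroring the int4 packing idea. Names here are illustrative only.
def pack_int2(values):
    """Pack 16 values, each in [0, 3], into a single 32-bit integer."""
    assert len(values) == 16
    word = 0
    for i, v in enumerate(values):
        assert 0 <= v <= 3
        word |= (v & 0x3) << (2 * i)  # value i occupies bits [2i, 2i+1]
    return word

def unpack_int2(word):
    """Recover the 16 2-bit values from one packed 32-bit integer."""
    return [(word >> (2 * i)) & 0x3 for i in range(16)]
```

Whether this yields a real speedup also depends on the target backend having int2 dot-product kernels, not just on the packing itself.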
This is a requirement introduced after PyTorch 1.4, which changed how torch.autograd.Function is invoked: custom functions must now be called through .apply. Fix: in XNOR-Net-PyTorch-master\MNIST\models\LeNet_5.py, change line 53 to 'x = BinActive.apply(x)' and it runs normally. This resolves the following error: RuntimeError: Legacy autograd function with non-static forward method is deprecated
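For context, newer PyTorch requires autograd functions to be written with static forward/backward methods and invoked via .apply. Below is a simplified stand-in for BinActive (sign binarization with a clipped straight-through gradient) showing the modern pattern; it is a sketch of the required style, not the original XNOR-Net-PyTorch implementation:

```python
import torch

# Modern (static-method) autograd.Function style required by PyTorch >= 1.3.
# This BinActive is a simplified illustration, not the repo's exact code.
class BinActive(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.sign()            # binarize activations to {-1, +1}

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad = grad_output.clone()
        grad[x.abs() > 1] = 0      # straight-through estimator, clipped to |x| <= 1
        return grad

x = torch.tensor([-2.0, -0.5, 0.5, 2.0], requires_grad=True)
y = BinActive.apply(x)             # call via .apply, not BinActive()(x)
```

Calling the class instance directly (the legacy 'BinActive()(x)' style) is what triggers the deprecation RuntimeError above.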