Rui Bu
> Post-training quantization is a conversion technique that can **reduce model size** while also improving CPU and hardware accelerator latency, with little degradation in model accuracy.
>
> From...
> @burui11087 Could you tell me under which conditions the quantized size will be bigger than the original one?

It is not the quantized size that degrades; it is the accuracy.
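A minimal NumPy sketch of the affine int8 quantization that underlies post-training quantization (illustrative only, not the TFLite implementation; the function names are my own). It shows the 4x size reduction and the small round-trip error that the quoted text describes as "little degradation in model accuracy":

```python
import numpy as np

def quantize_int8(w):
    """Affine-quantize a float32 tensor to int8 with a scale and zero point."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale) - 128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the int8 values back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1024).astype(np.float32)   # stand-in for a weight tensor
q, scale, zp = quantize_int8(w)

print(w.nbytes // q.nbytes)   # int8 storage is 4x smaller than float32
print(float(np.abs(w - dequantize(q, scale, zp)).max()) < scale)  # round-trip error is bounded by one quantization step
```

The size reduction is exact (4 bytes per weight down to 1), while the accuracy cost comes from the bounded rounding error introduced per weight.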
@Super-Tree Hi, I would like to talk with you over QQ; you can add me using my email prefix. I also have the same implementation issue with VFE. Thanks
Hi @cv2drpepper The network performance you got when training on TU-Berlin looks very strange; I need a few days to rerun the experiment on TU-Berlin to verify the issue you reported. BTW, could you...
Hi @jiaxin19cvml I found that we augment the test set when preparing the train/val/test datasets, which is why the accuracy is only 60%. I will update the code in the next few days. Thanks
Hi @HJ-Xu Please check your network connection, and use a proxy if necessary. Thanks
Maybe issue #189 can answer your question. Thanks
You can select the following node as the output to freeze the pretrained model: https://github.com/yangyanli/PointCNN/blob/7d0af994718dd49c66faf21c03c964613e8bdc6f/train_val_cls.py#L177 Thanks
Please refer to the following code for the inputs: https://github.com/yangyanli/PointCNN/blob/fa3b4d46a68450c6e0006b5c0cac014c94398fd2/train_val_cls.py#L172 and, in https://github.com/yangyanli/PointCNN/blob/master/pointcnn_cls/scannet_x2_l4.py, set 'data_dim' to 9 and 'use_extra_features' to True.
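For reference, the two settings mentioned above would look like this inside the setting file (a sketch of just the relevant lines; the real `scannet_x2_l4.py` defines many other hyperparameters around them):

```python
# In pointcnn_cls/scannet_x2_l4.py: each input point carries 9 values
# (xyz coordinates plus 6 extra feature channels), and use_extra_features
# tells the model to feed the channels beyond xyz into the network.
data_dim = 9
use_extra_features = True
```

With `data_dim = 3` and `use_extra_features = False`, only the xyz coordinates would be used.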