SqueezeNext.PyTorch
Slower inference time than squeezenet
Thanks for sharing this wonderful repository.
I have one problem while running inference with SqueezeNext. I compared SqueezeNext v5 with SqueezeNet, and SqueezeNext showed a longer inference time than SqueezeNet. According to the paper, SqueezeNext should have a faster inference time. I checked the parameter counts, and SqueezeNext has fewer parameters than SqueezeNet: 0.79M for SqueezeNext (due to 6 num_classes) versus 1.2M for SqueezeNet.
Is there any chance that SqueezeNext could be slower than SqueezeNet under the same hyperparameter settings?
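One thing worth ruling out first is the measurement itself: parameter count does not directly predict latency (SqueezeNext trades parameters for more, smaller layers, which can cost more per-layer launch overhead), and naive timing without warm-up can skew results. Below is a minimal, hedged timing harness, with placeholder callables standing in for the two models (the helper name and arguments are my own, not from this repo); with CUDA models you would additionally call `torch.cuda.synchronize()` before reading the clock.

```python
import time

def benchmark(fn, n_warmup=10, n_runs=100):
    """Return the mean wall-clock seconds per call of fn()."""
    # Warm-up iterations so one-time costs (memory allocation,
    # kernel compilation, cache misses) don't skew the measurement.
    for _ in range(n_warmup):
        fn()
    # For GPU models, synchronize here before starting the timer,
    # e.g. torch.cuda.synchronize(), since CUDA calls are async.
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

# Placeholder workloads; in practice these would be
# lambda: squeezenext(x) and lambda: squeezenet(x)
# with a fixed input tensor x and model.eval() set.
t_a = benchmark(lambda: sum(range(10_000)))
t_b = benchmark(lambda: sum(range(20_000)))
print(f"A: {t_a * 1e6:.1f} us/call, B: {t_b * 1e6:.1f} us/call")
```

If the gap persists under a fair measurement like this, the slowdown is likely real on your hardware, since a deeper network with many small convolutions can be slower than a shallower one despite fewer parameters.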