
Slower inference time than squeezenet

Open Younghoon-Lee opened this issue 4 years ago • 0 comments

Thanks for sharing this wonderful repository.

I have one problem while running inference with SqueezeNext. I compared SqueezeNext v5 with SqueezeNet, and SqueezeNext showed a longer inference time than SqueezeNet. According to the paper, SqueezeNext should have a faster inference time. I also checked the parameter counts, and SqueezeNext had fewer parameters than SqueezeNet: 0.79M for SqueezeNext (due to 6 num_classes) versus 1.2M for SqueezeNet.

Is there any chance that SqueezeNext could be slower than SqueezeNet under the same hyperparameter settings?
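For reference, here is a minimal sketch of how I measured the comparison: count trainable parameters and average the forward-pass latency over several runs after a warm-up. The two small `Conv2d` models below are hypothetical stand-ins for SqueezeNext v5 and SqueezeNet, just to keep the snippet self-contained; the helper names (`count_params`, `time_inference`) are mine, not from the repository.

```python
import time

import torch
import torch.nn as nn


def count_params(model: nn.Module) -> int:
    # Total number of trainable parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


@torch.no_grad()
def time_inference(model: nn.Module, x: torch.Tensor,
                   warmup: int = 3, iters: int = 20) -> float:
    # Average seconds per forward pass, after warm-up iterations.
    model.eval()
    for _ in range(warmup):
        model(x)
    if x.is_cuda:
        torch.cuda.synchronize()  # wait for queued GPU kernels
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters


# Hypothetical stand-ins for the two networks being compared.
model_a = nn.Conv2d(3, 8, 3, padding=1)   # "SqueezeNext"-like placeholder
model_b = nn.Conv2d(3, 16, 3, padding=1)  # "SqueezeNet"-like placeholder
x = torch.randn(1, 3, 32, 32)

print(count_params(model_a), count_params(model_b))
print(time_inference(model_a, x), time_inference(model_b, x))
```

Note that a lower parameter count does not by itself guarantee lower latency, which is why I timed the forward pass directly rather than relying on parameter counts alone.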

Younghoon-Lee avatar Nov 27 '21 07:11 Younghoon-Lee