
Inference Speed Test?

Open ildoonet opened this issue 8 years ago • 9 comments

It would be great if you could test your code to check the inference speed.

ildoonet avatar Sep 09 '17 06:09 ildoonet

Hi @ildoonet, I'll take a look when I get time :)

jaxony avatar Sep 11 '17 14:09 jaxony

I finally got around to doing some inference on ShuffleNet today. And it is definitely far too slow. Any ideas on how to speed it up? I suspect the snail-like speed is due to the frequent channel shuffling.
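For readers wondering what the channel shuffle actually does: it is commonly implemented in PyTorch as a reshape to `(groups, channels_per_group)`, a transpose, and a flatten back. A minimal sketch of the index permutation this performs, in plain Python (no PyTorch required; the function name is just illustrative):

```python
def channel_shuffle_order(num_channels, groups):
    """Return the channel permutation produced by a ShuffleNet-style
    channel shuffle: view channels as (groups, channels_per_group),
    transpose, then flatten."""
    per_group = num_channels // groups
    order = [0] * num_channels
    for c in range(num_channels):
        # channel c sits at (group g, index i), c = g * per_group + i;
        # after the transpose it lands at position i * groups + g
        g, i = divmod(c, per_group)
        order[i * groups + g] = c
    return order

# 6 channels in 2 groups: [0,1,2] and [3,4,5] interleave to [0,3,1,4,2,5]
print(channel_shuffle_order(6, 2))
```

In PyTorch itself the same effect comes from `x.view(n, g, c // g, h, w).transpose(1, 2).contiguous().view(n, c, h, w)`; the `.contiguous()` copy is one plausible source of overhead, since each shuffle forces a full memory reorder of the feature map.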

jaxony avatar Dec 20 '17 12:12 jaxony

@gngdb if you have any ideas on how to speed it up in PyTorch, I'd love to know. I can't imagine doing a full training run at this speed. Speeding it up would drastically help with training too.

jaxony avatar Dec 20 '17 12:12 jaxony

What version of PyTorch are you running? The speed of grouped convolutions increased a lot in the most recent versions.
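For context on why grouped convolutions matter here: grouping by a factor `g` cuts both the weight count and the per-position FLOPs of a convolution by `g`, which is exactly what ShuffleNet exploits with its grouped 1x1 convolutions. A small back-of-the-envelope helper (the channel counts below are illustrative, not ShuffleNet's exact configuration):

```python
def conv_params(in_ch, out_ch, k, groups=1):
    """Weight count of a 2-D convolution: each of the `groups` groups
    connects in_ch/groups inputs to out_ch/groups outputs with a
    k x k kernel, so the total is out_ch * (in_ch/groups) * k * k."""
    return out_ch * (in_ch // groups) * k * k

# grouping a 1x1 conv by 8 cuts parameters (and FLOPs per spatial
# position) by 8x
print(conv_params(240, 240, 1, groups=1))  # 57600
print(conv_params(240, 240, 1, groups=8))  # 7200
```

The flip side, and likely what gngdb is alluding to, is that early cuDNN/PyTorch releases dispatched each group as a separate small kernel, so grouped convolutions could run far slower than the FLOP count suggests until later versions batched them properly.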

gngdb avatar Dec 20 '17 14:12 gngdb

I'm running PyTorch 0.3.0 with CUDA. How long does it take for you to do one inference on the cat image? It takes probably 30 seconds for me.

jaxony avatar Dec 21 '17 02:12 jaxony

The entire script takes about 400ms for me to run, and the actual inference step y = net(x) takes about 70ms. The infer.py script never calls .cuda() so everything is running on CPU. I tried moving it to the GPU, but that just makes the single inference slower (takes longer to move the single image on and off the GPU); ends up being 16 seconds, with 5 seconds on inference.
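For anyone trying to reproduce these numbers: single-shot timings like the ones above are dominated by one-off costs (module loading, allocator warm-up, CUDA context creation). A hedged, framework-agnostic timing helper, with a stand-in workload (replace the lambda with something like `lambda: net(x)`; `net` and `x` are placeholders from the script being discussed):

```python
import time

def benchmark(fn, warmup=3, repeats=10):
    """Time a zero-argument callable: run a few warm-up iterations
    first, then report the best of several timed runs. Note that for
    GPU timing with PyTorch you would also need to call
    torch.cuda.synchronize() before reading the clock, because CUDA
    kernel launches are asynchronous."""
    for _ in range(warmup):
        fn()
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# stand-in workload; swap in the actual inference call
elapsed = benchmark(lambda: sum(i * i for i in range(10000)))
print(f"best of 10: {elapsed * 1000:.2f} ms")
```

Measured this way, the 70 ms CPU figure above is the steady-state cost, while the 30-second number likely includes first-call overheads or a much slower grouped-convolution path in that PyTorch build.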

gngdb avatar Dec 21 '17 12:12 gngdb

For completeness, I was running PyTorch version 0.4.0, and pip freeze gave this. I installed the conda env following the instructions to build PyTorch from source.

Also, here are the CPU details:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               1200.281

With an old conda env on PyTorch version 0.2.0, it took 150ms for inference and 350ms for the whole script.

gngdb avatar Dec 21 '17 12:12 gngdb

Hmm okay. I guess there's no need to improve speed if it works well enough. I'll figure out what the problem is on my end.

jaxony avatar Dec 22 '17 04:12 jaxony

@jaxony Hi, have you solved it?

DW1HH avatar Jul 10 '18 13:07 DW1HH