
Much slower than original spinningup tf version

Open zhan0903 opened this issue 6 years ago • 6 comments

Hi, I ran this PyTorch version of SAC on MuJoCo, and it took almost three times as long as the original TF version. Why does this happen? Is there any way to improve the speed?

zhan0903 avatar Nov 17 '19 23:11 zhan0903

@zhan0903 thanks for trying it out! So there could be two reasons for this:

  1. I am calculating gradients of the computational graph unnecessarily (that's where TF is better, since it only runs the part of the graph that is needed); a solution might be to add torch.no_grad() wrappers where needed
  2. TF will use the GPU if you are running on a GPU machine, whereas currently I only run on the CPU
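The first point can be sketched as follows. This is a hypothetical helper, not code from the repo: in SAC the target values are never backpropagated through, so wrapping their computation in torch.no_grad() skips building the autograd graph for that part.

```python
import torch

def compute_target(q_net, next_obs, rewards, done, gamma=0.99):
    # Hypothetical sketch of a target computation: targets are treated
    # as constants in the loss, so no gradient tracking is needed here.
    with torch.no_grad():
        q_next = q_net(next_obs)
        # Greedy backup over actions; the result has requires_grad=False,
        # so the subsequent loss only builds a graph for the online network.
        return rewards + gamma * (1.0 - done) * q_next.max(dim=1).values
```

Anything computed inside the no_grad block carries no graph, which saves both memory and the time spent recording operations.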

Do you have a benchmarking setup to test these reasons?

Thanks! Kashif

kashif avatar Nov 18 '19 08:11 kashif

@kashif thanks for your response. The TF version does not use the GPU either. I will try the torch.no_grad() wrappers and test your code in my experiments.

Thanks Han

zhan0903 avatar Nov 18 '19 23:11 zhan0903

thank you!

kashif avatar Nov 18 '19 23:11 kashif

@kashif Hi, I used torch.no_grad() in the backpropagation process for SAC, but it didn't improve the speed. The TF version doesn't use the GPU, yet it is still faster than the PyTorch GPU version (the TF SAC takes about 7,000 seconds, while the PyTorch GPU version takes around 15,000 seconds).
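One way to narrow down a gap like this is a microbenchmark that times the same forward pass with and without gradient tracking. This is a hypothetical sketch (the function name and setup are assumptions, not from the thread), but it separates autograd overhead from raw compute time:

```python
import time
import torch

def time_forward(net, x, grad, iters=100):
    # Time `iters` forward passes, either with autograd recording
    # enabled (grad=True) or disabled (grad=False).
    ctx = torch.enable_grad() if grad else torch.no_grad()
    start = time.perf_counter()
    with ctx:
        for _ in range(iters):
            net(x)
    return time.perf_counter() - start
```

If the two timings are close, the slowdown is not coming from graph construction, and the bottleneck is elsewhere (e.g. per-step Python overhead or data transfer between CPU and GPU).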

zhan0903 avatar Nov 19 '19 01:11 zhan0903

I've observed the same thing in the official Spinning Up PyTorch SAC code. For whatever reason, it's just slower, even when you're being very careful to only calculate quantities that are absolutely necessary. I haven't figured out why yet! Hopefully I'll crack this eventually.

jachiam avatar Feb 02 '20 19:02 jachiam

Thanks @jachiam, I'll have a look too, perhaps after the ICML deadline...

kashif avatar Feb 02 '20 21:02 kashif