
multiprocessing is slow and doesn't use much GPU

HoaxParagon opened this issue 4 years ago · 8 comments

Is there any way to speed up training and get the program to use more GPU?

HoaxParagon avatar Dec 21 '21 13:12 HoaxParagon

Are you sure? Did you check resource usage with glances?

kitmir avatar Jan 21 '22 23:01 kitmir

Yeah, I am running the multiprocessing training and it only uses about 35% of my GPU, and running on CPU gives the same speed. I am also using the Binance API to get 1-minute candles to train, which will of course take 60x longer across the same time range, but there is still no difference in time per training iteration between GPU multiprocessing and CPU.

DogeBotDoge avatar Mar 01 '22 11:03 DogeBotDoge

Hey, this is a tutorial, not a release; I wasn't working on it for maximum efficiency... :)

pythonlessons avatar Mar 01 '22 14:03 pythonlessons

Thanks for replying, and first I must say thanks for the tutorials, they are amazing!! I was wondering if maybe I was doing something differently. The agents start sequentially, i.e. 0, 1, 2... Shouldn't multiprocessing run the agents simultaneously?

DogeBotDoge avatar Mar 01 '22 18:03 DogeBotDoge

> Hey, this is a tutorial, not a release; I wasn't working on it for maximum efficiency... :)

I get that it's a tutorial about RL but I had hoped to learn about multiprocessing too :)

HoaxParagon avatar Mar 01 '22 22:03 HoaxParagon

Yeah, I wasn't good enough with multiprocessing/multithreading at the point when I was writing this tutorial, and right now I don't have time to continue developing it. Looking at it now, I see the model isn't written in the most efficient way, which means the models spend a lot of time on the CPU, and that doesn't let them use more of the GPU's power.

pythonlessons avatar Mar 02 '22 08:03 pythonlessons

> Yeah, I wasn't good enough with multiprocessing/multithreading at the point when I was writing this tutorial, and right now I don't have time to continue developing it. Looking at it now, I see the model isn't written in the most efficient way, which means the models spend a lot of time on the CPU, and that doesn't let them use more of the GPU's power.

Thank you for the tutorials, you're an awesome dude! Maybe I'll learn more about it in my own time, put together my own tutorial, and see if you'd accept it as tutorial #8.

HoaxParagon avatar Mar 02 '22 15:03 HoaxParagon

One thing I figured out is that training converges faster with learning rate decay. I tried implementing an exponential learning rate schedule in the shared model:

```python
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=XXXXX, decay_rate=XXX)
```

I know it doesn't help calculation speed or GPU usage, but it helps the model converge faster.
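For anyone curious what that schedule actually computes: in its default (non-staircase) mode, `ExponentialDecay` evaluates `initial_learning_rate * decay_rate ** (step / decay_steps)`. A minimal pure-Python sketch with made-up example numbers (the `decay_steps`/`decay_rate` values here are illustrative, not recommendations):

```python
def exponential_decay(initial_lr, decay_steps, decay_rate, step):
    """Mirror of tf.keras ExponentialDecay's non-staircase formula."""
    return initial_lr * decay_rate ** (step / decay_steps)

initial_lr = 1e-3
# At step 0 the learning rate is unchanged; after decay_steps steps
# it has been multiplied by decay_rate once.
lr_start = exponential_decay(initial_lr, decay_steps=1000, decay_rate=0.96, step=0)
lr_later = exponential_decay(initial_lr, decay_steps=1000, decay_rate=0.96, step=1000)
print(lr_start, lr_later)
```

The decay is smooth rather than stepped; passing `staircase=True` to the Keras schedule would instead floor `step / decay_steps` to an integer, dropping the rate in discrete jumps.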

kitmir avatar Mar 04 '22 12:03 kitmir