Multiprocessing is slow and doesn't use much GPU
Is there any way to speed up training and get the program to use more GPU?
Are you sure? Did you check resource usage with glances?
Yeah, I am running the multiprocessing training and it is using 35% of my GPU, and when running on CPU it is the same speed. I am also using the Binance API to get 1-minute candles to train, which will of course take 60x longer across the same time range, but there is still no difference in time per training iteration between GPU multiprocessing and CPU.
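A quick sanity check worth doing here (assuming the tutorial's TensorFlow/Keras setup, as the later comments suggest) is to confirm TensorFlow actually sees the GPU; identical iteration times with and without the GPU can simply mean everything silently fell back to CPU. A minimal sketch:

```python
# Minimal sketch: confirm TensorFlow can see the GPU at all.
# If the printed list is empty, training runs entirely on the CPU,
# which would explain identical iteration times.
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))

# Optional: log which device each op is placed on (can be verbose).
tf.debugging.set_log_device_placement(True)
```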
Hey, this is a tutorial, not a release; I wasn't working on it for maximum efficiency... :)
Yeah, thanks for replying. First, I must say thanks for the tutorials, they are amazing!! I was wondering if maybe I was doing something differently. The agents come in sequentially, i.e. 0, 1, 2... Shouldn't multiprocessing run the agents simultaneously?
Hey, this is a tutorial, not a release; I wasn't working on it for maximum efficiency... :)
I get that it's a tutorial about RL, but I had hoped to learn about multiprocessing too :)
Yeah, I wasn't good enough with multiprocessing/multithreading at the point when I was writing this tutorial, and right now I don't have time to continue developing it. Looking at it now, I see that the model is not written in its most efficient way, which means the models spend a lot of time on the CPU, and that doesn't allow them to use more of the GPU's power.
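For reference, a minimal sketch of what running the agents concurrently with Python's multiprocessing could look like; the train_agent function and its arguments are placeholders, not code from the tutorial:

```python
# Minimal sketch (placeholder code, not the tutorial's implementation):
# start each agent in its own process so they train at the same time
# instead of one after another (0, 1, 2, ...).
from multiprocessing import Process

def train_agent(agent_id, episodes):
    # placeholder for the per-agent training loop
    print(f"Agent {agent_id} training for {episodes} episodes")

if __name__ == "__main__":
    workers = [Process(target=train_agent, args=(i, 1000)) for i in range(4)]
    for w in workers:
        w.start()   # all agents start immediately
    for w in workers:
        w.join()    # wait for every agent to finish
```

Note that each process needs its own copy of the environment and model, and if the model itself spends most of its time on the CPU, spreading agents across processes alone won't necessarily raise GPU utilization.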
Thank you for the tutorials, you're an awesome dude! Maybe I'll learn more about it in my own time, put together my own tutorial, and see if you'd pull it back in as a #8 tutorial.
One thing I figured out is that learning speed increases with learning rate decay. I tried to implement an exponential learning rate schedule in the shared model: lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate, decay_steps=XXXXX, decay_rate=XXX). I know it doesn't help with calculation speed or GPU usage, but it helps the model converge faster.
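For anyone trying this, a minimal sketch of how such a schedule can be passed to a Keras optimizer; the specific numbers are illustrative only, since the decay_steps and decay_rate values above were left as placeholders:

```python
import tensorflow as tf

# Example values only; tune initial_learning_rate, decay_steps and
# decay_rate for your own training run.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,   # starting learning rate
    decay_steps=10_000,           # decay is applied every 10,000 steps
    decay_rate=0.96,              # LR is multiplied by 0.96 at each decay step
)

# The schedule goes wherever a fixed learning rate would normally go.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```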