Deep-Q-Learning-Paper-To-Code

Network is not learning when convolutional layers are applied.

Open DBaller opened this issue 4 years ago • 2 comments

Hey Phil! Thanks for the course. I'm really enjoying it so far.

I've implemented the first real Deep Q Network, and it is not learning. When I remove the convolutional layers, use only the fully connected layers, and test on CartPole-v1, it learns; however, when I test on Pong or Breakout with the convolutional layers, it does not. I've gone through all of my code many times and can't find what I messed up. I've checked the wrapper, the network, the agent, and even the main loop. Could it possibly be my imports?

I'm not sure what the best way to upload code is. Let me know if there is a better way. ExperienceReplay.txt GymWrapper.txt TrainAgent.txt DeepQNetwork.txt
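One quick sanity check on the convolutional stack (independent of the attached files, and assuming the standard DQN-paper setup of 84x84 preprocessed frames with three valid, no-padding conv layers of 8x8/stride 4, 4x4/stride 2, and 3x3/stride 1) is to verify that the flattened conv output size matches what the first fully connected layer expects:

```python
def conv_out(size, kernel, stride):
    """Spatial output size of a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

# DQN-paper conv stack applied to an 84x84 preprocessed frame:
size = 84
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    size = conv_out(size, kernel, stride)

flat = size * size * 64  # 64 filters in the last conv layer
print(size, flat)  # 7 3136 -> the first FC layer should take 3136 inputs
```

If the network's FC input dimension disagrees with this number (or the frames aren't actually 84x84 after the wrapper), the conv version will silently misbehave even though the FC-only CartPole version works.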

DBaller avatar Jan 09 '22 21:01 DBaller

Hey Phil,

I did some more experimenting and am finding that my observations are returning values of 0. It is not an issue for Pong, but it is for all other Atari games.

[Screenshots: Breakout observation output, Boxing observation output, Pong observation output]

Any reason as to why this may be?
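A quick way to confirm this is to log frame statistics right after reset. The helper below is hypothetical (not from the attached files); one known cause of all-zero early frames is that some Atari games, Breakout among them, show a blank screen until the FIRE action is pressed, which is why common DQN wrapper collections (e.g. OpenAI Baselines' `FireResetEnv`) press FIRE on reset:

```python
import numpy as np

def observation_stats(obs):
    """Diagnostic: summarize a frame to see whether it is all zeros."""
    obs = np.asarray(obs)
    return {
        "min": float(obs.min()),
        "max": float(obs.max()),
        "nonzero": int(np.count_nonzero(obs)),
    }

# Hypothetical usage inside the training loop:
#   obs = env.reset()
#   print(observation_stats(obs))
# If "nonzero" is 0 right after reset, the game may be waiting for FIRE.
```

Checking these stats per game would show whether the zeros come from the environment itself (blank start screen) or from the preprocessing wrapper zeroing the frames.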

DBaller avatar Jan 11 '22 23:01 DBaller

The first two screenshots are of Breakout and Boxing; the last one (with the filled tensors) is from Pong.

DBaller avatar Jan 11 '22 23:01 DBaller