Deep_reinforcement_learning_Course
Can anyone explain why we have `self.Q = tf.reduce_sum(tf.multiply(self.output, self.actions_), axis=1)` in Deep Q learning with Doom.ipynb?
Why multiply by the action and use `reduce_sum` instead of `argmax`?
I think it's because `actions_` is a one-hot vector, with a 1 only at the index of the chosen action. Multiplying element-wise therefore zeroes out every Q-value except the one for the chosen action, and `reduce_sum` then extracts that single value, since all the other entries are zero. Note also that `argmax` wouldn't do the same job here: during training you need the Q-value of the action that was actually taken in the stored transition, not the action with the highest Q-value, and `argmax` returns an index rather than a value you can backpropagate through. What do you think?
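Here is a minimal sketch of that interpretation with made-up numbers (the tensor values below are hypothetical, not from the notebook), showing how the one-hot multiply plus `reduce_sum` picks out one Q-value per batch row:

```python
import tensorflow as tf

# Hypothetical network output: Q-values for 3 actions, batch size 2.
output = tf.constant([[1.5, 0.2, -0.3],
                      [0.1, 2.0,  0.7]])

# One-hot encodings of the actions actually taken in each transition.
actions_ = tf.constant([[0., 0., 1.],
                        [0., 1., 0.]])

# Element-wise multiply zeroes out every Q-value except the chosen action's;
# summing over axis=1 collapses each row to that single remaining value.
Q = tf.reduce_sum(tf.multiply(output, actions_), axis=1)
print(Q)  # [-0.3  2.0] -- Q-value of the taken action in each row
```

An alternative would be something like `tf.gather_nd` to index the chosen actions directly, but the one-hot multiply is simple, stays fully differentiable, and only lets gradients flow through the Q-value of the action that was taken.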