
Activation gradients

Seabrand opened this issue 5 years ago · 1 comment

Notably in chapter 8, the backpropagation through the activation-function gradients appears off: if you want the derivative of an activation function at a given input, σ'(x), shouldn't you evaluate it at that input rather than at the output y = σ(x)? For example, if the forward pass computes

```python
layer_1 = relu(np.dot(layer_0, weights_0_1))
```

then propagating backward should use the input to the activation function,

```python
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(np.dot(layer_0, weights_0_1))
```

and not, as the book suggests,

```python
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
```

After all, with `relu2deriv(x)` defined as `x >= 0`, applying `relu2deriv(relu(x))` yields `relu(x) >= 0`, which is true everywhere, so the mask degenerates to the identity and changes nothing. The effect on training is not large, but it does impact overfitting, the amount of loss, and in fact some of the narrative.
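A minimal sketch of the point being discussed (names mirror the chapter's example; the shapes and random data are arbitrary assumptions). It compares masking by the pre-activation input against masking by the ReLU output, for both a strict `>` derivative and a `>=` variant:

```python
import numpy as np

np.random.seed(0)

def relu(x):
    return np.maximum(0, x)

def relu2deriv(x):
    # Strict inequality: 1 where x > 0, else 0
    return (x > 0).astype(float)

def relu2deriv_ge(x):
    # Non-strict variant: 1 where x >= 0, else 0
    return (x >= 0).astype(float)

# Hypothetical layer shapes for illustration
layer_0 = np.random.randn(4, 3)
weights_0_1 = np.random.randn(3, 5)

pre_activation = np.dot(layer_0, weights_0_1)
layer_1 = relu(pre_activation)

# With a strict ">", the mask computed from the pre-activation input
# and the mask computed from the output agree everywhere,
# since relu(x) > 0 exactly when x > 0:
print(np.array_equal(relu2deriv(pre_activation), relu2deriv(layer_1)))  # True

# With ">=", masking the output passes every unit, because
# relu(x) >= 0 holds everywhere -- the mask becomes all ones:
print(relu2deriv_ge(layer_1).all())  # True
```

So whether evaluating the derivative at the output is harmless hinges on the inequality being strict; with `>=`, the gradient mask applied to the output does nothing, which is the failure mode the comment describes.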

Seabrand · Feb 01 '21 14:02

I have the same doubt. Andrew Ng also computes it the way we are expecting. Below are screenshots I took from Andrew Ng's course.

[Screenshots (97) and (98) from Andrew Ng's course]

AshishPandagre · Apr 06 '21 17:04