Daiki Tanaka

Results: 5 comments by Daiki Tanaka

I agree with @Sycor4x and @haQiu-oMNi as well. The approximation in this code doesn't hold for a non-linear encoder that uses ReLU as its activation function.

It would be a great help for developers to have such Typography components :+1:

[pnezis](https://github.com/RaRe-Technologies/gensim/issues/2735#issuecomment-624550848) reports that the loss becomes epoch-wise when `model.running_training_loss = 0` is reset at the end of each epoch, and I get a similar result. If we do not reset it, the model loss continues...

@gojomo Thank you for the kind explanation. I understand that the reported loss can be used to judge model convergence. As a best practice, we should reset the running loss with `model.running_training_loss = 0`...
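A minimal sketch of such a reset callback, assuming a gensim-style model that exposes `get_latest_training_loss()` and a writable `running_training_loss` attribute (as gensim's `Word2Vec` does). In real use you would subclass `gensim.models.callbacks.CallbackAny2Vec`; here the class is written standalone so the logic is clear, and the class name `EpochLossResetter` is hypothetical:

```python
class EpochLossResetter:
    """Record per-epoch loss by resetting the cumulative running loss.

    gensim accumulates loss across the whole training run, so without a
    reset the reported value keeps growing. Resetting at the end of each
    epoch makes each reading an epoch-wise loss.
    """

    def __init__(self):
        self.epoch = 0
        self.losses = []

    def on_epoch_end(self, model):
        # Cumulative loss since the last reset, i.e. this epoch's loss.
        loss = model.get_latest_training_loss()
        self.losses.append(loss)
        print(f"epoch {self.epoch}: loss={loss}")
        # Reset so the next epoch starts counting from zero.
        model.running_training_loss = 0.0
        self.epoch += 1
```

It would be passed to training roughly as `Word2Vec(sentences, compute_loss=True, callbacks=[EpochLossResetter()])`, since loss is only tracked when `compute_loss=True`.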

Thanks @gojomo, the graph above is generated by the following code. I also think the gradual upward trend is suspicious and may be caused by a bug. - callback code ```python class...