Location of training loss and training accuracy calculations in scripts
I'm currently trying to find the following in the library scripts:
- Training log loss or loss per epoch
- Training accuracy per epoch
Below is a snapshot of the training loss produced on an example dummy sample set:
Training loss appears in libreco/algorithms/base.py, lines 333-337.
During training, the process already has to compute the training loss in order to update the model. If we computed it again in evaluate.py, the training loss would be calculated twice, which is inefficient.
Thanks, I may have overlooked it. Saved!
Can this training loss be compared directly to the eval loss recorded in the evaluation script, or does it need to be converted? In other words, is the training loss the log loss?
The training loss is computed using tf.nn.sigmoid_cross_entropy_with_logits, and the eval loss is computed using sklearn.metrics.log_loss. The underlying math is the same, so if you trust the implementations of both TensorFlow and scikit-learn, the two values can be compared directly.
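To see why the two are comparable, here is a small numpy sketch (not the library's own code) that reproduces both formulas: the numerically stable expression TensorFlow documents for sigmoid cross-entropy on logits, and the per-sample binary log loss that sklearn.metrics.log_loss averages over probabilities. Applied to the same labels, they agree once the logits are passed through a sigmoid.

```python
import numpy as np

def sigmoid_ce_with_logits(labels, logits):
    # Stable form documented for tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x * z + log(1 + exp(-|x|)), with x = logits, z = labels
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def binary_log_loss(labels, probs):
    # Per-sample binary cross-entropy, the quantity sklearn.metrics.log_loss averages
    return -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

labels = np.array([1.0, 0.0, 1.0, 0.0])
logits = np.array([2.0, -1.0, 0.5, 3.0])
probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid turns logits into probabilities

# The two losses match element-wise
print(np.allclose(sigmoid_ce_with_logits(labels, logits), binary_log_loss(labels, probs)))
```

The practical caveat is only about inputs: the TensorFlow op expects raw logits, while log_loss expects probabilities, so the comparison is valid as long as the eval script feeds sigmoid-transformed predictions to log_loss.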