LibRecommender

Location of training loss and training accuracy calculations in scripts

Open Shadz13 opened this issue 4 years ago • 3 comments

I'm currently trying to find the following in the library scripts:

  1. Training log loss or loss per epoch
  2. Training accuracy per epoch

Below is a snapshot of the training loss on an example dummy sample set: training_log

Shadz13 avatar Feb 25 '21 19:02 Shadz13

Training loss appears in libreco/algorithms/base.py, lines 333-337. During training, the process already has to compute the training loss in order to update the model, so if we computed it again in evaluate.py, the training loss would be calculated twice, which is inefficient.
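To illustrate the point about avoiding a second pass, here is a minimal, framework-agnostic sketch (not LibRecommender's actual code) of how a per-epoch training loss is typically logged: the per-batch losses already produced for the gradient updates are simply averaged, so no extra evaluation pass is needed.

```python
import numpy as np

# Hypothetical batch losses emitted by the training steps of one epoch.
# In a real training loop these come for free from the optimizer step.
batch_losses = [0.72, 0.65, 0.60, 0.58]

# The logged per-epoch training loss is just their running mean.
epoch_loss = float(np.mean(batch_losses))
print(f"train_loss: {epoch_loss:.4f}")  # -> train_loss: 0.6375
```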

massquantity avatar Feb 27 '21 03:02 massquantity

Thanks, I may have overlooked it. Saved!

Can this training loss be compared directly to the eval loss recorded in the evaluation script, or does it need to be converted? In other words, is the training loss the log loss?

Shadz13 avatar Mar 02 '21 07:03 Shadz13

The training loss is computed using tf.nn.sigmoid_cross_entropy_with_logits, and the eval loss is computed using sklearn.metrics.log_loss. The math equations are the same, so if you trust the implementations of both TensorFlow and scikit-learn, the two values can be compared directly.
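The equivalence is easy to verify numerically. The sketch below reimplements both formulas in plain NumPy (rather than calling TensorFlow or scikit-learn directly, so it runs anywhere): the numerically stable logits form documented for tf.nn.sigmoid_cross_entropy_with_logits, and the binary cross-entropy on probabilities that sklearn.metrics.log_loss computes. Applied to the same predictions, they agree.

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(labels, logits):
    # Stable form documented for tf.nn.sigmoid_cross_entropy_with_logits:
    # max(z, 0) - z * y + log(1 + exp(-|z|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def binary_log_loss(labels, probs):
    # Binary cross-entropy on probabilities, as in sklearn.metrics.log_loss:
    # -(y * log(p) + (1 - y) * log(1 - p))
    return -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

rng = np.random.default_rng(0)
logits = rng.normal(size=1000)
labels = rng.integers(0, 2, size=1000).astype(float)
probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid turns logits into probabilities

train_loss = sigmoid_cross_entropy_with_logits(labels, logits).mean()
eval_loss = binary_log_loss(labels, probs).mean()
assert np.allclose(train_loss, eval_loss)  # same quantity, two parameterizations
```

The only difference is the input: the training loss takes raw logits, while log_loss takes probabilities, so as long as the evaluation applies a sigmoid to the model's outputs first, the two numbers measure the same thing.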

massquantity avatar Mar 04 '21 11:03 massquantity