
Cross-Entropy Loss is not included in the total loss

Open · daddyke opened this issue 5 years ago · 1 comment

Hi,

In the paper "Transferable Representation Learning with Deep Adaptation Networks", you use a cross-entropy loss (corresponding to equation 8 in the paper) to minimize the uncertainty of the predicted labels on the target data.

I found the corresponding implementation of that equation, defined as EntropyLoss() in loss.py. In the paper, the total loss is composed of three main parts: the classification loss, the MMD loss, and the cross-entropy loss.
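For reference, the entropy term being discussed can be sketched in plain Python as follows. This is only an illustration of the idea behind equation 8 (mean entropy of the predicted class distributions, so minimizing it pushes target predictions toward confident outputs); the actual EntropyLoss() in loss.py is the PyTorch implementation and may differ in details.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a single logit vector.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_loss(batch_logits):
    # Mean entropy of the softmax predictions over a batch.
    # Low entropy = confident predictions, which is what the
    # penalty on unlabeled target data encourages.
    total = 0.0
    for logits in batch_logits:
        p = softmax(logits)
        total += -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / len(batch_logits)
```

For example, uniform logits give the maximum entropy `log(num_classes)`, while sharply peaked logits give a value near zero.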

What confuses me is that in train.py you do add the MMD loss and the classification loss together, but you never add the cross-entropy loss. Am I missing something, or is this intentional?

Looking forward to hearing from you soon.

Thank you, Ke

daddyke avatar Nov 25 '20 03:11 daddyke

Actually this confused me as well.

The fact is that there are two versions of DAN, from 2015 and 2018. The code you mentioned is a reimplementation of DAN 2015, and the DAN 2015 paper has only two losses.
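In other words, the two paper versions combine different terms. A minimal sketch of the difference (the weight names `lam` and `gamma` are illustrative, not the repo's actual hyperparameters):

```python
def total_loss_dan2015(cls_loss, mmd_loss, lam=1.0):
    # DAN 2015: classification loss + weighted MMD loss only,
    # which matches what train.py actually sums.
    return cls_loss + lam * mmd_loss

def total_loss_dan2018(cls_loss, mmd_loss, ent_loss, lam=1.0, gamma=0.1):
    # DAN 2018: additionally penalizes the entropy of the
    # target predictions (the EntropyLoss() term from loss.py).
    return cls_loss + lam * mmd_loss + gamma * ent_loss
```

So the missing entropy term is consistent with the code targeting the 2015 formulation.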

KEVIN666666666 avatar Oct 23 '22 13:10 KEVIN666666666