Update loss.py
Is it possible to convert the zero vectors in the label matrix into one vectors? To avoid NaN loss, I added the two lines of code marked below. I'm testing the modified model to see if it works. As a newbie to deep learning, please forgive me if my question is naive. Thank you very much for your help! 🙏
Last night I tried deleting `label_scale` and found that it also works, but the loss is large, so I wonder whether turning the zero vectors into one vectors is an acceptable fix.
```python
def forward(self, logits, z, labels):
    if self.multiclass:
        if not self.onehot:
            labels = F.one_hot(labels, logits.size(1))
        labels = labels.float()
    margin_logits = self.compute_margin_logits(logits, labels)
    # label smoothing
    log_logits = F.log_softmax(margin_logits, dim=1)
    # If a row of the label matrix is all zeros, replace it with all ones
    # so the normalization below does not divide 0 by 0 and produce NaN.
    A = (labels == 0).sum(dim=1) == labels.shape[1]
    labels[A] = 1
    labels_scaled = labels / labels.sum(dim=1, keepdim=True)
    loss = -(labels_scaled * log_logits).sum(dim=1)
    loss = loss.mean()
    return loss
```
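To illustrate why the all-zero rows cause NaN in the first place, here is a minimal NumPy sketch of the same arithmetic (the real code uses PyTorch tensors, but the behavior is identical): an all-zero row sums to 0, so `labels / labels.sum(...)` divides 0 by 0 and yields NaN, which then propagates through the loss.

```python
import numpy as np

labels = np.array([[1., 0., 1.],
                   [0., 0., 0.]])  # second row is an all-zero label vector

# Naive normalization: the zero row gives 0/0 -> NaN
with np.errstate(invalid="ignore"):
    naive = labels / labels.sum(axis=1, keepdims=True)

# Fix: replace all-zero rows with ones before normalizing
mask = (labels == 0).sum(axis=1) == labels.shape[1]
fixed = labels.copy()
fixed[mask] = 1.0
scaled = fixed / fixed.sum(axis=1, keepdims=True)

# naive[1] is all NaN; scaled[1] becomes the uniform row [1/3, 1/3, 1/3]
```

Note that replacing a zero row with ones turns that sample's target into a uniform distribution over all classes, so it still contributes to the loss; an alternative worth comparing would be masking those samples out of the loss entirely.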
I will add a full report later with a comparison between these two changes. So far, converting the zero vectors to one vectors performs better than simply deleting `label_scale`.