HashNet

pairwise loss in pytorch code

Open bfan opened this issue 7 years ago • 10 comments

Hi, I have difficulty understanding the pairwise loss in your PyTorch code. In particular:

  1. I cannot relate it to Equation (4) in the paper. What is the meaning of the parameter "l_threshold" in your code?

  2. The returned loss in the code seems to be weighted with 1/w_ij as defined in the paper, i.e., Equation (2), since I find that the loss is finally divided by |S|. Can you give me some explanation of this point?

bfan avatar Sep 28 '18 05:09 bfan

Sorry, I found the answer to my first question in a closed issue. However, I am still confused about the second question. Does the PyTorch code of the loss function only implement the non-weighted maximum likelihood when class_num=1 is set? Otherwise, could you show me where w_ij is in the code?

bfan avatar Sep 28 '18 07:09 bfan

I think in the paper, the returned loss is weighted with w_ij, which is calculated by Equation (2).

caozhangjie avatar Sep 28 '18 18:09 caozhangjie

The loss is divided by |S| to average it, since there are |S| pairs of codes.

caozhangjie avatar Sep 28 '18 18:09 caozhangjie

Thanks for your reply. Maybe I didn't ask the question clearly. I want to know whether the loss implemented in the PyTorch code (loss.py) is exactly the one defined in Equation (2), or just a simplified version with w_ij=1?

bfan avatar Sep 29 '18 00:09 bfan

We are still fixing the weight bug in the PyTorch version, so in PyTorch we only use w_ij=1. There are some differences in parameters between the Caffe and PyTorch versions.

caozhangjie avatar Sep 29 '18 06:09 caozhangjie

@bfan @caozhangjie I added the weight in the PyTorch version (without c).

import torch
from torch.autograd import Variable

def pairwise_loss(outputs1, outputs2, label1, label2):
    # s_ij = 1 if the two samples share at least one label, else 0
    similarity = Variable(torch.mm(label1.data.float(), label2.data.float().t()) > 0).float()
    dot_product = torch.mm(outputs1, outputs2.t())

    mask_positive = similarity.data > 0
    mask_negative = similarity.data <= 0

    # Numerically stable form of log(1 + exp(<h_i, h_j>)) - s_ij * <h_i, h_j>,
    # using the identity log(1 + e^x) = log(1 + e^{-|x|}) + max(x, 0)
    exp_loss = torch.log(1 + torch.exp(-torch.abs(dot_product))) \
        + torch.max(dot_product, Variable(torch.FloatTensor([0.]).cuda())) \
        - similarity * dot_product

    # Weights from Equation (2): similar pairs get |S|/|S1|, dissimilar pairs |S|/|S0|
    # (assumes the batch contains at least one pair of each kind, otherwise S1 or S0 is zero)
    S1 = torch.sum(mask_positive.float())  # number of similar pairs
    S0 = torch.sum(mask_negative.float())  # number of dissimilar pairs
    S = S0 + S1
    exp_loss[mask_positive] = exp_loss[mask_positive] * (S / S1)
    exp_loss[mask_negative] = exp_loss[mask_negative] * (S / S0)

    # Average over all |S| pairs
    loss = torch.sum(exp_loss) / S

    return loss
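As a sanity check on the first two terms: they are a numerically stable rewrite of softplus, log(1 + e^x) = log(1 + e^{-|x|}) + max(x, 0), which avoids overflow when the dot product is large. A minimal sketch of the identity (plain Python, no GPU assumed):

```python
import math

def softplus_naive(x):
    # log(1 + e^x): overflows in floating point for large positive x
    return math.log(1 + math.exp(x))

def softplus_stable(x):
    # Same value, computed as in exp_loss above:
    # log(1 + e^{-|x|}) + max(x, 0)
    return math.log(1 + math.exp(-abs(x))) + max(x, 0.0)

# Both forms agree where the naive one is representable
for x in [-5.0, -0.5, 0.0, 0.5, 5.0]:
    assert abs(softplus_naive(x) - softplus_stable(x)) < 1e-12

# The stable form still works where math.exp(1000.0) would raise OverflowError
print(softplus_stable(1000.0))  # → 1000.0
```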

soon-will avatar Nov 30 '18 09:11 soon-will

Thank you for your help. @soon-will

caozhangjie avatar Feb 23 '19 00:02 caozhangjie


Hi, is it OK for the ImageNet dataset? @soon-will @caozhangjie @bfan

SikaStar avatar Mar 03 '19 08:03 SikaStar


I'm confused about this loss function. What is the principle of exp_loss ?

exp_loss = torch.log(1+torch.exp(-torch.abs(dot_product))) + torch.max(dot_product, Variable(torch.FloatTensor([0.]).cuda()))-similarity * dot_product

Can you help me? Thank you!

xandery-geek avatar Dec 16 '21 03:12 xandery-geek

This is an automatic vacation reply from QQ Mail. Hello, your email has been received; I will reply to you as soon as possible.

soon-will avatar Dec 16 '21 03:12 soon-will