Are commit_loss and codebook_loss always equal?
When I retrain the DAC model, I find that commit_loss and codebook_loss are equal at every iteration.
Is this correct? Code location: quantize.py, class VectorQuantize:

```python
commitment_loss = F.mse_loss(z_e, z_q.detach(), reduction="none").mean([1, 2])
codebook_loss = F.mse_loss(z_q, z_e.detach(), reduction="none").mean([1, 2])
print('commitment_loss/codebook_loss:', commitment_loss, codebook_loss)
```

Example output:

```
commitment_loss/codebook_loss: tensor([1.6723, 2.0803, 1.9611, 1.8907], device='cuda:0', grad_fn=<MeanBackward1>) tensor([1.6723, 2.0803, 1.9611, 1.8907], device='cuda:0', grad_fn=<MeanBackward1>)
```
Can anyone explain the difference and function of these two loss terms?
Yes, they always have the same value: `.detach()` only stops gradients without changing the tensor's values, and MSE is symmetric in its arguments, so the two expressions evaluate to the same number. The difference is where the gradients flow: the commitment loss only produces gradients for the encoder (pulling z_e toward the selected code vectors), while the codebook loss only produces gradients for the codebook (pulling the code vectors toward the encoder outputs). You can give them different loss weights to emphasize which is more important.
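A minimal standalone sketch of this behavior (not the actual DAC training code; the tensor shapes and the 0.25 commitment weight below are only illustrative assumptions): the two losses are numerically identical, but each one sends gradients to a different tensor because the other side is detached.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the encoder output and the quantized codebook vectors.
# Shapes (batch, dim, time) are arbitrary, chosen just for illustration.
z_e = torch.randn(4, 8, 16, requires_grad=True)  # encoder output
z_q = torch.randn(4, 8, 16, requires_grad=True)  # quantized (codebook) vectors

commitment_loss = F.mse_loss(z_e, z_q.detach(), reduction="none").mean([1, 2])
codebook_loss = F.mse_loss(z_q, z_e.detach(), reduction="none").mean([1, 2])

# Same values: detach() does not change the numbers, and MSE is symmetric.
print(torch.allclose(commitment_loss, codebook_loss))  # True

# Different gradient paths: weight the terms however you like, e.g. the
# common VQ-VAE-style 0.25 on the commitment term (illustrative only).
loss = codebook_loss.mean() + 0.25 * commitment_loss.mean()
loss.backward()

# z_e receives gradients only through the commitment term, and z_q only
# through the codebook term, because the other tensor is detached in each.
print(z_e.grad.abs().sum() > 0, z_q.grad.abs().sum() > 0)  # tensor(True) tensor(True)
```

This mirrors the standard VQ-VAE formulation: the stop-gradient splits one squared distance into a term that trains the encoder and a term that trains the codebook, and the loss weights let you trade off the two.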