multi-task-learning-example-PyTorch
This isn't an issue, just a doubt I'd like to clarify: when I use the homoscedastic loss in my area of research, the loss values come out negative...
Your implementation differs slightly from the formula in the paper: for example, the paper has sigma squared in the denominator, but yours doesn't.
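Both questions above likely trace to the same reparameterization commonly used with the Kendall et al. homoscedastic uncertainty loss: instead of learning sigma directly, implementations typically learn s = log(sigma^2), so the per-task weight 1/sigma^2 becomes exp(-s) and the log(sigma) regularizer becomes (a multiple of) s. The sigma-squared denominator is still there, just hidden inside exp(-s); and because the +s term is negative whenever sigma^2 < 1 (i.e. the model is confident about a task), a negative total loss is expected behaviour, not a bug. A minimal numeric sketch in plain Python with hypothetical loss values:

```python
import math

def uncertainty_weighted(losses, log_vars):
    """Combine per-task losses with learned log-variances s_i = log(sigma_i^2).

    Each task contributes exp(-s_i) * L_i + s_i. Writing the weight as
    exp(-s_i) instead of 1 / sigma_i^2 is algebraically the same thing but
    avoids a division and keeps the optimization numerically stable.
    """
    return sum(math.exp(-s) * L + s for L, s in zip(losses, log_vars))

# When the learned variances fall below 1 (log_vars < 0), the +s terms are
# negative and can pull the combined loss below zero.
total = uncertainty_weighted([0.1, 0.05], [-2.0, -2.0])  # ≈ -2.89
```

The factor of 1/2 on the regression terms from the paper is omitted here for brevity; it does not change the sign argument.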
How can this be parallelized across multiple GPUs?
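One common route is to wrap the model in `nn.DataParallel`, which splits each batch across the visible GPUs and gathers the outputs on device 0. A minimal sketch with a hypothetical two-task network (the real repo's architecture will differ):

```python
import torch
import torch.nn as nn

# Hypothetical two-task model standing in for the repo's network.
class TwoTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 16)
        self.head_a = nn.Linear(16, 1)
        self.head_b = nn.Linear(16, 1)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return self.head_a(h), self.head_b(h)

model = TwoTaskNet()
if torch.cuda.device_count() > 1:
    # Replicates the module per GPU, scatters the batch, gathers outputs.
    model = nn.DataParallel(model).cuda()

out_a, out_b = model(torch.randn(4, 8))
```

For better scaling (and multi-node training), `nn.parallel.DistributedDataParallel` launched via `torchrun` is generally preferred over `DataParallel`. Either way, keep the learned log-variance parameters in the top-level module or a separate optimizer parameter group, so only a single copy is updated.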