
A multi-task learning example for the paper https://arxiv.org/abs/1705.07115

13 multi-task-learning-example issues

I'm not in the field of deep learning or computer science, but I found this work very interesting. I am confused about what I should do if I want to...

The loss function can be optimized in a way that keeps decreasing the log_var values, which I observed in my experiments. One simple solution is to use torch.abs(log_var). Any thoughts on...
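
For context, a minimal sketch of the loss form being discussed, with the torch.abs(log_var) variant from the comment above; the function and variable names here are illustrative, not taken from the repo:

```python
import torch

# Sketch of the uncertainty-weighted loss: exp(-log_var) * loss + log_var.
# The raw log_var term rewards driving log_var toward -inf once the task
# loss is small; the torch.abs(log_var) variant suggested above penalises
# large magnitudes in both directions instead.
def weighted_loss(task_loss, log_var, use_abs=False):
    reg = torch.abs(log_var) if use_abs else log_var
    return torch.exp(-log_var) * task_loss + reg
```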

@yaringal Hi, I have a question about your multi-task loss function. Below you return a loss as torch.mean(loss), but if I understand this function correctly, loss is just a single...
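
A hypothetical reconstruction of the criterion the question refers to (names are illustrative): if the per-task squared errors are kept per-sample, loss is a tensor of shape (batch,) and torch.mean averages over the batch; if they have already been summed, the result is 0-dimensional and torch.mean is a no-op.

```python
import torch

def criterion(y_preds, y_trues, log_vars):
    # Accumulate exp(-log_var) * (y - y_hat)^2 + log_var across tasks.
    loss = 0
    for y_pred, y_true, log_var in zip(y_preds, y_trues, log_vars):
        precision = torch.exp(-log_var)
        loss = loss + precision * (y_true - y_pred) ** 2 + log_var
    # A batch mean if `loss` is per-sample; a no-op if it is already 0-dim.
    return torch.mean(loss)
```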

I think this is a lucky demo. When I change the data generation code, the optimization is guided wrongly and the variance prediction is wrong. So I think this...

As described in the paper, as the noise (sigma) increases, the corresponding L(W) decreases. But if we understand sigma as the uncertainty of y, maybe it would be better for L to increase with...
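
For reference, the trade-off in question comes from the paper's Gaussian likelihood objective; this is my transcription, so check the paper for the exact equation:

```latex
% Minimising the negative log likelihood of a Gaussian with noise scale
% \sigma: the residual term shrinks as \sigma grows, while \log\sigma
% penalises unbounded growth, so \sigma cannot simply increase forever.
\mathcal{L}(W, \sigma) \approx \frac{1}{2\sigma^{2}}\,\lVert y - f^{W}(x)\rVert^{2} + \log\sigma
```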

I read the paper carefully, and I think the formula in the paper is fundamentally wrong. Under formulas (2) and (3), the probability output has a Gaussian distribution. However, the probability can't...

@yaringal Thank you for your example, it helps a lot in understanding the paper. I am currently using the proposed formula (exp(-log_var)*loss + log_var) in self-supervised learning with uncertainty estimation. In my...

Hi, thanks for your excellent work! I wonder whether there is any way to easily incorporate this method into other multi-task learning pipelines? I'm still trying to understand the formulas and...
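
One hypothetical way to drop the weighting into an existing pipeline is to keep the task models untouched and wrap their scalar losses in a small module that owns the learnable log-variances. This is a sketch; the class name and interface are assumptions, not part of the repo:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Learnable log-variances weighting a list of per-task scalar losses."""

    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # Combine as sum_i exp(-log_var_i) * loss_i + log_var_i.
        total = 0.0
        for log_var, task_loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-log_var) * task_loss + log_var
        return total
```

Remember to pass the module's parameters to the optimizer alongside the model's, e.g. torch.optim.Adam(list(model.parameters()) + list(weighting.parameters())).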

Hi, thanks for your great work, and I have some questions about formula (10). It says "in the last transition we introduced the explicit simplifying assumption ... which becomes...
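
The assumption being quoted is, as far as I can transcribe it, the following; verify against Eq. (10) in the paper itself:

```latex
% Simplifying assumption for the classification likelihood with
% temperature \sigma^{2}, which becomes an equality when \sigma \to 1:
\frac{1}{\sigma^{2}} \sum_{c'} \exp\!\left(\frac{1}{\sigma^{2}}\, f_{c'}^{W}(x)\right)
\approx \left(\sum_{c'} \exp f_{c'}^{W}(x)\right)^{1/\sigma^{2}}
```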