multi-task-learning-example

This is a lucky demo: when I change the data generation process, the predicted variance is wrong

Open conan-ux opened this issue 5 years ago • 2 comments

I think this is a lucky demo. When I change the data generation code (see below), the optimization is guided in the wrong direction and the variance prediction comes out wrong.

So I think this uncertainty method only works in situations where the squared error (the diff), the precision, and the log variance take values on a similar scale. If they are not on the same scale, the method breaks.
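For concreteness, here is a minimal sketch of the per-task loss form I am referring to (the homoscedastic-uncertainty weighting from Kendall et al. that this demo is based on; the function and variable names here are my own):

```python
import numpy as np

def task_loss(y_true, y_pred, log_var):
    # Per-task uncertainty-weighted loss:
    # exp(-log_var) scales the squared error (the "precision"),
    # and log_var penalizes predicting an arbitrarily large variance.
    precision = np.exp(-log_var)
    return np.mean(precision * (y_true - y_pred) ** 2 + log_var)
```

The squared-error term grows with the scale of the targets, while the log-variance term grows only logarithmically, so rescaling the data can leave the two terms orders of magnitude apart.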

```python
import numpy as np

Q, D1, D2 = 1, 1, 1  # input/output dimensions, as in the original demo

def gen_data(N):
    X = np.random.randn(N, Q)

    # Task 1: weights/bias scaled up by 1e2, moderate noise
    w1 = 2. * 1e2
    b1 = 8. * 1e2
    sigma1 = 10  # ground truth
    Y1 = X.dot(w1) + b1 + sigma1 * np.random.randn(N, D1)

    # Task 2: weights/bias scaled up by 1e2, and the noise scaled up too
    w2 = 3. * 1e2
    b2 = 3. * 1e2
    sigma2 = 1e2  # ground truth
    Y2 = X.dot(w2) + b2 + sigma2 * np.random.randn(N, D2)

    return X, Y1, Y2
```
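As a back-of-the-envelope check (my own numbers, assuming the model already predicts the mean perfectly): with `sigma2 = 1e2`, the expected squared error for task 2 is `1e4`, so at initialization (`log_var = 0`) the loss is completely dominated by the precision-weighted error, while at the ground truth both terms are of order 10:

```python
import numpy as np

sigma2 = 1e2
err = sigma2 ** 2  # E[(y - y_pred)^2] with a perfect mean fit

for log_var in (0.0, np.log(sigma2 ** 2)):
    loss = np.exp(-log_var) * err + log_var
    print(f"log_var = {log_var:6.2f}  ->  loss = {loss:10.2f}")
# log_var =   0.00  ->  loss =   10000.00   (initialization)
# log_var =   9.21  ->  loss =      10.21   (ground truth)
```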

conan-ux avatar Nov 26 '20 08:11 conan-ux

@conan-ux I ran into a similar issue. Have you tried tuning the learning rate?

ichxw avatar Jan 14 '21 05:01 ichxw

Try increasing the number of epochs or tuning the learning rate, for example as sketched below.
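A minimal sketch of that suggestion, assuming a tf.keras setup like the demo's (`trainable_model` and the exact values here are hypothetical placeholders, not the demo's defaults):

```python
from tensorflow.keras.optimizers import Adam

# Hypothetical: a smaller learning rate and more epochs.
# `trainable_model`, X, Y1, Y2 come from your own setup; the loss is
# assumed to be attached to the model itself (hence loss=None).
trainable_model.compile(optimizer=Adam(learning_rate=1e-4), loss=None)
trainable_model.fit([X, Y1, Y2], epochs=2000, batch_size=20, verbose=0)
```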

RuixuanDai avatar Apr 13 '21 02:04 RuixuanDai