
The generative loss in implementation

Open uuutty opened this issue 6 years ago • 4 comments

In the paper, the objective function to minimize is -log p(x) - M·log(a). In the code, however, `objective` first adds the constant c and log p(z), and then a negative sign is applied to get the generative loss:

https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L172
https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L181
https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L184

It seems to minimize -log p(x) + M·log(a), not the loss written in the paper, which is -log p(x) - M·log(a). Is the constant ignored because it does not affect training, or have I missed something in the code?

uuutty avatar Mar 14 '19 08:03 uuutty

It's an optimization:

log(a) = log(1 / n_bins) = -log(n_bins)

So the constant c = -M·log(n_bins) that the code adds is exactly M·log(a), and negating the objective gives -log p(x) - M·log(a), matching the paper.
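A minimal sketch of the sign bookkeeping, in the spirit of the linked `model.py` lines (function and argument names here are illustrative, not the repo's actual API):

```python
import numpy as np

def nll_with_dequant_constant(log_pz, log_det, n_bins, n_pixels):
    """Negative log-likelihood per image, including the
    discretization constant c = -M * log(n_bins)."""
    # c = -M * log(n_bins) = M * log(a), since a = 1 / n_bins
    c = -np.log(n_bins) * n_pixels
    # objective = log p(z) + log|det| + c, i.e. log p(x) + M * log(a)
    objective = log_pz + log_det + c
    # loss = -objective = -log p(x) - M * log(a), as in the paper
    return -objective
```

Plugging in log(a) = -log(n_bins) shows the "+c then negate" code path and the paper's -log p(x) - M·log(a) are the same quantity.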

gerbenvv avatar May 04 '19 14:05 gerbenvv

Just to clarify: the purpose of the constant "scaling penalty" c is just to ensure accurate likelihood computations? The minimum would be the same with or without c. Comparison or model selection on the basis of likelihood computation is also iffy though, isn't it?

christabella avatar Jan 10 '20 10:01 christabella

Given that a normalizing flow gives you a correct log-likelihood of your data under your model, it would be a shame to omit c, even though it is technically not required for optimization. Model scoring/selection can be done using the log-likelihood of test data under the model. Superiority of a model can, for example, be established with a likelihood-ratio test.
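For the comparison gerbenvv mentions, a likelihood-ratio test could look like the sketch below. This assumes the two models are nested and uses the standard asymptotic chi-squared approximation; the function name and inputs are hypothetical, not part of the Glow code:

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_null, ll_alt, df):
    """Likelihood-ratio test for nested models.

    Under the null hypothesis, D = 2 * (ll_alt - ll_null) is
    asymptotically chi-squared distributed, with df equal to the
    difference in the number of free parameters.
    """
    D = 2.0 * (ll_alt - ll_null)
    p_value = chi2.sf(D, df)  # survival function: P(chi2 >= D)
    return D, p_value
```

Note that the constant c must be included consistently in both models' log-likelihoods for such a comparison to be meaningful.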

gerbenvv avatar Jan 11 '20 14:01 gerbenvv

Thank you for the explanation!

christabella avatar Jan 14 '20 08:01 christabella