
Bad performance when training on scale_factor 200

Open monkeydchopper opened this issue 1 year ago • 1 comment

During training, I found that the scale_factor has a significant impact on the results. I noticed that some other projects use a scale_factor of 200 for DTU, and I think this might represent the real metric scale. However, when I changed this parameter, the model's performance deteriorated significantly: the depth loss stayed above 0.8 and did not decrease. With the default value of 100, the depth loss drops rapidly and the model performs much better. I would like to ask whether you have encountered this situation and whether you have any suggestions on how to fix it.
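One thing worth noting when comparing runs like this: if the scale_factor divides the camera translations (a common convention in DTU pipelines, though the exact usage in this repo is an assumption here), then all scene depths shrink by the same factor, so an absolute depth-loss value is not directly comparable across different scale factors, and hard-coded near/far sampling bounds may no longer bracket the scene. A minimal sketch of the effect, with hypothetical pose data:

```python
import numpy as np

def rescale_poses(c2w, scale_factor):
    """Divide camera translations by scale_factor.

    Assumed convention: c2w is a stack of 4x4 camera-to-world matrices
    and scale_factor rescales the whole scene (this mirrors how many
    DTU loaders normalize poses, but is an illustration, not this
    repo's exact code).
    """
    c2w = c2w.copy()
    c2w[:, :3, 3] /= scale_factor
    return c2w

# Fake camera-to-world poses with DTU-like translations (~hundreds of units).
rng = np.random.default_rng(0)
c2w = np.tile(np.eye(4), (4, 1, 1))
c2w[:, :3, 3] = rng.uniform(400.0, 800.0, size=(4, 3))

for s in (100.0, 200.0):
    scaled = rescale_poses(c2w, s)
    dists = np.linalg.norm(scaled[:, :3, 3], axis=1)
    print(f"scale_factor={s:.0f}: camera distances "
          f"{dists.min():.2f}..{dists.max():.2f}")

# Doubling scale_factor halves every distance and depth, so the same
# metric depth error of 0.8 at scale 100 would read as 0.4 at scale 200;
# a loss stuck at 0.8 under scale 200 therefore indicates a genuinely
# larger error, or sampling bounds that no longer match the scene.
```

The practical takeaway is that near/far bounds, loss weights, and any depth thresholds usually need to be rescaled together with scale_factor rather than kept fixed.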

monkeydchopper avatar Feb 20 '24 05:02 monkeydchopper

I actually did not expect this problem. Could you check the predicted and ground-truth depth in this situation? The scale should be adjustable.

Caoang327 avatar Feb 29 '24 18:02 Caoang327