Bad performance when training with scale_factor=200
During training, I found that scale_factor has a significant impact on the results. I noticed that some other projects use a scale_factor of 200 for DTU, which I believe corresponds to the real scale, so I changed this parameter. However, the model's performance deteriorated significantly: the depth loss stays above 0.8 and does not decrease. With the default value of 100, the depth loss drops quickly and the model performs much better. Do you remember encountering this situation, and do you have any suggestions for fixing it? (A sketch of how I understand the scaling to work is below.)
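For context, this is roughly how I understand the scale factor to be applied when loading DTU; it is a minimal sketch, not the project's actual loader (`load_cam_and_depth`, the file layout, and the units are assumptions). If the camera translation and the ground-truth depth are not divided by the same factor, the rendered depth and the supervision end up on different scales and the depth loss cannot go down.

```python
import numpy as np

def load_cam_and_depth(cam_file, depth_file, scale_factor=100.0):
    """Hypothetical DTU loader: the camera translation and the GT depth
    must be divided by the same scale_factor, otherwise the rendered
    depth and the supervision live in different units."""
    cam = np.loadtxt(cam_file).reshape(4, 4)        # world-to-camera extrinsic (assumed layout)
    depth = np.load(depth_file).astype(np.float32)  # GT depth map in mm (assumed units)

    cam[:3, 3] /= scale_factor                      # rescale translation
    depth /= scale_factor                           # rescale depth consistently
    return cam, depth
```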
I actually did not expect this problem. Could you check the predicted depth and the GT depth in this situation? This scale should be adjustable.
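A quick sanity check along these lines might help: it only prints the ranges of the predicted and GT depth so you can see whether they differ by a constant factor (`pred_depth` and `gt_depth` are placeholders for whatever the training loop produces).

```python
import torch

def compare_depth_scales(pred_depth: torch.Tensor, gt_depth: torch.Tensor) -> None:
    """Print range statistics of predicted vs. GT depth; a roughly constant
    ratio (e.g. ~2x after switching scale_factor from 100 to 200) points to
    an inconsistent rescaling somewhere in the data pipeline."""
    mask = gt_depth > 0                         # ignore invalid GT pixels
    pred, gt = pred_depth[mask], gt_depth[mask]
    print(f"pred depth: min={pred.min():.3f} max={pred.max():.3f} mean={pred.mean():.3f}")
    print(f"gt   depth: min={gt.min():.3f} max={gt.max():.3f} mean={gt.mean():.3f}")
    print(f"mean ratio pred/gt: {(pred / gt.clamp(min=1e-6)).mean():.3f}")
```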