Possibly critical bugs in UNet training script
In the UNet training script, when training_type="single", I think the model might only be training on one batch element per epoch due to an indentation error. Specifically, this if-clause should be indented one more level. The FNO training script is indented correctly and does not have this problem.
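For reference, here is a minimal sketch of the pattern I mean (illustrative PyTorch code with made-up names, not the actual script):

```python
import torch
import torch.nn as nn

# Stand-ins for the real model/data; the names here are illustrative only.
model = nn.Conv1d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
train_loader = [(torch.randn(8, 1, 16), torch.randn(8, 1, 16)) for _ in range(5)]
training_type = "single"

for ep in range(2):
    for xx, yy in train_loader:
        pass                              # per-batch preprocessing happens here
    if training_type == "single":         # BUG: dedented out of the batch loop,
        pred = model(xx)                  # so only the batch left over from the
        loss = loss_fn(pred, yy)          # last iteration ever reaches the optimizer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Indenting the whole if-block one more level, inside the for xx, yy loop, would make every batch contribute a gradient step.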
Sidenote (another bug): when constructing the UNetDatasetMult, the script passes parameters that don't exist (anymore?), causing a crash.
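To illustrate the kind of crash I mean (the class and argument names below are made up, not the real UNetDatasetMult signature):

```python
# Hypothetical stand-in; not the real UNetDatasetMult signature.
class DatasetStub:
    def __init__(self, file_name, saved_folder):
        self.file_name = file_name
        self.saved_folder = saved_folder

# A call site still written against an older signature fails on construction:
DatasetStub("data.h5", "./data/", reduced_resolution=4)
# TypeError: __init__() got an unexpected keyword argument 'reduced_resolution'
```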
Huh, that's unfortunate but seems like you are correct... Would you mind opening a PR for this?
done.
Does this affect any of the benchmark results?
Thank you!
Possibly, yes. It's a little hard to trace back when this was introduced since we moved repositories at some point. But yes, the "single" UNet training might have been severely limited by this.