Reproduced model performs poorly on real data
Thanks for sharing such great work!
On synthetic data, the results of the model I reproduced and the pretrained model you provided are essentially the same.
(Figure panels: pretrain-117 | reproduced-125 | syn-input | ground-truth)
However, the difference on real data is huge: the pretrained model still performs very well, while the reproduced model is basically ineffective (I keep the processing scripts identical and only swap the model weights).
I tried downsampling the real data by a factor of 4 and then upsampling it back to the original size, and with that preprocessing the reproduced model works (a rough sketch of this preprocessing is included below).
(Figure panels: pretrain-117 | reproduced-125-downup | reproduced-125 | real-input)
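In case it helps debugging, this is roughly the preprocessing I used for that workaround; the file paths and the bicubic interpolation mode are just my own assumptions, not taken from the repo scripts.

```python
# Minimal sketch of the down/up-sampling workaround: 4x bicubic downsample,
# then upsample back to the original resolution before running the model.
# Paths and interpolation mode are placeholders for my local setup.
from PIL import Image

def down_up(path_in, path_out, scale=4):
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    small = img.resize((max(1, w // scale), max(1, h // scale)), Image.BICUBIC)
    restored = small.resize((w, h), Image.BICUBIC)
    restored.save(path_out)

down_up("real_input.png", "real_input_downup.png")
```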
I suspect this is caused by insufficient generalization from the synthetic training data, or by something else I am missing. During training I followed your code exactly and made no modifications. Could you please give me some advice?
Could you point out which part is not as good as you expected?
Sorry. I have added labels to the figures so that the comparison is clearer.
The main problem is that the reproduced model does not work well on the real test data, as shown in the middle and right of the figure below, while the pretrained model handles it very well, as shown on the left. Both models do well on synthetic data.
@q935970314 Could you provide more information about how were you able to replicate the results on the synthetic dataset at epoch 125? Did you use the same configs and the dataset for training?
Yes, I used exactly the settings in the code: same config, same dataset, same GPU.
@IceClear Hi, do you have any suggestions? Did you use any other data synthesis methods during training?
Sorry for the late reply; I am busy with other projects at the moment. I am not sure of the reason. I did not use any additional data. We have noticed that a training checkpoint can sometimes overfit to the training data, so the checkpoint may need to be chosen carefully.
I followed the settings in the code (same config, same dataset, same GPU), chose the trained checkpoint carefully, and tested all checkpoints, but the results are still worse than the public stablesr_000117.ckpt. I also tried training for longer, but that did not help; the outputs only became more blurry.
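For reference, my checkpoint sweep looked roughly like the sketch below; the inference script name, its flags, and the directory layout are placeholders for the command I actually ran, not the repo's exact interface.

```python
# Rough sketch of how I tested every trained checkpoint on the real test images.
# The inference script name, its flags, and the paths are placeholders for
# whatever evaluation command is used; they are not taken from the repo.
import glob
import os
import subprocess

for ckpt in sorted(glob.glob("logs/**/checkpoints/*.ckpt", recursive=True)):
    out_dir = os.path.join("sweep_outputs", os.path.basename(ckpt).replace(".ckpt", ""))
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run([
        "python", "scripts/inference.py",   # placeholder script name
        "--ckpt", ckpt,
        "--init-img", "real_test_images/",  # placeholder input folder
        "--outdir", out_dir,
    ], check=True)
```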