Feature_Critic

About the performance

Open ReedZyd opened this issue 6 years ago • 5 comments

Sorry to bother you.

I trained and tested on the PACS dataset. The model I trained myself reached 58% accuracy, and your released pretrained model reached 62%, but neither result matches the accuracy reported in your article. Why?

ReedZyd avatar Jan 09 '20 09:01 ReedZyd

Hi, we re-checked the models on our server and updated the copies on the remote disk; their performance is close to the reported results. The original numbers were produced with our old code, in which we had modified some of torch's underlying code. When we cleaned up the code for release, we rewrote those parts to avoid changing torch internals, and I think this may have affected the performance somewhat.

liyiying avatar Jan 11 '20 06:01 liyiying

Thank you for your reply. I am also wondering whether you updated the baseline code, such as MetaReg.

ReedZyd avatar Jan 11 '20 07:01 ReedZyd

Hi, we have provided the baseline code in the project. If you want to follow up on the MetaReg work, I advise you to ask the MetaReg authors for their clean code.

liyiying avatar Jan 12 '20 06:01 liyiying

Thank you!

ReedZyd avatar Jan 13 '20 06:01 ReedZyd

Also, when I trained the model, the loss became very small (0.00...) after 15,000 iterations. Why do you reset the optimizer at 15,000 iterations and then continue training?

ReedZyd avatar Jan 13 '20 07:01 ReedZyd
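For context on the question above (this is a general illustration, not the repository's code): re-instantiating a momentum-based optimizer mid-training discards its accumulated state, such as velocity buffers, so subsequent updates start from clean momentum. The class and values below are a hypothetical minimal sketch of that effect:

```python
# Minimal sketch (not the authors' code): a tiny SGD-with-momentum
# optimizer, to show that constructing a fresh instance clears the
# internal state accumulated during earlier training steps.

class SGDMomentum:
    def __init__(self, lr=0.1, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.velocity = 0.0  # state accumulated across step() calls

    def step(self, grad):
        # classic momentum update: v <- m*v + g, delta = -lr*v
        self.velocity = self.momentum * self.velocity + grad
        return -self.lr * self.velocity

opt = SGDMomentum()
for _ in range(10):
    opt.step(1.0)           # constant gradient builds up momentum
stale_velocity = opt.velocity

# "Resetting the optimizer" = constructing a new one: state is gone.
opt = SGDMomentum()
fresh_velocity = opt.velocity

print(stale_velocity > 1.0)   # momentum has accumulated well past the raw gradient
print(fresh_velocity == 0.0)  # the reset instance starts with zero velocity
```

In PyTorch the analogous move would be building a new `torch.optim` optimizer over the same model parameters; whether that is the authors' actual motivation at iteration 15,000 is exactly what the question asks.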