Hibercraft

Results: 30 comments of Hibercraft

@hardBird123 could you share your training settings for the ResNet38 AffinityNet? I trained the ResNet38_aff network myself with lr=0.01 and the other default params but only achieved 59.077% mIoU on val...

> @zbf1991 @jiwoon-ahn I find I get the same mIoU on the val set as the author, but can't reach the same mIoU on the train set, with both results coming from the weights provided by the author...

@hardBird123 May I ask how many GPUs you used for CAM and AffinityNet training?

@hardBird123 Thanks, I will try.

@zhouyuan888888 Have you used deeplabv3+vocfinetuning in the experiment folder to refine your 79.206% model? The annotation of the VOC train set is better than the trainaug set. The deeplabv3+vocfinetuning experiment will finetune on...

@zhouyuan888888 deeplabv3+res101 achieves 79.155% and deeplabv3+xception achieves 79.945%. I see you have said that the pretrained res101 model was used. Is the comparison correct? Your 79.206% > my 79.155%.

@zhouyuan888888 Maybe it is caused by the random seed? The code I released does not fix it. And what about the single-scale test?
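Since the released code does not fix the random seed, run-to-run mIoU can drift. A minimal seed-fixing sketch (the `fix_seed` helper is illustrative and not part of the codebase; for PyTorch you would additionally call `torch.manual_seed(seed)`, `torch.cuda.manual_seed_all(seed)`, and set `torch.backends.cudnn.deterministic = True`):

```python
import random
import numpy as np

def fix_seed(seed=1234):
    # Seed Python's and NumPy's RNGs so data shuffling and
    # augmentation become reproducible across runs.
    random.seed(seed)
    np.random.seed(seed)
```

Call `fix_seed(...)` once at the top of the training script, before any dataloader or model is constructed.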

@zhouyuan888888 There are no other tricks. Please finetune on the train set (the paper includes that), or select better hyper-parameters yourself. Dense CRF can also be considered.

@Carlisle-Liu Replace the pretrained model path here https://github.com/YudeWang/semantic-segmentation-codebase/blob/995b8fa9d6b08e3ff1f15a7aef2ce5445deaa834/lib/net/backbone/resnet38d.py#L7

@Carlisle-Liu Load the pretrained model here https://github.com/YudeWang/semantic-segmentation-codebase/blob/995b8fa9d6b08e3ff1f15a7aef2ce5445deaa834/experiment/deeplabv3%2Bvoc/config.py#L33 and leave TRAIN_CKPT=None. TRAIN_CKPT is used for finetuning or recovering from an unexpected interruption. https://github.com/YudeWang/semantic-segmentation-codebase/blob/995b8fa9d6b08e3ff1f15a7aef2ce5445deaa834/experiment/deeplabv3%2Bvoc/config.py#L63
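A sketch of what those two config.py settings amount to (`TRAIN_CKPT` comes from the linked file; the name and value of the pretrained-path field here are assumptions, so check the linked L33 for the actual key):

```python
# Hypothetical fragment of experiment/deeplabv3+voc/config.py.
# Point this at the downloaded ImageNet-pretrained backbone weights
# (field name is an assumption -- see the linked config.py#L33).
MODEL_BACKBONE_PRETRAIN_PATH = '/path/to/pretrained/backbone.pth'

# Leave None for a fresh training run; set a checkpoint path only
# when finetuning or resuming after an unexpected interruption.
TRAIN_CKPT = None
```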