
Does anyone have problems in training from scratch with GCNet on ImageNet?

implus opened this issue on May 16 '19 • 17 comments

Using the best setting of GC-ResNet50 and training it from scratch on ImageNet, I found that the model gets stuck at a high loss in the early epochs before the training loss starts to decline normally. As a result, the final accuracy is much lower than that of the original ResNet50. Note that one difference from the original paper is that the GC modules are embedded in every bottleneck, exactly as SE does, for a fair comparison.
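
Concretely, I embed it roughly like this (a rough sketch, assuming the ContextBlock2d interface from this repo's context_block.py and torchvision's Bottleneck; the import path is illustrative):

```python
import torch.nn as nn
from torchvision.models.resnet import Bottleneck
from context_block import ContextBlock2d  # GC block from this repo; path is illustrative

class GCBottleneck(Bottleneck):
    """ResNet bottleneck with a GC block on the residual branch,
    placed where SE inserts its module (after the last BN, before
    the identity addition)."""

    def __init__(self, inplanes, planes, *args, **kwargs):
        super().__init__(inplanes, planes, *args, **kwargs)
        out_channels = planes * self.expansion
        # reduction ratio 16, mirroring SE
        self.gc = ContextBlock2d(out_channels, out_channels // 16)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = self.gc(out)  # recalibrate the residual branch with global context
        if self.downsample is not None:
            identity = self.downsample(x)
        return self.relu(out + identity)
```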

Does anyone have the same problem?

This may be because the authors report their ImageNet results with a fine-tuning setting, which is not very common when validating models on ImageNet benchmarks. At least all the other modules (SE, SK, BAM, CBAM, AA) follow a training-from-scratch setting.

implus avatar May 16 '19 22:05 implus

Did you try to fine-tune the ResNet?

lxtGH avatar May 19 '19 07:05 lxtGH

I have a similar problem. When training from scratch, the model converges very slowly.

kfxw avatar Jun 04 '19 03:06 kfxw

Me too

ZzzjzzZ avatar Jun 04 '19 03:06 ZzzjzzZ

Sorry for the late reply.

At first, we didn't have enough resources for training from scratch. Later, we did train the whole network from scratch on ImageNet and did not observe a similar issue. I suggest you train it for 110 or 120 epochs to see the final performance.

Note that we use the same augmentation method as SENet.
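
Roughly, that means the standard ImageNet pipeline of random resized crop plus horizontal flip; a torchvision sketch (the exact SENet recipe may differ in details such as color augmentation):

```python
import torchvision.transforms as T

# Standard ImageNet training transforms; details like color jitter or
# lighting noise in the exact SENet recipe may differ.
train_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```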

xvjiarui avatar Jul 03 '19 04:07 xvjiarui

@xvjiarui Hi! In terms of the fusion setting, did the case you mentioned use the 'add' one or the 'scale' one?

kfxw avatar Jul 03 '19 06:07 kfxw

We use the 'add' one by default. On the other hand, 'scale' is similar to SENet. Neither of them should have convergence issues.
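
Conceptually the two differ only in how the transformed context term is applied back to the feature map (a rough sketch, where context is the N x C x 1 x 1 output of the block's transform):

```python
import torch

def fuse(x, context, mode='add'):
    # 'add': broadcast-add the context to every position (our default).
    # 'scale': gate the channels with a sigmoid, as SENet does.
    if mode == 'add':
        return x + context
    return x * torch.sigmoid(context)
```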

xvjiarui avatar Jul 03 '19 07:07 xvjiarui

@xvjiarui Thx for your reply. Btw, would you mind sharing the classification training code in this repo? That would be of great help.

kfxw avatar Jul 03 '19 09:07 kfxw

Our internal code base is used for the classification training. We may try to release a cleaned-up version in the future, but it is not on the schedule yet.

The block structure is the same. You could simply add it to your own code.
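
For reference, a minimal self-contained sketch of the block structure (attention pooling, a 1x1-conv bottleneck transform with LayerNorm, and add fusion); it hard-codes the 'att' pooling and 'channel_add' fusion:

```python
import torch
import torch.nn as nn

class ContextBlock2d(nn.Module):
    """Minimal GC block: global context modeling + channel-add fusion."""

    def __init__(self, inplanes, planes):
        super().__init__()
        # context modeling: a 1x1 conv produces an attention map over H*W
        self.conv_mask = nn.Conv2d(inplanes, 1, kernel_size=1)
        self.softmax = nn.Softmax(dim=2)
        # transform: bottleneck of two 1x1 convs with LayerNorm + ReLU
        self.channel_add_conv = nn.Sequential(
            nn.Conv2d(inplanes, planes, kernel_size=1),
            nn.LayerNorm([planes, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(planes, inplanes, kernel_size=1),
        )

    def forward(self, x):
        n, c, h, w = x.size()
        # attention pooling: weighted sum over all positions -> N x C x 1 x 1
        inp = x.view(n, c, h * w).unsqueeze(1)        # N x 1 x C x HW
        mask = self.conv_mask(x).view(n, 1, h * w)    # N x 1 x HW
        mask = self.softmax(mask).unsqueeze(-1)       # N x 1 x HW x 1
        context = torch.matmul(inp, mask).view(n, c, 1, 1)
        # fusion: broadcast-add the transformed context to every position
        return x + self.channel_add_conv(context)
```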

xvjiarui avatar Jul 03 '19 16:07 xvjiarui

Hi! Did you solve the problem?

taoxinlily avatar Aug 09 '19 08:08 taoxinlily

@xvjiarui I am also trying to use the GC block in classifiers such as ResNet, VGG16, etc., and I would like to be sure that I am doing things right. First: in the ResNet backbone, we just need to apply the global context block before the downsample in the Bottleneck/BasicBlock class. Second: for the global context module, the inplane parameter is the depth of the feature map that feeds the GC module, and the plane parameter is equal to inplane // 16. Is that right? Regarding the "pool" parameter, I suppose "att" is better? And for the "fusions" parameter, "channel_add" is also better? Why does this parameter take a list? I am not sure I understand.

Shiro-LK avatar Feb 23 '20 01:02 Shiro-LK

For all the yes/no questions, the answer is yes; you are understanding it correctly. The fusions parameter is a list so that multiple fusion methods can be used together.
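
In code, that configuration would be constructed roughly like this (a sketch; 256 input channels is just an example, and the import path is illustrative):

```python
from context_block import ContextBlock2d  # GC block from this repo; path is illustrative

# inplanes: channels of the feature map that feeds the block
gc = ContextBlock2d(
    inplanes=256,
    planes=256 // 16,          # reduction ratio 16
    pool='att',                # attention pooling
    fusions=['channel_add'],   # a list so that several fusions can be combined
)
```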

xvjiarui avatar Feb 23 '20 10:02 xvjiarui

@xvjiarui Hi, thanks a lot for your great work. I appreciate it. However, I trained the network on ImageNet and GC achieves worse performance than the original ResNet. Then I followed the paper and fine-tuned the ResNet50 for another 40 epochs using a cosine schedule, and the performance is still bad. Could you please share your fine-tuning code? I will cite your work in my research. Thanks a lot.
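
For reference, my fine-tuning setup looks roughly like this (a sketch; the optimizer settings are my own choices, not taken from the paper):

```python
import torch
import torchvision
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder backbone: in practice this is GC-ResNet50 with the
# pretrained ResNet50 weights loaded into the non-GC parameters.
model = torchvision.models.resnet50(pretrained=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=40)  # 40-epoch cosine decay

for epoch in range(40):
    # ... run one standard ImageNet training epoch here ...
    scheduler.step()
```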

ma-xu avatar Feb 28 '20 02:02 ma-xu

@xvjiarui

Hi, thanks for your reply. Do you still have the model trained on ImageNet with the GC block?

Shiro-LK avatar Mar 05 '20 01:03 Shiro-LK

Hi @13952522076, sorry for the late reply. Currently I am too busy to release that part of the code. If the issue is overfitting, I suggest adopting the augmentations in the original paper as well as dropout on the GC block branch.
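
For the dropout, one possible placement is at the end of the channel-add transform, so the context term itself is dropped (a rough sketch; the exact placement and rate are up to you):

```python
import torch.nn as nn

# Dropout appended to the GC branch's transform (example: 256 input
# channels, reduction ratio 16, drop probability 0.1).
channel_add_conv = nn.Sequential(
    nn.Conv2d(256, 16, kernel_size=1),
    nn.LayerNorm([16, 1, 1]),
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 256, kernel_size=1),
    nn.Dropout(p=0.1),  # regularizes the context term before fusion
)
```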

xvjiarui avatar Mar 05 '20 17:03 xvjiarui

Hi @Shiro-LK, the models are not available for now. You will be informed when I train them again.

xvjiarui avatar Mar 05 '20 17:03 xvjiarui

Thanks a lot for your reply. It looks like it is not an overfitting issue (judging from train loss vs. val loss). Anyway, I appreciate your work and it helped a lot. 😺

ma-xu avatar Mar 05 '20 17:03 ma-xu

@ma-xu Hello, excuse me, has the problem been solved?

rainofbow avatar May 31 '22 13:05 rainofbow