
Unable to reproduce MaskFormer results

joshmyersdean opened this issue 2 years ago · 4 comments

Hello!

With the code and configs provided, I am unable to reproduce the Compositor MaskFormer results on PartImageNet. I've tried a few different learning rates, but the best results I've observed are ~50 mIoU for parts and ~62 mIoU for objects. Could you release checkpoints and training logs?

Thank you! Josh

joshmyersdean · Jan 05 '24

Hi Josh,

I also noticed that the performance doesn't match for either the MaskFormer or the k-MaX variant. Something may have gone wrong when the code was cleaned up for release. I'll take a closer look and fix the bugs soon. Sorry for the inconvenience!

TACJu · Jan 10 '24

No worries, thank you for looking into this! Would it be possible to release the Pascal Part configs and datasets as well?

Thank you!

joshmyersdean · Jan 10 '24

Hi @TACJu,

Thanks for providing the updated code! However, when I run on multiple GPUs, the semantic segmentation head hits DDP unused-parameter errors (truncated stack trace below). Do you encounter anything similar?

```
You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameters which did not receive grad for rank 2:
  sem_seg_head.predictor.transformer_object_self_attention_layers.8.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.8.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.8.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.8.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.8.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.8.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.7.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.7.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.7.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.7.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.7.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.7.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.6.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.6.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.6.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.6.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.6.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.6.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.5.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.5.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.5.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.5.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.5.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.5.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.4.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.4.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.4.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.4.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.4.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.4.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.3.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.3.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.3.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.3.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.3.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.3.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.2.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.2.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.2.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.2.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.2.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.2.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.1.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.1.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.1.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.1.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.1.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.1.self_attn.in_proj_weight,
  sem_seg_head.predictor.transformer_object_self_attention_layers.0.norm.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.0.norm.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.0.self_attn.out_proj.bias, sem_seg_head.predictor.transformer_object_self_attention_layers.0.self_attn.out_proj.weight, sem_seg_head.predictor.transformer_object_self_attention_layers.0.self_attn.in_proj_bias, sem_seg_head.predictor.transformer_object_self_attention_layers.0.self_attn.in_proj_weight
Parameter indices which did not receive grad for rank 2: 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 4
```
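My reading of the error is that the object self-attention layers in the predictor never contribute to the loss on some ranks, so DDP refuses to reduce their gradients. Below is a minimal sketch of the two usual workarounds, assuming a plain PyTorch DDP wrap rather than this repo's actual trainer; `wrap_model_for_ddp`, `report_unused_parameters`, and `local_rank` are placeholder names, not from this codebase:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP


def wrap_model_for_ddp(model: torch.nn.Module, local_rank: int) -> DDP:
    """Wrap the model so DDP tolerates parameters that receive no gradient."""
    model = model.to(local_rank)
    # find_unused_parameters=True makes DDP scan the autograd graph each
    # iteration and skip reducing parameters that did not take part in the
    # loss (here, the object self-attention layers), at some extra overhead.
    return DDP(model, device_ids=[local_rank], find_unused_parameters=True)


def report_unused_parameters(model: torch.nn.Module) -> list[str]:
    """After loss.backward() on a single GPU, list parameters with no grad.

    These are exactly the parameters DDP complains about in the multi-GPU
    run, which helps confirm whether the object branch is really detached
    from the loss or whether the loss wiring is broken.
    """
    return [
        name
        for name, p in model.named_parameters()
        if p.requires_grad and p.grad is None
    ]
```

Enabling `find_unused_parameters=True` gets training running, but if the object self-attention layers are supposed to be trained, the cleaner fix would be making sure their outputs actually feed into the loss.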

joshmyersdean · Feb 28 '24

> No worries, thank you for looking into this! Would it be possible to release the Pascal Part configs and datasets as well?
>
> Thank you!

Hello, did you ever find the Pascal Part configs and datasets?

xjwu1024 · Mar 18 '24