Issue loading pretrained weights while fine-tuning
Prerequisite
- [X] I have searched Issues and Discussions but cannot get the expected help.
- [X] I have read the FAQ documentation but cannot get the expected help.
- [X] The bug has not been fixed in the latest version (master) or latest version (1.x).
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
master branch https://github.com/open-mmlab/mmrotate
Environment
sys.platform: linux
Python: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 9.1, V9.1.8
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 2.0.1
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.2
OpenCV: 4.9.0
MMCV: 1.7.2
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.7
MMRotate: 0.3.4+9ea1aee
Reproduces the problem - command or script
python tools/train.py configs/redet/redet_re50_refpn_1x_dota_ms_rr_le90.py
Reproduces the problem - error message
2024-03-14 11:34:09,483 - mmcv - INFO - backbone.layer4.2.bn3.batch_norm_[8].weight - torch.Size([256]): PretrainedInit: load from work_dirs/pretrain/re_resnet50_c8_batch256-25b16846.pth
2024-03-14 11:34:09,483 - mmcv - INFO - backbone.layer4.2.bn3.batch_norm_[8].bias - torch.Size([256]): PretrainedInit: load from work_dirs/pretrain/re_resnet50_c8_batch256-25b16846.pth
2024-03-14 11:34:09,483 - mmcv - INFO -
neck.lateral_convs.0.conv.bias - torch.Size([32]):
The value is the same before and after calling init_weights of ReDet
2024-03-14 11:34:09,483 - mmcv - INFO -
neck.lateral_convs.0.conv.weights - torch.Size([8192]):
The value is the same before and after calling init_weights of ReDet
Additional information
I want to fine-tune the redet_re50_refpn_1x_dota_ms_rr_le90 model on custom data in DOTA format. Reviewing the logs emitted by the training command, I noticed that only the pretrained ReResNet weights (re_resnet50_c8_batch256-25b16846.pth) are being used, while the DOTA-specific pretrained model, redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth, is not loaded. What could be the reason for this?
I tried passing the path to the DOTA pretrained weights via the `load_from` parameter in the config, but the result was the same.
How can the DOTA pretrained weights (redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth) be loaded during the fine-tuning process?
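For reference, the override I tried looks roughly like the following; a minimal sketch assuming the new config sits next to the official one in configs/redet/ (the file name and checkpoint path are placeholders for my local setup):

```python
# finetune_redet_custom.py -- minimal sketch of my fine-tuning config.
# Assumes it lives in configs/redet/ next to the official config;
# the checkpoint path below is a placeholder for my local file.
_base_ = ['./redet_re50_refpn_1x_dota_ms_rr_le90.py']

# Initialize the whole detector from the DOTA pretrained checkpoint
# before fine-tuning (this does not resume optimizer state).
load_from = '/path/to/file/mmrotate/redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth'
```

If I understand the tooling correctly, the same override should also work from the command line via `python tools/train.py <config> --cfg-options load_from=<checkpoint>`.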
In the logs, I observed that pretrained weights are loaded at two different points:
- Within the backbone module, the loaded weights are the ReResNet ones (re_resnet50_c8_batch256-25b16846.pth):
2024-03-19 11:02:20,525 - mmcv - INFO - load checkpoint from local path: work_dirs/pretrain/re_resnet50_c8_batch256-25b16846.pth
2024-03-19 11:02:20,658 - mmcv - WARNING - The model and loaded state dict do not match exactly
- Additionally, after the whole architecture has been built, the weights of the DOTA pretrained model (redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth) are loaded, again with a mismatch warning (see the inspection sketch after these logs):
2024-03-19 11:02:29,098 - mmrotate - INFO - load checkpoint from local path: /path/to/file/mmrotate/redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth
2024-03-19 11:02:29,780 - mmrotate - WARNING - The model and loaded state dict do not match exactly
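To check which parameters the "do not match exactly" warning actually refers to, a small inspection script along these lines could help (a sketch; it assumes the config builds standalone in my environment, and the checkpoint path is a placeholder for my local file):

```python
import torch
from mmcv import Config
from mmrotate.models import build_detector

# Sketch: compare a checkpoint's keys against the model's state_dict
# to see which parameters are missing or unexpected.
cfg = Config.fromfile('configs/redet/redet_re50_refpn_1x_dota_ms_rr_le90.py')
model = build_detector(cfg.model)

ckpt = torch.load(
    '/path/to/file/mmrotate/redet_re50_fpn_1x_dota_ms_rr_le90-fc9217b5.pth',
    map_location='cpu')
# mmcv checkpoints usually nest the weights under 'state_dict'.
state_dict = ckpt.get('state_dict', ckpt)

model_keys = set(model.state_dict())
ckpt_keys = set(state_dict)
print('keys missing from checkpoint:', sorted(model_keys - ckpt_keys))
print('unexpected keys in checkpoint:', sorted(ckpt_keys - model_keys))
```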
Is this behavior correct, i.e., will fine-tuning on a custom dataset actually start from the DOTA pretrained weights this way?