DataContainer error at training stage
Describe the Issue
Training Faster R-CNN on the Visual Genome (VG) dataset fails with the error: TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not DataContainer
Reproduction
What command, code, or script did you run?
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12340 ./tools/train.py ./configs/visualgenome/faster_rcnn_x101_64x4d_fpn_1x.py --launcher pytorch
Did you make any modifications to the code? Did you understand what you modified? No, I have not made any modifications.
Environment
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda-11.3
NVCC: Build cuda_11.3.r11.3/compiler.29920130_0
GPU 0: NVIDIA A40
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.8.1+cu111
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.9.1+cu111
OpenCV: 4.6.0
MMCV: 0.4.3
MMDetection: 1.1.0+126af87
MMDetection Compiler: GCC 9.4
MMDetection CUDA Compiler: 11.3
Error traceback
Traceback (most recent call last):
File "/home/stud/zhangya/.pycharm_helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/stud/zhangya/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/stud/zhangya/repo/MMSceneGraph-master/tools/train.py", line 165, in <module>
main()
File "/home/stud/zhangya/repo/MMSceneGraph-master/tools/train.py", line 154, in main
train_detector(
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/apis/train.py", line 190, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmcv-0.4.3/mmcv/runner/runner.py", line 359, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmcv-0.4.3/mmcv/runner/runner.py", line 262, in train
outputs = self.batch_processor(
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/apis/train.py", line 77, in batch_processor
losses = model(**data)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 705, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/models/detectors/base.py", line 192, in forward
return self.forward_train(img, img_meta, **kwargs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/models/detectors/two_stage.py", line 227, in forward_train
x = self.extract_feat(img)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/models/detectors/two_stage.py", line 129, in extract_feat
x = self.backbone(img)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stud/zhangya/repo/MMSceneGraph-master/mmdet/models/backbones/resnet.py", line 496, in forward
x = self.conv1(x)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/stud/zhangya/miniconda3/envs/mmsg/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not DataContainer
Bug fix
I checked the solution suggested in https://github.com/open-mmlab/mmdetection/issues/2782, but in my case the model is indeed wrapped in MMDistributedDataParallel, so that fix does not seem to apply.

Any solutions? Thank you in advance!
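For context on where the error comes from, here is a minimal, self-contained sketch of the mechanism as I understand it. The classes and functions below (`DataContainer`, `mm_scatter`, `conv_forward`) are simplified stand-ins, not the real mmcv or torch APIs: mmcv's parallel wrappers unwrap `DataContainer` payloads in their scatter step, while plain `torch.nn.parallel.DistributedDataParallel` (which appears in the traceback above) passes the batch through untouched, so the wrapped payload reaches `F.conv2d` and triggers exactly this TypeError.

```python
# Minimal stand-in for mmcv.parallel.DataContainer: it wraps a payload so
# the collate function can defer device placement and stacking decisions.
class DataContainer:
    def __init__(self, data):
        self._data = data

    @property
    def data(self):
        return self._data


def mm_scatter(batch):
    """Stand-in for the unwrapping that MMDataParallel / MMDistributedDataParallel
    perform during scatter: every DataContainer field is replaced by its
    underlying payload before the wrapped module's forward() sees it."""
    return {k: (v.data if isinstance(v, DataContainer) else v)
            for k, v in batch.items()}


def conv_forward(img):
    """Stand-in for nn.Conv2d.forward: it rejects anything that is not the
    raw payload, mirroring the TypeError raised by F.conv2d."""
    if isinstance(img, DataContainer):
        raise TypeError("conv2d(): argument 'input' must be Tensor, "
                        "not DataContainer")
    return img  # a real conv would compute on the tensor here


batch = {"img": DataContainer([[1.0, 2.0]])}

# Plain torch DDP does no unwrapping, so the error reproduces:
try:
    conv_forward(batch["img"])
except TypeError as exc:
    print("without scatter:", exc)

# After an MM-style scatter, the raw payload reaches the conv:
print("with scatter:", conv_forward(mm_scatter(batch)["img"]))
```

If this sketch matches what happens in MMSceneGraph, the thing to verify would be that the batch actually goes through the MM wrapper's scatter path (rather than torch's own DistributedDataParallel forward) before reaching the backbone.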