About the results on COCO test2017
I've tested the model you provided on COCO val2017, but I don't know how to test it on COCO test2017. Could you tell me how to do that? I tested the model on COCO val2017 with the command line below, and the result is similar to yours.
CUDA_VISIBLE_DEVICES=3,2 ./tools/dist_test.sh ./configs/rdsnet/rdsnet_r50_fpn_1x.py ./work_dirs/rdsnet_r50_fpn_1x/epoch_12.pth 2 --eval bbox segm --out self_train_val2017_rdsnet_r50_results.12_epoch.pkl
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
loading annotations into memory...
loading annotations into memory...
Done (t=0.72s)
creating index...
Done (t=0.75s)
creating index...
index created!
index created!
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 11.5 task/s, elapsed: 436s, ETA: 0s
writing results to self_train_val2017_rdsnet_r50_results.12_epoch.pkl
Starting evaluate bbox and segm
Loading and preparing results...
DONE (t=4.99s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=109.74s).
Accumulating evaluation results...
DONE (t=16.81s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.369
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.572
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.398
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.216
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.408
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.480
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.313
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.507
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.539
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.352
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.581
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.695
Loading and preparing results...
DONE (t=16.04s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=128.71s).
Accumulating evaluation results...
DONE (t=16.85s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.322
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.528
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.337
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.142
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.358
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.478
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.286
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.446
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.468
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.266
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.515
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.650
If I want to test the model on COCO test2017, how should I do that? Should I uncomment the commented-out lines in configs/rdsnet/rdsnet_r101_fpn_1x.py and comment out the original ones, as shown below? Before:
ann_file=data_root + 'annotations/instances_val2017.json',
img_prefix=data_root + 'val2017/',
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# img_prefix=data_root + 'test2017/',
After:
# ann_file=data_root + 'annotations/instances_val2017.json',
# img_prefix=data_root + 'val2017/',
ann_file=data_root + 'annotations/image_info_test-dev2017.json',
img_prefix=data_root + 'test2017/',
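In other words, I think the test entry of the data dict would then look roughly like the sketch below (this is only my reading of the config; data_root and all other fields of the test dict stay exactly as they are in the original file):
# Rough sketch of the test entry after the swap; everything else in the
# data dict is left as in the original config.
data_root = 'data/coco/'  # assumed COCO root, as defined near the top of the config
data = dict(
    test=dict(
        ann_file=data_root + 'annotations/image_info_test-dev2017.json',
        img_prefix=data_root + 'test2017/',
        # remaining fields of the test dict (type, pipeline, ...) unchanged
    ))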
After making that change, I ran the command ./tools/dist_test.sh configs/rdsnet/rdsnet_r101_fpn_1x.py checkpoints/rdsnet_r101_fpn_1x-81ac3f75.pth 8 --eval bbox segm --out ./results/test2017_results_r101_1x.pkl and got the result below, but I don't understand why it looks like this. Could you please tell me what mistakes I made and point them out? Thank you.
Evaluate annotation type *bbox*
DONE (t=141.98s).
Accumulating evaluation results...
DONE (t=35.18s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Loading and preparing results...
DONE (t=56.91s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=178.09s).
Accumulating evaluation results...
DONE (t=36.18s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Can you run the code properly? May I ask which package versions you are using? I keep getting errors when running the code, and I suspect it is a package-version problem in my environment setup.
I also ran into many problems, which drove me crazy. I built the environment from the Dockerfile the author provided, and I had to adjust the configuration several times before I could run the code. The packages and versions I ended up with are listed below (a quick way to print the key versions is sketched after the list). Pay close attention to the package versions, especially mmdet, mmcv, and torch, since I don't know exactly which versions are compatible with each other. It is frustrating to configure the environment and then still hit errors when running the code.
Package Version Location
---------------------- -------------- ----------------
addict 2.2.1
albumentations 0.4.3
asn1crypto 0.24.0
backcall 0.1.0
beautifulsoup4 4.7.1
certifi 2019.11.28
cffi 1.12.3
chardet 3.0.4
conda 4.8.0
conda-build 3.17.8
conda-package-handling 1.6.0
cryptography 2.6.1
cycler 0.10.0
Cython 0.29.14
decorator 4.4.0
filelock 3.0.10
future 0.18.2
glob2 0.6
idna 2.8
imagecorruptions 1.1.0
imageio 2.6.1
imgaug 0.2.6
ipython 7.5.0
ipython-genutils 0.2.0
jedi 0.13.3
Jinja2 2.10.1
kiwisolver 1.1.0
libarchive-c 2.8
lief 0.9.0
MarkupSafe 1.1.1
matplotlib 3.1.2
mkl-fft 1.0.12
mkl-random 1.0.2
mmcv 0.2.14
mmdet 1.0rc0+0fd3abb /home/fcy/RDSNet
mmpycocotools 12.0.3
networkx 2.4
numpy 1.16.3
olefile 0.46
opencv-python 4.1.2.30
opencv-python-headless 4.1.2.30
parso 0.4.0
pexpect 4.7.0
pickleshare 0.7.5
Pillow 6.0.0
pip 19.1
pkginfo 1.5.0.1
prompt-toolkit 2.0.9
psutil 5.6.2
ptyprocess 0.6.0
pycocotools 2.0.0
pycosat 0.6.3
pycparser 2.19
Pygments 2.3.1
pyOpenSSL 19.0.0
pyparsing 2.4.6
PySocks 1.6.8
python-dateutil 2.8.1
pytz 2019.1
PyWavelets 1.1.1
PyYAML 5.1
requests 2.21.0
ruamel-yaml 0.15.46
scikit-image 0.16.2
scipy 1.4.1
setuptools 41.0.1
six 1.12.0
soupsieve 1.8
terminaltables 3.1.0
torch 1.1.0
torchvision 0.2.2
tqdm 4.19.9
traitlets 4.3.2
urllib3 1.24.2
wcwidth 0.1.7
wheel 0.33.1
yapf 0.31.0
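For anyone checking their own setup, a small script like the one below (just a generic sketch, not part of the repo) prints the versions of the packages that most often cause compatibility trouble:
# check_versions.py - print the versions of the packages that most often clash
import torch
import torchvision
import mmcv
import mmdet

print('torch      ', torch.__version__)
print('torchvision', torchvision.__version__)
print('mmcv       ', mmcv.__version__)
print('mmdet      ', mmdet.__version__)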
When I use torch 1.1.0 and torchvision 0.2.2, I get the error below:
File "/mnt/media/users/zhaijunzhi/code/crack_detection/multimodal_crack/RDSNet-master/mmdet/models/mask_heads/rdsnet_mask_head.py", line 176, in loss
    gt_mask[torch.bitwise_not(crop_mask)] = -1
AttributeError: module 'torch' has no attribute 'bitwise_not'
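One workaround I would try (my own guess, assuming crop_mask is a 0/1 mask tensor as elsewhere in mmdet 1.x) is to rewrite that line with a comparison, since torch.bitwise_not only exists in newer PyTorch releases:
# In rdsnet_mask_head.py, line 176, replace
#     gt_mask[torch.bitwise_not(crop_mask)] = -1
# with the equivalent selection for a 0/1 mask, which works on torch 1.1.0:
gt_mask[crop_mask == 0] = -1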
File "/mnt/media/users/zhaijunzhi/code/crack_detection/multimodal_crack/RDSNet-master/mmdet/models/mask_heads/rdsnet_mask_head.py", line 176, in loss gt_mask[torch.bitwise_not(crop_mask)] = -1 AttributeError: module 'torch' has no attribute 'bitwise_not'