Index out of range for pretrained model
loading annotations into memory...
Done (t=0.02s)
creating index...
index created!
INFO json_dataset_rel.py: 395: Loading cached gt_roidb from /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/data/cache/vrd_val_rel_gt_roidb.pkl
INFO subprocess_rel.py: 88: rel_detection range command 0: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 0 250 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 1: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 250 500 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 2: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 500 750 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
INFO subprocess_rel.py: 88: rel_detection range command 3: python3 /home/prudhvik/graphicalLosses/ContrastiveLosses4VRD/./tools/test_net_rel.py --range 750 1000 --cfg Outputs/vrd_VGG16_COCO_pretrained/rel_detection_range_config.yaml --set TEST.DATASETS '("vrd_val",)' --do_val --output_dir Outputs/vrd_VGG16_COCO_pretrained --load_ckpt trained_models/vrd_VGG16_COCO_pretrained/model_step7559.pth
Traceback (most recent call last):
File "./tools/test_net_rel.py", line 175, in
Could you help me fix this?
Hi @Prudhvinik1,
I think this is probably an issue with the environment you are using. I have seen this before when someone else at a company tried to run the same code on their system, and the issue persisted there as well. Unfortunately I don't know how to tackle it, but if you have already found a fix you are more than welcome to share it here. Thanks!
Ji
On my machine gpu_inds was [0,1,2,3,4,5,6,7] even though I have only 2 GPUs. Setting gpu_inds = [0,1] manually fixed the issue for me. A sketch of making this automatic is below.
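If you prefer not to hard-code the indices, here is a minimal sketch of the same fix. It assumes gpu_inds is simply a Python list of device indices built in the test scripts (only the variable name comes from the comment above; everything else is illustrative, not the repo's actual code):

```python
import torch

# Hypothetical illustration: derive the GPU index list from the devices that
# are actually visible, instead of assuming a fixed eight-GPU machine.
num_visible = torch.cuda.device_count()   # e.g. 2 on a 2-GPU box
gpu_inds = list(range(num_visible))       # -> [0, 1]
```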
It happened for me too when the number of images I want to run inference on is smaller than the number of GPUs. The code tries to divide the work across all GPUs, which is when this problem arises; see the sketch below for one way to guard against it.
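As a sketch only (the helper name and structure are hypothetical, not the repo's actual API), the range splitting can be capped so that no more subprocesses are spawned than there are images:

```python
# Hypothetical sketch: split `num_images` test images into at most `num_gpus`
# contiguous (start, end) ranges, but never create more ranges than images,
# so every subprocess gets at least one image.
def split_ranges(num_images, num_gpus):
    num_workers = min(num_gpus, num_images)
    base, extra = divmod(num_images, num_workers)
    ranges, start = [], 0
    for i in range(num_workers):
        end = start + base + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges

print(split_ranges(1000, 4))  # [(0, 250), (250, 500), (500, 750), (750, 1000)]
print(split_ranges(3, 8))     # only 3 ranges, not 8: [(0, 1), (1, 2), (2, 3)]
```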