Cannot specify GPU when carrying out inference with MaskDINO
When I set cfg.MODEL.DEVICE = 'cuda:1', I get an illegal memory access error. The problem does not exist when I set cfg.MODEL.DEVICE = 'cuda:0'. How can I solve this?

I am still trying to figure out this problem. If you want to use cuda:1 and run on that single GPU, you can prefix your command with CUDA_VISIBLE_DEVICES=1, for example
CUDA_VISIBLE_DEVICES=1 python train_net.py --resume --num-gpus 1 --config-file \
configs/coco/instance-segmentation/maskdino_R50_bs16_50ep_3s_dowsample1_2048.yaml
This will make the process see only the cuda:1 device. (Note that the variable must be set on the same line as the command; CUDA_VISIBLE_DEVICES=1 && python ... would not export it to the Python process.)
Hi, thanks for your reply.
We would like to run multiple MaskDINO models on multiple GPUs; it would be great if we could do so.
Is it because .cuda() defaults to cuda:0 in the code? Could it be changed to .to('cuda:1'), where 'cuda:1' is read from cfg.MODEL.DEVICE? Just my guess.
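For reference, this guess matches PyTorch's documented behavior: .cuda() with no argument allocates on the current CUDA device, which defaults to cuda:0. A minimal sketch of the difference (the hard-coded 'cuda:1' here just stands in for whatever cfg.MODEL.DEVICE holds):
import torch

# .cuda() with no argument uses the *current* CUDA device (cuda:0 by default).
a = torch.zeros(3).cuda()
print(a.device)  # cuda:0

# .to(device) places the tensor explicitly; the string could be read from cfg.MODEL.DEVICE.
device = torch.device('cuda:1')
b = torch.zeros(3).to(device)
print(b.device)  # cuda:1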
Maybe your guess is right, but I have never specified CUDA devices that way. You can run multiple MaskDINO models on multiple GPUs with my command, which works well for me; see the sketch below.
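For example (a sketch extending the command above, assuming train_net.py supports detectron2's standard --eval-only flag; each process then sees its assigned GPU as cuda:0):
CUDA_VISIBLE_DEVICES=0 python train_net.py --eval-only --num-gpus 1 --config-file \
configs/coco/instance-segmentation/maskdino_R50_bs16_50ep_3s_dowsample1_2048.yaml &
CUDA_VISIBLE_DEVICES=1 python train_net.py --eval-only --num-gpus 1 --config-file \
configs/coco/instance-segmentation/maskdino_R50_bs16_50ep_3s_dowsample1_2048.yaml &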
We are using the models for inference in our own project, so we probably cannot just use the command above. I want to see whether the code can be modified so that it reads the device from cfg.MODEL.DEVICE.
https://github.com/IDEA-Research/MaskDINO/blob/95cf05ccfd1d0496bc92980f29a18536c92b450f/maskdino/modeling/transformer_decoder/maskdino_decoder.py#L268
As I see it, the code here uses 'cuda' only, without specifying the GPU. Is there a way to pass the GPU number specified by cfg.MODEL.DEVICE into this code?
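One low-touch workaround (just a sketch, not the repo's official fix; select_inference_device is a hypothetical helper) would be to set PyTorch's current CUDA device once before building the model, so that bare .cuda() calls and device='cuda' tensors resolve to the GPU named in cfg.MODEL.DEVICE:
import torch

def select_inference_device(device_str):
    # device_str is whatever cfg.MODEL.DEVICE holds, e.g. 'cuda:1'.
    device = torch.device(device_str)
    if device.type == 'cuda':
        # After this call, torch.zeros(..., device='cuda') and bare .cuda()
        # calls allocate on device_str's GPU instead of cuda:0.
        torch.cuda.set_device(device)
    return device

# Usage before building the model, e.g. with detectron2's build_model:
# device = select_inference_device(cfg.MODEL.DEVICE)
# model = build_model(cfg).to(device)

Alternatively, the linked line could be patched to take its device from a tensor that is already on the right GPU (e.g. device=some_existing_tensor.device), which is the usual device-agnostic pattern.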