```
Traceback (most recent call last):
  File "train_test.py", line 183, in <module>
    net = torch.nn.DataParallel(net, device_ids=list(range(args.ngpu)))
  File "/home/cv2018/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 102, in __init__
    _check_balance(self.device_ids)
  File "/home/cv2018/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 17, in _check_balance
    dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
  File "/home/cv2018/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 17, in <listcomp>
    dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
  File "/home/cv2018/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 292, in get_device_properties
    raise AssertionError("Invalid device id")
AssertionError: Invalid device id
```
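For context, this assertion means `device_ids` references a GPU index that does not exist in the current process: `list(range(args.ngpu))` asks for more devices than `torch.cuda.device_count()` reports. A minimal sketch (hypothetical, not the repo's code) of a check that would catch the mismatch before wrapping the model:

```python
import torch

# Hypothetical guard before torch.nn.DataParallel; 'ngpu' stands in for args.ngpu.
ngpu = 2
visible = torch.cuda.device_count()  # GPUs visible to this process
if ngpu > visible:
    raise ValueError(
        "--ngpu is %d but only %d GPU(s) are visible; "
        "lower --ngpu or adjust CUDA_VISIBLE_DEVICES" % (ngpu, visible))

device_ids = list(range(ngpu))  # e.g. [0] when ngpu == 1
# net = torch.nn.DataParallel(net, device_ids=device_ids)
```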
Change the GPU-related default in refinedet_train to your own GPU ID.
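If you would rather edit the script than pass the flag on every run, `--ngpu` is an ordinary argparse argument whose default you can change. A hedged sketch of what that looks like; the exact definition and default in train_test.py may differ:

```python
import argparse

# Sketch of the relevant argument; set 'default' to the number of GPUs you have.
parser = argparse.ArgumentParser(description='RefineDet/FSSD training')
parser.add_argument('--ngpu', default=1, type=int,
                    help='number of GPUs to train with')
args = parser.parse_args()
print(args.ngpu)
```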
How many GPUs do you want to use? Note the '--ngpu' argument in train_test.py.
First, add 'CUDA_VISIBLE_DEVICES=x1' (x1 is the GPU ID) before 'python'.
Second, add '--ngpu x' (x is the number of GPUs) at the end of your shell command.
Here is an example:
CUDA_VISIBLE_DEVICES=0 python train_test.py -d VOC -v FSSD_vgg -s 300 --ngpu 1
Try it, I hope it works for you.
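To confirm the environment variable took effect before launching training, a quick check (assumes a CUDA build of PyTorch):

```python
import os
import torch

# With CUDA_VISIBLE_DEVICES=0, only one device is visible to the process and it
# is renumbered to id 0, so --ngpu 1 matches it.
print(os.environ.get('CUDA_VISIBLE_DEVICES'))  # e.g. '0'
print(torch.cuda.device_count())               # should print 1
```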