
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

Open lusihua opened this issue 6 years ago • 4 comments

An error occurred while running get_aligned_face_from_mtcnn.ipynb:

lusihua avatar Aug 20 '19 04:08 lusihua


```
RuntimeError                              Traceback (most recent call last)
in
----> 1 bounding_boxes, landmarks = detect_faces(img)

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/detector.py in detect_faces(image, min_face_size, thresholds, nms_thresholds)
     61     # run P-Net on different scales
     62     for s in scales:
---> 63         boxes = run_first_stage(image, pnet, scale=s, threshold=thresholds[0])
     64         bounding_boxes.append(boxes)
     65

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/first_stage.py in run_first_stage(image, net, scale, threshold)
     33     img = torch.FloatTensor(_preprocess(img)).to(device)
     34     with torch.no_grad():
---> 35         output = net(img)
     36     probs = output[1].cpu().data.numpy()[0, 1, :, :]
     37     offsets = output[0].cpu().data.numpy()

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/lusihua/facial_detect/insightface_pytorch/mtcnn_pytorch/src/get_nets.py in forward(self, x)
     65             a: a float tensor with shape [batch_size, 2, h', w'].
     66         """
---> 67         x = self.features(x)
     68         a = self.conv4_1(x)
     69         b = self.conv4_2(x)

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
     89     def forward(self, input):
     90         for module in self._modules.values():
---> 91             input = module(input)
     92         return input
     93

~/anaconda3/envs/pytorch1/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302
    303

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
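The message means the input tensor lives on the GPU while the P-Net weights are still on the CPU; `F.conv2d` requires both on the same device. A minimal, self-contained reproduction of the mismatch and its fix, with a plain `Conv2d` standing in for PNet (the shapes here are illustrative, not the repo's):

```python
import torch
import torch.nn as nn

net = nn.Conv2d(3, 8, kernel_size=3)   # weights are created on the CPU by default
img = torch.randn(1, 3, 12, 12)

if torch.cuda.is_available():
    img = img.cuda()                   # input on the GPU, weights still on the CPU
    try:
        net(img)                       # mixed devices -> RuntimeError
    except RuntimeError as err:
        print(err)                     # "Input type (torch.cuda.FloatTensor) and weight type ..."
    net = net.cuda()                   # fix: move the weights to the same device

out = net(img)                         # both on the same device -> works
print(out.device.type, net.weight.device.type)
```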

lusihua avatar Aug 20 '19 04:08 lusihua

I ran into the same problem, in test_on_images.ipynb.

chernbo avatar May 17 '20 11:05 chernbo

hello, did you solve it?

chocokassy avatar Sep 09 '21 09:09 chocokassy

I solved it by passing the device in as an argument to the run_first_stage function and calling .to(device) on the img variable before feeding it into the network. I think the original code was written to run only on the CPU; to run inference on the GPU, you need to modify this part of run_first_stage.
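The change above can be sketched as follows. To keep the snippet self-contained, a plain `Conv2d` stands in for PNet and `_preprocess` is a no-op stub; the actual repo code uses `get_nets.PNet` and the real `first_stage._preprocess`, and also resizes the image by `scale`:

```python
import torch
import torch.nn as nn

def _preprocess(img):
    # stub for mtcnn_pytorch's _preprocess (normalization + HWC->NCHW)
    return img

def run_first_stage(image, net, scale, threshold, device):
    """Run the first-stage network with the input on the caller-supplied device."""
    img = torch.FloatTensor(_preprocess(image)).to(device)  # move input to `device`
    with torch.no_grad():
        output = net(img)
    return output

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pnet = nn.Conv2d(3, 8, kernel_size=3).to(device)            # move the weights too
out = run_first_stage(torch.randn(1, 3, 12, 12).numpy(),
                      pnet, scale=1.0, threshold=0.6, device=device)
print(out.shape)
```

Moving both the input and the network to the same `device` object is what resolves the mismatch; passing `device` explicitly just makes the choice visible at the call site instead of hard-coding it inside first_stage.py.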

(Screenshot: the modified run_first_stage function, 2024-05-13)

Morris88826 avatar May 13 '24 11:05 Morris88826