When using `device = torch.device("cuda:0")  # Uncomment this to run on GPU`, I get the following error:
Traceback (most recent call last):
File "ballbot_learner.py", line 301, in
p, u_net = policy(ttx_torch)
File "/home/utsav/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/utsav/catkin_ws/src/MPC-Net/PolicyNet.py", line 92, in forward
z_h = self.activation1(self.linear1(tx))
File "/home/utsav/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/utsav/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/home/utsav/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
Hi utsavrai,
It looks like the code wasn't properly tested on GPU.
The error message indicates that GPU and CPU tensors are being mixed. Can you find out which of the tensors in the last line of the traceback live on the CPU? Then I suggest tracing back to where they are created and checking whether the torch device was erroneously hardcoded to CPU.
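A minimal sketch of the diagnosis and the usual fix (the `nn.Sequential` model here is a hypothetical stand-in for `PolicyNet`, and the input shape is assumed): every tensor has a `.device` attribute you can print, and both the module's parameters and the input tensor must be moved to the same device before the forward pass.

```python
import torch
import torch.nn as nn

# Fall back to CPU so the snippet also runs on machines without CUDA
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for the real PolicyNet from PolicyNet.py
policy = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 2))
policy.to(device)  # moves all registered parameters and buffers

ttx_torch = torch.randn(1, 4)    # tensors are created on the CPU by default
print(ttx_torch.device)          # -> cpu; this is the mismatch the error reports
print(next(policy.parameters()).device)  # device of the model's weights

ttx_torch = ttx_torch.to(device)  # inputs must be moved explicitly
out = policy(ttx_torch)           # now weight and input are on the same device
```

If the traceback's `addmm` error appears, the input (`self` in the message) was left on the CPU while the `Linear` weights were already on the GPU, so the explicit `.to(device)` on the input is the piece to look for.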