Leaf node being replaced?
I am using this package for some analysis of ODEs and I have started with a very simple example.
```python
import torch
from torchdiffeq import odeint

def func(t, state):
    dx_dt = beta[0]*t
    dy_dt = beta[1]*t
    return dx_dt, dy_dt

beta = torch.tensor([3., 5.], requires_grad=True)
x0 = torch.tensor([0.1, 2.], requires_grad=True)

D = odeint(func, x0, t=torch.tensor([0., 10.]))
print(D)
D[0][1].backward()
print(x0.grad)
print(beta.grad)
```
I get the following error:

```
Traceback (most recent call last):
  File "tp.py", line 14, in <module>
    D = odeint(func, x0, t=torch.tensor([0.,10.]))
  File "/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torchdiffeq/_impl/odeint.py", line 65, in odeint
    solution = solver.integrate(t)
  File "/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torchdiffeq/_impl/solvers.py", line 27, in integrate
    self._before_integrate(t)
  File "/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torchdiffeq/_impl/rk_common.py", line 144, in _before_integrate
    self.norm, f0=f0)
  File "/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torchdiffeq/_impl/misc.py", line 69, in _select_initial_step
    d1 = norm(f0 / scale)
  File "/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torch/tensor.py", line 519, in __rdiv__
    return self.reciprocal() * other
TypeError: only integer tensors of a single element can be converted to an index
```
If I instead pass `tuple(x0)` as the initial state argument, it works. Is this expected behaviour?
Try replacing `beta[0].grad` with `beta.grad[0]`. `beta` is a leaf node, but `beta[0]` is not.
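To illustrate the distinction, here is a minimal plain-PyTorch sketch (independent of torchdiffeq): indexing a leaf tensor produces a new non-leaf tensor, so the gradient is stored on `beta` itself and should be indexed from `beta.grad`.

```python
import torch

beta = torch.tensor([3., 5.], requires_grad=True)  # beta is a leaf tensor
loss = beta[0] * 10
loss.backward()

# beta[0] is the output of an indexing op, hence NOT a leaf:
print(beta.is_leaf)      # True
print(beta[0].is_leaf)   # False

# The gradient is accumulated on the leaf; index into .grad instead:
print(beta.grad[0])      # tensor(10.)
print(beta[0].grad)      # None (with the non-leaf warning)
```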
On Tue., Jan. 19, 2021, 2:20 a.m. Syomantak Chaudhuri <[email protected]> wrote:
I am using this package for very simple stuff and I tested it on the following code.

```python
def func(t, state):
    dx_dt = beta[0]*t
    dy_dt = beta[1]*t
    return dx_dt, dy_dt

beta = torch.tensor([3., 5.], requires_grad=True)
x0 = torch.tensor([0.1], requires_grad=True)
y0 = torch.tensor([2.], requires_grad=True)

D = odeint(func, (x0, y0), t=torch.tensor([0., 10.]))
print(D)
D[1][0].backward()
print(x0.grad)
print(beta[0].grad)
```
The expected gradient for x0 is 1 and for beta[0] is 50, but I get the following output from the terminal:
```
(tensor([[1.0000e-01],
        [1.5010e+02]], grad_fn=<...>), tensor([[  2.0000],
        [252.0001]], grad_fn=<...>))
/nfs4/ushashi/anaconda3/envs/gpu_ptorch/lib/python3.6/site-packages/torch/autograd/__init__.py:132: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 9000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  allow_unreachable=True)  # allow_unreachable flag
tensor([0.])
tp.py:19: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See https://github.com/pytorch/pytorch/pull/30531 for more informations.
  print(beta[0].grad)
None
```
What is the issue exactly? Are the variables being overwritten somewhere in odeint which causes this error?
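For reference, the expected values quoted above follow from the closed form: with dx/dt = beta[0]·t, the solution is x(T) = x0 + beta[0]·T²/2, so ∂x(T)/∂x0 = 1 and ∂x(T)/∂beta[0] = T²/2 = 50 at T = 10. A quick pure-PyTorch check of that closed form (no odeint involved, so it verifies the expectation rather than the solver):

```python
import torch

beta = torch.tensor([3., 5.], requires_grad=True)
x0 = torch.tensor(0.1, requires_grad=True)
T = 10.0

# Closed-form solution of dx/dt = beta[0]*t:  x(T) = x0 + beta[0]*T^2/2
xT = x0 + beta[0] * T**2 / 2
xT.backward()

print(x0.grad)    # tensor(1.)
print(beta.grad)  # tensor([50.,  0.])
```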
It seems the issue was updated. Right now, the error is that `func` outputs a tuple while the initial state is a tensor; the two need to match in type and shape. (Error message could be improved here, hmm.)