Chuang Yu

6 comments of Chuang Yu

I understand what [explicit hyper-gradient](https://torchopt.readthedocs.io/en/latest/explicit_diff/explicit_diff.html) means, but what I want is to run the model in **inference only** (no need to save any activations). I think only detaching the outer loop is not...
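For reference, a minimal sketch of the explicit-hypergradient inner loop being discussed, assuming TorchOpt's `MetaAdam`; the model, data, and step count are placeholders, not from the thread:

```python
import torch
import torchopt
from torch import nn

net = nn.Linear(4, 1)                     # placeholder model
optim = torchopt.MetaAdam(net, lr=0.1)    # differentiable inner-loop optimizer
x, y = torch.randn(8, 4), torch.randn(8, 1)

# Each differentiable step keeps its activations and update graph alive so
# that an outer loss could later backpropagate through the whole inner loop.
for _ in range(3):
    inner_loss = ((net(x) - y) ** 2).mean()
    optim.step(inner_loss)

# Detaching only after the loop (detaching the "outer" side) frees nothing
# during the loop itself: every step's graph has already been stored.
```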

@XuehaiPan thank you for your reply. In my view, [torchopt.stop_gradient](https://torchopt.readthedocs.io/en/latest/api/api.html#torchopt.stop_gradient) only detaches the link for the input tensor, but the grad links inside the inner loop are still connected, e.g. when the optimizer updates parameters?...
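To make the question concrete, a small sketch of the in-place detaching that `torchopt.stop_gradient` performs, assuming the documented in-place semantics and the same placeholder setup as above:

```python
import torch
import torchopt
from torch import nn

net = nn.Linear(4, 1)                     # placeholder model
optim = torchopt.MetaAdam(net, lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

optim.step(((net(x) - y) ** 2).mean())    # one differentiable update

# The updated parameter is a non-leaf tensor: it still carries a grad_fn
# linking it to the pre-update parameter.
print(next(net.parameters()).grad_fn)     # expected: not None

torchopt.stop_gradient(net)               # detach module parameters in place
torchopt.stop_gradient(optim)             # detach the optimizer state too

print(next(net.parameters()).grad_fn)     # expected: None, links are cut
```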

> > @XuehaiPan thank you for your reply. In my view, [torchopt.stop_gradient](https://torchopt.readthedocs.io/en/latest/api/api.html#torchopt.stop_gradient) only detaches the link for the input tensor, but the grad links inside the inner loop are still connected, e.g. when the optimizer...

> > That is totally right in training, but in inference we don't need to keep the grad connections, and they cause torch to be unable to release these tensors. > > @ycsos I opened a...
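A hedged sketch of the inference-time workaround implied here: detach after every inner step so PyTorch can release each step's graph instead of holding the whole adaptation history (the setup and step count are illustrative):

```python
import torch
import torchopt
from torch import nn

net = nn.Linear(4, 1)                  # placeholder model
optim = torchopt.MetaAdam(net, lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

# No hyper-gradient is needed at inference, so cut the graph after every
# step; each step's activations become unreachable and can be freed.
for _ in range(10):
    loss = ((net(x) - y) ** 2).mean()
    optim.step(loss)
    torchopt.stop_gradient(net)        # drop links to this step's graph
    torchopt.stop_gradient(optim)      # ...including the optimizer state
```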

I used PR [1672](https://github.com/NVIDIA/apex/pull/1672) to fix the build problem.

I think this problem can be fixed by [pull 1672](https://github.com/NVIDIA/apex/pull/1672).