Stefano Carrazza
@DiegoGM91 thanks for suggesting this feature. If you have an implementation, please open a PR or share the code with us.
@AdrianPerezSalinas thanks for this issue. I think this is feasible by computing the analytic gradients manually or automatically (via TensorFlow or other backends).
Usually, automatic analytic gradients are more efficient than the finite differences already computed by scipy's minimize.
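As a rough illustration of the automatic approach (a minimal sketch, not the actual Qibo interface), here is how an analytic gradient could be obtained with TensorFlow's `GradientTape`; the loss function below is a hypothetical stand-in for a circuit expectation value:

```python
import tensorflow as tf

# Hypothetical variational parameters; in practice these would
# parametrize the gates of a circuit.
params = tf.Variable([0.1, 0.2, 0.3], dtype=tf.float64)

def loss(p):
    # Stand-in for an expectation value <psi(p)|H|psi(p)>;
    # any differentiable function of the parameters works here.
    return tf.reduce_sum(tf.sin(p) ** 2)

with tf.GradientTape() as tape:
    value = loss(params)

# Analytic gradient via automatic differentiation,
# with no finite-difference step size involved.
grad = tape.gradient(value, params)
print(value.numpy(), grad.numpy())
```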
Thanks for the clarification, sorry for the misunderstanding. Yes, this is something which may help, and it would certainly be interesting to have built in.
OK, thanks for these tests, let's see.
OK, and how does this compare to the numerical derivative for similar configurations?
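For reference, one generic way to run such a comparison (a sketch under the assumption of a hand-derived analytic gradient, not tied to any specific circuit) is to check the analytic gradient against scipy's central finite differences:

```python
import numpy as np
from scipy.optimize import approx_fprime

def loss(p):
    # Stand-in differentiable objective.
    return np.sum(np.sin(p) ** 2)

def analytic_grad(p):
    # Hand-derived gradient of the objective above:
    # d/dp sin(p)^2 = 2 sin(p) cos(p).
    return 2 * np.sin(p) * np.cos(p)

p = np.array([0.1, 0.2, 0.3])
numerical = approx_fprime(p, loss, 1e-7)
# Maximum discrepancy between the two gradients;
# expected to be of order the finite-difference step.
print(np.max(np.abs(numerical - analytic_grad(p))))
```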
@igres26 thanks for reporting. I believe we can add a `trainable` flag which accepts a boolean or a mask.
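To make the idea concrete, a minimal sketch of such a flag (hypothetical class and method names, not the actual Qibo API):

```python
import numpy as np

class VariationalLayer:
    """Hypothetical layer holding parameters and a trainable mask."""

    def __init__(self, params, trainable=True):
        self.params = np.asarray(params, dtype=float)
        # Accept a single boolean (freeze or train everything)
        # or a per-parameter boolean mask.
        if isinstance(trainable, bool):
            self.mask = np.full(self.params.shape, trainable)
        else:
            self.mask = np.asarray(trainable, dtype=bool)

    def update(self, gradient, lr=0.1):
        # Only parameters flagged as trainable receive gradient updates.
        self.params = np.where(self.mask, self.params - lr * gradient, self.params)

layer = VariationalLayer([0.1, 0.2, 0.3], trainable=[True, False, True])
layer.update(np.array([1.0, 1.0, 1.0]))
print(layer.params)  # the second parameter stays frozen
```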
Do you have references for this feature?
Thanks @renatomello, we are looking into this.
Thanks @igres26, that's quite a good wish list.