FiniteDiff.jl
Fast non-allocating calculations of gradients, Jacobians, and Hessians with sparsity support
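A minimal sketch of the two calling styles this description refers to, assuming the FiniteDiff 2.x API: an out-of-place call that allocates its result, and an in-place call with a preallocated `GradientCache` intended to avoid per-call allocation.

```julia
using FiniteDiff

f(x) = sum(x.^3)        # scalar-valued test function
x = rand(3)

# Out-of-place call: allocates the result.
g = FiniteDiff.finite_difference_gradient(f, x)

# In-place call with a preallocated cache: meant to avoid per-call allocation.
df = similar(x)
cache = FiniteDiff.GradientCache(df, x)
FiniteDiff.finite_difference_gradient!(df, f, x, cache)
```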
```julia
julia> function f(x,p)
           grad = FiniteDiff.finite_difference_gradient(y -> sum(y.^3), x)
           return grad .* p
       end
f (generic function with 1 method)

julia> x,p = rand(3),rand(3);

julia> Zygote.gradient(p->sum(f(x,p)), p)[1]
ERROR: Mutating...
```
```julia
julia> function f(x,p)
           hess = FiniteDiff.finite_difference_hessian(y -> sum(y.^3), x)
           return hess * p
       end
f (generic function with 1 method)

julia> x=rand(3)
3-element Vector{Float64}:
 0.45298015977579165
 0.6696731824795704
 0.8825460798816239

julia> p=rand(3)...
```
This has turned into a real library, but it lacks real documentation. It should switch to full docs, because those can be tested, unlike the README. Also then...
Hi everyone! Thanks a ton for implementing this and all the associated packages! I use this package wrapped into Optim.jl (by selecting finite differencing) and would appreciate improved performance, because who doesn't. So...
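For context, a sketch of the usage pattern described above, assuming Optim.jl's standard interface: when no gradient function is supplied, Optim falls back to finite differencing (via FiniteDiff) for gradient-based methods, so this package sits on the hot path.

```julia
using Optim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# No gradient supplied, so Optim approximates it by finite differences,
# which is why the performance of FiniteDiff matters here.
result = optimize(rosenbrock, zeros(2), BFGS())
```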
Changes to gradients.jl made in fc5a08e removed any way to reduce the number of function calls when using forward differences for gradients, as compared to central differences. The `finite_difference_gradient!`...
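To make the cost difference concrete: a forward-difference gradient of an n-dimensional scalar function needs only n + 1 evaluations when f(x) is computed once and reused, while central differences always need 2n. A hedged sketch for counting evaluations, assuming the fdtype is passed as a `Val` instance as in FiniteDiff 2.x:

```julia
using FiniteDiff

calls = Ref(0)
f(x) = (calls[] += 1; sum(x.^3))   # instrumented objective
x = rand(5)

calls[] = 0
FiniteDiff.finite_difference_gradient(f, x, Val(:forward))
calls[]   # n + 1 = 6 if f(x) is reused; 2n = 10 if it is recomputed per component

calls[] = 0
FiniteDiff.finite_difference_gradient(f, x, Val(:central))
calls[]   # 2n = 10 regardless
```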
I'm not sure if this is intended behavior, but the output of `finite_difference_hessian!` depends on the values stored in the cache, as illustrated below:
```
using FiniteDiff
using Random...
```
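Since the snippet above is truncated, here is a hedged sketch of the pattern being discussed, using the documented `HessianCache` constructor and in-place call. The presumably safe pattern is to build the cache from the current `x` right before the call; reusing a cache whose internal buffers still hold values from an earlier `x` is where the reported cache-dependent output would show up.

```julia
using FiniteDiff

f(x) = sum(x.^3)
x = rand(3)
H = zeros(3, 3)

# Fresh cache built from the current x, immediately before the call.
cache = FiniteDiff.HessianCache(x)
FiniteDiff.finite_difference_hessian!(H, f, x, cache)
```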
I'm aware of #104, which was closed by #113, but Jacobians still allocate when using StaticArrays, even with a preallocated cache. Here's a barebones solution for a cache-free, non-allocating Jacobian:...
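The issue's own snippet is truncated, so here is a hypothetical sketch of what such a cache-free Jacobian can look like for `SVector`s; `static_jacobian` is an illustrative name, not part of FiniteDiff's API. Everything stays stack-allocated, so no cache and no heap allocation is needed.

```julia
using StaticArrays

# Hypothetical cache-free forward-difference Jacobian for static vectors.
function static_jacobian(f, x::SVector{N,T}) where {N,T}
    fx = f(x)
    h = sqrt(eps(T))
    cols = ntuple(N) do i
        e = SVector(ntuple(j -> j == i ? one(T) : zero(T), Val(N)))  # i-th basis vector
        (f(x + h * e) - fx) / h
    end
    return hcat(cols...)   # tuple of SVector columns -> SMatrix
end

J = static_jacobian(x -> SVector(x[1]^2, x[1] * x[2], x[2]^2), SVector(1.0, 2.0))
```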
https://twitter.com/willkurt/status/1330183861452541953?s=20
But it's probably not the worst thing, since it's only for sparse... If someone has a nice idea for how to do this without mutation, though, please take a stab...
https://github.com/JuliaDiffEq/DiffEqSensitivity.jl/pull/163#discussion_r362298724