François Pacaud

43 comments by François Pacaud

At the moment our use case is the following. We have a dense KKT system
```
[ H  J' ]
[ J  0  ]
```
with the Jacobian `J` and the...
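
For reference, a minimal sketch of how such a dense KKT system could be assembled in Julia; the dimensions and the random `H` and `J` blocks below are placeholders, not our actual model:

```julia
using LinearAlgebra

n, m = 4, 2                    # placeholder sizes: n variables, m constraints
A = rand(n, n)
H = A + A' + n * I             # dense (symmetric) Hessian block, placeholder values
J = rand(m, n)                 # dense Jacobian block, placeholder values

# Assemble the dense KKT matrix [H J'; J 0].
K = [H  J'
     J  zeros(m, m)]
```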

I think we are not targeting solving `CUTEst` problems on the GPU right now (and for these, the good ol' `jac_coord!` and `hess_coord!` are already doing a good...
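
As a point of reference, this is roughly how those callbacks are used on the CPU through the standard NLPModels.jl API; the problem name `HS35` is just an example:

```julia
using CUTEst, NLPModels

nlp = CUTEstModel("HS35")            # example CUTEst problem
x = nlp.meta.x0

# Sparse Jacobian in COO format via the standard NLPModels callbacks.
jrows, jcols = jac_structure(nlp)
jvals = similar(x, nlp.meta.nnzj)
jac_coord!(nlp, x, jvals)

# Sparse (lower-triangular) Lagrangian Hessian values.
hrows, hcols = hess_structure(nlp)
hvals = similar(x, nlp.meta.nnzh)
hess_coord!(nlp, x, hvals)

finalize(nlp)                        # release the CUTEst problem
```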

I apologize if that was unclear! I should have stated first that we are using our own model :)

> If we only define the new functions but don't implement...

I would be more than happy to make the PR!

I would also be interested in having a presolve package for https://github.com/exanauts/Simplex.jl. I do not know when MatrixOptInterface will be merged into MOI. Maybe it would make sense to build...

`cusolverRF` is able to refactorize the matrix entirely on the GPU, but it requires the initial symbolic factorization to be computed on the CPU, and a fixed sparsity pattern. This is...
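
The same refactorization pattern (symbolic analysis once, then numeric refactorizations as long as the sparsity pattern is fixed) can be sketched on the CPU with UMFPACK; this is only an analogy to illustrate the workflow, not the `cusolverRF` API, and the matrix values are arbitrary:

```julia
using SparseArrays, LinearAlgebra

# Build a sparse matrix and factorize it once (symbolic + numeric).
A = sparse([1, 2, 3, 1], [1, 2, 3, 3], [4.0, 5.0, 6.0, 1.0])
F = lu(A)

# Later, refactorize a matrix with the *same* sparsity pattern but new values,
# reusing the existing symbolic factorization.
B = copy(A)
nonzeros(B) .= [8.0, 10.0, 12.0, 2.0]
lu!(F, B)                       # numeric refactorization only

x = F \ ones(3)                 # solve with the updated factorization
```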

@ChrisRackauckas we tried to wrap `cusolverRF` in CUDA.jl in #828, but the PR is a bit stalled now (it's on me). We are currently using a [custom wrapper](https://github.com/exanauts/Argos.jl/blob/fp/vectorized/lib/cusolverRF.jl/src/interface.jl) for...

I think it's not safe to access the solution through the solver's attributes as we are doing right now. I would suggest adding custom getters to return the solution...
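
A minimal sketch of what such getters could look like; the `MySolver` struct and its field names below are purely hypothetical, only meant to illustrate the idea:

```julia
# Purely hypothetical solver type and field names, for illustration only.
struct MySolver
    x::Vector{Float64}   # primal iterate (internal storage)
    y::Vector{Float64}   # dual iterate (internal storage)
end

# Public getters: callers read the solution through these instead of
# reaching into the solver's internal attributes.
get_primal_solution(solver::MySolver) = copy(solver.x)
get_dual_solution(solver::MySolver) = copy(solver.y)
```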

I think supporting reuse of the MadNLP solver would be a net improvement (even compared to Ipopt). It should not be difficult to implement, but we have to do that with...

I am running into a similar issue. I think we could build something directly in the Hessian callback by wrapping the LBFGS operator implemented in JuliaSmoothOptimizers: https://github.com/JuliaSmoothOptimizers/LinearOperators.jl/blob/master/src/lbfgs.jl It's not direct...
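
A rough sketch of what wrapping that operator could look like, assuming the LinearOperators.jl API (`LBFGSOperator`, `push!`, operator-vector products); the dimensions and correction pairs below are placeholders:

```julia
using LinearOperators

n = 10
B = LBFGSOperator(n)            # limited-memory approximation of the Hessian

# After each iteration, feed the operator a correction pair
# s = x⁺ - x and y = ∇f(x⁺) - ∇f(x) (dummy values here).
s = rand(n)
y = rand(n)
push!(B, s, y)

# Inside a Hessian-vector-product callback, apply the operator:
v = rand(n)
Hv = B * v                      # ≈ ∇²f(x) * v under the L-BFGS approximation
```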