François Pacaud
LBFGS has been implemented in this PR: #221. In the end, we do not depend on JSO for the LBFGS implementation; instead, we use the compact representation introduced in: >...
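For reference, the compact representation of the L-BFGS matrix (in the form of Byrd, Nocedal and Schnabel) can be written as follows; the notation below is mine, not taken from the PR:

```latex
B_k = \sigma_k I -
\begin{bmatrix} \sigma_k S_k & Y_k \end{bmatrix}
\begin{bmatrix} \sigma_k S_k^\top S_k & L_k \\ L_k^\top & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} \sigma_k S_k^\top \\ Y_k^\top \end{bmatrix},
```

where `S_k = [s_0, ..., s_{k-1}]` and `Y_k = [y_0, ..., y_{k-1}]` stack the last `mem_size` iterate and gradient differences, `D_k = diag(s_0^\top y_0, ..., s_{k-1}^\top y_{k-1})`, `L_k` is the strictly lower-triangular part of `S_k^\top Y_k`, and `\sigma_k` is the initial scaling. The point of this form is that products with `B_k` cost only a few small dense operations, without storing the full matrix.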
To the best of my knowledge, most interior point methods for generic NLP require the full evaluation of the Jacobian. Even Knitro, which supports computing the descent direction with Hessian-vector...
At the moment, we are mostly working on problems with the following shape:
```
min_{x,p}  f(x, p)
subject to g(x, p) = 0,
           h(x, p) = 0
```
where...
I think this is definitely doable. All kernels implemented in `MadNLPGPU` use KernelAbstractions, so they are directly portable to AMD. One thing, though: I don't know if it's a good idea...
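To illustrate why the kernels are portable: a KernelAbstractions kernel is written once and instantiated on whatever backend the array lives on. The kernel below is a hypothetical sketch (it is not taken from `MadNLPGPU`), just to show the pattern:

```julia
using KernelAbstractions

# Hypothetical portable kernel computing y .+= a .* x.
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] += a * x[i]
end

function axpy!(y, a, x)
    # The backend is inferred from the array type (CPU, CUDA, ROCm, ...),
    # so the same kernel runs unchanged on NVIDIA and AMD GPUs.
    backend = get_backend(y)
    axpy_kernel!(backend)(y, a, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)
    return y
end
```

Passing a `ROCArray` instead of a `CuArray` is then enough to target AMD, which is why the port should mostly be a matter of wiring up the AMD backend and testing.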
That's the risk: KernelAbstractions depends on Cassette.jl, and that might add some overhead to the load time. Maybe we need another package, `MadNLPKernel`? But that's maybe too much ...
That makes total sense. `MadNLP` should be as lightweight as possible (and I like the current setting, with few dependencies).
I am currently not able to reproduce the failure in the CI locally. Investigating what's going on.
FYI, here is a benchmark comparing MadNLP's LBFGS with Ipopt's LBFGS algorithm (using SCALAR1 initialization and mem_size=6) on a subset of the CUTEst benchmark: https://web.cels.anl.gov/~fpacaud/result_lbfgs.txt Overall, Ipopt's LBFGS is better...
Thank you for including a MWE. I am able to reproduce your issue on my laptop. Investigating it further, it looks like the bottleneck is in `MOI.copy_to`. If I do...
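For anyone hitting the same slowdown: a generic way to avoid the `MOI.copy_to` step in JuMP is to build the model in direct mode, so that variables and constraints are loaded into the solver as they are created. This is a sketch of the standard JuMP pattern, and I have not verified it against the exact setup of the MWE:

```julia
using JuMP, MadNLP

# Default (caching) mode: the model is stored in a cache and
# transferred to the solver through MOI.copy_to at solve time.
cached = Model(MadNLP.Optimizer)

# Direct mode: constraints are passed straight to the optimizer,
# bypassing the MOI.copy_to transfer entirely.
direct = direct_model(MadNLP.Optimizer())
```

Direct mode trades some flexibility (e.g. no solver swapping after construction) for lower model-building overhead.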
Hi! As a follow-up to our discussion, we have tried to implement a generic parser supporting both MATPOWER and PSSE files. The idea is: - to use `PowerFlowData` for...