Implementation roadmap
A comprehensive listing of methods in the references. Please remove any methods that are impractical or obsolete, and add methods worth implementing to the list.
Linear systems
- Stationary methods
- [x] Jacobi (LT)
- [x] Gauss-Seidel (LT)
- [x] Successive overrelaxation, SOR (LT)
- [x] Symmetric SOR (LT)
- Nonstationary methods
- [x] GMRES (LT, Saa)
- [x] MINRES (LT)
- [x] Conjugate Gradients, CG (LT)
- [x] LSQR
- [x] LSMR
- [x] BiCGStab(l) (Sle)
- [x] Chebyshev iteration (LT)
- [ ] Quasi-Minimal Residual, QMR (LT)
- [ ] DQGMRES
(Generalized) eigenproblems
- Without subspace
- [x] (Shift-and-inverted) power iteration (ET)
- [ ] Rayleigh Quotient Iteration [1]
- Krylov subspace methods
- [ ] Implicitly restarted Lanczos (ET)
- [ ] Implicitly restarted Arnoldi (ET)
- Other subspace methods
- [ ] Jacobi-Davidson method (ET)
- [x] LOBPCG
- [ ] Accelerated Rayleigh Quotient Iteration [1]
- [ ] (Shift-and-inverted) subspace iteration (ET)
Singular value decomposition
- [x] Golub-Kahan-Lanczos (ET)
References
- LT: Barrett et al., *Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods*, SIAM, 1994
- ET: Bai, Demmel, Dongarra, Ruhe & van der Vorst (eds.), *Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide*, SIAM, 2000
- Saa: Saad & Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput. 7, 1986
- Sle: Sleijpen, van der Vorst & Fokkema, "BiCGstab(l) and other hybrid Bi-CG methods", Numerical Algorithms 7, 1994
- TB: Trefethen & Bau, *Numerical Linear Algebra*, SIAM, 1997
- HV: van der Vorst, *Iterative Krylov Methods for Large Linear Systems*, Cambridge University Press, 2003
[1] Low-priority methods: they are quite expensive per iteration.
An awesome list. Somewhere on my hard drive I have an lsqr.jl file I could contribute, if interested.
I would say that all the stationary methods are obsolete (they are only useful now as a component of multigrid). CGN (conjugate-gradient on the normal equations) seems hardly used these days; the fact that it squares the condition number makes it useless for large systems. I don't see the point of the simple power method and friends; it seems subsumed by restarted Arnoldi (it is just the special case of restarting every iteration). I don't see the point of implementing non-restarted Arnoldi or Lanczos (these are just the special case of a large restart period).
Great list. I guess the non-restarted methods are more starting points for the restarted methods.
For the same reason, we only want restarted GMRES (of which there are several variants, if I recall correctly).
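Since the consensus above is to implement only restarted variants, a minimal sketch of the GMRES(m) outer loop may help for orientation: run m Arnoldi steps, solve the small least-squares problem, and restart from the updated iterate. This is illustrative only (dense Arnoldi, no Givens rotations, no breakdown handling); the function name and defaults are not this package's API.

```julia
using LinearAlgebra

function gmres_restarted(A, b; m = 20, restarts = 50, tol = 1e-8, x = zero(b))
    T = eltype(b)
    for _ in 1:restarts
        r = b - A * x
        β = norm(r)
        β < tol && break
        V = zeros(T, length(b), m + 1)      # Arnoldi basis
        H = zeros(T, m + 1, m)              # upper Hessenberg matrix
        V[:, 1] = r / β
        for j in 1:m                        # modified Gram-Schmidt Arnoldi
            w = A * V[:, j]
            for i in 1:j
                H[i, j] = dot(V[:, i], w)
                w -= H[i, j] * V[:, i]
            end
            H[j + 1, j] = norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        end
        y = H \ [β; zeros(T, m)]            # min ‖βe₁ − Hy‖ over the subspace
        x += V[:, 1:m] * y
    end
    return x
end
```

Non-restarted GMRES is then just the single-restart limit, which is the point made above.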
Note that I added LOBPCG to the list; this is one of the best iterative eigensolvers for Hermitian systems in my experience (and has the added benefit of supporting preconditioning).
Also, the solvers for ordinary eigenproblems are merely special cases of the solvers for generalized eigenproblems, so we shouldn't have to implement both variants in most cases. With a little care in implementation, it should be possible to have negligible performance penalty in the case where the right-hand side operator is the identity, since I(x) = x is essentially free and doesn't make a copy.
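For what it's worth, here is a minimal sketch of that free identity as a hypothetical type (Julia's built-in `LinearAlgebra.I` serves a similar purpose):

```julia
# Hypothetical right-hand-side operator whose application is a no-op:
# it returns its argument without allocating or copying.
struct IdentityOperator end

Base.:*(::IdentityOperator, x::AbstractVector) = x
Base.:\(::IdentityOperator, x::AbstractVector) = x

# A generalized eigensolver written against `B * x` then handles the
# ordinary problem (B = identity) with essentially zero overhead.
```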
This is an amazing issue :heart:
A BiCGSTAB implementation was posted here: https://github.com/JuliaLang/julia/issues/4723
We do not know about its license. For starters, the one from Templates should be good.
As @stevengj pointed out in the original discussion (JuliaLang/julia#4474), the state of the art for biconjugate gradients is the BiCGSTAB(l) (or ell?) variant. I purposely left the original BiCGSTAB algorithm off the list for that reason.
I spent a few minutes transcribing the BiCGSTAB(ell) algorithm symbol-for-symbol in this gist. Unicode combining diacritics make it laughably easy to do this transcription. You can see clearly that when l=1 this reduces to BiCGSTAB.
Obviously this is subject to testing, etc.
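For comparison, a minimal sketch of plain BiCGSTAB, i.e. the l = 1 case (no preconditioning, no breakdown handling; the function name and defaults are illustrative, not the transcription above):

```julia
using LinearAlgebra

function bicgstab(A, b; x = zero(b), tol = 1e-8, maxiter = length(b))
    r = b - A * x
    r̂ = copy(r)                       # fixed shadow residual
    ρ_old = α = ω = one(eltype(b))
    v = zero(b)
    p = zero(b)
    for _ in 1:maxiter
        ρ = dot(r̂, r)
        β = (ρ / ρ_old) * (α / ω)
        p = r + β * (p - ω * v)
        v = A * p
        α = ρ / dot(r̂, v)
        s = r - α * v
        if norm(s) < tol              # converged on the half step
            x += α * p
            break
        end
        t = A * s
        ω = dot(t, s) / dot(t, t)
        x += α * p + ω * s
        r = s - ω * t
        norm(r) < tol && break
        ρ_old = ρ
    end
    return x
end
```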
A number of comments:
- Stationary methods are important and should stay on the list. They can be used as smoothers in multigrid as well as first-line preconditioners for other iterative methods (a smoother sketch follows this list).
- We need to add least-squares solvers, in particular CGLS and LSQR.
- The solvers I added (bicgstab and gmres) are a direct translation from the Templates book, so I do not see any license issue.
- I plan to add cgls soon.
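To make the smoother point concrete, here is a minimal sketch of a weighted-Jacobi sweep; the helper name and the damping default ω = 2/3 are illustrative, not the package's stationary.jl:

```julia
using LinearAlgebra

function jacobi_smooth!(x, A, b; ω = 2/3, sweeps = 3)
    Dinv = 1 ./ diag(A)                  # only the diagonal of A is needed
    for _ in 1:sweeps
        x .+= ω .* Dinv .* (b .- A * x)  # x ← x + ω D⁻¹ (b − A x)
    end
    return x
end
```

Since it only uses `diag(A)` and mat-vecs, the same code is cheap for sparse A.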
@timholy 's https://github.com/timholy/Grid.jl provides restriction and interpolation operators on grids.
With regard to the usefulness of CGN and CGS I would point to this paper:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.210.62
I'd be interested to see some of the symmetric indefinite methods MINRES, SYMMLQ, MINRES-QLP (http://www.stanford.edu/group/SOL/software.html)
I tried the gauss-seidel function of the package as a smoother for multigrid. The implementation given doesn't take advantage of sparse matrices and is thus not usable in such contexts (large, sparse problems). Probably all the methods in stationary.jl have this issue.
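For reference, a Gauss-Seidel sweep can respect the `SparseMatrixCSC` storage by transposing once, so each row of A becomes a cheap column lookup. A minimal sketch (illustrative; assumes a square A with nonzero diagonal, not the package's implementation):

```julia
using SparseArrays

function gs_sweep!(x, A::SparseMatrixCSC, b)
    At = sparse(transpose(A))        # hoist this out if sweeping repeatedly
    rows, vals = rowvals(At), nonzeros(At)
    for i in 1:size(A, 1)
        σ = zero(eltype(b)); aii = zero(eltype(A))
        for k in nzrange(At, i)      # the nonzeros of row i of A
            j, a = rows[k], vals[k]
            j == i ? (aii = a) : (σ += a * x[j])
        end
        x[i] = (b[i] - σ) / aii
    end
    return x
end
```

Each sweep is then O(nnz(A)) rather than O(n²).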
@thraen: You might want to look into the function sor and other iterative solvers that are optimized for sparse matrices in https://github.com/lruthotto/KrylovMethods.jl. I use the sor a lot as a preconditioner and it is reasonably fast. It makes use of the sparse matrix format.
Thanks a lot for pointing that out!
Is it worthwhile to add Aitken's delta-squared process into the list?
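For context, Aitken's Δ² process extrapolates from three successive iterates of a linearly converging sequence; a one-line sketch:

```julia
# One Aitken Δ² step: accelerate x0, x1, x2 assuming linear convergence.
aitken(x0, x1, x2) = x2 - (x2 - x1)^2 / (x2 - 2x1 + x0)
```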
I noticed that this package was listed for Julia GSOC 2016. Mike Innes suggested that I pipe up here if I wanted to help. Is anybody here able and willing to serve as a mentor?
@klkeys Thanks for your interest. I'd be thrilled to have someone help out with this package!
One of the biggest challenges is to unify the various methods available in similar packages like https://github.com/lruthotto/KrylovMethods.jl and https://github.com/JuliaOptimizers/Krylov.jl. In some sense, this package has been more experimental in nature whereas the other two have been more focused on efficient implementations that work well for practical applications. Now that we (or at least, I) have some experience in writing these algorithms, I'd be very interested to figure out a common API across all the various solvers available.
cc: @lruthotto @dpo
I'd be glad to help work out a common API and a convergence of the packages in terms of efficiency. I'm not sure how much time I'll have to work on it myself during this summer, but I'd be happy to provide input as much as I can. Please keep me posted if you start a discussion somewhere.
I'm very interested in working out a good common API as well. I'm currently quite happy with the API in Krylov.jl but I'm of course open to discussing it. It's very unlikely that I'll have free time this summer to mentor though, but I'm happy to participate in discussions when I can.
Will anyone work on Julia GSOC 2016?
It would be nice to have a common API to easily switch between the different implementations or to merge them. For example:
- the preconditioning scheme is different (e.g., `M(x)` in KrylovMethods.jl and `M \ x` in IterativeSolvers.jl); an adapter sketch follows below
- an abstract layer to manage preconditioners and then implement classic ones (see there, e.g., sec. 4.4 Preconditioners and Tab. 4)
- write a "bridge" to be able to transparently use PETSc.jl, another GSOC 2016 project.
cc: @cortner related issue and #71
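On the first point, a hypothetical adapter could let one object serve both conventions; a sketch (all names here are illustrative, not an agreed API):

```julia
# Wraps x -> M \ x so it can be used both as a callable, M(x), and
# with the backslash convention, M \ x.
struct Preconditioner{F}
    apply_inverse::F          # x -> M \ x
end

# Build from anything that supports `\` (a matrix, a factorization):
from_backslash(M) = Preconditioner(x -> M \ x)

(P::Preconditioner)(x) = P.apply_inverse(x)         # M(x) style
Base.:\(P::Preconditioner, x) = P.apply_inverse(x)  # M \ x style
```

E.g., `P = from_backslash(cholesky(M))` would then work unchanged with either package's calling convention.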
I think MINRES should be added to this list.
@zimoun: We chose to provide the preconditioner as a function handle that computes the inverse, for example, `M(x) = M \ x`, to allow the user as much flexibility as possible. This way you should be able to use routines from PETSc without an issue (you may have to write a wrapper). I'm not sure you need another abstract layer for preconditioners.
Something we could do in KrylovMethods is to allow M to be a matrix and then build the function. We do something similar for A already. Do you think that's a good way to go?
@ChrisRackauckas : There is a basic version of minres in KrylovMethods (https://github.com/lruthotto/KrylovMethods.jl/blob/master/src/minres.jl) which might be a good starting point.
> I'm not sure you need another abstract layer for preconditioners.
My own perspective: in optimisation you need more than just `M(x)`. You need to provide `M * x`, `M \ x`, `<x, My>`, `<x, M\y>`.
But I would argue that even in linear algebra you may want to create some abstraction. E.g., is the preconditioner applied on the left or on the right? Maybe I am overcomplicating it. I do think for Optim.jl it will be useful to use dispatch instead of function handles. I won't get too stressed if this is incompatible with IterativeSolvers.jl, though it might be a shame.
I agree that we should distinguish between left and right preconditioners. How about having two variables, for example, Mr and Ml doing this?
Regarding the abstraction issue: Can you give me an example when you need to multiply with the preconditioner?
> Regarding the abstraction issue: Can you give me an example when you need to multiply with the preconditioner?
Basically to compute distances, but I only need the inner products for that. So now that you pressure me to think about it, I am not sure whether it was just sloppiness on my part. I will think about it some more, but it is possible that only `M \ x`, `<x, My>`, and `<x, M\y>` are needed.
But don't you need M * x for right-preconditioning?
No, you need the inverse for left- and right-preconditioning. See, e.g., Chapter 9 in Saad's book: www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf
Regarding the inner product: think also about when you would need it inside the linear solver. So far, I haven't seen the necessity and still think that inside the methods you only need `M \ x`.
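To make that concrete, a minimal sketch of Richardson iteration on the split-preconditioned system `Ml⁻¹ A Mr⁻¹ y = Ml⁻¹ b` with `x = Mr⁻¹ y`, using the `Ml`/`Mr` names proposed above (illustrative only; convergence requires the usual spectral-radius condition). Note that only `\` is ever applied, on both sides:

```julia
using LinearAlgebra

function richardson(A, b; Ml = I, Mr = I, iters = 100)
    y = zero(b)
    for _ in 1:iters
        x = Mr \ y
        y += Ml \ (b - A * x)      # only M \ v appears, never M * v
    end
    return Mr \ y                  # recover x from the right-preconditioned y
end
```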