@@ -41,11 +41,9 @@ The interface is detailed [here](@ref custom).

### RecursiveFactorization.jl

-  - `RFLUFactorization()`: a fast pure Julia LU-factorization implementation
-    using RecursiveFactorization.jl. This is by far the fastest LU-factorization
-    implementation, usually outperforming OpenBLAS and MKL, but currently optimized
-    only for Base `Array` with `Float32` or `Float64`. Additional optimization for
-    complex matrices is in the works.
+```@docs
+RFLUFactorization
+```

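+A minimal usage sketch (the random system is illustrative and not from the original
+docs; `LinearProblem`/`solve` is the standard LinearSolve.jl interface):
+
+```julia
+using LinearSolve
+
+A = rand(100, 100)                       # dense Float64 `Array`, the case RFLU targets
+b = rand(100)
+prob = LinearProblem(A, b)
+sol = solve(prob, RFLUFactorization())   # sol.u holds the solution vector
+```
+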
### Base.LinearAlgebra

@@ -54,51 +52,26 @@ solving, using the overloads provided by the respective packages. Given that thi
customized per-package, details given below describe a subset of important arrays
(`Matrix`, `SparseMatrixCSC`, `CuMatrix`, etc.)

-  - `LUFactorization(pivot=LinearAlgebra.RowMaximum())`: Julia's built-in `lu`.
-
-      + On dense matrices, this uses the current BLAS implementation of the user's computer,
-        which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
-        system.
-      + On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
-        cache the symbolic factorization.
-      + On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
-      + On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
-
-  - `QRFactorization(pivot=LinearAlgebra.NoPivot(), blocksize=16)`: Julia's built-in `qr`.
-
-      + On dense matrices, this uses the current BLAS implementation of the user's computer,
-        which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
-        system.
-      + On sparse matrices, this will use SPQR from SuiteSparse.
-      + On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
-      + On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
-  - `SVDFactorization(full=false, alg=LinearAlgebra.DivideAndConquer())`: Julia's built-in `svd`.
-
-      + On dense matrices, this uses the current BLAS implementation of the user's computer,
-        which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
-        system.
-  - `GenericFactorization(; fact_alg = LinearAlgebra.factorize)`: Constructs a linear solver from a generic
-    factorization algorithm `fact_alg` which complies with the Base.LinearAlgebra
-    factorization API. Quoting from Base:
-
-      + If `A` is upper or lower triangular (or diagonal), no factorization of `A` is
-        required. The system is then solved with either forward or backward substitution.
-        For non-triangular square matrices, an LU factorization is used.
-        For rectangular `A`, the result is the minimum-norm least-squares solution computed by a
-        pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.
-        When `A` is sparse, a similar polyalgorithm is used. For indefinite matrices, the `LDLt`
-        factorization does not use pivoting during the numerical factorization, and therefore the
-        procedure can fail even for invertible matrices.
-  - CholeskyFactorization
-  - BunchKaufmanFactorization
-  - CHOLMODFactorization
+```@docs
+LUFactorization
+GenericLUFactorization
+QRFactorization
+SVDFactorization
+CholeskyFactorization
+BunchKaufmanFactorization
+CHOLMODFactorization
+NormalCholeskyFactorization
+NormalBunchKaufmanFactorization
+```

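+A quick sketch of how these wrappers are invoked (the system is illustrative; any of
+the factorizations above can be swapped in):
+
+```julia
+using LinearSolve
+
+prob = LinearProblem(rand(4, 4), rand(4))
+sol = solve(prob, LUFactorization())   # dense input: dispatches to the loaded BLAS's LU
+sol = solve(prob, QRFactorization())   # same problem, QR instead
+```
+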
### LinearSolve.jl

-LinearSolve.jl contains some linear solvers built in.
+LinearSolve.jl contains some linear solvers built in for specialized cases.

-  - `SimpleLUFactorization`: a simple LU-factorization implementation without BLAS. Fast for small matrices.
-  - `DiagonalFactorization`: a special implementation only for solving `Diagonal` matrices fast.
+```@docs
+SimpleLUFactorization
+DiagonalFactorization
+```

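+A sketch of the intended use cases (sizes and matrices are illustrative):
+
+```julia
+using LinearSolve, LinearAlgebra
+
+small = LinearProblem(rand(8, 8), rand(8))
+solve(small, SimpleLUFactorization())    # avoids BLAS overhead on tiny systems
+
+diagprob = LinearProblem(Diagonal(rand(100)), rand(100))
+solve(diagprob, DiagonalFactorization()) # elementwise O(n) solve
+```
+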
### FastLapackInterface.jl

@@ -108,79 +81,48 @@ LinearSolve.jl provides a wrapper to these routines in a way where an initialize
has a non-allocating LU factorization. In theory, this post-initialized solve should always
be faster than the Base.LinearAlgebra version.

-  - `FastLUFactorization`, the `FastLapackInterface` version of the LU factorization. Notably,
-    this version does not allow for choice of pivoting method.
-  - `FastQRFactorization(pivot=NoPivot(), blocksize=32)`, the `FastLapackInterface` version of
-    the QR factorization.
+```@docs
+FastLUFactorization
+FastQRFactorization
+```

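+A sketch of the initialize-then-solve pattern these are designed for (`init`/`solve!`
+is the standard LinearSolve.jl caching interface):
+
+```julia
+using LinearSolve
+
+prob = LinearProblem(rand(100, 100), rand(100))
+cache = init(prob, FastLUFactorization())   # workspace allocated once up front
+sol = solve!(cache)                         # post-initialization solves don't allocate
+```
+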
### SuiteSparse.jl

-By default, the SuiteSparse.jl solvers are implemented for efficiency by caching the
-symbolic factorization. I.e., if `set_A` is used, it is expected that the new
-`A` has the same sparsity pattern as the previous `A`. If this algorithm is to
-be used in a context where that assumption does not hold, set `reuse_symbolic=false`.
+```@docs
+KLUFactorization
+UMFPACKFactorization
+```
+
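+A sparse usage sketch (the random pattern is illustrative; per the removed prose above,
+pass `reuse_symbolic = false` when the sparsity pattern changes between solves):
+
+```julia
+using LinearSolve, SparseArrays, LinearAlgebra
+
+A = sprand(1000, 1000, 0.005) + I   # illustrative nonsingular sparse matrix
+prob = LinearProblem(A, rand(1000))
+solve(prob, KLUFactorization())
+solve(prob, UMFPACKFactorization(reuse_symbolic = false))
+```
+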
+### Sparspak.jl
+
+```@docs
+SparspakFactorization
+```
+
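+A sketch of the generic-eltype case Sparspak covers (see the removed description later
+in this diff; `BigFloat` is an illustrative element type with no BLAS routine):
+
+```julia
+using LinearSolve, SparseArrays, LinearAlgebra
+
+A = sparse(big.(rand(50, 50)) + 50I)   # SparseMatrixCSC{BigFloat}
+prob = LinearProblem(A, big.(rand(50)))
+solve(prob, SparspakFactorization())
+```
+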
+### Krylov.jl

-  - `KLUFactorization(; reuse_symbolic = true)`: A fast sparse LU-factorization which
-    specializes on sparsity patterns with “less structure”.
-  - `UMFPACKFactorization(; reuse_symbolic = true)`: A fast sparse multithreaded
-    LU-factorization which specializes on sparsity patterns with “more structure”.
+```@docs
+KrylovJL_CG
+KrylovJL_MINRES
+KrylovJL_GMRES
+KrylovJL_BICGSTAB
+KrylovJL_LSMR
+KrylovJL_CRAIGMR
+KrylovJL
+```

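+A Krylov usage sketch (illustrative system; `Pl` is the standard LinearSolve.jl
+left-preconditioner keyword, and `Diagonal(A)` acts as a simple Jacobi preconditioner):
+
+```julia
+using LinearSolve, SparseArrays, LinearAlgebra
+
+A = sprand(2000, 2000, 0.001) + 4I
+prob = LinearProblem(A, rand(2000))
+sol = solve(prob, KrylovJL_GMRES(); Pl = Diagonal(A))
+```
+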
### Pardiso.jl

!!! note

    Using this solver requires adding the package Pardiso.jl, i.e. `using Pardiso`

-The following algorithms are pre-specified:
-
-  - `MKLPardisoFactorize(; kwargs...)`: A sparse factorization method.
-  - `MKLPardisoIterate(; kwargs...)`: A mixed factorization+iterative method.
-
-Those algorithms are defined via:
-
-```julia
-function MKLPardisoFactorize(; kwargs...)
-    PardisoJL(; fact_phase = Pardiso.NUM_FACT,
-              solve_phase = Pardiso.SOLVE_ITERATIVE_REFINE,
-              kwargs...)
-end
-function MKLPardisoIterate(; kwargs...)
-    PardisoJL(; solve_phase = Pardiso.NUM_FACT_SOLVE_REFINE,
-              kwargs...)
-end
+```@docs
+MKLPardisoFactorize
+MKLPardisoIterate
+PardisoJL
```

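+A usage sketch (requires Pardiso.jl as noted above, plus a working MKL Pardiso
+installation; the system is illustrative):
+
+```julia
+using LinearSolve, Pardiso, SparseArrays, LinearAlgebra
+
+prob = LinearProblem(sprand(1000, 1000, 0.01) + I, rand(1000))
+solve(prob, MKLPardisoFactorize())
+```
+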
-The full set of keyword arguments for `PardisoJL` is:
-
-```julia
-Base.@kwdef struct PardisoJL <: SciMLLinearSolveAlgorithm
-    nprocs::Union{Int, Nothing} = nothing
-    solver_type::Union{Int, Pardiso.Solver, Nothing} = nothing
-    matrix_type::Union{Int, Pardiso.MatrixType, Nothing} = nothing
-    fact_phase::Union{Int, Pardiso.Phase, Nothing} = nothing
-    solve_phase::Union{Int, Pardiso.Phase, Nothing} = nothing
-    release_phase::Union{Int, Nothing} = nothing
-    iparm::Union{Vector{Tuple{Int, Int}}, Nothing} = nothing
-    dparm::Union{Vector{Tuple{Int, Int}}, Nothing} = nothing
-end
-```
-
-### Sparspak.jl
-
-This is the translation of the well-known sparse matrix software Sparspak
-(Waterloo Sparse Matrix Package), for solving
-large sparse systems of linear algebraic equations. Sparspak is composed of the
-subroutines from the book "Computer Solution of Large Sparse Positive Definite
-Systems" by Alan George and Joseph Liu. Originally written in Fortran 77 and later
-rewritten in Fortran 90, the software is here translated into Julia.
-The Julia rewrite is released under the MIT license with express permission
-from the authors of the Fortran package. The package uses multiple
-dispatch to route around standard BLAS routines in the case of, e.g., arbitrary-precision
-floating-point numbers or ForwardDiff.Dual.
-This allows, e.g., for Automatic Differentiation (AD) of a sparse-matrix solve.
-
-  - `SparspakFactorization()`: A Julia-native sparse linear solver.
-
### CUDA.jl

Note that `CuArrays` are supported by `GenericFactorization` in the “normal” way.
@@ -190,55 +132,34 @@ The following are non-standard GPU factorization routines.

    Using this solver requires adding the package CUDA.jl, i.e. `using CUDA`

-  - `CudaOffloadFactorization()`: An offloading technique used to GPU-accelerate CPU-based
-    computations. Requires a sufficiently large `A` to overcome the data transfer
-    costs.
-
-### IterativeSolvers.jl
-
-  - `IterativeSolversJL_CG(args...; kwargs...)`: A generic CG implementation
-  - `IterativeSolversJL_GMRES(args...; kwargs...)`: A generic GMRES implementation
-  - `IterativeSolversJL_BICGSTAB(args...; kwargs...)`: A generic BICGSTAB implementation
-  - `IterativeSolversJL_MINRES(args...; kwargs...)`: A generic MINRES implementation
-
-The general algorithm is:
-
-```julia
-IterativeSolversJL(args...;
-                   generate_iterator = IterativeSolvers.gmres_iterable!,
-                   Pl = nothing, Pr = nothing,
-                   gmres_restart = 0, kwargs...)
+```@docs
+CudaOffloadFactorization
```

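+An offload sketch (requires CUDA.jl per the note; `A` should be large enough that the
+factorization outweighs the transfer cost, so the size is only illustrative):
+
+```julia
+using LinearSolve, CUDA
+
+prob = LinearProblem(rand(4000, 4000), rand(4000))
+sol = solve(prob, CudaOffloadFactorization())   # factors on the GPU, copies the result back
+```
+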
-### Krylov.jl
-
-  - `KrylovJL_CG(args...; kwargs...)`: A generic CG implementation for Hermitian and positive definite linear systems
-  - `KrylovJL_MINRES(args...; kwargs...)`: A generic MINRES implementation for Hermitian linear systems
-  - `KrylovJL_GMRES(args...; kwargs...)`: A generic GMRES implementation for square non-Hermitian linear systems
-  - `KrylovJL_BICGSTAB(args...; kwargs...)`: A generic BICGSTAB implementation for square non-Hermitian linear systems
-  - `KrylovJL_LSMR(args...; kwargs...)`: A generic LSMR implementation for least-squares problems
-  - `KrylovJL_CRAIGMR(args...; kwargs...)`: A generic CRAIGMR implementation for least-norm problems
-
-The general algorithm is:
+### IterativeSolvers.jl

-```julia
-KrylovJL(args...; KrylovAlg = Krylov.gmres!,
-         Pl = nothing, Pr = nothing,
-         gmres_restart = 0, window = 0,
-         kwargs...)
+!!! note
+
+    Using these solvers requires adding the package IterativeSolvers.jl, i.e. `using IterativeSolvers`
+
+```@docs
+IterativeSolversJL_CG
+IterativeSolversJL_GMRES
+IterativeSolversJL_BICGSTAB
+IterativeSolversJL_MINRES
+IterativeSolversJL
```

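+A usage sketch (IterativeSolvers.jl loaded per the note; the diagonally dominant
+system is illustrative and keeps the iteration well conditioned):
+
+```julia
+using LinearSolve, LinearAlgebra, IterativeSolvers
+
+prob = LinearProblem(rand(100, 100) + 100I, rand(100))
+sol = solve(prob, IterativeSolversJL_GMRES())
+```
+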
### KrylovKit.jl

-  - `KrylovKitJL_CG(args...; kwargs...)`: A generic CG implementation
-  - `KrylovKitJL_GMRES(args...; kwargs...)`: A generic GMRES implementation
-
-The general algorithm is:
+!!! note
+
+    Using these solvers requires adding the package KrylovKit.jl, i.e. `using KrylovKit`

-```julia
-KrylovKitJL(args...;
-            KrylovAlg = KrylovKit.GMRES, gmres_restart = 0,
-            kwargs...)
+```@docs
+KrylovKitJL_CG
+KrylovKitJL_GMRES
+KrylovKitJL
```

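+A usage sketch mirroring the other Krylov wrappers (KrylovKit.jl loaded per the note;
+the system is illustrative):
+
+```julia
+using LinearSolve, LinearAlgebra, KrylovKit
+
+prob = LinearProblem(rand(100, 100) + 100I, rand(100))
+sol = solve(prob, KrylovKitJL_GMRES())
+```
+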
### HYPRE.jl
@@ -248,43 +169,6 @@ KrylovKitJL(args...;
    Using HYPRE solvers requires Julia version 1.9 or higher, and that the package HYPRE.jl
    is installed.

-[HYPRE.jl](https://github.com/fredrikekre/HYPRE.jl) is an interface to
-[`hypre`](https://computing.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods)
-and provides iterative solvers and preconditioners for sparse linear systems. It is mainly
-developed for large multi-process distributed problems (using MPI), but can also be used for
-single-process problems with Julia's standard sparse matrices.
-
-The algorithm is defined as:
-
-```julia
-alg = HYPREAlgorithm(X)
+```@docs
+HYPREAlgorithm
```
-
-where `X` is one of the following supported solvers:
-
-  - `HYPRE.BiCGSTAB`
-  - `HYPRE.BoomerAMG`
-  - `HYPRE.FlexGMRES`
-  - `HYPRE.GMRES`
-  - `HYPRE.Hybrid`
-  - `HYPRE.ILU`
-  - `HYPRE.ParaSails` (as preconditioner only)
-  - `HYPRE.PCG`
-
-Some of the solvers above can also be used as preconditioners by passing them via the `Pl`
-keyword argument.
-
-For example, to use `HYPRE.PCG` as the solver, with `HYPRE.BoomerAMG` as the preconditioner,
-the algorithm should be defined as follows:
-
-```julia
-A, b = setup_system(...)
-prob = LinearProblem(A, b)
-alg = HYPREAlgorithm(HYPRE.PCG)
-prec = HYPRE.BoomerAMG
-sol = solve(prob, alg; Pl = prec)
-```
-
-If you need more fine-grained control over the solver/preconditioner options, you can
-alternatively pass an already created solver to `HYPREAlgorithm` (and to the `Pl` keyword
-argument). See the HYPRE.jl docs for how to set up solvers with specific options.