

22.2 Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where the exact solver used to factorize the matrix depends on the properties of the sparse matrix itself. Generally, the cost of determining the matrix type is small relative to the cost of factorizing the matrix itself. In any case, the matrix type is cached once it is calculated so that it is not re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is:

  1. If the matrix is diagonal, solve directly and goto 8.
  2. If the matrix is a permuted diagonal, solve directly taking into account the permutations, and goto 8.
  3. If the matrix is square, banded, and the band density is greater than that given by spparms ("bandden"), continue; else goto 4.
    1. If the matrix is tridiagonal and the right-hand side is not sparse, continue; else goto 3b.
      1. If the matrix is Hermitian with a positive real diagonal, attempt Cholesky factorization using LAPACK xPTSV.
      2. If the above failed or the matrix is not Hermitian with a positive real diagonal, use Gaussian elimination with pivoting using LAPACK xGTSV, and goto 8.
    2. If the matrix is Hermitian with a positive real diagonal, attempt Cholesky factorization using LAPACK xPBTRF.
    3. If the above failed or the matrix is not Hermitian with a positive real diagonal, use Gaussian elimination with pivoting using LAPACK xGBTRF, and goto 8.
  4. If the matrix is upper or lower triangular, perform a sparse forward or backward substitution, and goto 8.
  5. If the matrix is an upper triangular matrix with column permutations or a lower triangular matrix with row permutations, perform a sparse forward or backward substitution, and goto 8.
  6. If the matrix is square and Hermitian with a real positive diagonal, attempt sparse Cholesky factorization using CHOLMOD.
  7. If the sparse Cholesky factorization failed or the matrix is not Hermitian with a real positive diagonal, and the matrix is square, factorize, solve, and perform one refinement iteration using UMFPACK.
  8. If the matrix is not square, or any of the previous solvers flags a singular or near singular matrix, find a minimum norm solution using CXSPARSE (10).

The band density is defined as the number of nonzero values in the band divided by the total number of values in the full band. The banded matrix solvers can be entirely disabled by using spparms to set bandden to 1 (i.e., spparms ("bandden", 1)).
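For example, a tridiagonal symmetric system dispatches through the banded branch of the selection tree above. A brief sketch with an illustrative test matrix:

n = 100;
A = spdiags ([ones(n,1), 4*ones(n,1), ones(n,1)], [-1, 0, 1], n, n);
b = ones (n, 1);
x = A \ b;   # tridiagonal, Hermitian, positive diagonal: attempts xPTSV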

The QR solver factorizes the problem with a Dulmage-Mendelsohn decomposition to separate the problem into blocks that can be treated as under-determined, multiple well-determined blocks, and a final over-determined block. For matrices with blocks of strongly connected nodes this is a big win, as LU decomposition can be used for many blocks. It also significantly improves the chance of finding a solution to over-determined problems, rather than just returning a vector of NaNs.

All of the solvers above can calculate an estimate of the condition number. This can be used to detect numerical stability problems in the solution and force a minimum norm solution to be used. However, for narrow banded, triangular, or diagonal matrices, the cost of calculating the condition number is significant, and can in fact exceed the cost of factorizing the matrix. Therefore the condition number is not calculated in these cases, and Octave relies on simpler techniques to detect singular matrices or, in the case of banded matrices, on the underlying LAPACK code.

The user can force the type of the matrix with the matrix_type function. This avoids the cost of discovering the type of the matrix. However, identifying the type of the matrix incorrectly will lead to unpredictable results, so matrix_type should be used with care.
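A short sketch of querying and asserting the type (the matrix shown is illustrative; "symmetric" is one of the type strings accepted by matrix_type):

A = sprandn (100, 100, 0.05);
A = A + A' + 10 * speye (100);     # symmetric with a strong diagonal
matrix_type (A)                    # report the detected (and cached) type
A = matrix_type (A, "symmetric");  # assert the type, skipping detection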

: nest = normest (A)
: nest = normest (A, tol)
: [nest, iter] = normest (…)

Estimate the 2-norm of the matrix A using power iteration.

This is typically used for large matrices, where the cost of calculating norm (A) is prohibitive and an approximation to the 2-norm is acceptable.

tol is the tolerance to which the 2-norm is calculated. By default tol is 1e-6.

The optional output iter returns the number of iterations that were required for normest to converge.
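For example, a minimal sketch with an illustrative random test matrix:

A = sprandn (1000, 1000, 0.01);
[nest, iter] = normest (A, 1e-8)   # estimate with a tighter tolerance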

See also: normest1, norm, cond, condest.

: nest = normest1 (A)
: nest = normest1 (A, t)
: nest = normest1 (A, t, x0)
: nest = normest1 (Afun, t, x0, p1, p2, …)
: [nest, v] = normest1 (A, …)
: [nest, v, w] = normest1 (A, …)
: [nest, v, w, iter] = normest1 (A, …)

Estimate the 1-norm of the matrix A using a block algorithm.

normest1 is best for large sparse matrices where only an estimate of the norm is required. For small to medium sized matrices, consider using norm (A, 1). In addition, normest1 can be used to estimate the 1-norm of a linear operator A when the matrix-vector products A * x and A' * x can be computed cheaply. In this case, instead of the matrix A, a function Afun (flag, x) is used; it must return:

  • the dimension n of A, if flag is "dim"
  • true if A is a real operator, if flag is "real"
  • the result A * x, if flag is "notransp"
  • the result A' * x, if flag is "transp"

A typical case is A defined by b ^ m, in which the result A * x can be computed without explicitly forming b ^ m by:

y = x;
for i = 1:m
  y = b * y;
endfor

The parameters p1, p2, … are arguments of Afun (flag, x, p1, p2, …).

The default value for t is 2. The algorithm requires matrix-matrix products with sizes n x n and n x t.

The initial matrix x0 should have columns of unit 1-norm. The default initial matrix x0 has the first column ones (n, 1) / n and, if t > 1, the remaining columns with random elements -1 and 1, divided by n.

On output, nest is the desired estimate, and v and w are vectors such that w = A * v, with norm (w, 1) = nest * norm (v, 1). iter contains in iter(1) the number of iterations (the maximum is hardcoded to 5) and in iter(2) the total number of products A * x or A' * x performed by the algorithm.
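As a sketch of the operator interface, the b ^ m example above can be wrapped in a function (the helper name afun_pow is hypothetical, and an empty x0 is assumed here to select the default starting matrix):

function y = afun_pow (flag, x, b, m)
  ## Implicit operator A = b ^ m, applied without forming b ^ m.
  switch (flag)
    case "dim"
      y = rows (b);
    case "real"
      y = isreal (b);
    case "notransp"
      y = x;
      for i = 1:m
        y = b * y;     # compute A * x
      endfor
    case "transp"
      y = x;
      for i = 1:m
        y = b' * y;    # compute A' * x
      endfor
  endswitch
endfunction

b = sprandn (100, 100, 0.1);
nest = normest1 (@afun_pow, 2, [], b, 4)   # estimate of norm (b ^ 4, 1)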

Algorithm Note: normest1 uses random numbers during evaluation. Therefore, if consistent results are required, the "state" of the random generator should be fixed before invoking normest1.

Reference: N. J. Higham and F. Tisseur, A block algorithm for matrix 1-norm estimation, with an application to 1-norm pseudospectra, SIAM J. Matrix Anal. Appl., Vol. 21, No. 4, pp. 1185–1201, 2000.

See also: normest, norm, cond, condest.

: cest = condest (A)
: cest = condest (A, t)
: cest = condest (A, Ainvfcn)
: cest = condest (A, Ainvfcn, t)
: cest = condest (A, Ainvfcn, t, p1, p2, …)
: cest = condest (Afcn, Ainvfcn)
: cest = condest (Afcn, Ainvfcn, t)
: cest = condest (Afcn, Ainvfcn, t, p1, p2, …)
: [cest, v] = condest (…)

Estimate the 1-norm condition number of a square matrix A using t test vectors and a randomized 1-norm estimator.

The optional input t specifies the number of test vectors (default 5).

The input may be a matrix A (the algorithm is particularly appropriate for large, sparse matrices). Alternatively, the behavior of the matrix can be defined implicitly by functions. When using an implicit definition, condest requires the following functions:

  • Afcn (flag, x), which must return
    • the dimension n of A, if flag is "dim"
    • true if A is a real operator, if flag is "real"
    • the result A * x, if flag is "notransp"
    • the result A' * x, if flag is "transp"
  • Ainvfcn (flag, x), which must return
    • the dimension n of inv (A), if flag is "dim"
    • true if inv (A) is a real operator, if flag is "real"
    • the result inv (A) * x, if flag is "notransp"
    • the result inv (A)' * x, if flag is "transp"

Any parameters p1, p2, … are additional arguments of Afcn (flag, x, p1, p2, …) and Ainvfcn (flag, x, p1, p2, …).

The principal output is the 1-norm condition number estimate cest.

The optional second output v is an approximate null vector; it satisfies the equation norm (A*v, 1) == norm (A, 1) * norm (v, 1) / cest.
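A minimal usage sketch with an illustrative test matrix:

A = sprandn (200, 200, 0.05) + 5 * speye (200);
cest = condest (A)        # 1-norm condition number estimate
[cest, v] = condest (A);  # v is the approximate null vector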

Algorithm Note: condest uses a randomized algorithm to approximate the 1-norms. Therefore, if consistent results are required, the "state" of the random generator should be fixed before invoking condest.

Reference: N. J. Higham and F. Tisseur, A block algorithm for matrix 1-norm estimation, with an application to 1-norm pseudospectra, SIAM J. Matrix Anal. Appl., Vol. 21, No. 4, pp. 1185–1201, 2000.

See also: cond, rcond, norm, normest1, normest.

: spparms ()
: vals = spparms ()
: [keys, vals] = spparms ()
: val = spparms (key)
: spparms (vals)
: spparms ("default")
: spparms ("tight")
: spparms (key, val)

Query or set the parameters used by the sparse solvers and factorization functions.

The first four calls above get information about the current settings, while the others change the current settings. The parameters are stored as pairs of keys and values, where the values are all floats and the keys are one of the following strings:

spumoni

Printing level of debugging information of the solvers (default 0)

ths_rel

Included for compatibility. Not used. (default 1)

ths_abs

Included for compatibility. Not used. (default 1)

exact_d

Included for compatibility. Not used. (default 0)

supernd

Included for compatibility. Not used. (default 3)

rreduce

Included for compatibility. Not used. (default 3)

wh_frac

Included for compatibility. Not used. (default 0.5)

autommd

Flag whether the LU/QR and the ’\’ and ’/’ operators will automatically use the sparsity preserving mmd functions (default 1)

autoamd

Flag whether the LU and the ’\’ and ’/’ operators will automatically use the sparsity preserving amd functions (default 1)

piv_tol

The pivot tolerance of the UMFPACK solvers (default 0.1)

sym_tol

The pivot tolerance of the UMFPACK symmetric solvers (default 0.001)

bandden

The density of nonzero elements in a banded matrix before it is treated by the LAPACK banded solvers (default 0.5)

umfpack

Flag whether the UMFPACK or mmd solvers are used for the LU, ’\’ and ’/’ operations (default 1)

The value of individual keys can be set with spparms (key, val). The default values can be restored with the special keyword "default". The special keyword "tight" can be used to set the mmd solvers to attempt a sparser solution at the potential cost of longer running time.
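For example, a brief sketch using only the keys documented above:

bd = spparms ("bandden")   # query a single parameter
spparms ("bandden", 0.4);  # lower the banded solver threshold
spparms ("spumoni", 1);    # print solver diagnostics
spparms ("default");       # restore all defaults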

See also: chol, colamd, lu, qr, symamd.

: p = sprank (S)

Calculate the structural rank of the sparse matrix S.

Note that only the structure of the matrix is used in this calculation, which is based on a Dulmage-Mendelsohn permutation to block triangular form. The structural rank is therefore an upper bound on the numerical rank of the matrix, i.e., sprank (S) >= rank (S). Barring coincidental numerical cancellation, and ignoring floating point errors, sprank (S) == rank (S).
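A small sketch showing the two ranks diverging under numerical cancellation (the matrix is illustrative):

S = sparse ([1, 1; 1, 1]);
sprank (S)       # => 2, structurally full rank
rank (full (S))  # => 1, the rows are numerically dependent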

See also: dmperm.

: [count, h, parent, post, R] = symbfact (S)
: […] = symbfact (S, typ)
: […] = symbfact (S, typ, mode)

Perform a symbolic factorization analysis of the sparse matrix S.

The input variables are

S

S is a real or complex sparse matrix.

typ

Is the type of the factorization and can be one of

"sym" (default)

Factorize S. Assumes S is symmetric and uses the upper triangular portion of the matrix.

"col"

Factorize S' * S.

"row"

Factorize S * S'.

"lo"

Factorize S'. Assumes S is symmetric and uses the lower triangular portion of the matrix.

mode

When mode is unspecified, return the Cholesky factorization for R. If mode is "lower" or "L", then return the conjugate transpose R', which is a lower triangular factor. The conjugate transpose version is faster and uses less memory, but still returns the same values for all other outputs: count, h, parent, and post.

The output variables are:

count

The row counts of the Cholesky factorization as determined by typ. The computational difficulty of performing the true factorization using chol is sum (count .^ 2).

h

The height of the elimination tree.

parent

The elimination tree itself.

post

The postorder permutation of the elimination tree.

R

A sparse boolean matrix whose structure is that of the Cholesky factorization as determined by typ.
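The analysis can be used to estimate fill-in and cost before committing to a numeric factorization. A brief sketch with an illustrative tridiagonal matrix:

n = 100;
S = spdiags ([-ones(n,1), 4*ones(n,1), -ones(n,1)], [-1, 0, 1], n, n);
[count, h, parent, post, R] = symbfact (S);
sum (count)      # number of nonzeros in the Cholesky factor
sum (count .^ 2) # estimated cost of the numeric factorization with chol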

See also: chol, etree, treelayout.

For non-square matrices, the user can also utilize the spaugment function to find a least squares solution to a linear equation.

: s = spaugment (A, c)

Create the augmented matrix of A.

This is given by

[c * eye(m, m), A;
            A', zeros(n, n)]

This is related to the least squares solution of A \ b, by

s * [ r / c; x] = [ b; zeros(n, columns(b)) ]

where r is the residual error

r = b - A * x

As the matrix s is symmetric indefinite, it can be factorized with lu, and the minimum norm solution can therefore be found without the need for a qr factorization. As the residual error will be zero for under-determined problems, an example is

m = 11; n = 10; mn = max (m, n);
A = spdiags ([ones(mn,1), 10*ones(mn,1), -ones(mn,1)],
             [-1, 0, 1], m, n);
x0 = A \ ones (m,1);                 # reference solution from left division
s = spaugment (A);
[L, U, P, Q] = lu (s);
x1 = Q * (U \ (L \ (P * [ones(m,1); zeros(n,1)])));
x1 = x1(end - n + 1 : end);          # extract x from the solution [r / c; x]

Finding the solution of an over-determined problem requires an estimate of the residual error r, so it is more complex to formulate a minimum norm solution using the spaugment function.

In general the left division operator is more stable and faster than using the spaugment function.

See also: mldivide.

Finally, the function eigs can be used to calculate a limited number of eigenvalues and eigenvectors based on a selection criterion; likewise, svds calculates a limited number of singular values and vectors.

: d = eigs (A)
: d = eigs (A, k)
: d = eigs (A, k, sigma)
: d = eigs (A, k, sigma, opts)
: d = eigs (A, B)
: d = eigs (A, B, k)
: d = eigs (A, B, k, sigma)
: d = eigs (A, B, k, sigma, opts)
: d = eigs (Af, n)
: d = eigs (Af, n, k)
: d = eigs (Af, n, k, sigma)
: d = eigs (Af, n, k, sigma, opts)
: d = eigs (Af, n, B)
: d = eigs (Af, n, B, k)
: d = eigs (Af, n, B, k, sigma)
: d = eigs (Af, n, B, k, sigma, opts)
: [V, D] = eigs (…)
: [V, D, flag] = eigs (…)

Calculate a limited number of eigenvalues and eigenvectors based on a selection criterion.

By default, eigs solves the equation A * v = lambda * v, where v is the eigenvector corresponding to the eigenvalue lambda. If given the positive definite matrix B, then eigs solves the general eigenvalue equation A * v = lambda * B * v.

The input A is a square matrix of dimension n-by-n. Typically, A is also large and sparse.

The input B for the generalized eigenvalue problem is a square matrix with the same size as A (n-by-n). Typically, B is also large and sparse.

The number of eigenvalues and eigenvectors to calculate is given by k and defaults to 6.

The argument sigma determines which eigenvalues are returned. sigma can be either a scalar or a string. When sigma is a scalar, the k eigenvalues closest to sigma are returned. If sigma is a string, it must be one of the following values.

"lm"

Largest Magnitude (default).

"sm"

Smallest Magnitude.

"la"

Largest Algebraic (valid only for real symmetric problems).

"sa"

Smallest Algebraic (valid only for real symmetric problems).

"be"

Both Ends, with one more from the high-end if k is odd (valid only for real symmetric problems).

"lr"

Largest Real part (valid only for complex or unsymmetric problems).

"sr"

Smallest Real part (valid only for complex or unsymmetric problems).

"li"

Largest Imaginary part (valid only for complex or unsymmetric problems).

"si"

Smallest Imaginary part (valid only for complex or unsymmetric problems).

If opts is given, it is a structure defining possible options that eigs should use. The fields of the opts structure are:

issym

If Af is given then this flag (true/false) determines whether the function Af defines a symmetric problem. It is ignored if a matrix A is given. The default is false.

isreal

If Af is given then this flag (true/false) determines whether the function Af defines a real problem. It is ignored if a matrix A is given. The default is true.

tol

Defines the required convergence tolerance, calculated as tol * norm (A). The default is eps.

maxit

The maximum number of iterations. The default is 300.

p

The number of Lanczos basis vectors to use. More vectors will result in faster convergence, but a greater use of memory. The optimal value of p is problem dependent and should be in the range k + 1 to n. The default value is 2 * k.

v0

The starting vector for the algorithm. An initial vector close to the final vector will speed up convergence. The default is for ARPACK to randomly generate a starting vector. If specified, v0 must be an n-by-1 vector where n = rows (A).

disp

The level of diagnostic printout (0|1|2). If disp is 0 then diagnostics are disabled. The default value is 0.

cholB

If the generalized eigenvalue problem is being calculated, this flag (true/false) specifies whether the B input represents chol (B) or simply the matrix B. The default is false.

permB

The permutation vector of the Cholesky factorization for B if cholB is true. It is obtained by [R, ~, permB] = chol (B, "vector"). The default is 1:n.

It is also possible to represent A by a function denoted Af. Af must be followed by a scalar argument n defining the length of the vector argument accepted by Af. Af can be a function handle, an inline function, or a string. When Af is a string it holds the name of the function to use.

Af is a function of the form y = Af (x) where the required return value of Af is determined by the value of sigma. The four possible forms are

A * x

if sigma is not given or is a string other than "sm".

A \ x

if sigma is 0 or "sm".

(A - sigma * I) \ x

if sigma is a scalar not equal to 0; I is the identity matrix of the same size as A.

(A - sigma * B) \ x

for the general eigenvalue problem.

The return arguments and their form depend on the number of return arguments requested. For a single return argument, a column vector d of length k is returned containing the k eigenvalues that have been found. For two return arguments, V is an n-by-k matrix whose columns are the k eigenvectors corresponding to the returned eigenvalues. The eigenvalues themselves are returned in D in the form of a k-by-k matrix, where the elements on the diagonal are the eigenvalues.

The third return argument flag returns the status of the convergence. If flag is 0 then all eigenvalues have converged. Any other value indicates a failure to converge.
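A minimal usage sketch with an illustrative 1-D Laplacian test matrix:

n = 1000;
A = spdiags ([-ones(n,1), 2*ones(n,1), -ones(n,1)], [-1, 0, 1], n, n);
d = eigs (A, 4);             # the 4 largest magnitude eigenvalues
dsm = eigs (A, 4, "sm");     # the 4 smallest magnitude eigenvalues
[V, D] = eigs (A, 4, "sa");  # include eigenvectors, smallest algebraic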

Programming Notes: For small problems, n < 500, consider using eig (full (A)).

If ARPACK fails to converge, consider increasing the number of Lanczos vectors (opts.p), increasing the number of iterations (opts.maxit), or decreasing the tolerance (opts.tol).

Reference: This function is based on the ARPACK package, written by R. Lehoucq, K. Maschhoff, D. Sorensen, and C. Yang. For more information see http://www.caam.rice.edu/software/ARPACK/.

See also: eig, svds.

: s = svds (A)
: s = svds (A, k)
: s = svds (A, k, sigma)
: s = svds (A, k, sigma, opts)
: [u, s, v] = svds (…)
: [u, s, v, flag] = svds (…)

Find a few singular values of the matrix A.

The singular values are calculated using

[m, n] = size (A);
s = eigs ([sparse(m, m), A;
           A', sparse(n, n)])

The eigenvalues returned by eigs correspond to the singular values of A. The number of singular values to calculate is given by k and defaults to 6.

The argument sigma specifies which singular values to find. When sigma is the string 'L', the default, the largest singular values of A are found. Otherwise, sigma must be a real scalar and the singular values closest to sigma are found. As a corollary, sigma = 0 finds the smallest singular values. Note that for relatively small values of sigma, there is a chance that the requested number of singular values will not be found. In that case sigma should be increased.
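A short sketch with an illustrative random test matrix:

A = sprandn (500, 300, 0.02);
s = svds (A, 3);                 # the 3 largest singular values
[u, s, v, flag] = svds (A, 3);   # factors of a rank-3 approximation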

opts is a structure defining options that svds will pass to eigs. The possible fields of this structure are documented in eigs. By default, svds sets the following three fields:

tol

The required convergence tolerance for the singular values. The default value is 1e-10. eigs is passed tol / sqrt (2).

maxit

The maximum number of iterations. The default is 300.

disp

The level of diagnostic printout (0|1|2). If disp is 0 then diagnostics are disabled. The default value is 0.

If more than one output is requested then svds will return an approximation of the singular value decomposition of A

A_approx = u*s*v'

where A_approx is a matrix of the same size as A but of rank at most k.

flag returns 0 if the algorithm has successfully converged, and 1 otherwise. The test for convergence is

norm (A*v - u*s, 1) <= tol * norm (A, 1)

svds is best for finding only a few singular values from a large sparse matrix. Otherwise, svd (full (A)) will likely be more efficient.

See also: svd, eigs.


Footnotes

(10)

The CHOLMOD, UMFPACK and CXSPARSE packages were written by Tim Davis and are available at http://faculty.cse.tamu.edu/davis/suitesparse.html

