AA = balance (A)
AA = balance (A, opt)
[DD, AA] = balance (A, opt)
[D, P, AA] = balance (A, opt)
[CC, DD, AA, BB] = balance (A, B, opt)
Balance the matrix A to reduce numerical errors in future calculations.

Compute AA = DD \ A * DD, in which AA is a matrix whose row and column norms are roughly equal in magnitude, and DD = P * D, in which P is a permutation matrix and D is a diagonal matrix of powers of two. This allows the equilibration to be computed without round-off. Results of eigenvalue calculation are typically improved by balancing first.
If the diagonal and the permutation are requested as separate output values ([D, P, AA]), balance returns them as vectors. In this case, DD = eye (n)(:,P) * diag (D), where n is the matrix size.
If four output values are requested, compute AA = CC*A*DD and BB = CC*B*DD, in which AA and BB have nonzero elements of approximately the same magnitude, and CC and DD are permuted diagonal matrices as in DD for the algebraic eigenvalue problem.
The eigenvalue balancing option opt may be one of:

"noperm", "S"
    Scale only; do not permute.

"noscal", "P"
    Permute only; do not scale.
Algebraic eigenvalue balancing uses standard LAPACK routines.
Generalized eigenvalue problem balancing uses Ward’s algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).
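For example, a minimal sketch (the badly scaled matrix here is illustrative):

A = [1, 1e6; 1e-6, 1];
[DD, AA] = balance (A);   # AA = DD \ A * DD has row/column norms of similar size
lambda = eig (AA);        # balancing is a similarity transform, so these are
                          # also the eigenvalues of A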
bw = bandwidth (A, type)
[lower, upper] = bandwidth (A)

Compute the bandwidth of A.
The type argument is the string "lower" for the lower bandwidth and "upper" for the upper bandwidth. If no type is specified, return both the lower and upper bandwidths of A.
The lower/upper bandwidth of a matrix is the number of subdiagonals/superdiagonals with nonzero entries.
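For example, a tridiagonal matrix has lower and upper bandwidth 1:

A = [2, 1, 0; 1, 2, 1; 0, 1, 2];
bandwidth (A, "lower")            # ⇒ 1
[lower, upper] = bandwidth (A)    # ⇒ lower = 1, upper = 1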
c = cond (A)
c = cond (A, p)

Compute the p-norm condition number of a matrix with respect to inversion. cond (A) is defined as norm (A, p) * norm (inv (A), p).
By default, p = 2 is used, which implies a (relatively slow) singular value decomposition. Other possible selections are p = 1, Inf, "fro", which are generally faster. For a full discussion of possible p values, see norm.
The condition number of a matrix quantifies how sensitive matrix inversion is to small changes in the matrix elements. Ideally the condition number will be close to 1. When the number is large, small changes (such as underflow or round-off error) can produce large changes in the result, and the output of numerical computations is unlikely to be accurate.
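For example (a sketch; the second matrix is nearly singular by construction):

cond (eye (3))             # ⇒ 1, perfectly conditioned
cond ([1, 0; 0, 1e-10])    # ⇒ 1e10, nearly singular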
c = condeig (a)
[v, lambda, c] = condeig (a)

Compute condition numbers of a matrix with respect to eigenvalues.

The condition numbers are the reciprocals of the cosines of the angles between the left and right eigenvectors. Large values indicate that the matrix is close to a matrix with multiple (repeated) eigenvalues.
The input a must be a square numeric matrix.
The outputs are:

v
    Right eigenvectors, the same as returned by [v, lambda] = eig (a).
lambda
    Diagonal matrix of eigenvalues, the same as returned by [v, lambda] = eig (a).
c
    Vector of condition numbers, one for each eigenvalue.
Example:

a = [1, 2; 3, 4];
c = condeig (a)
  ⇒ c =
       1.0150
       1.0150
d = det (A)
[d, rcond] = det (A)

Compute the determinant of A.
Return an estimate of the reciprocal condition number if requested.
Programming Notes: Routines from LAPACK are used for full matrices and code from UMFPACK is used for sparse matrices.
The determinant should not be used to check a matrix for singularity.
For that, use any of the condition number functions: cond, condest, rcond.
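For example, a minimal sketch:

A = [2, 1; 1, 2];
[d, r] = det (A);    # d = 3; r estimates the reciprocal condition number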
lambda = eig (A)
lambda = eig (A, B)
[V, lambda] = eig (A)
[V, lambda] = eig (A, B)
[V, lambda, W] = eig (A)
[V, lambda, W] = eig (A, B)
[…] = eig (A, balanceOption)
[…] = eig (A, B, algorithm)
[…] = eig (…, eigvalOption)

Compute the eigenvalues (lambda) and optionally the right eigenvectors (V) and the left eigenvectors (W) of a matrix or pair of matrices.
The flag balanceOption can be one of:

"balance" (default)
    Preliminary balancing is on.

"nobalance"
    Disables preliminary balancing.
The flag eigvalOption can be one of:

"matrix"
    Return the eigenvalues in a diagonal matrix. (default if 2 or 3 outputs are requested)

"vector"
    Return the eigenvalues in a column vector. (default if only 1 output is requested, e.g., lambda = eig (A))

The flag algorithm can be one of:

"chol"
    Use the Cholesky factorization of B. (default if A is symmetric (Hermitian) and B is symmetric (Hermitian) positive definite)

"qz"
    Use the QZ algorithm. (used whenever A or B are not symmetric)
                                   no flag   "chol"   "qz"
    both are symmetric             "chol"    "chol"   "qz"
    at least one is not symmetric  "qz"      "qz"     "qz"
The eigenvalues returned by eig are not ordered.
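For example, a sketch contrasting the default output forms (the diagonal matrix here is illustrative):

A = [2, 0; 0, 3];
lambda = eig (A);                  # column vector [2; 3] (1 output)
[V, D] = eig (A);                  # D = diag ([2, 3])    (2 outputs)
[V, lambda] = eig (A, "vector");   # eigenvalues as a vector despite 2 outputs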
G = givens (x, y)
[c, s] = givens (x, y)

Compute the Givens rotation matrix G.

The Givens matrix is a 2-by-2 orthogonal matrix

G = [c, s; -s', c]

such that

G * [x; y] = [*; 0]

with x and y scalars.
If two output arguments are requested, return the factors c and s rather than the Givens rotation matrix.
For example:

givens (1, 1)
  ⇒   0.70711   0.70711
     -0.70711   0.70711
Note: The Givens matrix represents a counterclockwise rotation of a 2-D plane and can be used to introduce zeros into a matrix prior to complete factorization.
S = gsvd (A, B)
[U, V, X, C, S] = gsvd (A, B)
[U, V, X, C, S] = gsvd (A, B, 0)

Compute the generalized singular value decomposition of (A, B).

The generalized singular value decomposition is defined by the following relations:

A = U*C*X'
B = V*S*X'
C'*C + S'*S = eye (columns (A))
The function gsvd normally returns just the vector of generalized singular values sqrt (diag (C'*C) ./ diag (S'*S)). If asked for five return values, it also computes U, V, X, and C.

If the optional third input is present, gsvd constructs the "economy-sized" decomposition, where the number of columns of U, V and the number of rows of C, S is less than or equal to the number of columns of A. This option is not yet implemented.
Programming Note: The code is a wrapper to the corresponding LAPACK routines dggsvd and zggsvd. If the matrices A and B are both rank deficient, LAPACK will return an incorrect factorization; programmers should avoid this combination.
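For example, a sketch with arbitrary random input (A and B must have the same number of columns):

A = rand (5, 3);
B = rand (4, 3);
s = gsvd (A, B);                  # vector of generalized singular values
[U, V, X, C, S] = gsvd (A, B);    # full factorization: A = U*C*X', B = V*S*X'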
See also: svd.
[G, y] = planerot (x)

Compute the Givens rotation matrix for the two-element column vector x.

The Givens matrix is a 2-by-2 orthogonal matrix

G = [c, s; -s', c]

such that

y = G * [x(1); x(2)] ≡ [*; 0]
Note: The Givens matrix represents a counterclockwise rotation of a 2-D plane and can be used to introduce zeros into a matrix prior to complete factorization.
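For example:

x = [3; 4];
[G, y] = planerot (x);    # y = [5; 0]; the rotation zeroes the second component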
x = inv (A)
[x, rcond] = inv (A)
[…] = inverse (…)

Compute the inverse of the square matrix A.
Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small.
In general it is best to avoid calculating the inverse of a matrix directly. For example, it is both faster and more accurate to solve a system of equations (A*x = b) with x = A \ b, rather than x = inv (A) * b.
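A minimal sketch contrasting the two approaches:

A = [4, 2; 1, 3];
b = [1; 2];
x1 = A \ b;          # preferred: solve the system directly
x2 = inv (A) * b;    # forms the inverse explicitly; slower and less accurate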
If called with a sparse matrix, then in general x will be a full matrix requiring significantly more storage. Avoid forming the inverse of a sparse matrix if possible.
Programming Note: inverse is an alias for inv and can be used interchangeably.
x = linsolve (A, b)
x = linsolve (A, b, opts)
[x, R] = linsolve (…)

Solve the linear system A*x = b.

With no options, this function is equivalent to the left division operator (x = A \ b) or the matrix-left-divide function (x = mldivide (A, b)).
Octave ordinarily examines the properties of the matrix A and chooses a solver that best matches the matrix. By passing a structure opts to linsolve you can inform Octave directly about the matrix A. In this case Octave will skip the matrix examination and proceed directly to solving the linear system.
Warning: If the matrix A does not have the properties listed in the opts structure then the result will not be accurate AND no warning will be given. When in doubt, let Octave examine the matrix and choose the appropriate solver as this step takes little time and the result is cached so that it is only done once per linear system.
Possible opts fields (set value to true/false):

LT
    A is lower triangular

UT
    A is upper triangular

UHESS
    A is upper Hessenberg (currently makes no difference)

SYM
    A is symmetric or complex Hermitian (currently makes no difference)

POSDEF
    A is positive definite

RECT
    A is general rectangular (currently makes no difference)

TRANSA
    Solve A'*x = b if true, rather than A*x = b
The optional second output R is the inverse condition number of A (zero if matrix is singular).
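For example, a sketch asserting that A is lower triangular so linsolve can go straight to forward substitution:

A = [2, 0; 1, 3];
b = [2; 5];
opts.LT = true;                    # promise: A is lower triangular
[x, R] = linsolve (A, b, opts);    # solved without examining A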
See also: mldivide, matrix_type, rcond.
type = matrix_type (A)
type = matrix_type (A, "nocompute")
A = matrix_type (A, type)
A = matrix_type (A, "upper", perm)
A = matrix_type (A, "lower", perm)
A = matrix_type (A, "banded", nl, nu)

Identify the matrix type or mark a matrix as a particular type.
This allows more rapid solutions of linear equations involving A to be performed.
Called with a single argument, matrix_type returns the type of the matrix and caches it for future use.

Called with more than one argument, matrix_type allows the type of the matrix to be defined.

If the option "nocompute" is given, the function will not attempt to guess the type if it is still unknown. This is useful for debugging purposes.

The possible matrix types depend on whether the matrix is full or sparse, and can be one of the following:
"unknown"
Remove any previously cached matrix type, and mark type as unknown.
"full"
Mark the matrix as full.
"positive definite"
Probable full positive definite matrix.
"diagonal"
Diagonal matrix. (Sparse matrices only)
"permuted diagonal"
Permuted diagonal matrix. The permutation does not need to be specifically indicated, as the structure of the matrix explicitly gives this. (Sparse matrices only)
"upper"
Upper triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted upper triangular with the permutations defined by the vector perm.
"lower"
Lower triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted lower triangular with the permutations defined by the vector perm.
"banded"
"banded positive definite"
Banded matrix with a band size of nl below the diagonal and nu above it. If nl and nu are 1, then the matrix is tridiagonal and treated with specialized code. In addition, the matrix can be marked as probably positive definite. (Sparse matrices only)
"singular"
The matrix is assumed to be singular and will be treated with a minimum norm solution.
Note that the matrix type will be discovered automatically on the first attempt to solve a linear equation involving A. Therefore matrix_type is only useful to give Octave hints of the matrix type. Incorrectly defining the matrix type will result in incorrect results from solutions of linear equations; it is entirely the responsibility of the user to correctly identify the matrix type.
Also, the test for positive definiteness is a low-cost test for a Hermitian matrix with a real positive diagonal. This does not guarantee that the matrix is positive definite, but only that it is a probable candidate. When such a matrix is factorized, a Cholesky factorization is first attempted, and if that fails the matrix is then treated with an LU factorization. Once the matrix has been factorized, matrix_type will return the correct classification of the matrix.
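For example, a sketch that marks a matrix as lower triangular so subsequent solves use a triangular solver immediately:

A = tril (rand (4)) + 4 * eye (4);   # a well conditioned lower triangular matrix
A = matrix_type (A, "lower");        # cache the type instead of detecting it
x = A \ ones (4, 1);                 # solved by forward substitution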
n = norm (A)
n = norm (A, p)
n = norm (A, p, opt)

Compute the p-norm of the matrix A.

If the second argument is not given, p = 2 is used.
If A is a matrix (or sparse matrix):

1
    1-norm, the largest column sum of the absolute values of A.

2
    Largest singular value of A.

Inf or "inf"
    Infinity norm, the largest row sum of the absolute values of A.

"fro"
    Frobenius norm of A, sqrt (sum (diag (A' * A))).

p > 1
    maximum norm (A*x, p) such that norm (x, p) == 1
If A is a vector or a scalar:

Inf or "inf"
    max (abs (A)).

-Inf
    min (abs (A)).

"fro"
    Frobenius norm of A, sqrt (sumsq (abs (A))).

0
    Hamming norm, the number of nonzero elements.

p > 1
    p-norm of A, (sum (abs (A) .^ p)) ^ (1/p).

p < 1
    the p-pseudonorm defined as above.
If opt is the value "rows", treat each row as a vector and compute its norm. The result is returned as a column vector. Similarly, if opt is "columns" or "cols", then compute the norm of each column and return a row vector.
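For example:

A = [3, 4; 0, 0];
norm (A, 1)            # ⇒ 4, the largest absolute column sum
norm (A, Inf)          # ⇒ 7, the largest absolute row sum
norm (A, 2, "rows")    # ⇒ [5; 0], the 2-norm of each row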
Z = null (A)
Z = null (A, tol)

Return an orthonormal basis Z of the null space of A.
The dimension of the null space Z is taken as the number of singular values of A not greater than tol. If the argument tol is missing, it is computed as

max (size (A)) * max (svd (A)) * eps
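For example, a rank-1 matrix has a one-dimensional null space:

A = [1, 2; 2, 4];    # rank 1
Z = null (A);        # a single unit-length column with A * Z ≈ 0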
B = orth (A)
B = orth (A, tol)

Return an orthonormal basis of the range space of A.
The dimension of the range space is taken as the number of singular values of A greater than tol. If the argument tol is missing, it is computed as
max (size (A)) * max (svd (A)) * eps
See also: null.
[y, h] = mgorth (x, v)

Orthogonalize a given column vector x with respect to a set of orthonormal vectors comprising the columns of v, using the modified Gram-Schmidt method.
On exit, y is a unit vector such that:

norm (y) = 1
v' * y = 0
x = [v, y]*h'
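For example, a minimal sketch in which v is a single orthonormal column:

v = [1; 0; 0];
x = [1; 1; 0];
[y, h] = mgorth (x, v);    # y = [0; 1; 0], a unit vector orthogonal to v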
B = pinv (A)
B = pinv (A, tol)

Return the Moore-Penrose pseudoinverse of A.
Singular values less than tol are ignored.
If the second argument is omitted, it is taken to be

tol = max ([rows(A), columns(A)]) * norm (A) * eps
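For example, the pseudoinverse exists even when the matrix is singular:

A = [1, 2; 2, 4];    # singular, so inv (A) warns of singularity
B = pinv (A);
x = B * [1; 2];      # minimum-norm least-squares solution of A*x = [1; 2]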
k = rank (A)
k = rank (A, tol)

Compute the rank of the matrix A, using the singular value decomposition.
The rank is taken to be the number of singular values of A that are greater than the specified tolerance tol. If the second argument is omitted, it is taken to be
tol = max (size (A)) * sigma(1) * eps;
where eps is machine precision and sigma(1) is the largest singular value of A.
The rank of a matrix is the number of linearly independent rows or columns and equals the dimension of the row and column space. The function orth may be used to compute an orthonormal basis of the column space.

For testing if a system A*x = b of linear equations is solvable, one can use

rank (A) == rank ([A b])

In this case, x = A \ b finds a particular solution x. The general solution is x plus the null space of the matrix A. The function null may be used to compute a basis of the null space.
Example:

A = [1 2 3; 4 5 6; 7 8 9];
rank (A)
  ⇒ 2
In this example, the number of linearly independent rows is only 2 because the final row is a linear combination of the first two rows:
A(3,:) == -A(1,:) + 2 * A(2,:)
c = rcond (A)

Compute the 1-norm estimate of the reciprocal condition number as returned by LAPACK.
If the matrix is well-conditioned then c will be near 1 and if the matrix is poorly conditioned it will be close to 0.
The matrix A must not be sparse. If the matrix is sparse then condest (A) or rcond (full (A)) should be used instead.
t = trace (A)

Compute the trace of A, the sum of the elements along the main diagonal.

The implementation is straightforward: sum (diag (A)).
See also: eig.
r = rref (A)
r = rref (A, tol)
[r, k] = rref (…)

Return the reduced row echelon form of A.

tol defaults to eps * max (size (A)) * norm (A, inf).
The optional return argument k contains the vector of "bound variables", which are those columns on which elimination has been performed.
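For example:

[r, k] = rref ([1, 2, 3; 4, 5, 6])
  ⇒ r = [1, 0, -1; 0, 1, 2]
    k = [1, 2]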
n = vecnorm (A)
n = vecnorm (A, p)
n = vecnorm (A, p, dim)

Return the vector p-norm of the elements of array A along dimension dim.

The p-norm of a vector is defined as

p-norm (A, p) = (sum (abs (A) .^ p)) ^ (1/p)
The input p must be a positive scalar. If omitted it defaults to 2 (Euclidean norm or distance). Other special values of p are 1 (Manhattan norm, the sum of absolute values) and Inf (the absolute value of the largest element).
The input dim specifies the dimension of the array on which the function operates and must be a positive integer. If omitted the first non-singleton dimension is used.
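For example:

A = [3, 0; 4, 0];
vecnorm (A)          # ⇒ [5, 0], the 2-norm of each column
vecnorm (A, 1, 2)    # ⇒ [3; 4], the 1-norm along each row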
See also: norm.