Compute aa = dd \ a * dd, in which aa is a matrix whose row and column norms are roughly equal in magnitude, and dd = p * d, in which p is a permutation matrix and d is a diagonal matrix of powers of two. This allows the equilibration to be computed without roundoff. Results of eigenvalue calculation are typically improved by balancing first.
If four output values are requested, compute aa = cc*a*dd and bb = cc*b*dd, in which aa and bb have non-zero elements of approximately the same magnitude and cc and dd are permuted diagonal matrices as in dd for the algebraic eigenvalue problem.
The eigenvalue balancing option opt may be one of:

"N", "n"
    No balancing; arguments copied, transformation(s) set to identity.
"P", "p"
    Permute argument(s) to isolate eigenvalues where possible.
"S", "s"
    Scale to improve accuracy of computed eigenvalues.
"B", "b"
    Permute and scale, in that order. Rows/columns of a (and b) that are isolated by permutation are not scaled. This is the default behavior.
Algebraic eigenvalue balancing uses standard LAPACK routines.
Generalized eigenvalue problem balancing uses Ward's algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).
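The scale-without-roundoff idea can be illustrated with a simplified, Parlett–Reinsch-style balancing pass; the sketch below (Python/NumPy; `balance_sketch` and the radix-2 one-sweep loop are illustrative simplifications, not Octave's actual LAPACK routine) scales each row/column pair by powers of two until their norms roughly match:

```python
import numpy as np

def balance_sketch(a, radix=2.0):
    """Simplified balancing: scale row/column i by powers of the radix
    until its off-diagonal row and column norms are roughly equal.
    Returns (dd, aa) with aa = inv(dd) @ a @ dd."""
    a = a.astype(float).copy()
    n = a.shape[0]
    d = np.ones(n)
    converged = False
    while not converged:
        converged = True
        for i in range(n):
            c = np.abs(a[:, i]).sum() - abs(a[i, i])  # off-diagonal column norm
            r = np.abs(a[i, :]).sum() - abs(a[i, i])  # off-diagonal row norm
            if c == 0.0 or r == 0.0:
                continue
            f = 1.0
            while c < r / radix:              # column too small: grow it
                c *= radix; r /= radix; f *= radix
            while c >= r * radix:             # column too large: shrink it
                c /= radix; r *= radix; f /= radix
            if f != 1.0:
                converged = False
                d[i] *= f
                a[:, i] *= f                  # A <- A * D
                a[i, :] /= f                  # A <- D^-1 * A
    return np.diag(d), a

a = np.array([[1.0, 1e6],
              [1e-6, 1.0]])
dd, aa = balance_sketch(a)

# aa = dd \ a * dd, and dd holds exact powers of two, so no roundoff.
assert np.allclose(aa, np.linalg.solve(dd, a) @ dd)
```

Because every scale factor is an exact power of the floating-point radix, multiplying and dividing by it changes only exponents, never mantissas.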
Compute the (two-norm) condition number of a matrix.
cond (a) is defined as norm (a) * norm (inv (a)), and is computed via a singular value decomposition.
See also: norm, svd, rank.
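Since the two-norm of a is the largest singular value and the two-norm of inv (a) is the reciprocal of the smallest, the condition number reduces to a ratio of singular values. A quick numerical check (Python/NumPy, illustrative):

```python
import numpy as np

a = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Singular values come back in descending order.
s = np.linalg.svd(a, compute_uv=False)
cond_svd = s[0] / s[-1]                  # sigma_max / sigma_min

# Same quantity via the defining formula norm(a) * norm(inv(a)).
cond_def = np.linalg.norm(a, 2) * np.linalg.norm(np.linalg.inv(a), 2)

assert np.isclose(cond_svd, cond_def)
```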
Compute the determinant of a using LAPACK. Return an estimate of the reciprocal condition number if requested.
If a is a vector of length rows (b), return diag (a) * b (but computed much more efficiently).
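The efficiency gain comes from scaling each row of b directly instead of forming and multiplying by the full diagonal matrix; in Python/NumPy terms (illustrative):

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])          # length equals rows(b)
b = np.arange(12.0).reshape(3, 4)

# Naive: build the diagonal matrix, then do a full matrix product.
naive = np.diag(a) @ b

# Efficient: scale row i of b by a[i] via broadcasting -- O(m*n) work.
fast = a[:, None] * b

assert np.allclose(naive, fast)
```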
Compute the dot product of two vectors. If x and y are matrices, calculate the dot product along the first non-singleton dimension. If the optional argument dim is given, calculate the dot product along this dimension.
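The "first non-singleton dimension" rule means that for ordinary matrices the dot product is taken down the columns; with an explicit dimension it is taken along that axis instead. A sketch of both cases (Python/NumPy, illustrative):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Along the first non-singleton dimension (rows): one result per column,
# like Octave's dot (x, y).
d0 = np.sum(x * y, axis=0)

# With an explicit dimension (Octave's dot (x, y, 2) -> axis=1 here).
d1 = np.sum(x * y, axis=1)

assert np.allclose(d0, [26.0, 44.0])   # 1*5+3*7, 2*6+4*8
assert np.allclose(d1, [17.0, 53.0])   # 1*5+2*6, 3*7+4*8
```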
The eigenvalues (and eigenvectors) of a matrix are computed in a multi-step process which begins with a Hessenberg decomposition, followed by a Schur decomposition, from which the eigenvalues are apparent. The eigenvectors, when desired, are computed by further manipulations of the Schur decomposition.

The eigenvalues returned by eig are not ordered.
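The Hessenberg-then-Schur pipeline can be traced explicitly; a sketch using SciPy's decomposition routines (illustrative, and assumes SciPy is available):

```python
import numpy as np
from scipy.linalg import hessenberg, schur

a = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Step 1: Hessenberg decomposition  a = q @ h @ q.T.
h, q = hessenberg(a, calc_q=True)
assert np.allclose(q @ h @ q.T, a)

# Step 2: Schur decomposition of h; the eigenvalues appear on the
# diagonal of the (quasi-)triangular factor t.
t, z = schur(h)
eigs = np.sort(np.diag(t))

assert np.allclose(eigs, [1.0, 3.0])   # eigenvalues of [[2,1],[1,2]]
```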
givens (1, 1)
⇒   0.70711   0.70711
   -0.70711   0.70711
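givens (x, y) returns a 2x2 orthogonal matrix g = [c s; -s c] chosen so that g * [x; y] has a zero second entry. The construction can be sketched directly (Python/NumPy; `givens_sketch` is an illustrative name):

```python
import numpy as np

def givens_sketch(x, y):
    """2x2 Givens rotation g with g @ [x, y] = [r, 0]."""
    r = np.hypot(x, y)
    c, s = x / r, y / r
    return np.array([[c, s],
                     [-s, c]])

g = givens_sketch(1.0, 1.0)
assert np.allclose(g, [[0.70711, 0.70711],
                       [-0.70711, 0.70711]], atol=1e-5)
# The rotation annihilates the second component.
assert np.allclose(g @ [1.0, 1.0], [np.sqrt(2.0), 0.0])
```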
Compute the inverse of the square matrix a. Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small.
Identify the matrix type or mark a matrix as a particular type. This allows more rapid solutions of linear equations involving a to be performed. Called with a single argument, matrix_type returns the type of the matrix and caches it for future use. Called with more than one argument, matrix_type allows the type of the matrix to be defined.

The possible matrix types depend on whether the matrix is full or sparse, and can be one of the following:
"unknown"
    Remove any previously cached matrix type, and mark the type as unknown.
"full"
    Mark the matrix as full.
"positive definite"
    Full positive definite matrix.
"diagonal"
    Diagonal matrix. (Sparse matrices only)
"permuted diagonal"
    Permuted diagonal matrix. The permutation does not need to be specifically indicated, as the structure of the matrix explicitly gives this. (Sparse matrices only)
"upper"
    Upper triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted upper triangular matrix with the permutations defined by the vector perm.
"lower"
    Lower triangular. If the optional third argument perm is given, the matrix is assumed to be a permuted lower triangular matrix with the permutations defined by the vector perm.
"banded", "banded positive definite"
    Banded matrix with a band size of nl below the diagonal and nu above it. If nl and nu are 1, the matrix is tridiagonal and treated with specialized code. In addition, the matrix can be marked as positive definite. (Sparse matrices only)
"singular"
    The matrix is assumed to be singular and will be treated with a minimum norm solution.
Note that the matrix type will be discovered automatically on the first attempt to solve a linear equation involving a. Therefore matrix_type is only useful to give Octave hints of the matrix type. Incorrectly defining the matrix type will result in incorrect results from solutions of linear equations, and so it is entirely the responsibility of the user to correctly identify the matrix type.
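The kind of structural probing such a cache avoids repeating is cheap to sketch. A simplified dense-matrix classifier in Python/NumPy (illustrative only; `matrix_type_sketch` covers just a few of the types above and is not Octave's detection logic):

```python
import numpy as np

def matrix_type_sketch(a):
    """Classify a dense square matrix by structure (simplified)."""
    if np.allclose(a, np.diag(np.diag(a))):
        return "diagonal"
    if np.allclose(a, np.triu(a)):
        return "upper"
    if np.allclose(a, np.tril(a)):
        return "lower"
    # Symmetric with strictly positive eigenvalues -> positive definite.
    if np.allclose(a, a.T) and np.all(np.linalg.eigvalsh(a) > 0):
        return "positive definite"
    return "full"

assert matrix_type_sketch(np.triu(np.ones((3, 3)))) == "upper"
assert matrix_type_sketch(np.array([[2.0, 1.0],
                                    [1.0, 2.0]])) == "positive definite"
```

Each classification unlocks a cheaper solver (back-substitution for triangular, Cholesky for positive definite), which is exactly why caching the answer pays off.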
Compute the p-norm of the matrix a. If the second argument is missing, p = 2 is assumed.
If a is a matrix:

p = 1
    1-norm, the largest column sum of the absolute values of a.
p = 2
    Largest singular value of a.
p = Inf
    Infinity norm, the largest row sum of the absolute values of a.
p = "fro"
    Frobenius norm of a, sqrt (sum (diag (a' * a))).
If a is a vector or a scalar:

p = Inf
    max (abs (a)).
p = -Inf
    min (abs (a)).
p = "fro"
    Frobenius norm of a, sqrt (sumsq (abs (a))).
other p
    p-norm of a, (sum (abs (a) .^ p)) ^ (1/p).
See also: cond, svd.
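These definitions can be cross-checked against a general-purpose norm routine; each case below computes the formula by hand and compares it with the library result (Python/NumPy, illustrative):

```python
import numpy as np

a = np.array([[1.0, -2.0],
              [3.0, 4.0]])

# 1-norm: largest column sum of absolute values.
assert np.isclose(np.linalg.norm(a, 1), np.abs(a).sum(axis=0).max())
# Infinity norm: largest row sum of absolute values.
assert np.isclose(np.linalg.norm(a, np.inf), np.abs(a).sum(axis=1).max())
# 2-norm: largest singular value.
assert np.isclose(np.linalg.norm(a, 2),
                  np.linalg.svd(a, compute_uv=False)[0])
# Frobenius norm: sqrt(sum(diag(a' * a))).
assert np.isclose(np.linalg.norm(a, 'fro'), np.sqrt(np.trace(a.T @ a)))

# Vector p-norm: (sum(abs(v) .^ p)) ^ (1/p).
v = np.array([3.0, -4.0])
p = 3
assert np.isclose(np.linalg.norm(v, p),
                  (np.abs(v) ** p).sum() ** (1.0 / p))
```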
Return an orthonormal basis of the null space of a.
The dimension of the null space is taken as the number of singular values of a not greater than tol. If the argument tol is missing, it is computed as
max (size (a)) * max (svd (a)) * eps
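In terms of the SVD, the null-space basis consists of the right singular vectors whose singular values are at or below tol; a sketch (Python/NumPy; `null_sketch` is an illustrative name):

```python
import numpy as np

def null_sketch(a, tol=None):
    u, s, vh = np.linalg.svd(a)
    if tol is None:
        tol = max(a.shape) * s.max() * np.finfo(float).eps
    # Right singular vectors for singular values <= tol span the null space.
    rank = int((s > tol).sum())
    return vh[rank:].T.conj()

a = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, so a 1-dimensional null space
z = null_sketch(a)
assert z.shape == (2, 1)
assert np.allclose(a @ z, 0.0)
```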
Return an orthonormal basis of the range space of a.
The dimension of the range space is taken as the number of singular values of a greater than tol. If the argument tol is missing, it is computed as
max (size (a)) * max (svd (a)) * eps
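Dually, a range-space basis is given by the left singular vectors whose singular values exceed tol (Python/NumPy sketch; `orth_sketch` is an illustrative name):

```python
import numpy as np

def orth_sketch(a, tol=None):
    u, s, vh = np.linalg.svd(a)
    if tol is None:
        tol = max(a.shape) * s.max() * np.finfo(float).eps
    # Left singular vectors for singular values > tol span the range.
    return u[:, s > tol]

a = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1
q = orth_sketch(a)
assert q.shape == (2, 1)
# Orthonormal basis: q' * q is the identity.
assert np.allclose(q.T @ q, np.eye(1))
# q spans the column space of a: projecting a onto it changes nothing.
assert np.allclose(q @ (q.T @ a), a)
```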
Return the pseudoinverse of x. Singular values less than tol are ignored.
If the second argument is omitted, it is assumed that

tol = max (size (x)) * sigma_max (x) * eps,

where sigma_max (x) is the maximal singular value of x.
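The pseudoinverse follows directly from the SVD by inverting only the singular values above tol and ignoring the rest; a sketch (Python/NumPy; `pinv_sketch` is an illustrative name):

```python
import numpy as np

def pinv_sketch(x, tol=None):
    u, s, vh = np.linalg.svd(x, full_matrices=False)
    if tol is None:
        tol = max(x.shape) * s.max() * np.finfo(float).eps
    # Invert singular values above tol; singular values <= tol are ignored.
    s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])
    return (vh.T.conj() * s_inv) @ u.T.conj()   # V * S^+ * U'

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
p = pinv_sketch(x)
assert np.allclose(p, np.linalg.pinv(x))
# Moore-Penrose property:  x @ p @ x = x.
assert np.allclose(x @ p @ x, x)
```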
Compute the rank of a, using the singular value decomposition. The rank is taken to be the number of singular values of a that are greater than the specified tolerance tol. If the second argument is omitted, it is taken to be
tol = max (size (a)) * sigma(1) * eps;
where eps is machine precision and sigma(1) is the largest singular value of a.
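Counting the singular values against that tolerance reproduces the rank; a sketch (Python/NumPy; `rank_sketch` is an illustrative name):

```python
import numpy as np

def rank_sketch(a, tol=None):
    s = np.linalg.svd(a, compute_uv=False)   # descending order
    if tol is None:
        tol = max(a.shape) * s[0] * np.finfo(float).eps
    return int((s > tol).sum())

a = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # 2 * row 1: linearly dependent
              [0.0, 1.0, 1.0]])
assert rank_sketch(a) == 2
assert rank_sketch(a) == np.linalg.matrix_rank(a)
```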
Compute the trace of a,
sum (diag (a)).
Return the reduced row echelon form of a. tol defaults to eps * max (size (a)) * norm (a, inf).
Called with two return arguments, k returns the vector of "bound variables", which are those columns on which elimination has been performed.
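Gauss–Jordan elimination with the tolerance used as a pivot threshold yields both outputs; a compact sketch (Python/NumPy; `rref_sketch` is an illustrative name, not Octave's implementation):

```python
import numpy as np

def rref_sketch(a, tol=None):
    r = a.astype(float).copy()
    m, n = r.shape
    if tol is None:
        tol = np.finfo(float).eps * max(m, n) * np.linalg.norm(r, np.inf)
    bound = []                      # columns where elimination occurred
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = row + np.argmax(np.abs(r[row:, col]))
        if abs(r[pivot, col]) <= tol:
            r[row:, col] = 0.0      # numerically zero column: skip it
            continue
        r[[row, pivot]] = r[[pivot, row]]     # partial pivoting
        r[row] /= r[row, col]                 # normalize the pivot row
        others = [i for i in range(m) if i != row]
        r[others] -= np.outer(r[others, col], r[row])  # clear the column
        bound.append(col)
        row += 1
    return r, bound

a = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0]])
r, k = rref_sketch(a)
assert np.allclose(r, [[1.0, 2.0, 0.0],
                       [0.0, 0.0, 1.0]])
assert k == [0, 2]                  # the "bound variables"
```

Column 1 never hosts a pivot (it is a multiple of column 0), so it is absent from k: that is exactly the bound-variable information the second return value carries.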
This document was generated on December 26, 2007 using texi2html 1.76.