Linear Algebra
Table[f,{i,m},{j,n}]   build an m×n matrix where f is a function of i and j that gives the value of the i,jth entry
Array[f,{m,n}]   build an m×n matrix whose i,jth entry is f[i,j]
ConstantArray[a,{m,n}]   build an m×n matrix with all entries equal to a
DiagonalMatrix[list]   generate a diagonal matrix with the elements of list on the diagonal
IdentityMatrix[n]   generate an n×n identity matrix
Normal[SparseArray[{{i1,j1}->v1,{i2,j2}->v2,…},{m,n}]]   make a matrix with nonzero values vk at positions {ik,jk}

Functions for constructing matrices.

This generates a 2×2 matrix whose i,jth entry is a[i,j]:
Here is another way to produce the same matrix:
This creates a 3×2 matrix of zeros:
DiagonalMatrix makes a matrix with zeros everywhere except on the leading diagonal:
This makes a 3×4 matrix with two nonzero values filled in:
MatrixForm prints the matrix in a two-dimensional form:

Table[0,{m},{n}]   a matrix of zeros
Table[If[i>=j,1,0],{i,m},{j,n}]   a lower-triangular matrix
RandomReal[{0,1},{m,n}]   a matrix with random numerical entries

Constructing special types of matrices.

Table evaluates If[i≥j,a++,0] separately for each element, to give a matrix with sequentially increasing entries in the lower-triangular part:

SparseArray provides another way of constructing special types of matrices; for example, a pattern rule sets up a general lower-triangular matrix:

Getting and Setting Pieces of Matrices

m[[i,j]]   the i,jth entry
m[[i]]   the ith row
m[[All,i]]   the ith column
Take[m,{i0,i1},{j0,j1}]   the submatrix with rows i0 through i1 and columns j0 through j1
m[[i0;;i1,j0;;j1]]   the submatrix with rows i0 through i1 and columns j0 through j1
m[[{i1,…,ir},{j1,…,js}]]   the r×s submatrix with elements having row indices ik and column indices jk
Tr[m,List]   elements on the diagonal
ArrayRules[m]   positions of nonzero elements

Ways to get pieces of matrices.

Matrices in the Wolfram Language are represented as lists of lists. You can use all the standard Wolfram Language list-manipulation operations on matrices.

Here is a sample 3×3 matrix:
This picks out the second row of the matrix:
Here is the second column of the matrix:
This picks out a submatrix:

m={{a11,a12,…},{a21,a22,…},…}   assign m to be a matrix
m[[i,j]]=a   reset element {i,j} to be a
m[[i]]=a   reset all elements in row i to be a
m[[i]]={a1,a2,…}   reset elements in row i to be {a1,a2,…}
m[[i0;;i1]]={v1,v2,…}   reset rows i0 through i1 to be vectors {v1,v2,…}
m[[All,j]]=a   reset all elements in column j to be a
m[[All,j]]={a1,a2,…}   reset elements in column j to be {a1,a2,…}
m[[i0;;i1,j0;;j1]]={{a11,a12,…},{a21,a22,…},…}   reset the submatrix with rows i0 through i1 and columns j0 through j1 to new values

Resetting parts of matrices.

This resets the 2,2 element to be x, then shows the whole matrix:
This resets all elements in the second column to be z:
This separately resets the three elements in the second column:
This increments all the values in the second column:
A range of indices can be specified by using ;; (Span). This resets the first two rows to be new vectors:
This resets elements in the first and third columns of each row:
This resets elements in the first and third columns of rows 2 through 3:
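For illustration, here is a minimal sketch of part assignment, assuming a 3×3 identity matrix as the starting point (the matrix and replacement values are illustrative, not those from the original examples):

m = IdentityMatrix[3];       (* assumed starting matrix *)
m[[2, 2]] = x;               (* reset the 2,2 element *)
m[[All, 3]] = {a, b, c};     (* reset the third column elementwise *)
m
(* {{1, 0, a}, {0, x, b}, {0, 0, c}} *)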
Scalars, Vectors, and Matrices

The Wolfram Language represents matrices and vectors using lists. Anything that is not a list is considered a scalar. A vector in the Wolfram Language consists of a list of scalars. A matrix consists of a list of vectors, representing each of its rows. In order to be a valid matrix, all the rows must be the same length, so that the elements of the matrix effectively form a rectangular array.

VectorQ[expr]   give True if expr has the form of a vector, and False otherwise
MatrixQ[expr]   give True if expr has the form of a matrix, and False otherwise
Dimensions[expr]   a list of the dimensions of a vector or matrix

Functions for testing the structure of vectors and matrices.

The list {a,b,c} has the form of a vector:
Anything that is not manifestly a list is treated as a scalar, so applying VectorQ gives False:
For a vector, Dimensions gives a list with a single element equal to the result from Length:
This object does not count as a matrix because its rows are of different lengths:

Operations on Scalars, Vectors, and Matrices

Most mathematical functions in the Wolfram Language are set up to apply themselves separately to each element in a list. This is true in particular of all functions that carry the attribute Listable. A consequence is that most mathematical functions are applied element by element to matrices and vectors.

Log applies itself separately to each element in the vector:
The same is true for a matrix, or, for that matter, for any nested list:
The differentiation function D also applies separately to each element in a list:
The sum of two vectors is carried out element by element:
If you try to add two vectors with different lengths, you get an error:
This adds the scalar 1 to each element of the vector:
Any object that is not manifestly a list is treated as a scalar. Here c is treated as a scalar, and added separately to each element in the vector:
This multiplies each element in the vector by the scalar k:

It is important to realize that the Wolfram Language treats an object as a vector in a particular operation only if the object is explicitly a list at the time when the operation is done. If the object is not explicitly a list, the Wolfram Language always treats it as a scalar. This means that you can get different results depending on whether you assign a particular object to be a list before or after you do a particular operation.

The object p is treated as a scalar, and added separately to each element in the vector:
This is what happens if you now replace p by the list {c,d}:
You would have gotten a different result if you had replaced p by {c,d} before you did the first operation:

Multiplying Vectors and Matrices

cv, cm, etc.   multiply each element by a scalar
u.v, v.m, m.v, m1.m2, etc.   vector and matrix multiplication
Cross[u,v]   vector cross product (also input as u×v)
Outer[Times,t,u]   outer product
KroneckerProduct[m1,m2,…]   Kronecker product

Different kinds of vector and matrix multiplication.

This multiplies each element of the vector by the scalar k:
The "dot" operator gives the scalar product of two vectors:
You can also use dot to multiply a matrix by a vector:
Dot is also the notation for matrix multiplication in the Wolfram Language:
Here are definitions for a matrix m and a vector v:
This left-multiplies the vector v by m. The object v is effectively treated as a column vector in this case:
You can also use dot to right-multiply v by m. Now v is effectively treated as a row vector:
You can multiply m by v on both sides to get a scalar:
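For illustration, a minimal sketch of the three dot products, assuming a symbolic 2×2 matrix m and vector v (these particular definitions are illustrative):

m = {{a, b}, {c, d}};   (* assumed matrix *)
v = {x, y};             (* assumed vector *)
m . v        (* v treated as a column vector: {a x + b y, c x + d y} *)
v . m        (* v treated as a row vector: {a x + c y, b x + d y} *)
v . m . v    (* a scalar: x (a x + c y) + y (b x + d y) *)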
For some purposes, you may need to represent vectors and matrices symbolically, without explicitly giving their elements. You can use Dot to represent multiplication of such symbolic objects.

Dot effectively acts here as a noncommutative form of multiplication:
It is, nevertheless, associative:
Dot products of sums are not automatically expanded out:

The "dot" operator gives "inner products" of vectors, matrices, and so on. In more advanced calculations, you may also need to construct outer or Kronecker products of vectors and matrices. You can use the general function Outer or KroneckerProduct to do this.

The outer product of two vectors is a matrix:
The outer product of a matrix and a vector is a rank three tensor:
Outer products are discussed in more detail in "Tensors".
The Kronecker product of a matrix and a vector is a matrix:
The Kronecker product of a pair of 2×2 matrices is a 4×4 matrix:

v[[i]] or Part[v,i]   give the ith element in the vector v
c v   scalar multiplication of c times the vector v
u.v   dot product of two vectors
Norm[v]   give the norm of v
Normalize[v]   give a unit vector in the direction of v
Standardize[v]   shift v to have zero mean and unit sample variance
Standardize[v,f1]   shift v by f1[v] and scale to have unit sample variance

Basic vector operations.

This is a vector in three dimensions:
This gives a vector u in the direction opposite to v with twice the magnitude:
This reassigns the first component of u to be its negative:
This gives the dot product of u and v:
This is the unit vector in the same direction as v:
This verifies that the norm is 1:
Transform v to have zero mean and unit sample variance:
This shows the transformed values have mean 0 and variance 1:

Two vectors are orthogonal if their dot product is zero. A set of vectors is orthonormal if they are all unit vectors and are pairwise orthogonal.

Projection[u,v]   give the orthogonal projection of u onto v
Orthogonalize[{v1,v2,…}]   generate an orthonormal set from the given list of vectors

Orthogonal vector operations.

This gives the projection of u onto v:
p is a scalar multiple of v:
Starting from the set of vectors {u,v}, this finds an orthonormal set of two vectors:
When one of the vectors is linearly dependent on the vectors preceding it, the corresponding position in the result will be a zero vector:

Inverse[m]   find the inverse of a square matrix

Matrix inversion.

Here is a simple 2×2 matrix:
This gives the inverse of m. In producing this formula, the Wolfram Language implicitly assumes that the determinant ad-bc is nonzero:
Multiplying the inverse by the original matrix should give the identity matrix:
You have to use Together to clear the denominators and get back a standard identity matrix:
Here is a matrix of rational numbers:
The Wolfram Language finds the exact inverse of the matrix:
Multiplying by the original matrix gives the identity matrix:
If you try to invert a singular matrix, the Wolfram Language prints a warning message and returns the input unchanged:

If you give a matrix with exact symbolic or numerical entries, the Wolfram Language gives the exact inverse. If, on the other hand, some of the entries in your matrix are approximate real numbers, then the Wolfram Language finds an approximate numerical result.

Here is a matrix containing approximate real numbers:
This finds the numerical inverse:
Multiplying by the original matrix gives you an identity matrix with small round-off errors:
You can get rid of small off-diagonal terms using Chop:

When you try to invert a matrix with exact numerical entries, the Wolfram Language can always tell whether or not the matrix is singular.
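For illustration, a sketch of the exact symbolic case, assuming the generic 2×2 matrix {{a,b},{c,d}}:

m = {{a, b}, {c, d}};   (* assumed matrix; determinant a d - b c taken nonzero *)
Inverse[m]
(* {{d/(a d - b c), -(b/(a d - b c))}, {-(c/(a d - b c)), a/(a d - b c)}} *)
Together[Inverse[m] . m]   (* Together clears the denominators *)
(* {{1, 0}, {0, 1}} *)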
When you invert an approximate numerical matrix, the Wolfram Language usually cannot tell for certain whether or not the matrix is singular: all it can tell is, for example, that the determinant is small compared to the entries of the matrix. When the Wolfram Language suspects that you are trying to invert a singular numerical matrix, it prints a warning.

The Wolfram Language prints a warning if you invert a numerical matrix that it suspects is singular:
This matrix is singular, but the warning is different, and the result is useless:

If you work with high-precision approximate numbers, the Wolfram Language will keep track of the precision of matrix inverses that you generate.

This generates a 6×6 numerical matrix with entries of 20-digit precision:
This takes the matrix, multiplies it by its inverse, and shows the first row of the result:
This generates a 20-digit numerical approximation to a 6×6 Hilbert matrix. Hilbert matrices are notoriously hard to invert numerically:
The result is still correct, but the zeros now have lower accuracy:

Inverse works only on square matrices. "Advanced Matrix Operations" discusses the function PseudoInverse, which can also be used with nonsquare matrices.

Transpose[m]   transpose of m
Det[m]   determinant of m
Minors[m]   matrix of minors of m
Tr[m]   trace of m
MatrixRank[m]   rank of m

Some basic matrix operations.

Transposing a matrix interchanges the rows and columns in the matrix. If you transpose an m×n matrix, you get an n×m matrix as the result.

Transposing a 2×3 matrix gives a 3×2 result:

Minors[m,k] gives the determinants of the k×k submatrices obtained by picking each possible set of k rows and k columns from m. Note that you can apply Minors to rectangular, as well as square, matrices.

Here is the determinant of a simple 2×2 matrix:
This generates a 3×3 matrix whose i,jth entry is a[i,j]:
Here is the determinant of m:

The trace or spur of a matrix, Tr[m], is the sum of the terms on the leading diagonal.

This finds the trace of a simple 2×2 matrix:

The rank of a matrix is the number of linearly independent rows or columns.

This finds the rank of a matrix:

MatrixPower[m,n]   nth matrix power
MatrixExp[m]   matrix exponential

Powers and exponentials of matrices.

This gives the third matrix power of m:
It is equivalent to multiplying three copies of the matrix:
Here is the millionth matrix power:
This gives the matrix exponential of m:
Here is an approximation to the exponential of m, based on a power series approximation:

Many calculations involve solving systems of linear equations. In many cases, you will find it convenient to write down the equations explicitly, and then solve them using Solve. In some cases, however, you may prefer to convert the system of linear equations into a matrix equation, and then apply matrix manipulation operations to solve it. This approach is often useful when the system of equations arises as part of a general algorithm, and you do not know in advance how many variables will be involved.

Note that if your system of equations is sparse, so that most of the entries in the matrix are zero, then it is best to represent the matrix as a SparseArray object. As discussed in "Sparse Arrays: Linear Algebra", you can convert from symbolic equations to SparseArray objects using CoefficientArrays. All the functions described here work on SparseArray objects as well as ordinary matrices.

LinearSolve[m,b]   a vector x that solves the matrix equation m.x==b
NullSpace[m]   a list of basis vectors for the null space of m
MatrixRank[m]   the number of linearly independent rows or columns of m
RowReduce[m]   a reduced form of m obtained by Gaussian elimination

Solving and analyzing linear systems.

This gives two linear equations:
You can use Solve directly to solve these equations:
You can also get the vector of solutions by calling LinearSolve. The result is equivalent to the one you get from Solve:
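For illustration, a minimal sketch comparing Solve and LinearSolve on an assumed 2×2 system (the coefficient matrix here is illustrative):

m = {{1, 5}, {2, 1}};    (* assumed coefficient matrix *)
Solve[m . {x, y} == {a, b}, {x, y}]
(* {{x -> (-a + 5 b)/9, y -> (2 a - b)/9}} *)
LinearSolve[m, {a, b}]   (* the same solution, returned as a vector *)
(* {(-a + 5 b)/9, (2 a - b)/9} *)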
Another way to solve the equations is to invert the matrix m, and then multiply {a,b} by the inverse. This is not as efficient as using LinearSolve:
RowReduce performs a version of Gaussian elimination and can also be used to solve the equations:

Here is a simple matrix, corresponding to two identical linear equations:
The matrix has determinant zero:
LinearSolve cannot find a solution to the equation in this case:
There is a single basis vector for the null space of m:
Multiplying the basis vector for the null space by m gives the zero vector:
There is only 1 linearly independent row in m:

NullSpace and MatrixRank have to determine whether particular combinations of matrix elements are zero. For approximate numerical matrices, the Tolerance option can be used to specify how close to zero is considered good enough. For exact symbolic matrices, you may sometimes need to specify something like ZeroTest->(FullSimplify[#]==0&) to force more work to be done to test whether symbolic expressions are zero.

Here is a simple symbolic matrix with determinant zero:
The basis for the null space of m contains two vectors:
Multiplying m by any linear combination of these vectors gives zero:

An important feature of functions like LinearSolve and NullSpace is that they work with rectangular, as well as square, matrices.

Underdetermined   number of independent equations less than the number of variables; no solutions or many solutions may exist
Overdetermined   number of independent equations more than the number of variables; solutions may or may not exist
Nonsingular   number of independent equations equal to the number of variables, and determinant nonzero; a unique solution exists
Consistent   at least one solution exists
Inconsistent   no solutions exist

Classes of linear systems represented by rectangular matrices.

This matrix represents two equations, for three variables:
LinearSolve gives one of the possible solutions to this underdetermined set of equations:
When a matrix represents an underdetermined system of equations, the matrix has a nontrivial null space. In this case, the null space is spanned by a single vector:
If you take the solution you get from LinearSolve, and add any linear combination of the basis vectors for the null space, you still get a solution:

The number of independent equations is the rank of the matrix, MatrixRank[m]. The number of redundant equations is Length[NullSpace[m]]. Note that the sum of these quantities is always equal to the number of columns in m.

LinearSolve[m]   generate a LinearSolveFunction for solving equations of the form m.x==b

Generating LinearSolveFunction objects.

You can apply this to a vector:
You get the same result by giving the vector as an explicit second argument to LinearSolve:
But you can apply f to any vector you want:

LeastSquares[m,b]   give a vector x that solves the least-squares problem m.x==b

Solving least-squares problems.

This linear system is inconsistent:

Eigenvalues and Eigenvectors

Eigenvalues[m]   a list of the eigenvalues of m
Eigenvectors[m]   a list of the eigenvectors of m
Eigensystem[m]   a list of the form {eigenvalues,eigenvectors}
CharacteristicPolynomial[m,x]   the characteristic polynomial of m

Eigenvalues and eigenvectors.

Even for a matrix as simple as this, the explicit form of the eigenvalues is quite complicated:

If you give a matrix of approximate real numbers, the Wolfram Language will find the approximate numerical eigenvalues and eigenvectors.

Here is a 2×2 numerical matrix:
The matrix has two eigenvalues, in this case both real:
Here are the two eigenvectors of m:
Eigensystem computes the eigenvalues and eigenvectors at the same time. The assignment sets vals to the list of eigenvalues, and vecs to the list of eigenvectors:
This verifies that the first eigenvalue and eigenvector satisfy the appropriate condition:
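For illustration, a minimal sketch with an assumed symmetric 2×2 matrix:

m = {{2, 1}, {1, 2}};   (* assumed matrix *)
{vals, vecs} = Eigensystem[m]
(* {{3, 1}, {{1, 1}, {-1, 1}}} *)
m . First[vecs] == First[vals] First[vecs]   (* check m.v == λ v for the first pair *)
(* True *)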
This finds the eigenvalues of a random 4×4 matrix; for nonsymmetric matrices, the eigenvalues can have imaginary parts:

Here is a 3×3 matrix:
The matrix has three eigenvalues, all equal to zero:
There is, however, only one independent eigenvector for the matrix. Eigenvectors appends two zero vectors to give a total of three vectors in this case:
This gives the characteristic polynomial of the matrix:

Eigenvalues[m,k]   the largest k eigenvalues of m
Eigenvectors[m,k]   the corresponding eigenvectors of m
Eigensystem[m,k]   the largest k eigenvalues with corresponding eigenvectors
Eigenvalues[m,-k]   the smallest k eigenvalues of m
Eigenvectors[m,-k]   the corresponding eigenvectors of m
Eigensystem[m,-k]   the smallest k eigenvalues with corresponding eigenvectors

Finding largest and smallest eigenvalues.

Eigenvalues sorts numeric eigenvalues so that the ones with large absolute value come first. In many situations, you may be interested only in the largest or smallest eigenvalues of a matrix. You can get these efficiently using Eigenvalues[m,k] and Eigenvalues[m,-k].

This computes the exact eigenvalues of an integer matrix:
The eigenvalues are sorted in decreasing order of size:
This gives the three eigenvalues with largest absolute value:

Eigenvalues[{m,a}]   the generalized eigenvalues of m with respect to a
Eigenvectors[{m,a}]   the generalized eigenvectors of m with respect to a
Eigensystem[{m,a}]   the generalized eigensystem of m with respect to a
CharacteristicPolynomial[{m,a},x]   the generalized characteristic polynomial of m with respect to a

Generalized eigenvalues, eigenvectors, and characteristic polynomial.

The generalized eigenvalues correspond to zeros of the generalized characteristic polynomial Det[m-x a].

These two matrices share a one-dimensional null space, so one generalized eigenvalue is Indeterminate:
This gives a generalized characteristic polynomial:

Advanced Matrix Operations

SingularValueList[m]   the list of nonzero singular values of m
Norm[m]   the norm of m

Finding singular values and norms of matrices.

LUDecomposition[m]   the LU decomposition of m
CholeskyDecomposition[m]   the Cholesky decomposition of m

Decomposing square matrices into triangular forms.

When you create a LinearSolveFunction using LinearSolve[m], this often works by decomposing the matrix into triangular forms, and sometimes it is useful to be able to get such forms explicitly. LU decomposition effectively factors any square matrix into a product of lower- and upper-triangular matrices. Cholesky decomposition effectively factors any Hermitian positive-definite matrix into a product of a lower-triangular matrix and its Hermitian conjugate, which can be viewed as the analog of finding a square root of a matrix.

QRDecomposition[m]   the QR decomposition of m
SchurDecomposition[m]   the Schur decomposition of m

Orthogonal decompositions of matrices.

JordanDecomposition[m]   the Jordan decomposition of m

Functions related to eigenvalue problems.

Most square matrices can be reduced to a diagonal matrix of eigenvalues by applying a matrix of their eigenvectors as a similarity transformation. But even when there are not enough eigenvectors to do this, one can still reduce a matrix to a Jordan form in which there are both eigenvalues and Jordan blocks on the diagonal. Jordan decomposition in general writes any square matrix m in the form s.j.Inverse[s], where j is the Jordan form of m.
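For illustration, a minimal sketch of the Jordan case, assuming a 2×2 matrix with a repeated eigenvalue and only one independent eigenvector:

m = {{3, 1}, {0, 3}};             (* assumed defective matrix *)
{s, j} = JordanDecomposition[m];
j                                 (* a single 2×2 Jordan block *)
(* {{3, 1}, {0, 3}} *)
s . j . Inverse[s] == m
(* True *)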
Tensors

Tensors are mathematical objects that give generalizations of vectors and matrices. In the Wolfram System, a tensor is represented as a set of lists, nested to a certain number of levels. The nesting level is the rank of the tensor.

rank 0   scalar
rank 1   vector
rank 2   matrix
rank k   rank k tensor

Interpretations of nested lists.

A tensor of rank k is essentially a k-dimensional table of values. To be a true rank k tensor, it must be possible to arrange the elements in the table in a k-dimensional cuboidal array. There can be no holes or protrusions in the cuboid. The indices that specify a particular element in the tensor correspond to the coordinates in the cuboid. The dimensions of the tensor correspond to the side lengths of the cuboid.

One simple way that a rank k tensor can arise is in giving a table of values for a function of k variables. In physics, the tensors that occur typically have indices which run over the possible directions in space or spacetime. Notice, however, that there is no built-in notion of covariant and contravariant tensor indices in the Wolfram System: you have to set these up explicitly using metric tensors.

Table[f,{i1,n1},{i2,n2},…,{ik,nk}]   create an n1×n2×…×nk tensor whose elements are the values of f
Array[a,{n1,n2,…,nk}]   create an n1×n2×…×nk tensor with elements given by applying a to each set of indices
ArrayQ[t,n]   test whether t is a tensor of rank n
Dimensions[t]   give a list of the dimensions of a tensor
ArrayDepth[t]   find the rank of a tensor
MatrixForm[t]   print with the elements of t arranged in a two-dimensional array

Functions for creating and testing the structure of tensors.

Here is a 2×3×2 tensor:
This is another way to produce the same tensor:
MatrixForm displays the elements of the tensor in a two-dimensional array. You can think of the array as being a 2×3 matrix of column vectors:
This picks out a single element of the tensor:
The rank of a tensor is equal to the number of indices needed to specify each element. You can pick out subtensors by using a smaller number of indices.

Transpose[t]   transpose the first two indices in a tensor
Transpose[t,{p1,p2,…}]   transpose the indices in a tensor so that the kth becomes the pkth
Tr[t,f]   form the generalized trace of the tensor t
Outer[f,t1,t2]   form the generalized outer product of the tensors t1 and t2 with "multiplication operator" f
t1.t2   form the dot product of t1 and t2 (last index of t1 contracted with first index of t2)
Inner[f,t1,t2,g]   form the generalized inner product, with "multiplication operator" f and "addition operator" g

Tensor manipulation operations.

You can think of a rank k tensor as having k "slots" into which you insert indices. Applying Transpose is effectively a way of reordering these slots. If you think of the elements of a tensor as forming a k-dimensional cuboid, you can view Transpose as effectively rotating (and possibly reflecting) the cuboid.

In the most general case, Transpose allows you to specify an arbitrary reordering to apply to the indices of a tensor. The function Transpose[T,{p1,p2,…,pk}] gives you a new tensor T′ such that the value of T′[[i1,i2,…,ik]] is given by T[[ip1,ip2,…,ipk]]. If you originally had an np1×np2×…×npk tensor, then by applying Transpose, you will get an n1×n2×…×nk tensor.

Here is a matrix that you can also think of as a 2×3 tensor:
Applying Transpose gives you a 3×2 tensor. Transpose effectively interchanges the two "slots" for tensor indices:
The element m[[2,3]] in the original tensor becomes the element m[[3,2]] in the transposed tensor:
This produces a 2×3×1×2 tensor:
This transposes the first two levels of t:
The result is a 3×2×1×2 tensor:

If you have a tensor that contains lists of the same length at different levels, then you can use Transpose to effectively collapse different levels.

This collapses all three levels, giving a list of the elements on the "main diagonal":
This collapses only the first two levels:
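For illustration, a minimal sketch with an assumed 2×3×2 tensor of indexed symbols:

t = Array[a, {2, 3, 2}];    (* assumed tensor *)
Dimensions[Transpose[t]]    (* the first two levels are interchanged *)
(* {3, 2, 2} *)
Transpose[t][[3, 2]] === t[[2, 3]]
(* True *)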
You can also use Tr to extract diagonal elements of a tensor.

This forms the ordinary trace of a rank 3 tensor:
Here is a generalized trace, with elements combined into a list:
This combines diagonal elements only down to level 2:

Outer products, and their generalizations, are a way of building higher-rank tensors from lower-rank ones. Outer products are also sometimes known as direct, tensor, or Kronecker products. From a structural point of view, the tensor you get from Outer[f,t,u] has a copy of the structure of u inserted at the "position" of each element in t. The elements in the resulting structure are obtained by combining elements of t and u using the function f.

This gives the "outer f" of two vectors. The result is a matrix:
If you take the "outer f" of a length 3 vector with a length 2 vector, you get a 3×2 matrix:
The result of taking the "outer f" of a 2×2 matrix and a length 3 vector is a 2×2×3 tensor:
Here are the dimensions of the tensor:

If you take the generalized outer product of an m1×m2×…×mr tensor and an n1×n2×…×ns tensor, you get an m1×…×mr×n1×…×ns tensor. If the original tensors have ranks r and s, your result will be a rank r+s tensor. In terms of indices, the result of applying Outer to two tensors T[[i1,i2,…,ir]] and U[[j1,j2,…,js]] is the tensor V[[i1,i2,…,ir,j1,j2,…,js]] with elements f[T[[i1,i2,…,ir]],U[[j1,j2,…,js]]].

In doing standard tensor calculations, the most common function f to use in Outer is Times, corresponding to the standard outer product. Particularly in doing combinatorial calculations, however, it is often convenient to take f to be List. Using Outer, you can then get combinations of all possible elements in one tensor with all possible elements in the other.

In constructing Outer[f,t,u], you effectively insert a copy of u at every point in t. To form Inner[f,t,u], you effectively combine and collapse the last dimension of t and the first dimension of u. The idea is to take an m1×m2×…×mr tensor and an n1×n2×…×ns tensor, with mr=n1, and get an m1×m2×…×mr-1×n2×…×ns tensor as the result.

The simplest examples are with vectors. If you apply Inner to two vectors of equal length, you get a scalar. Inner[f,v1,v2,g] gives a generalization of the usual scalar product, with f playing the role of multiplication and g playing the role of addition.

This gives a generalization of the standard scalar product of two vectors:
This gives a generalization of a matrix product:
This gives a 3×2×3×1 tensor:
Here are the dimensions of the result:

You can think of Inner as performing a "contraction" of the last index of one tensor with the first index of another. If you want to perform contractions across other pairs of indices, you can do so by first transposing the appropriate indices into the first or last position, then applying Inner, and then transposing the result back.

In many applications of tensors, you need to insert signs to implement antisymmetry. The function Signature[{i1,i2,…}], which gives the signature of a permutation, is often useful for this purpose.
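For illustration, a minimal sketch of the generalized products, using symbolic f and g and assumed length-2 vectors:

Outer[f, {a, b}, {x, y}]      (* a copy of the second structure at each element of the first *)
(* {{f[a, x], f[a, y]}, {f[b, x], f[b, y]}} *)
Inner[f, {a, b}, {x, y}, g]   (* f plays the role of multiplication, g of addition *)
(* g[f[a, x], f[b, y]] *)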
Outer[f,t1,t2,…]   form a generalized outer product by combining the lowest-level elements of t1,t2,…
Outer[f,t1,t2,…,n]   treat only sublists at level n as separate elements
Outer[f,t1,t2,…,n1,n2,…]   treat only sublists at level ni in ti as separate elements
Inner[f,t1,t2,g]   form a generalized inner product using the lowest-level elements of t1
Inner[f,t1,t2,g,n]   contract index n of the first tensor with the first index of the second tensor

Treating only certain sublists in tensors as separate elements.

Here every single symbol is treated as a separate element:
But here only sublists at level 1 are treated as separate elements:

ArrayFlatten[t]   create a flat matrix from a block matrix t

Flattening block tensors.

Here is a block matrix (a matrix of matrices that can be viewed as blocks that fit edge to edge within a larger matrix):
Here is the matrix formed by piecing the blocks together:

Sparse Arrays: Linear Algebra

Many large-scale applications of linear algebra involve matrices that have many elements, but comparatively few that are nonzero. You can represent such sparse matrices efficiently in the Wolfram System using SparseArray objects, as discussed in "Sparse Arrays: Manipulating Lists". SparseArray objects work by having lists of rules that specify where nonzero values appear.

SparseArray[list]   a SparseArray version of an ordinary list
SparseArray[{{i1,j1}->v1,{i2,j2}->v2,…},{m,n}]   an m×n sparse array with element {ik,jk} having value vk
SparseArray[{{i1,j1},{i2,j2},…}->{v1,v2,…},{m,n}]   the same sparse array
Normal[array]   the ordinary list corresponding to a SparseArray

Specifying sparse arrays.

As discussed in "Sparse Arrays: Manipulating Lists", you can use patterns to specify collections of elements in sparse arrays. You can also have sparse arrays that correspond to tensors of any rank.

This makes a 50×50 sparse numerical matrix, with 148 nonzero elements:
This shows a visual representation of the matrix elements:
Here are the four largest eigenvalues of the matrix:
You can extract parts just like in an ordinary array:

You can apply most standard structural operations directly to SparseArray objects, just as you would to ordinary lists. When the results are sparse, they typically return SparseArray objects.

Dimensions[m]   the dimensions of an array
ArrayRules[m]   the rules for nonzero elements in an array
m[[i,j]]   element i,j
m[[i]]   the ith row
m[[All,j]]   the jth column
m[[i,j]]=v   reset element i,j

A few structural operations that can be done directly on SparseArray objects.

This gives the first column of m. It has only 2 nonzero elements:
This adds 3 to each element in the first column of m:
Now all the elements in the first column are nonzero:
This gives the rules for the nonzero elements on the second row:

SparseArray[rules]   generate a sparse array from rules
CoefficientArrays[{eqns1,eqns2,…},{x1,x2,…}]   get arrays of coefficients from equations
Import["file.mtx"]   import a sparse array from a file

Typical ways to get sparse arrays.

This generates a tridiagonal random matrix:
Even the tenth power of the matrix is still fairly sparse:
This extracts the coefficients as sparse arrays:
Here are the corresponding ordinary arrays:
This reproduces the original forms:
The coefficients of the quadratic part are given in a rank 3 tensor:
This again reproduces the original forms:

For machine-precision numerical sparse matrices, the Wolfram System supports standard file formats such as Matrix Market (.mtx) and Harwell–Boeing. You can import and export matrices in these formats using Import and Export.
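For illustration, a minimal sketch of building a sparse matrix from pattern rules and converting it back to an ordinary list (the dimensions and values here are assumed):

s = SparseArray[{{i_, i_} -> 2, {i_, j_} /; Abs[i - j] == 1 -> -1}, {4, 4}];   (* assumed tridiagonal pattern *)
Normal[s]
(* {{2, -1, 0, 0}, {-1, 2, -1, 0}, {0, -1, 2, -1}, {0, 0, -1, 2}} *)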