Compute a singular value decomposition:
Compute a singular value decomposition for an invertible matrix:
The matrix of singular values is also invertible:
Scope (18)
Basic Uses (7)
Find the singular value decomposition of a machine-precision matrix:
Singular value decomposition of a complex matrix:
Singular value decomposition for an exact matrix:
Singular value decomposition for an arbitrary-precision matrix:
Singular value decomposition of a symbolic matrix:
The singular value decomposition of a large numerical matrix is computed efficiently:
Singular value decomposition of a non-square matrix:
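The shapes involved in the full decomposition of a non-square matrix can be checked numerically. This is a NumPy sketch, not Wolfram Language, and the 2×3 matrix is an arbitrary example; note that NumPy returns the conjugate transpose of v directly as `vh`:

```python
import numpy as np

m = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # arbitrary 2x3 example

# full_matrices=True gives the full decomposition: u is 2x2, vh is 3x3
u, s, vh = np.linalg.svd(m, full_matrices=True)

# Rebuild the rectangular 2x3 matrix of singular values
sigma = np.zeros(m.shape)
np.fill_diagonal(sigma, s)

print(u.shape, sigma.shape, vh.shape)     # (2, 2) (2, 3) (3, 3)
print(np.allclose(m, u @ sigma @ vh))     # True
```

The product u.σ.vh reconstructs the original matrix to rounding error.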
Subsets of Singular Values (5)
Find the singular value decomposition associated with the three largest singular values of a matrix:
Unlike the full decomposition, these matrices do not recreate any part of the matrix exactly:
Find the singular value decomposition associated with the three smallest singular values:
Find the "compact" decomposition associated with the nonzero singular values:
This decomposition still has sufficient information to reconstruct the matrix:
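The compact decomposition can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the rank-2 matrix below is a hypothetical example whose third row is the sum of the first two:

```python
import numpy as np

# Hypothetical rank-2 matrix: the third row is the sum of the first two
m = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 1.0]])

u, s, vh = np.linalg.svd(m)
tol = 1e-12
k = int(np.sum(s > tol))          # number of nonzero singular values

# Keep only the columns/rows belonging to the nonzero singular values
u_c, s_c, vh_c = u[:, :k], s[:k], vh[:k, :]

# The compact factors still reconstruct m (to rounding error)
print(k)                                            # 2
print(np.allclose(m, u_c @ np.diag(s_c) @ vh_c))    # True
```

Discarding the zero singular values loses no information, so the reconstruction is still exact.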
The full singular value decomposition contains a row of zeros:
Find the "thin" decomposition of a non-rectangular matrix:
This decomposition still has sufficient information to reconstruct the matrix:
The full singular value decomposition of a rectangular matrix contains rows or columns of zeros:
Find the decomposition associated with the three largest singular values, or as many as there are if fewer:
Compute a truncated singular value decomposition for a matrix with repeated singular values:
Repeated singular values are counted separately when doing a partial decomposition:
Generalized Singular Value Decomposition (2)
Find the generalized singular value decomposition of a machine-precision real matrix:
Find the generalized singular value decomposition of a machine-precision complex matrix:
Special Matrices (4)
Singular value decomposition of sparse matrices:
Find the decomposition associated with the three largest singular values:
Visualize the three right-singular vectors:
Singular value decomposition of structured matrices:
The units go with the singular values:
Singular value decomposition of an identity matrix:
u and v could have been chosen to be identity matrices—the decomposition is not unique:
Singular value decomposition of HilbertMatrix:
Options (3)
Tolerance (1)
m is a nearly singular matrix:
To machine precision, the matrix is effectively singular:
With a smaller tolerance, the nonzero singular value is detected:
The default tolerance is based on precision, so the small value is detected with precision 20:
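The effect of the tolerance on which singular values count as nonzero can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the nearly singular matrix and the two tolerance values are hypothetical examples:

```python
import numpy as np

# A nearly singular matrix: rows differ from exact dependence by 1e-10
m = np.array([[1.0, 2.0],
              [0.5, 1.0 + 1e-10]])

s = np.linalg.svd(m, compute_uv=False)

# With a coarse tolerance the tiny singular value is treated as zero,
# so the matrix is effectively singular (rank 1) ...
print(int(np.sum(s > 1e-8)))     # 1
# ... while a smaller tolerance detects the tiny nonzero singular value
print(int(np.sum(s > 1e-12)))    # 2
```

The smaller singular value here is about |Det[m]|/σ₁ ≈ 4×10⁻¹¹, so it falls between the two tolerances.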
Applications (11)
Geometry of SVD (5)
Compute the singular value decomposition of the 2×2 matrix m:
The action of ConjugateTranspose[v] is a rotation and possibly—as happens for one axis in this case—a reflection:
The action of σ is a scaling—either a dilation or compression—along each axis:
The action of u is a rotation and possibly—though not in this case—a reflection in the target space:
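The rotate–scale–rotate picture can be checked numerically. This is a NumPy sketch, not Wolfram Language; the 2×2 matrix and the test vector are arbitrary examples:

```python
import numpy as np

m = np.array([[1.0, 2.0],
              [0.0, 1.0]])     # arbitrary 2x2 example

u, s, vh = np.linalg.svd(m)

# u and vh are orthogonal: each acts as a rotation, possibly combined
# with a reflection when its determinant is -1
print(np.allclose(u.T @ u, np.eye(2)))     # True
print(np.allclose(vh @ vh.T, np.eye(2)))   # True

# Applying m to x is: rotate by vh, scale each axis by s, rotate by u
x = np.array([0.6, 0.8])
print(np.allclose(m @ x, u @ (s * (vh @ x))))   # True
```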
Compute the singular value decomposition of the 3×2 matrix m:
After the rotation in the plane by the ConjugateTranspose[v] matrix, the σ matrix embeds the unit circle as an ellipse in 3D:
The u matrix rotates the ellipse in three dimensions:
Compute the singular value decomposition of the 2×2 matrix m:
Let uᵢ and vᵢ denote the columns, respectively, of u and v:
v₁ is the direction in which Norm[m.x] is maximized, and the maximum value is σ₁:
Similarly, v₂ is the direction in which Norm[m.x] is minimized, and the minimum value is σ₂:
Visualize v₁, v₂ and the unit circle along with their image under the multiplication on the left by m:
u₁ is the direction in which Norm[x.m] is maximized, and again the maximum value is σ₁:
Similarly, u₂ is the direction in which Norm[x.m] is minimized, and again the minimum value is σ₂:
Visualize u₁, u₂ and the unit circle along with their image under the multiplication on the right by m:
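The extremal property of the first right singular vector can be checked by sampling unit vectors. This is a NumPy sketch, not Wolfram Language; the symmetric 2×2 matrix is an arbitrary example:

```python
import numpy as np

m = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # arbitrary 2x2 example

u, s, vh = np.linalg.svd(m)
v1 = vh[0]                     # first right singular vector

# Sample unit vectors around the circle: none beats the top singular value
thetas = np.linspace(0.0, 2.0 * np.pi, 1000)
norms = [np.linalg.norm(m @ np.array([np.cos(t), np.sin(t)]))
         for t in thetas]
print(max(norms) <= s[0] + 1e-9)                   # True
print(np.isclose(np.linalg.norm(m @ v1), s[0]))    # True
```

The sampled maximum never exceeds σ₁, and the maximum is attained at v₁.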
Compute the singular value decomposition of the 3×2 matrix m:
Let uᵢ and vᵢ denote the columns, respectively, of u and v:
v₁ is the direction in which Norm[m.x] is maximized, and the maximum value is σ₁:
Similarly, v₂ is the direction in which Norm[m.x] is minimized, and the minimum value is σ₂:
Visualize v₁, v₂ and the image of the unit circle in the plane under left-multiplication by m:
u₁ is the direction in which Norm[x.m] is maximized, and again the maximum value is σ₁:
u₃ minimizes Norm[x.m]—the minimum is zero, as the sphere is compressed into an ellipse in the plane:
u₂ maximizes Norm[x.m] subject to the constraint of being perpendicular to u₁, and the maximum value is σ₂:
Visualize u₁, u₂, u₃ and the image of the unit sphere in the plane under right-multiplication by m:
Compute the singular value decomposition of the 3×3 matrix m:
Let uᵢ and vᵢ denote the columns, respectively, of u and v:
v₁ is the direction in which Norm[m.x] is maximized, and the maximum value is σ₁:
v₂ is the direction in which Norm[m.x] is maximized if x is perpendicular to v₁, and the maximum value is σ₂:
Similarly, v₃ is the direction in which Norm[m.x] is minimized, and the minimum value is σ₃:
Visualize v₁, v₂, v₃ and the unit sphere along with their image under the multiplication on the left by m:
u₁ is the direction in which Norm[x.m] is maximized, and again the maximum value is σ₁:
u₂ is the direction in which Norm[x.m] is maximized if x is perpendicular to u₁, and again the maximum value is σ₂:
Similarly, u₃ is the direction in which Norm[x.m] is minimized, and again the minimum value is σ₃:
Visualize u₁, u₂, u₃ and the unit sphere along with their image under the multiplication on the right by m:
Least Squares and Curve Fitting (6)
If the linear system m.x==b has no solution, the best approximate solution is the least-squares solution. That is the solution to m.x==b̂, where b̂ is the orthogonal projection of b onto the column space of m, which can be computed using the singular value decomposition. Consider the following m and b:
The linear system is inconsistent:
Find the matrix u of the compact singular value decomposition of m. Its columns are orthonormal and span the column space of m:
Compute the orthogonal projection b̂ of b onto the space spanned by the columns of u:
Visualize b, its projections onto the columns of u, and b̂:
Confirm the result using LeastSquares:
Solve the least-squares problem for the following m and b using only the singular value decomposition:
Compute the compact singular value decomposition where only the nonzero singular values are kept:
By definition, x==v.Inverse[σ].ConjugateTranspose[u].b, so m.x==u.ConjugateTranspose[u].b, the orthogonal projection of b onto the column space of m:
Thus, x is the solution to the least-squares problem, as confirmed by LeastSquares:
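The SVD route to a least-squares solution can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the overdetermined, inconsistent system below is a hypothetical example:

```python
import numpy as np

# Hypothetical overdetermined, inconsistent system m.x == b
m = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Compact ("thin") SVD: keep only min(shape) singular values
u, s, vh = np.linalg.svd(m, full_matrices=False)

# x = v . diag(1/s) . u^T . b solves the least-squares problem
x = vh.T @ ((u.T @ b) / s)

# Agrees with the library least-squares routine
x_ref, *_ = np.linalg.lstsq(m, b, rcond=None)
print(np.allclose(x, x_ref))    # True
```

Dividing by the singular values inverts the scaling step, so u.(u^T.b) is projected onto the column space and mapped back through v.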
Solve the least-squares problem for the following m and b two different ways: by projecting b onto the column space of m using just the u matrix of the singular value decomposition, and the direct solution using the full decomposition. Compare and explain the results:
Compute the compact singular value decomposition of m:
Compute the orthogonal projection of b onto the column space of m:
The direct solution can be found using v.Inverse[σ].Transpose[u].b, as both m and b are real-valued:
While x and xPerp are different, both solve the least-squares problem because m.x==m.xPerp:
The two solutions differ by an element of NullSpace[m]:
Note that LeastSquares[m,b] gives the result using the direct method:
For the matrices m and b that follow, find a matrix x that minimizes Norm[m.x-b]:
One solution, in this case unique, is given by v.Inverse[σ].ConjugateTranspose[u].b:
This result could also have been obtained using LeastSquares[m,b]:
Confirm the answer using Minimize:
SingularValueDecomposition can be used to find a best-fit curve to data. Consider the following data:
Extract the x and y coordinates from the data:
Construct a design matrix, whose columns are 1 and x, for fitting to a line y==a+b x:
Get the coefficients a and b for a linear least‐squares fit using a thin singular value decomposition:
Verify the coefficients using Fit:
Plot the best-fit curve along with the data:
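The design-matrix fit can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the data points are hypothetical values scattered near a line:

```python
import numpy as np

# Hypothetical noisy data near the line y = 1 + 2x
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix with columns 1 and x, for the model y = a + b*x
design = np.column_stack([np.ones_like(xs), xs])

# Thin SVD gives the least-squares coefficients a, b
u, s, vh = np.linalg.svd(design, full_matrices=False)
a, b = vh.T @ ((u.T @ ys) / s)

# Same coefficients as the library least-squares routine
coeffs, *_ = np.linalg.lstsq(design, ys, rcond=None)
print(np.allclose([a, b], coeffs))    # True
```

The same pattern extends to the parabola fit: just add a column of squared x values to the design matrix.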
Find the best-fit parabola to the following data:
Extract the x and y coordinates from the data:
Construct a design matrix, whose columns are 1, x and x², for fitting to a parabola y==a+b x+c x²:
Get the coefficients a, b and c for a least‐squares fit:
Verify the coefficients using Fit:
Plot the best-fit curve along with the data:
Properties & Relations (13)
The singular value decomposition {u,σ,v} of m decomposes m as u.σ.ConjugateTranspose[v]:
If a is an n×m matrix with decomposition {u,σ,v}, then u is an n×n matrix:
SingularValueDecomposition[m] is built from the eigensystems of ConjugateTranspose[m].m and m.ConjugateTranspose[m]:
The columns of u are the eigenvectors of m.ConjugateTranspose[m]:
The columns of v are the eigenvectors of ConjugateTranspose[m].m:
Since m has fewer rows than columns, the diagonal entries of σ are the square roots of the eigenvalues of m.ConjugateTranspose[m]:
The first right singular vector v₁ can be found by maximizing Norm[m.x] over all unit vectors:
Each subsequent vector is a maximizer with the constraint that it is perpendicular to all previous vectors:
Compare the maximizing vectors with the columns of v found by SingularValueDecomposition; they are the same up to sign:
The analogous statement holds for the left singular vectors, maximizing Norm[x.m]:
The diagonal entries of σ are the respective maximum values:
If r is the smaller of the dimensions of m, the first r columns of u and v are related by m.vᵢ==σᵢ uᵢ:
The first r columns of v and u are also related by ConjugateTranspose[m].uᵢ==σᵢ vᵢ:
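Both relations can be verified numerically. This is a NumPy sketch, not Wolfram Language; the real 2×3 matrix is an arbitrary example (so the conjugate transpose reduces to the ordinary transpose):

```python
import numpy as np

m = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])     # arbitrary real 2x3 example

u, s, vh = np.linalg.svd(m, full_matrices=False)
r = min(m.shape)

# For each of the first r singular triples:
#   m . v_i == s_i * u_i   and   m^T . u_i == s_i * v_i
for i in range(r):
    assert np.allclose(m @ vh[i], s[i] * u[:, i])
    assert np.allclose(m.T @ u[:, i], s[i] * vh[i])
print("both relations hold")
```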
If m is a square matrix, the product of the diagonal elements of σ equals Abs[Det[m]]:
If m is a normal matrix, both u and v are composed of the same vectors:
The vectors will appear in a different order unless m is positive semidefinite and Hermitian:
The diagonal entries of σ equal Abs[Eigenvalues[m]]:
For positive definite and Hermitian m, SingularValueDecomposition and Eigensystem coincide:
Their columns are unit eigenvectors of m:
The nonzero elements of σ are the eigenvalues of m:
MatrixRank[m] equals the number of nonzero singular values:
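The rank count can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the rank-2 matrix (whose last column is the sum of the first two) and the tolerance formula are illustrative choices:

```python
import numpy as np

# Rank-2 example: the last column is the sum of the first two
m = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

s = np.linalg.svd(m, compute_uv=False)

# Standard rank tolerance: machine epsilon scaled by size and largest value
tol = np.finfo(float).eps * max(m.shape) * s[0]

print(int(np.sum(s > tol)))          # 2
print(np.linalg.matrix_rank(m))      # 2
```

`matrix_rank` uses the same singular-value criterion internally, so the two counts agree.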
The compact decomposition that only keeps nonzero singular values can compute PseudoInverse[m]:
A matrix m that is an outer product of two vectors has MatrixRank[m]==1:
The nonzero singular value of m is the product of the norms of the vectors:
The corresponding left and right singular vectors are the input vectors, normalized:
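The rank-1 structure of an outer product can be checked numerically. This is a NumPy sketch, not Wolfram Language; the two vectors are arbitrary examples with nonnegative entries so that sign ambiguity is easy to handle:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])     # norm 3
b = np.array([3.0, 0.0, 4.0])     # norm 5
m = np.outer(a, b)

u, s, vh = np.linalg.svd(m)

# One nonzero singular value: the product of the two vector norms
print(np.isclose(s[0], np.linalg.norm(a) * np.linalg.norm(b)))   # True
print(np.allclose(s[1:], 0.0))                                   # True

# Singular vectors are the inputs, normalized (up to an overall sign)
print(np.allclose(np.abs(u[:, 0]), a / np.linalg.norm(a)))       # True
print(np.allclose(np.abs(vh[0]), b / np.linalg.norm(b)))         # True
```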
SingularValueDecomposition[{m,a}] decomposes m as u.w.ConjugateTranspose[v]:
It decomposes a as ua.wa.ConjugateTranspose[v]:
SingularValueDecomposition[{m,a}] can be related to Eigensystem[{ConjugateTranspose[m].m,ConjugateTranspose[a].a}]:
The diagonal elements of w are Sqrt[λᵢ]/Sqrt[1+λᵢ]:
The diagonal elements of wa are 1/Sqrt[1+λᵢ]:
The columns of v are scaled multiples of the columns of Conjugate[Inverse[vλ]]:
The magnitude of the scaling is the ratio of the corresponding diagonal elements of w and ConjugateTranspose[vλ].ConjugateTranspose[m].m.vλ:
Equivalently, it is the ratio of the corresponding diagonal elements of wa and ConjugateTranspose[vλ].ConjugateTranspose[a].a.vλ:
Possible Issues (1)
The full singular value decomposition is very large because u is a 1000×1000 matrix:
The condensed singular value decomposition is much smaller:
It still contains sufficient information to reconstruct m:
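The size difference can be sketched numerically. This is a NumPy analogue, not Wolfram Language; the 1000×3 random matrix is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((1000, 3))    # arbitrary tall example

# Full decomposition: u is 1000x1000
u_full, s, vh = np.linalg.svd(m, full_matrices=True)
# Thin decomposition: u is only 1000x3, yet still reconstructs m
u_thin, s, vh = np.linalg.svd(m, full_matrices=False)

print(u_full.shape, u_thin.shape)                  # (1000, 1000) (1000, 3)
print(np.allclose(m, u_thin @ np.diag(s) @ vh))    # True
```

The thin u has roughly 1/333 as many entries, with no loss of information about m.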
Wolfram Research (2003), SingularValueDecomposition, Wolfram Language function, https://reference.wolfram.com/language/ref/SingularValueDecomposition.html (updated 2024).