Square matrix without an inverse
A singular matrix is a square matrix that is not invertible; a non-singular matrix, by contrast, is one that has an inverse. Equivalently, an $n \times n$ matrix $A$ is singular if and only if its determinant is zero, $\det(A) = 0$.[1] In classical linear algebra, a matrix is called non-singular (or invertible) when it has an inverse; by definition, a matrix that fails this criterion is singular. In more algebraic terms, an $n \times n$ matrix $A$ is singular exactly when its columns (and rows) are linearly dependent, so that the linear map $x \mapsto Ax$ is not one-to-one.
In this case the kernel (null space) of $A$ is non-trivial (has dimension $\geq 1$), and the homogeneous system $Ax = 0$ admits non-zero solutions. These characterizations follow from the standard rank-nullity and invertibility theorems: for a square matrix $A$, $\det(A) \neq 0$ if and only if $\operatorname{rank}(A) = n$, and $\det(A) = 0$ if and only if $\operatorname{rank}(A) < n$.
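A minimal numerical sketch of these equivalent characterizations, assuming NumPy (the matrix is the 2-by-2 example discussed later in the article):

```python
import numpy as np

# Singular 2x2 example: the second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))          # 0.0 (up to floating-point round-off)
print(np.linalg.matrix_rank(A))  # 1, i.e. rank(A) < n = 2

# Right singular vectors with (numerically) zero singular values
# span the null space, so Ax = 0 has non-zero solutions.
_, s, vt = np.linalg.svd(A)
print(vt[s < 1e-12])
```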
Conditions and properties

A basic condition for a matrix to be singular is that its determinant is zero. If $\det(A) = 0$, then the columns $C_i$ of $A$ are linearly dependent; conversely, the determinant is an alternating multilinear form in the columns, so any linear dependence among the columns forces the determinant to vanish. Hence $\det(A) = 0$.
For example, in the matrix
$\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$
the second column satisfies $C_2 = 2C_1$, so the columns are linearly dependent and the determinant is zero.
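As a sketch of the multilinearity argument (an illustrative computation, not taken from the cited sources), one can pick an arbitrary first column, set the second column to twice the first, and observe that the determinant vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = rng.standard_normal(3)
c3 = rng.standard_normal(3)

# Build a 3x3 matrix whose second column is 2 * (first column).
A = np.column_stack([c1, 2 * c1, c3])
print(np.linalg.det(A))  # ~0, up to floating-point round-off
```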
Computational implications

An invertible matrix underpins algorithms that assume certain transformations, computations, and systems can be reversed and solved uniquely, for example passing from $Ax = b$ to $x = A^{-1}b$. Invertibility therefore tells a solver whether a solution is unique.
In Gaussian elimination, invertibility of the coefficient matrix $A$ ensures that the algorithm produces a unique solution. When $A$ is invertible, every pivot can be made non-zero (after row swaps if necessary) and the system can be solved; for a singular matrix, some pivots are zero and cannot be repaired by row swaps alone.[2] The elimination then either breaks down or yields an inconsistent result. A singular matrix also obstructs back substitution, which requires all diagonal entries of the reduced matrix to be non-zero, i.e. $\det(A) \neq 0$. In the singular case the system $Ax = b$ has either no solution or infinitely many.
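The practical effect can be seen with a standard linear-algebra library; the sketch below (assuming NumPy's solve and lstsq routines) contrasts an invertible and a singular coefficient matrix:

```python
import numpy as np

A_regular = np.array([[1.0, 2.0],
                      [3.0, 4.0]])
A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])
b = np.array([1.0, 2.0])

# Invertible coefficient matrix: elimination succeeds, unique solution.
print(np.linalg.solve(A_regular, b))

# Singular coefficient matrix: the factorization meets a zero pivot
# that no row swap can fix, and the solver reports the failure.
try:
    np.linalg.solve(A_singular, b)
except np.linalg.LinAlgError as err:
    print("solve failed:", err)

# Here b lies in the column space of A_singular, so there are infinitely
# many solutions; lstsq returns the minimum-norm one.
x, *_ = np.linalg.lstsq(A_singular, b, rcond=None)
print(x)
```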
In mechanical and robotic systems, singular Jacobian matrices indicate kinematic singularities. For example, the Jacobian of a robotic manipulator (mapping joint velocities to end-effector velocity) loses rank when the robot reaches a configuration with constrained motion. At a singular configuration, the robot cannot move or apply forces in certain directions.[3]
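As a hedged illustration (a hypothetical two-link planar arm, not taken from the cited source), the Jacobian determinant vanishes exactly at the stretched-out and folded-back configurations:

```python
import numpy as np

# Jacobian of a two-link planar arm with link lengths l1, l2
# and joint angles q1, q2 (end-effector position in the plane).
def jacobian(q1, q2, l1=1.0, l2=1.0):
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

# det(J) = l1 * l2 * sin(q2): the Jacobian is singular when the arm is
# fully stretched (q2 = 0) or folded back (q2 = pi).
print(np.linalg.det(jacobian(0.3, 1.0)))  # nonzero: regular configuration
print(np.linalg.det(jacobian(0.3, 0.0)))  # ~0: kinematic singularity
```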
In graph theory and network physics, the Laplacian matrix of a graph is inherently singular (it has a zero eigenvalue) because each row sums to zero.[4] This reflects the fact that the uniform vector is in its nullspace.
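For instance (a small sketch using NumPy), the Laplacian of a three-vertex path graph has determinant zero and annihilates the all-ones vector:

```python
import numpy as np

# Laplacian L = D - A of the path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

print(np.linalg.det(L))   # 0: the Laplacian is singular
print(L @ np.ones(3))     # [0, 0, 0]: the all-ones vector is in the null space
```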
In machine learning and statistics, singular matrices frequently appear because of multicollinearity. For instance, a data matrix $X$ leads to a singular covariance matrix or singular $X^{T}X$ if features are linearly dependent. This occurs in linear regression when predictors are collinear, causing the normal-equations matrix $X^{T}X$ to be singular.[5] The remedy is often to drop or combine features, or to use the pseudoinverse. Dimension-reduction techniques such as principal component analysis (PCA) exploit the singular value decomposition (SVD): it yields low-rank approximations of the data, effectively treating the data covariance as singular by discarding small singular values.[5]
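A small sketch of this situation (illustrative synthetic data, assuming NumPy), with one predictor that is an exact multiple of another:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.standard_normal(50)
x2 = 3.0 * x1                      # perfectly collinear predictor
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 0.1 * rng.standard_normal(50)

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))  # 1 < 2: the normal-equations matrix is singular

# The normal equations have no unique solution; the pseudoinverse picks
# the minimum-norm least-squares coefficients instead.
beta = np.linalg.pinv(X) @ y
print(beta)
```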
Certain transformations (e.g. projections from 3D to 2D) are modeled by singular matrices, since they collapse a dimension. Handling these requires care (one cannot invert a projection). In cryptography and coding theory, invertible matrices are used for mixing operations; singular ones would be avoided or detected as errors.[6]
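For example, the orthogonal projection of 3-space onto the xy-plane is represented by a singular matrix: distinct points sharing the same x and y map to the same image, so no inverse can recover the lost coordinate. A minimal sketch:

```python
import numpy as np

# Orthogonal projection onto the xy-plane: the z-coordinate is discarded,
# so the map collapses a dimension and cannot be inverted.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.det(P))                # 0: the projection matrix is singular
print(P @ np.array([1.0, 2.0, 5.0]))   # [1, 2, 0]
print(P @ np.array([1.0, 2.0, -7.0]))  # also [1, 2, 0]: the original z is lost
```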
The study of singular matrices is rooted in the early history of linear algebra. Determinants were first developed in Japan by Seki in 1683 and in Europe by Leibniz in the 1690s[7] as tools for solving systems of equations, with Cramer later systematizing their use for linear systems. Leibniz explicitly recognized that a system has a solution precisely when a certain determinant expression equals zero. In that sense, singularity (determinant zero) was understood as the critical condition for solvability. Over the 18th and 19th centuries, mathematicians (Laplace, Cauchy, etc.) established many properties of determinants and invertible matrices, formalizing the notion that $\det(A) = 0$ characterizes non-invertibility.
The term "singular matrix" itself emerged later, but the conceptual importance remained. In the 20th century, generalizations like the Moore–Penrose pseudoinverse were introduced to systematically handle singular or non-square cases. As recent scholarship notes, the idea of a pseudoinverse was proposed by E. H. Moore in 1920 and rediscovered by R. Penrose in 1955,[8] reflecting its longstanding utility. The pseudoinverse and singular value decomposition became fundamental in both theory and applications (e.g. in quantum mechanics, signal processing, and more) for dealing with singularity. Today, singular matrices are a canonical subject in linear algebra: they delineate the boundary between invertible (well-behaved) cases and degenerate (ill-posed) cases. In abstract terms, singular matrices correspond to non-isomorphisms in linear mappings and are thus central to the theory of vector spaces and linear transformations.
Example 1 (2×2 matrix):
$A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$
Compute its determinant: $\det(A) = 1 \cdot 4 - 2 \cdot 2 = 4 - 4 = 0$. Thus $A$ is singular. One sees directly that the second row is twice the first, so the rows are linearly dependent. To illustrate the failure of invertibility, attempt Gaussian elimination:
$R_2 \leftarrow R_2 - 2R_1: \quad \begin{bmatrix} 1 & 2 \\ 2 - 2(1) & 4 - 2(2) \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}$
Now the second pivot would be the (2,2) entry, but it is zero. Since no nonzero pivot exists in column 2, elimination stops. This confirms $\operatorname{rank}(A) = 1 < 2$ and that $A$ has no inverse.[9]
Solving $Ax = b$ yields either infinitely many solutions or none. For example, $Ax = 0$ gives:
$\begin{aligned} x + 2y &= 0 \\ 2x + 4y &= 0 \end{aligned}$
which are the same equation. Thus the null space is one-dimensional, spanned by $(2, -1)^{T}$, and $Ax = b$ has infinitely many solutions when $b$ lies in the column space of $A$ and no solution otherwise.
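These conclusions can be checked numerically (a minimal sketch using NumPy; the right-hand side chosen below is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))           # 0
print(np.linalg.matrix_rank(A))   # 1

# The null space is spanned by (2, -1): A @ (2, -1) = 0.
print(A @ np.array([2.0, -1.0]))

# For a right-hand side outside the column space (not a multiple of (1, 2)),
# even the least-squares solution leaves a nonzero residual, so Ax = b has
# no exact solution.
b = np.array([1.0, 0.0])
x = np.linalg.pinv(A) @ b
print(A @ x - b)
```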