In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition[2]
$$A \text{ skew-symmetric} \quad \iff \quad A^{\mathsf{T}} = -A.$$
In terms of the entries of the matrix, if $a_{ij}$ denotes the entry in the $i$-th row and $j$-th column, then the skew-symmetric condition is equivalent to
$$A \text{ skew-symmetric} \quad \iff \quad a_{ij} = -a_{ji}.$$
The matrix
$$A = \begin{bmatrix} 0 & 2 & -45 \\ -2 & 0 & -4 \\ 45 & 4 & 0 \end{bmatrix}$$
is skew-symmetric because
$$A^{\mathsf{T}} = \begin{bmatrix} 0 & -2 & 45 \\ 2 & 0 & 4 \\ -45 & -4 & 0 \end{bmatrix} = -A.$$
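As a quick numerical illustration (a minimal sketch using NumPy; the helper name `is_skew_symmetric` is chosen only for this example), the defining condition $A^{\mathsf{T}} = -A$ can be checked directly:

```python
import numpy as np

def is_skew_symmetric(A, tol=1e-12):
    """Check A^T == -A up to a small numerical tolerance."""
    A = np.asarray(A)
    return A.shape[0] == A.shape[1] and np.allclose(A.T, -A, atol=tol)

A = np.array([[ 0,  2, -45],
              [-2,  0,  -4],
              [45,  4,   0]], dtype=float)

print(is_skew_symmetric(A))          # True
print(np.allclose(np.diag(A), 0.0))  # True: the diagonal of a real skew-symmetric matrix is zero
```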
Throughout, we assume that all matrix entries belong to a field $\mathbb{F}$ whose characteristic is not equal to 2. That is, we assume that $1 + 1 \neq 0$, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix.
Vector space structure

Sums and scalar multiples of skew-symmetric matrices are again skew-symmetric, so the set of all skew-symmetric matrices of a fixed size forms a vector space. The space of $n \times n$ skew-symmetric matrices has dimension $\tfrac{1}{2}n(n-1)$.
Let $\mathrm{Mat}_n$ denote the space of $n \times n$ matrices. A skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars (the number of entries above the main diagonal); a symmetric matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). Let $\mathrm{Skew}_n$ denote the space of $n \times n$ skew-symmetric matrices and $\mathrm{Sym}_n$ denote the space of $n \times n$ symmetric matrices. If $A \in \mathrm{Mat}_n$, then
$$A = \tfrac{1}{2}\left(A - A^{\mathsf{T}}\right) + \tfrac{1}{2}\left(A + A^{\mathsf{T}}\right).$$
Notice that $\tfrac{1}{2}\left(A - A^{\mathsf{T}}\right) \in \mathrm{Skew}_n$ and $\tfrac{1}{2}\left(A + A^{\mathsf{T}}\right) \in \mathrm{Sym}_n$. This is true for every square matrix $A$ with entries from any field whose characteristic is different from 2. Then, since $\mathrm{Mat}_n = \mathrm{Skew}_n + \mathrm{Sym}_n$ and $\mathrm{Skew}_n \cap \mathrm{Sym}_n = \{0\}$,
$$\mathrm{Mat}_n = \mathrm{Skew}_n \oplus \mathrm{Sym}_n,$$
where $\oplus$ denotes the direct sum.
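The decomposition $A = \tfrac{1}{2}(A - A^{\mathsf{T}}) + \tfrac{1}{2}(A + A^{\mathsf{T}})$ is easy to verify numerically. The following sketch (NumPy; the random matrix and variable names are only illustrative) splits a matrix into its skew-symmetric and symmetric parts and checks that they recombine to the original:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

skew_part = 0.5 * (A - A.T)   # lies in Skew_n
sym_part  = 0.5 * (A + A.T)   # lies in Sym_n

assert np.allclose(skew_part.T, -skew_part)   # skew-symmetric part
assert np.allclose(sym_part.T, sym_part)      # symmetric part
assert np.allclose(skew_part + sym_part, A)   # direct sum decomposition recovers A
```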
Denote by $\langle \cdot, \cdot \rangle$ the standard inner product on $\mathbb{R}^n$. The real $n \times n$ matrix $A$ is skew-symmetric if and only if
$$\langle Ax, y \rangle = -\langle x, Ay \rangle \quad \text{for all } x, y \in \mathbb{R}^n.$$
This is also equivalent to $\langle x, Ax \rangle = 0$ for all $x \in \mathbb{R}^n$ (one implication being obvious, the other a plain consequence of $\langle x + y, A(x + y) \rangle = 0$ for all $x$ and $y$).
Since this definition is independent of the choice of basis, skew-symmetry is a property that depends only on the linear operator $A$ and a choice of inner product.
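A small numerical check of the inner-product characterizations above (a sketch with NumPy; the random matrix and vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = 0.5 * (A - A.T)                     # make A skew-symmetric

x = rng.standard_normal(5)
y = rng.standard_normal(5)

# <Ax, y> = -<x, Ay>
print(np.isclose(np.dot(A @ x, y), -np.dot(x, A @ y)))   # True
# <x, Ax> = 0
print(np.isclose(np.dot(x, A @ x), 0.0))                 # True
```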
$3 \times 3$ skew-symmetric matrices can be used to represent cross products as matrix multiplications.
Furthermore, if $A$ is a skew-symmetric matrix, then $x^{\mathsf{T}} A x = 0$ for all $x \in \mathbb{C}^n$.
Let $A$ be an $n \times n$ skew-symmetric matrix. The determinant of $A$ satisfies
$$\det(A) = \det\left(A^{\mathsf{T}}\right) = \det(-A) = (-1)^n \det(A).$$
In particular, if $n$ is odd, and since the underlying field is not of characteristic 2, the determinant vanishes. Hence, all odd-dimensional skew-symmetric matrices are singular. This result is called Jacobi's theorem, after Carl Gustav Jacobi (Eves, 1980).
The even-dimensional case is more interesting. It turns out that the determinant of $A$ for $n$ even can be written as the square of a polynomial in the entries of $A$, which was first proved by Cayley:[3]
$$\det(A) = \operatorname{Pf}(A)^2.$$
This polynomial is called the Pfaffian of $A$ and is denoted $\operatorname{Pf}(A)$. Thus the determinant of a real skew-symmetric matrix is always non-negative. However, this last fact can be proved in an elementary way as follows: the eigenvalues of a real skew-symmetric matrix are purely imaginary (see below), and to every eigenvalue there corresponds the conjugate eigenvalue with the same multiplicity; therefore, as the determinant is the product of the eigenvalues, each one repeated according to its multiplicity, it follows at once that the determinant, if it is not 0, is a positive real number.
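For small even orders the Pfaffian can be written out explicitly; for a $4 \times 4$ skew-symmetric matrix it is $\operatorname{Pf}(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$. The following sketch (NumPy; the explicit formula is hard-coded for the $4 \times 4$ case only) checks $\det(A) = \operatorname{Pf}(A)^2$ on a random example:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = 0.5 * (A - A.T)                    # random 4x4 skew-symmetric matrix

# Pfaffian of a 4x4 skew-symmetric matrix (explicit formula)
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

print(np.isclose(np.linalg.det(A), pf ** 2))   # True: det(A) = Pf(A)^2 >= 0
```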
The number of distinct terms $s(n)$ in the expansion of the determinant of a skew-symmetric matrix of order $n$ was considered already by Cayley, Sylvester, and Pfaff. Due to cancellations, this number is quite small compared with the number of terms in the determinant of a generic matrix of order $n$, which is $n!$. The sequence $s(n)$ is given by sequence A002370 in the OEIS,
and it is encoded in the exponential generating function
$$\sum_{n=0}^{\infty} \frac{s(n)}{n!} x^{n} = \left(1 - x^{2}\right)^{-\frac{1}{4}} \exp\left(\frac{x^{2}}{4}\right).$$
The latter yields the asymptotics (for $n$ even)
$$s(n) = \frac{2^{\frac{3}{4}}}{\pi^{\frac{1}{2}}}\,\Gamma\!\left(\frac{3}{4}\right) \left(\frac{n}{e}\right)^{n - \frac{1}{4}} \left(1 + O\!\left(n^{-1}\right)\right).$$
The numbers of positive and negative terms are each approximately half of the total, although their difference takes larger and larger positive and negative values as $n$ increases (sequence A167029 in the OEIS).
Three-by-three skew-symmetric matrices can be used to represent cross products as matrix multiplications. Consider two vectors $\mathbf{a} = (a_1, a_2, a_3)$ and $\mathbf{b} = (b_1, b_2, b_3)$. The cross product $\mathbf{a} \times \mathbf{b}$ is a bilinear map, which means that by fixing one of the two arguments, for example $\mathbf{a}$, it induces a linear map with an associated transformation matrix $[\mathbf{a}]_{\times}$, such that
$$\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times} \mathbf{b},$$
where $[\mathbf{a}]_{\times}$ is
$$[\mathbf{a}]_{\times} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}.$$
This can be immediately verified by computing both sides of the previous equation and comparing each corresponding element of the results.
One actually has
$$[\mathbf{a} \times \mathbf{b}]_{\times} = [\mathbf{a}]_{\times} [\mathbf{b}]_{\times} - [\mathbf{b}]_{\times} [\mathbf{a}]_{\times};$$
i.e., the commutator of skew-symmetric three-by-three matrices can be identified with the cross product of two vectors. Since the skew-symmetric three-by-three matrices form the Lie algebra of the rotation group $SO(3)$, this elucidates the relation between three-space $\mathbb{R}^3$, the cross product, and three-dimensional rotations. More on infinitesimal rotations can be found below.
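A minimal sketch (NumPy; the helper name `cross_matrix` is chosen here) of the map $\mathbf{a} \mapsto [\mathbf{a}]_{\times}$ and of the commutator identity above:

```python
import numpy as np

def cross_matrix(a):
    """Return the 3x3 skew-symmetric matrix [a]_x with [a]_x @ b == a x b."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

# [a]_x b equals the cross product a x b
print(np.allclose(cross_matrix(a) @ b, np.cross(a, b)))                 # True

# the commutator of the hat matrices is the hat matrix of the cross product
Ka, Kb = cross_matrix(a), cross_matrix(b)
print(np.allclose(Ka @ Kb - Kb @ Ka, cross_matrix(np.cross(a, b))))     # True
```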
Since a matrix is similar to its own transpose, the two must have the same eigenvalues. It follows that the eigenvalues of a skew-symmetric matrix always come in pairs $\pm\lambda$ (except in the odd-dimensional case, where there is an additional unpaired eigenvalue 0). From the spectral theorem, for a real skew-symmetric matrix the nonzero eigenvalues are all purely imaginary and thus are of the form $\lambda_1 i, -\lambda_1 i, \lambda_2 i, -\lambda_2 i, \ldots$ where each of the $\lambda_k$ is real.
Real skew-symmetric matrices are normal matrices (they commute with their adjoints) and are thus subject to the spectral theorem, which states that any real skew-symmetric matrix can be diagonalized by a unitary matrix. Since the eigenvalues of a real skew-symmetric matrix are imaginary, it is not possible to diagonalize one by a real matrix. However, it is possible to bring every skew-symmetric matrix to a block diagonal form by a special orthogonal transformation.[4][5] Specifically, every $2n \times 2n$ real skew-symmetric matrix can be written in the form $A = Q \Sigma Q^{\mathsf{T}}$, where $Q$ is orthogonal and
$$\Sigma = \begin{bmatrix}
\begin{matrix} 0 & \lambda_1 \\ -\lambda_1 & 0 \end{matrix} & 0 & \cdots & 0 & \\
0 & \begin{matrix} 0 & \lambda_2 \\ -\lambda_2 & 0 \end{matrix} & & 0 & \\
\vdots & & \ddots & \vdots & \\
0 & 0 & \cdots & \begin{matrix} 0 & \lambda_r \\ -\lambda_r & 0 \end{matrix} & \\
& & & & \begin{matrix} 0 \\ & \ddots \\ & & 0 \end{matrix}
\end{bmatrix}$$
for real positive $\lambda_k$. The nonzero eigenvalues of this matrix are $\pm \lambda_k i$. In the odd-dimensional case $\Sigma$ always has at least one row and column of zeros.
More generally, every complex skew-symmetric matrix can be written in the form $A = U \Sigma U^{\mathsf{T}}$, where $U$ is unitary and $\Sigma$ has the block-diagonal form given above with the $\lambda_k$ still real and positive. This is an example of the Youla decomposition of a complex square matrix.[6]
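In practice, one way to obtain such a block-diagonal form for a real skew-symmetric matrix is the real Schur decomposition (here via `scipy.linalg.schur`); because a skew-symmetric matrix is normal, the quasi-triangular Schur factor is in fact block diagonal. A sketch, with randomly generated data:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
A = 0.5 * (A - A.T)                      # real skew-symmetric

Sigma, Q = schur(A, output='real')       # A = Q Sigma Q^T with Q real orthogonal

print(np.allclose(Q @ Sigma @ Q.T, A))   # True
print(np.allclose(Q @ Q.T, np.eye(6)))   # True: Q is orthogonal
print(np.round(Sigma, 3))                # antisymmetric 2x2 blocks on the diagonal,
                                         # off-block entries are numerically ~0
```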
Skew-symmetric and alternating forms

A skew-symmetric form $\varphi$ on a vector space $V$ over a field $K$ of arbitrary characteristic is defined to be a bilinear form
$$\varphi : V \times V \to K$$
such that for all $v, w$ in $V$,
$$\varphi(v, w) = -\varphi(w, v).$$
This defines a form with desirable properties for vector spaces over fields of characteristic not equal to 2, but in a vector space over a field of characteristic 2, the definition is equivalent to that of a symmetric form, as every element is its own additive inverse.
Where the vector space $V$ is over a field of arbitrary characteristic including characteristic 2, we may define an alternating form as a bilinear form $\varphi$ such that for all vectors $v$ in $V$
$$\varphi(v, v) = 0.$$
This is equivalent to a skew-symmetric form when the field is not of characteristic 2, as seen from
$$0 = \varphi(v + w, v + w) = \varphi(v, v) + \varphi(v, w) + \varphi(w, v) + \varphi(w, w) = \varphi(v, w) + \varphi(w, v),$$
whence
$$\varphi(v, w) = -\varphi(w, v).$$
A bilinear form $\varphi$ will be represented by a matrix $A$ such that $\varphi(v, w) = v^{\mathsf{T}} A w$, once a basis of $V$ is chosen, and conversely an $n \times n$ matrix $A$ on $K^n$ gives rise to a form sending $(v, w)$ to $v^{\mathsf{T}} A w$. For each of symmetric, skew-symmetric and alternating forms, the representing matrices are symmetric, skew-symmetric and alternating respectively.
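A small sketch of this correspondence between matrices and bilinear forms over the reals (NumPy; `phi` is just a local name chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
A = 0.5 * (A - A.T)                       # skew-symmetric representing matrix

def phi(v, w):
    """Bilinear form represented by A in the chosen basis: phi(v, w) = v^T A w."""
    return v @ A @ w

v = rng.standard_normal(4)
w = rng.standard_normal(4)

print(np.isclose(phi(v, w), -phi(w, v)))  # True: the form is skew-symmetric
print(np.isclose(phi(v, v), 0.0))         # True: it is also alternating (characteristic is not 2)
```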
Infinitesimal rotations

Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group $\mathrm{O}(n)$ at the identity matrix; formally, the special orthogonal Lie algebra. In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations.
Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra $\mathfrak{o}(n)$ of the Lie group $\mathrm{O}(n)$. The Lie bracket on this space is given by the commutator:
$$[A, B] = AB - BA.$$
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:
$$[A, B]^{\mathsf{T}} = B^{\mathsf{T}} A^{\mathsf{T}} - A^{\mathsf{T}} B^{\mathsf{T}} = BA - AB = -[A, B].$$
The matrix exponential of a skew-symmetric matrix $A$ is then an orthogonal matrix $R$:
$$R = \exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!}.$$
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group $\mathrm{O}(n)$, this connected component is the special orthogonal group $\mathrm{SO}(n)$, consisting of all orthogonal matrices with determinant 1. So $R = \exp(A)$ will have determinant $+1$. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix.
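A sketch checking numerically that the matrix exponential of a skew-symmetric matrix is orthogonal with determinant $+1$ (using `scipy.linalg.expm`; the input matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
A = 0.5 * (A - A.T)                       # skew-symmetric

R = expm(A)                               # matrix exponential

print(np.allclose(R @ R.T, np.eye(4)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: det(R) = +1, so R lies in SO(4)
```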
In the particularly important case of dimension $n = 2$, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, if $n = 2$, a special orthogonal matrix has the form
$$\begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$
with $a^2 + b^2 = 1$. Therefore, putting $a = \cos\theta$ and $b = \sin\theta$, it can be written
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \exp\left(\theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\right),$$
which corresponds exactly to the polar form $\cos\theta + i\sin\theta = \exp(i\theta)$ of a complex number of unit modulus.
In 3 dimensions, the matrix exponential is Rodrigues' rotation formula in matrix notation, and when expressed via the Euler-Rodrigues formula, the algebra of its four parameters gives rise to quaternions.
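The three-dimensional case can be checked directly against Rodrigues' rotation formula, $R = I + \sin\theta\,K + (1 - \cos\theta)\,K^{2}$ with $K = [\mathbf{a}]_{\times}/\|\mathbf{a}\|$ and $\theta = \|\mathbf{a}\|$. A sketch (NumPy/SciPy; the axis vector is arbitrary and `cross_matrix` is the helper from the earlier example, repeated here so the snippet is self-contained):

```python
import numpy as np
from scipy.linalg import expm

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([0.3, -1.2, 0.7])
theta = np.linalg.norm(a)
K = cross_matrix(a / theta)              # hat matrix of the unit rotation axis

# Rodrigues' rotation formula for a rotation by theta about a/|a|
R_rodrigues = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

print(np.allclose(R_rodrigues, expm(cross_matrix(a))))   # True
```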
The exponential representation of an orthogonal matrix of order $n$ can also be obtained starting from the fact that in dimension $n$ any special orthogonal matrix $R$ can be written as $R = Q S Q^{\mathsf{T}}$, where $Q$ is orthogonal and $S$ is a block diagonal matrix with $\lfloor n/2 \rfloor$ blocks of order 2, plus one of order 1 if $n$ is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix $S$ writes as the exponential of a skew-symmetric block matrix $\Sigma$ of the form above, $S = \exp(\Sigma)$, so that $R = Q \exp(\Sigma) Q^{\mathsf{T}} = \exp(Q \Sigma Q^{\mathsf{T}})$, the exponential of the skew-symmetric matrix $Q \Sigma Q^{\mathsf{T}}$. Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices.
More intrinsically (i.e., without using coordinates), skew-symmetric linear transformations on a vector space $V$ with an inner product may be defined as the bivectors on the space, which are sums of simple bivectors (2-blades) $v \wedge w$. The correspondence is given by the map $v \wedge w \mapsto v \otimes w - w \otimes v$; in orthonormal coordinates these are exactly the elementary skew-symmetric matrices. This characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name.
Skew-symmetrizable matrix

An $n \times n$ matrix $A$ is said to be skew-symmetrizable if there exists an invertible diagonal matrix $D$ such that $DA$ is skew-symmetric. For real $n \times n$ matrices, sometimes the condition for $D$ to have positive entries is added.[7]
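A minimal example (the particular matrices are chosen only for illustration): the matrix $A$ below is not skew-symmetric, but $DA$ is skew-symmetric for a suitable invertible diagonal $D$ with positive entries, so $A$ is skew-symmetrizable.

```python
import numpy as np

A = np.array([[ 0.0, 1.0],
              [-3.0, 0.0]])          # not skew-symmetric: A[0,1] != -A[1,0]
D = np.diag([3.0, 1.0])              # invertible diagonal matrix with positive entries

DA = D @ A
print(np.allclose(DA.T, -DA))        # True: D A is skew-symmetric, so A is skew-symmetrizable
```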