Projected normal distribution


In directional statistics, the projected normal distribution (also known as the offset normal, angular normal, or angular Gaussian distribution)[1] is a probability distribution over directions that describes the radial projection onto the unit $(n-1)$-sphere of a random variable with an $n$-variate normal distribution.

Definition and properties

Given a random variable $\boldsymbol{X} \in \mathbb{R}^n$ that follows a multivariate normal distribution $\mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, the projected normal distribution $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ represents the distribution of the random variable $\boldsymbol{Y} = \boldsymbol{X} / \lVert \boldsymbol{X} \rVert$ obtained by projecting $\boldsymbol{X}$ onto the unit sphere. In the general case, the projected normal distribution can be asymmetric and multimodal. If $\boldsymbol{\mu}$ is parallel to an eigenvector of $\boldsymbol{\Sigma}$, the distribution is symmetric.[3] The first version of such a distribution was introduced in Pukkila and Rao (1988).

The support of this distribution is the unit $(n-1)$-sphere, which can be given either in terms of a set of $(n-1)$-dimensional angular spherical coordinates:

$$\boldsymbol{\Theta} = [0, \pi]^{n-2} \times [0, 2\pi) \subset \mathbb{R}^{n-1}$$

or in terms of $n$-dimensional Cartesian coordinates:

$$\mathbb{S}^{n-1} = \{\boldsymbol{z} \in \mathbb{R}^n : \lVert \boldsymbol{z} \rVert = 1\} \subset \mathbb{R}^n$$

The two are linked via the embedding function $e : \boldsymbol{\Theta} \to \mathbb{S}^{n-1}$, with range $e(\boldsymbol{\Theta}) = \mathbb{S}^{n-1}$. This function is given by the formula for spherical coordinates at $r = 1$.

The density of the projected normal distribution $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ can be constructed from the density of its generating $n$-variate normal distribution $\mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ by re-parametrising to $n$-dimensional spherical coordinates and then integrating over the radial coordinate.

In full spherical coordinates with radial component $r \in [0, \infty)$ and angles $\boldsymbol{\theta} = (\theta_1, \dots, \theta_{n-1}) \in \boldsymbol{\Theta}$, a point $\boldsymbol{x} = (x_1, \dots, x_n) \in \mathbb{R}^n$ can be written as $\boldsymbol{x} = r\boldsymbol{v}$, with $\boldsymbol{v} \in \mathbb{S}^{n-1}$. To be clear, $\boldsymbol{v} = e(\boldsymbol{\theta})$, as given by the above-defined embedding function. The joint density becomes

$$p(r, \boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = r^{n-1} \mathcal{N}_n(r\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{r^{n-1}}{\sqrt{|\boldsymbol{\Sigma}|}\,(2\pi)^{n/2}}\, e^{-\frac{1}{2}(r\boldsymbol{v} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (r\boldsymbol{v} - \boldsymbol{\mu})}$$

where the factor $r^{n-1}$ is due to the change of variables $\boldsymbol{x} = r\boldsymbol{v}$. The density of $\mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ can then be obtained via marginalization over $r$ as[5]

$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \int_0^\infty p(r, \boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})\, dr.$$

The same density had been previously obtained in Pukkila and Rao (1988, Eq. (2.4)) using a different notation.
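As a numerical illustration of the marginalization, the following sketch (assuming NumPy and SciPy; function names are illustrative, not from any reference implementation) evaluates the $\mathcal{PN}_2$ density at a given angle by integrating the joint density over the radius, and checks that the result integrates to one over the circle:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import multivariate_normal

def pn2_density_numeric(theta, mu, Sigma):
    """PN_2 density at angle theta, marginalizing p(r, theta) over r in [0, inf).
    The factor r (= r^{n-1} with n = 2) is the change-of-variables factor."""
    v = np.array([np.cos(theta), np.sin(theta)])
    integrand = lambda r: r * multivariate_normal.pdf(r * v, mean=mu, cov=Sigma)
    return quad(integrand, 0, np.inf)[0]

mu = np.array([1.0, 0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

# Sanity check: the mean over an even grid on [0, 2*pi), times 2*pi,
# approximates the integral of the density, which should be ~1.
thetas = np.linspace(0, 2 * np.pi, 400, endpoint=False)
vals = np.array([pn2_density_numeric(t, mu, Sigma) for t in thetas])
print(2 * np.pi * vals.mean())  # ~ 1.0
```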

Note on density definition

This subsection gives some clarification lest the various forms of probability density used in this article be misunderstood. Take for example a random variate $u \in (0, 1]$ with uniform density $p_U(u) = 1$. If $\ell = -\log u$, it has density $p_L(\ell) = e^{-\ell}$. This works if both densities are defined with respect to Lebesgue measure on the real line. By default convention:

- densities of real-valued random variables, such as $p_U$ and $p_L$, are defined with respect to Lebesgue measure;
- a change of variables between such densities introduces a Jacobian (change-of-variables) factor, here $p_L(\ell) = p_U(u)\,\left|\frac{du}{d\ell}\right| = e^{-\ell}$.

Neither of these conventions applies to the $\mathcal{PN}_n$ densities in this article:

- $p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ is defined with respect to the pullback measure $\pi$ on the coordinate space $\boldsymbol{\Theta}$ (defined below);
- $\tilde{p}(\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$, the density of the embedded variate $\boldsymbol{v} = e(\boldsymbol{\theta})$, is defined with respect to the Hausdorff measure $h$ on $\mathbb{S}^{n-1}$, which assigns to subsets of the sphere their $(n-1)$-dimensional surface area.

The pullback and Hausdorff measures agree, so that:

$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \tilde{p}(\boldsymbol{v} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$$

where there is no change-of-variables factor, because the densities use different measures.

To better understand what is meant by a density being defined with respect to a measure (a function that maps subsets of sample space to a non-negative real-valued 'volume'), consider a measurable subset $U \subseteq \boldsymbol{\Theta}$ with embedded image $V = e(U) \subseteq \mathbb{S}^{n-1}$, and let $\boldsymbol{v} = e(\boldsymbol{\theta}) \sim \mathcal{PN}_n$. The probability of finding the sample in the subset is then:

$$P(\boldsymbol{\theta} \in U) = \int_U p \, d\pi = P(\boldsymbol{v} \in V) = \int_V \tilde{p} \, dh$$

where $\pi$ and $h$ are respectively the pullback and Hausdorff measures, and the integrals are Lebesgue integrals, which can be rewritten as Riemann integrals thus:

$$\int_U p \, d\pi = \int_0^\infty \pi\left(\{\boldsymbol{\theta} \in U : p(\boldsymbol{\theta}) > t\}\right) dt \quad (1)$$

The tangent space at $\boldsymbol{v} \in \mathbb{S}^{n-1}$ is the $(n-1)$-dimensional linear subspace perpendicular to $\boldsymbol{v}$, where Lebesgue measure can be used. At very small scale, the tangent space is indistinguishable from the sphere (e.g. the Earth looks locally flat), so that Lebesgue measure in the tangent space agrees with area on the hypersphere. The tangent-space Lebesgue measure is pulled back via the embedding function, as follows, to define the measure in coordinate space. For a measurable subset $U \subseteq \boldsymbol{\Theta}$ in coordinate space, the pullback measure, written as a Riemann integral, is:

$$\pi(U) = \int_U \sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}' \mathbf{E}_{\boldsymbol{\theta}})\right|} \, d\theta_1 \cdots d\theta_{n-1} \quad (2)$$

where $\mathbf{E}_{\boldsymbol{\theta}}$ is the $n$-by-$(n-1)$ Jacobian matrix of the embedding function $e(\boldsymbol{\theta})$, whose columns span the $(n-1)$-dimensional tangent space where the Lebesgue measure is applied. It can be shown that $\sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}' \mathbf{E}_{\boldsymbol{\theta}})\right|} = \prod_{i=1}^{n-2} \sin^{n-1-i}(\theta_i)$. Plugging the pullback measure (2) into equation (1) and exchanging the order of integration gives:

$$P(\boldsymbol{\theta} \in U) = \int_U p \, d\pi = \int_U p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) \sqrt{\left|\det(\mathbf{E}_{\boldsymbol{\theta}}' \mathbf{E}_{\boldsymbol{\theta}})\right|} \, d\theta_1 \cdots d\theta_{n-1}$$

where the first integral is Lebesgue and the second Riemann. Finally, for a geometric understanding of the square-root factor: it is the square root of the Gram determinant of $\mathbf{E}_{\boldsymbol{\theta}}$, which gives the $(n-1)$-dimensional volume of the parallelepiped spanned by the columns of $\mathbf{E}_{\boldsymbol{\theta}}$, i.e. the local area-scaling factor of the embedding.
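The Gram-determinant identity above is easy to verify numerically. A minimal sketch (NumPy assumed; the finite-difference Jacobian is an illustrative shortcut, not the only way) for $n = 3$, using the convention of the support section, $\boldsymbol{\theta} = (\theta_1, \theta_2)$ with polar angle $\theta_1 \in [0, \pi]$ and azimuth $\theta_2 \in [0, 2\pi)$, where the product formula reduces to $\sin\theta_1$:

```python
import numpy as np

def embed(theta):
    """Spherical embedding e: Theta -> S^2 with theta = (polar, azimuth)."""
    t1, t2 = theta
    return np.array([np.sin(t1) * np.cos(t2),
                     np.sin(t1) * np.sin(t2),
                     np.cos(t1)])

def gram_factor(theta, eps=1e-6):
    """sqrt|det(E' E)| with the Jacobian E estimated by central differences."""
    E = np.column_stack([
        (embed(theta + eps * np.eye(2)[i]) - embed(theta - eps * np.eye(2)[i]))
        / (2 * eps)
        for i in range(2)
    ])
    return np.sqrt(abs(np.linalg.det(E.T @ E)))

theta = np.array([0.7, 2.1])
print(gram_factor(theta), np.sin(theta[0]))  # both ~ sin(0.7) = 0.6442
```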

Circular distribution

For $n = 2$, parametrising the position on the unit circle in polar coordinates as $\boldsymbol{v} = (\cos\theta, \sin\theta)$, the density function can be written with respect to the parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ of the initial normal distribution as

$$p(\theta \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{e^{-\frac{1}{2}\boldsymbol{\mu}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}}}{2\pi \sqrt{|\boldsymbol{\Sigma}|}\, \boldsymbol{v}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{v}} \left(1 + T(\theta) \frac{\Phi(T(\theta))}{\phi(T(\theta))}\right) I_{[0, 2\pi)}(\theta)$$

where $\phi$ and $\Phi$ are the density and cumulative distribution function of a standard normal distribution, $T(\theta) = \frac{\boldsymbol{v}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}}{\sqrt{\boldsymbol{v}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{v}}}$, and $I$ is the indicator function.[3]

In the circular case, if the mean vector $\boldsymbol{\mu}$ is parallel to the eigenvector associated with the largest eigenvalue of the covariance, the distribution is symmetric and has a mode at $\theta = \alpha$ and either a mode or an antimode at $\theta = \alpha + \pi$, where $\alpha$ is the polar angle of $\boldsymbol{\mu} = (r\cos\alpha, r\sin\alpha)$. If instead the mean is parallel to the eigenvector associated with the smallest eigenvalue, the distribution is again symmetric but has either a mode or an antimode at $\theta = \alpha$ and an antimode at $\theta = \alpha + \pi$.[7]
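A sketch of the closed-form circular density (NumPy/SciPy assumed; names illustrative), cross-checked against the numerical marginalization of the previous section:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def pn2_density_closed_form(theta, mu, Sigma):
    """Closed-form PN_2 density (circular case)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    P = np.linalg.inv(Sigma)
    q = v @ P @ v
    T = (v @ P @ mu) / np.sqrt(q)
    const = np.exp(-0.5 * mu @ P @ mu) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)) * q)
    return const * (1 + T * norm.cdf(T) / norm.pdf(T))

def pn2_density_numeric(theta, mu, Sigma):
    """Reference value by marginalizing the joint density over the radius."""
    v = np.array([np.cos(theta), np.sin(theta)])
    f = lambda r: r * multivariate_normal.pdf(r * v, mean=mu, cov=Sigma)
    return quad(f, 0, np.inf)[0]

mu = np.array([1.0, 0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
for t in (0.0, 1.0, 2.5, 4.0):
    print(pn2_density_closed_form(t, mu, Sigma), pn2_density_numeric(t, mu, Sigma))
```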

Spherical distribution

For $n = 3$, parametrising the position on the unit sphere in spherical coordinates as $\boldsymbol{v} = (\cos\theta_1 \sin\theta_2, \sin\theta_1 \sin\theta_2, \cos\theta_2)$, where $\boldsymbol{\theta} = (\theta_1, \theta_2)$ are respectively the azimuth $\theta_1 \in [0, 2\pi)$ and inclination $\theta_2 \in [0, \pi]$ angles, the density function becomes

$$p(\boldsymbol{\theta} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{e^{-\frac{1}{2}\boldsymbol{\mu}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}}}{\sqrt{|\boldsymbol{\Sigma}|}\left(2\pi\, \boldsymbol{v}^\top \boldsymbol{\Sigma}^{-1} \boldsymbol{v}\right)^{3/2}} \left(\frac{\Phi(T(\boldsymbol{\theta}))}{\phi(T(\boldsymbol{\theta}))} + T(\boldsymbol{\theta})\left(1 + T(\boldsymbol{\theta}) \frac{\Phi(T(\boldsymbol{\theta}))}{\phi(T(\boldsymbol{\theta}))}\right)\right) I_{[0, 2\pi)}(\theta_1)\, I_{[0, \pi]}(\theta_2)$$

where $\phi$, $\Phi$, $T$, and $I$ have the same meanings as in the circular case.[8]
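The spherical density can be checked for normalization by integrating against the surface-area element $\sin\theta_2 \, d\theta_1\, d\theta_2$ (the Gram-determinant factor from the note on density definition). A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import dblquad

def pn3_density(theta1, theta2, mu, Sigma):
    """Closed-form PN_3 density w.r.t. the pullback (surface-area) measure."""
    v = np.array([np.cos(theta1) * np.sin(theta2),
                  np.sin(theta1) * np.sin(theta2),
                  np.cos(theta2)])
    P = np.linalg.inv(Sigma)
    q = v @ P @ v
    T = (v @ P @ mu) / np.sqrt(q)
    M = norm.cdf(T) / norm.pdf(T)  # the ratio Phi(T)/phi(T)
    const = np.exp(-0.5 * mu @ P @ mu) / (np.sqrt(np.linalg.det(Sigma)) * (2 * np.pi * q) ** 1.5)
    return const * (M + T * (1 + T * M))

mu = np.array([0.5, -0.2, 1.0])
Sigma = np.diag([1.0, 2.0, 0.5])

# Integrate over the sphere; sin(theta2) is the area element.
total, _ = dblquad(lambda t2, t1: pn3_density(t1, t2, mu, Sigma) * np.sin(t2),
                   0, 2 * np.pi, 0, np.pi)
print(total)  # ~ 1.0
```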

Angular central Gaussian distribution

In the special case $\boldsymbol{\mu} = \mathbf{0}$, the projected normal distribution with $n \geq 2$ is known as the angular central Gaussian (ACG) distribution, and in this case the density function can be obtained in closed form as a function of Cartesian coordinates. Let $\mathbf{x} \sim \mathcal{N}_n(\mathbf{0}, \boldsymbol{\Sigma})$ and project radially: $\mathbf{v} = \lVert \mathbf{x} \rVert^{-1} \mathbf{x}$, so that $\mathbf{v} \in \mathbb{S}^{n-1} = \{\mathbf{z} \in \mathbb{R}^n : \lVert \mathbf{z} \rVert = 1\}$ (the unit hypersphere). We write $\mathbf{v} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$, which, as explained above, at $\mathbf{v} = e(\boldsymbol{\theta})$ has density:

$$\tilde{p}_{\text{ACG}}(\mathbf{v} \mid \boldsymbol{\Sigma}) = p(\boldsymbol{\theta} \mid \mathbf{0}, \boldsymbol{\Sigma}) = \int_0^\infty r^{n-1} \mathcal{N}_n(r\mathbf{v} \mid \mathbf{0}, \boldsymbol{\Sigma})\, dr = \frac{\Gamma(\frac{n}{2})}{2\pi^{n/2}} \left|\boldsymbol{\Sigma}\right|^{-1/2} (\mathbf{v}' \boldsymbol{\Sigma}^{-1} \mathbf{v})^{-n/2}$$

where the integral can be solved by a change of variables and then using the standard definition of the gamma function. Notice that:

- scale invariance: $\tilde{p}_{\text{ACG}}(\mathbf{v} \mid k\boldsymbol{\Sigma}) = \tilde{p}_{\text{ACG}}(\mathbf{v} \mid \boldsymbol{\Sigma})$ for any $k > 0$;
- for isotropic covariance the distribution is uniform: $\tilde{p}_{\text{ACG}}(\mathbf{v} \mid k\mathbf{I}_n) = p_{\text{uniform}} = \frac{\Gamma(\frac{n}{2})}{2\pi^{n/2}}$.
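A short sketch of the ACG log-density (NumPy/SciPy assumed; function names illustrative), verifying both listed properties numerically:

```python
import numpy as np
from scipy.special import gammaln

def acg_logpdf(v, Sigma):
    """Log-density of ACG(Sigma) on the unit (n-1)-sphere (w.r.t. surface area)."""
    n = len(v)
    _, logdet = np.linalg.slogdet(Sigma)
    q = v @ np.linalg.solve(Sigma, v)
    return (gammaln(n / 2) - np.log(2) - (n / 2) * np.log(np.pi)
            - 0.5 * logdet - (n / 2) * np.log(q))

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 0.5]])

# Sampling: project a zero-mean Gaussian variate radially.
x = rng.multivariate_normal(np.zeros(3), Sigma)
v = x / np.linalg.norm(x)

# Scale invariance: ACG(k*Sigma) has the same density as ACG(Sigma).
print(acg_logpdf(v, Sigma), acg_logpdf(v, 7.3 * Sigma))  # equal
# Isotropic case reduces to the uniform density Gamma(n/2) / (2*pi^(n/2)).
print(np.exp(acg_logpdf(v, np.eye(3))), 1 / (4 * np.pi))  # equal
```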
ACG via transformation of normal or uniform variates

Let $\mathbf{T}$ be any $n$-by-$n$ invertible matrix such that $\mathbf{T}\mathbf{T}' = \boldsymbol{\Sigma}$. Let $\mathbf{u} \sim \operatorname{ACG}(\mathbf{I}_n)$ (uniform) and $s \sim \chi(n)$ (chi distribution), independently, so that $\mathbf{x} = s\mathbf{T}\mathbf{u} \sim \mathcal{N}_n(\mathbf{0}, \boldsymbol{\Sigma})$ (multivariate normal). Now consider:

$$\mathbf{v} = \frac{\mathbf{T}\mathbf{u}}{\lVert \mathbf{T}\mathbf{u} \rVert} = \frac{\mathbf{x}}{\lVert \mathbf{x} \rVert} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$$

which shows that the ACG distribution also results from applying the normalized linear transform to uniform variates:

$$f_{\mathbf{T}}(\mathbf{u}) = \frac{\mathbf{T}\mathbf{u}}{\lVert \mathbf{T}\mathbf{u} \rVert}$$

Some further explanation of these two ways to obtain $\mathbf{v} \sim \operatorname{ACG}(\boldsymbol{\Sigma})$ may be helpful: in the first construction, the radial projection discards the magnitude of the Gaussian variate $\mathbf{x}$; in the second, the chi-distributed radial factor $s$ cancels exactly in the normalization, since $s\mathbf{T}\mathbf{u} / \lVert s\mathbf{T}\mathbf{u} \rVert = \mathbf{T}\mathbf{u} / \lVert \mathbf{T}\mathbf{u} \rVert$ for $s > 0$. The two constructions therefore yield not just the same distribution but the same variate.
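The cancellation holds per sample, not merely in distribution, as the following Monte Carlo sketch (NumPy assumed) demonstrates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
T = np.linalg.cholesky(np.array([[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.2],
                                 [0.0, 0.2, 0.5]]))  # T T' = Sigma

z = rng.standard_normal((100000, n))
u = z / np.linalg.norm(z, axis=1, keepdims=True)     # u ~ ACG(I_n), i.e. uniform
s = np.sqrt(rng.chisquare(n, size=(100000, 1)))      # s ~ chi(n), independent of u

x = s * (u @ T.T)                                    # x ~ N_n(0, Sigma)
v1 = x / np.linalg.norm(x, axis=1, keepdims=True)    # project the normal variate
v2 = (u @ T.T) / np.linalg.norm(u @ T.T, axis=1, keepdims=True)  # f_T(u)

print(np.max(np.abs(v1 - v2)))  # ~ 0: the radial factor s cancels exactly
```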

Caveat: when $\boldsymbol{\mu}$ is nonzero, although $s\mathbf{T}\mathbf{u} + \boldsymbol{\mu} \sim \mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, a similar duality does not hold:

$$\frac{\mathbf{T}\mathbf{u} + \boldsymbol{\mu}}{\lVert \mathbf{T}\mathbf{u} + \boldsymbol{\mu} \rVert} \neq \frac{s\mathbf{T}\mathbf{u} + \boldsymbol{\mu}}{\lVert s\mathbf{T}\mathbf{u} + \boldsymbol{\mu} \rVert} \sim \mathcal{PN}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma})$$

Although we can radially project affine-transformed normal variates to get $\mathcal{PN}_n$ variates, this does not work for uniform variates.
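A quick Monte Carlo sketch (NumPy assumed; names illustrative) of this caveat: once $\boldsymbol{\mu} \neq \mathbf{0}$, the two projections follow different distributions, which typically shows up as a visible disagreement in their mean resultant vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 2, 200000
T = np.array([[1.0, 0.0], [0.4, 0.8]])   # T T' = Sigma
mu = np.array([1.5, 0.0])

z = rng.standard_normal((N, n))
u = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the circle
s = np.sqrt(rng.chisquare(n, size=(N, 1)))         # chi(n) radii

a = u @ T.T + mu           # T u + mu        (no radial factor)
b = s * (u @ T.T) + mu     # s T u + mu ~ N_n(mu, Sigma)

va = a / np.linalg.norm(a, axis=1, keepdims=True)
vb = b / np.linalg.norm(b, axis=1, keepdims=True)  # ~ PN_n(mu, Sigma)

# The mean resultant vectors differ: the two projections do not
# share a distribution when mu is nonzero.
print(va.mean(axis=0), vb.mean(axis=0))
```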

Wider application of the normalized linear transform

The normalized linear transform $\mathbf{v} = f_{\mathbf{T}}(\mathbf{u})$ is a bijection from the unit sphere to itself; the inverse is $\mathbf{u} = f_{\mathbf{T}^{-1}}(\mathbf{v})$. This transform is of independent interest, as it may be applied as a probabilistic flow on the hypersphere (similar to a normalizing flow) to generalize other (non-uniform) distributions on hyperspheres, for example the von Mises–Fisher distribution. The fact that we have a closed form for the ACG density allows us to recover, also in closed form, the differential volume change induced by this transform.

For the change of variables $\mathbf{v} = f_{\mathbf{T}}(\mathbf{u})$ on the manifold $\mathbb{S}^{n-1}$, the uniform and ACG densities are related as:

$$\tilde{p}_{\text{ACG}}(\mathbf{v} \mid \boldsymbol{\Sigma}) = \frac{p_{\text{uniform}}}{R(\mathbf{v}, \boldsymbol{\Sigma})}$$

where the (constant) uniform density is $p_{\text{uniform}} = \frac{\Gamma(n/2)}{2\pi^{n/2}}$ and $R(\mathbf{v}, \boldsymbol{\Sigma})$ is the differential volume change factor from the input to the output of the transformation; specifically, it is given by the absolute value of the determinant of an $(n-1)$-by-$(n-1)$ matrix:

$$R(\mathbf{v}, \boldsymbol{\Sigma}) = \operatorname{abs}\left|\mathbf{Q}_{\mathbf{v}}' \mathbf{J}_{\mathbf{u}} \mathbf{Q}_{\mathbf{u}}\right|$$

where $\mathbf{J}_{\mathbf{u}}$ is the $n$-by-$n$ Jacobian matrix of the transformation in Euclidean space, $f_{\mathbf{T}} : \mathbb{R}^n \to \mathbb{R}^n$, evaluated at $\mathbf{u}$. In Euclidean space, the transformation and its Jacobian are non-invertible; but when the domain and codomain are restricted to $\mathbb{S}^{n-1}$, $f_{\mathbf{T}} : \mathbb{S}^{n-1} \to \mathbb{S}^{n-1}$ is a bijection, and the induced differential volume ratio $R(\mathbf{v}, \boldsymbol{\Sigma})$ is obtained by projecting $\mathbf{J}_{\mathbf{u}}$ onto the $(n-1)$-dimensional tangent spaces at the transformation input and output: $\mathbf{Q}_{\mathbf{u}}$ and $\mathbf{Q}_{\mathbf{v}}$ are $n$-by-$(n-1)$ matrices whose orthonormal columns span those tangent spaces. Although the above determinant formula is relatively easy to evaluate numerically on a software platform equipped with linear algebra and automatic differentiation, a simple closed form is hard to derive directly. However, since we already have $\tilde{p}_{\text{ACG}}$, we can recover:

$$R(\mathbf{v}, \boldsymbol{\Sigma}) = \left|\boldsymbol{\Sigma}\right|^{1/2} (\mathbf{v}' \boldsymbol{\Sigma}^{-1} \mathbf{v})^{n/2} = \frac{\operatorname{abs}\left|\mathbf{T}\right|}{\lVert \mathbf{T}\mathbf{u} \rVert^n}$$

where in the final right-hand side it is understood that $\boldsymbol{\Sigma} = \mathbf{T}\mathbf{T}'$ and $\mathbf{u} = f_{\mathbf{T}^{-1}}(\mathbf{v})$.
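The agreement between the projected-determinant formula and the closed forms can be verified numerically. A sketch assuming NumPy/SciPy, which uses the analytic Euclidean Jacobian of $f_{\mathbf{T}}$ (derived from the quotient rule) in place of automatic differentiation:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
n = 3
T = np.linalg.cholesky(np.array([[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.2],
                                 [0.0, 0.2, 0.5]]))
Sigma = T @ T.T

u = rng.standard_normal(n)
u /= np.linalg.norm(u)                 # a point on the sphere
w = T @ u
v = w / np.linalg.norm(w)              # v = f_T(u)

# Euclidean Jacobian of f_T at u: d(w/||w||)/du = ((I - v v') / ||w||) T
J = ((np.eye(n) - np.outer(v, v)) / np.linalg.norm(w)) @ T

Qu = null_space(u[None, :])            # orthonormal basis of the tangent space at u
Qv = null_space(v[None, :])            # ... and at v

R_det = abs(np.linalg.det(Qv.T @ J @ Qu))
R_closed = np.sqrt(np.linalg.det(Sigma)) * (v @ np.linalg.solve(Sigma, v)) ** (n / 2)
R_T = abs(np.linalg.det(T)) / np.linalg.norm(w) ** n

print(R_det, R_closed, R_T)            # all three agree
```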

The normalized linear transform can now be used, for example, to give a closed-form density for a more flexible distribution on the hypersphere, generalizing the von Mises–Fisher distribution. Let $\mathbf{x} \sim \text{VMF}(\boldsymbol{\mu}, \kappa)$ and $\mathbf{v} = f_{\mathbf{T}}(\mathbf{x})$; the resulting density is:

$$p(\mathbf{v} \mid \boldsymbol{\mu}, \kappa, \mathbf{T}) = \frac{\tilde{p}_{\text{VMF}}\bigl(f_{\mathbf{T}^{-1}}(\mathbf{v}) \mid \boldsymbol{\mu}, \kappa\bigr)}{R(\mathbf{v}, \mathbf{T}\mathbf{T}')}$$
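A final sketch evaluating this flow density and checking its normalization over the sphere; it assumes scipy.stats.vonmises_fisher (available in SciPy 1.11 and later), and the helper name flow_density is illustrative:

```python
import numpy as np
from scipy.stats import vonmises_fisher   # SciPy >= 1.11
from scipy.integrate import dblquad

n = 3
T = np.linalg.cholesky(np.array([[1.5, 0.3, 0.0],
                                 [0.3, 1.0, 0.1],
                                 [0.0, 0.1, 0.7]]))
Tinv = np.linalg.inv(T)
mu = np.array([0.0, 0.0, 1.0])
vmf = vonmises_fisher(mu, 5.0)

def flow_density(v):
    """Density of v = f_T(x), x ~ VMF(mu, kappa), via the change of variables."""
    u = Tinv @ v
    u /= np.linalg.norm(u)                            # u = f_{T^{-1}}(v)
    R = abs(np.linalg.det(T)) / np.linalg.norm(T @ u) ** n
    return vmf.pdf(u) / R

# Normalization check over the sphere (area element sin(t2) dt1 dt2).
total, _ = dblquad(
    lambda t2, t1: flow_density(np.array([np.cos(t1) * np.sin(t2),
                                          np.sin(t1) * np.sin(t2),
                                          np.cos(t2)])) * np.sin(t2),
    0, 2 * np.pi, 0, np.pi)
print(total)  # ~ 1.0
```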
