
Autocovariance

From Wikipedia, the free encyclopedia

Concept in probability and statistics

In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question.

Auto-covariance of stochastic processes

With the usual notation E for the expectation operator, if the stochastic process {X_t} has the mean function μ_t = E[X_t], then the autocovariance is given by[1]: p. 162

\operatorname{K}_{XX}(t_1, t_2) = \operatorname{cov}\left[X_{t_1}, X_{t_2}\right] = \operatorname{E}\left[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})\right] = \operatorname{E}\left[X_{t_1} X_{t_2}\right] - \mu_{t_1}\mu_{t_2}    (Eq.1)

where t_1 and t_2 are two instants in time.
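
As a concrete illustration of Eq.1, the ensemble autocovariance can be estimated by averaging over many independent realizations of the process. The sketch below (an illustrative assumption, not from the article) uses a Gaussian random walk, for which the true autocovariance is the smaller of the two step counts:

```python
import numpy as np

# Ensemble estimate of K_XX(t1, t2) = E[X_t1 X_t2] - mu_t1 * mu_t2.
# Test process (an assumption): a random walk with unit-variance steps,
# whose true autocovariance is min(number of steps at t1, at t2).
rng = np.random.default_rng(0)
n_paths, n_steps = 200_000, 30
steps = rng.standard_normal((n_paths, n_steps))
X = np.cumsum(steps, axis=1)          # X at index i sums i + 1 steps

def autocov(X, t1, t2):
    """Sample estimate of K_XX(t1, t2), averaging over the ensemble axis."""
    x1, x2 = X[:, t1], X[:, t2]
    return np.mean(x1 * x2) - np.mean(x1) * np.mean(x2)

print(autocov(X, 9, 19))   # theory: min(10, 20) = 10 (indices are 0-based)
```

Averaging over realizations, rather than over time, is what the general (possibly non-stationary) definition in Eq.1 calls for.
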

Definition for weakly stationary process

If {X_t} is a weakly stationary (WSS) process, then the following are true:[1]: p. 163

\mu_{t_1} = \mu_{t_2} \triangleq \mu \quad \text{for all } t_1, t_2

and

\operatorname{E}\left[|X_t|^2\right] < \infty \quad \text{for all } t

and

\operatorname{K}_{XX}(t_1, t_2) = \operatorname{K}_{XX}(t_2 - t_1, 0) \triangleq \operatorname{K}_{XX}(t_2 - t_1) = \operatorname{K}_{XX}(\tau),

where τ = t_2 − t_1 is the lag time, or the amount of time by which the signal has been shifted.

The autocovariance function of a WSS process is therefore given by:[2]: p. 517 

\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_t - \mu_t)(X_{t-\tau} - \mu_{t-\tau})\right] = \operatorname{E}\left[X_t X_{t-\tau}\right] - \mu_t \mu_{t-\tau}    (Eq.2)

which is equivalent to

\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_{t+\tau} - \mu_{t+\tau})(X_t - \mu_t)\right] = \operatorname{E}\left[X_{t+\tau} X_t\right] - \mu^2.
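
For a WSS process observed as a single long time series, Eq.2 is typically estimated by replacing the ensemble average with a time average (assuming ergodicity). A minimal sketch, using white noise as a stand-in process (an assumption), for which K_XX(0) = σ² and K_XX(τ) = 0 for τ ≠ 0:

```python
import numpy as np

# Time-average estimate of K_XX(tau) for a WSS process, assuming
# ergodicity. Test signal (an assumption): white noise, mu = 2, sigma^2 = 9.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=500_000)

def autocov_wss(x, tau):
    """K_XX(tau) ~= mean over t of (X_t - mu)(X_{t+tau} - mu)."""
    mu = x.mean()
    n = len(x) - tau
    return np.mean((x[:n] - mu) * (x[tau:] - mu))

print(autocov_wss(x, 0))   # theory: sigma^2 = 9
print(autocov_wss(x, 5))   # theory: 0 for white noise at any nonzero lag
```
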

It is common practice in some disciplines (e.g., statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g., engineering) the normalization is usually dropped, and the terms "autocorrelation" and "autocovariance" are used interchangeably.

The definition of the normalized auto-correlation of a stochastic process is

\rho_{XX}(t_1, t_2) = \frac{\operatorname{K}_{XX}(t_1, t_2)}{\sigma_{t_1}\sigma_{t_2}} = \frac{\operatorname{E}\left[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})\right]}{\sigma_{t_1}\sigma_{t_2}}.

If the function ρ_XX is well-defined, its value must lie in the range [−1, 1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.

For a WSS process, the definition is

\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{E}\left[(X_t - \mu)(X_{t+\tau} - \mu)\right]}{\sigma^2},

where

\operatorname{K}_{XX}(0) = \sigma^2.

The autocovariance function is Hermitian-symmetric:

\operatorname{K}_{XX}(t_1, t_2) = \overline{\operatorname{K}_{XX}(t_2, t_1)} [3]: p. 169

and, correspondingly, for a WSS process:

\operatorname{K}_{XX}(\tau) = \overline{\operatorname{K}_{XX}(-\tau)} [3]: p. 173
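
The normalized autocorrelation and the K_XX(0) = σ² identity can be checked numerically. This sketch assumes an AR(1) test process with φ = 0.8, whose theoretical ρ_XX(τ) is φ^|τ|; for a real-valued process the conjugation in the symmetry property is trivial, so K_XX(τ) = K_XX(−τ):

```python
import numpy as np

# Normalized autocorrelation rho_XX(tau) = K_XX(tau) / K_XX(0), estimated
# from one long realization. The AR(1) process (phi = 0.8) is an assumed
# test case with known theoretical rho_XX(tau) = phi**abs(tau).
rng = np.random.default_rng(2)
phi, n = 0.8, 400_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):                  # AR(1): X_t = phi * X_{t-1} + eps_t
    x[t] = phi * x[t - 1] + eps[t]

def K(x, tau):
    """Sample K_XX(tau); for a real process K(-tau) = K(tau)."""
    tau = abs(tau)
    mu = x.mean()
    return np.mean((x[:len(x) - tau] - mu) * (x[tau:] - mu))

rho = K(x, 3) / K(x, 0)
print(rho)                             # theory: 0.8**3 = 0.512
```

Dividing by K_XX(0) = σ² guarantees the result lies in [−1, 1], as stated above.
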

The autocovariance of a linearly filtered process {Y_t}

Y_t = \sum_{k=-\infty}^{\infty} a_k X_{t+k}

is

\operatorname{K}_{YY}(\tau) = \sum_{k,l=-\infty}^{\infty} a_k a_l \operatorname{K}_{XX}(\tau + k - l).
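
With unit-variance white-noise input, K_XX(τ + k − l) reduces to δ(τ + k − l), so the formula predicts K_YY(τ) = Σ_k a_k a_{k+τ}. A numerical check with a hypothetical two-tap filter a = (0.5, 0.5):

```python
import numpy as np

# Check K_YY(tau) = sum_{k,l} a_k a_l K_XX(tau + k - l) for a finite filter
# on unit-variance white noise, where K_XX(tau) = delta(tau) and therefore
# K_YY(tau) = sum_k a_k a_{k+tau}. The filter a = (0.5, 0.5) is an assumption.
rng = np.random.default_rng(3)
a = np.array([0.5, 0.5])
x = rng.standard_normal(1_000_000)
y = a[0] * x[:-1] + a[1] * x[1:]      # Y_t = a_0 X_t + a_1 X_{t+1}

def K(x, tau):
    """Sample autocovariance at lag tau (time average, mean removed)."""
    mu = x.mean()
    return np.mean((x[:len(x) - tau] - mu) * (x[tau:] - mu))

print(K(y, 0))   # theory: 0.5**2 + 0.5**2 = 0.5
print(K(y, 1))   # theory: 0.5 * 0.5 = 0.25
print(K(y, 2))   # theory: 0 (lag exceeds the filter length)
```
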

Calculating turbulent diffusivity

Autocovariance can be used to calculate turbulent diffusivity.[4] Turbulence in a flow causes the velocity to fluctuate in space and time, so turbulence can be identified and characterized through the statistics of those fluctuations.

Reynolds decomposition is used to define the velocity fluctuations u′(x, t) (assume we are working with a one-dimensional problem and that U(x, t) is the velocity along the x direction):

U(x, t) = \langle U(x, t)\rangle + u'(x, t),

where U(x, t) is the true velocity and ⟨U(x, t)⟩ is the expected value of the velocity. If we choose a correct ⟨U(x, t)⟩, all of the stochastic components of the turbulent velocity will be included in u′(x, t). To determine ⟨U(x, t)⟩, a set of velocity measurements assembled from points in space, moments in time, or repeated experiments is required.
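
A minimal sketch of this decomposition for repeated experiments, with ⟨U⟩ estimated by an ensemble average over runs (the synthetic measurement data are an assumption):

```python
import numpy as np

# Reynolds decomposition from repeated experiments: <U(x,t)> is estimated
# by averaging over the ensemble axis, and u'(x,t) is the remainder.
# The steady mean flow of 3.0 and the noise scale are assumed test values.
rng = np.random.default_rng(5)
n_runs, n_t = 10_000, 100
U = 3.0 + rng.normal(scale=0.5, size=(n_runs, n_t))  # measured U at one point

U_mean = U.mean(axis=0)                # <U> at each time, over repeated runs
u_prime = U - U_mean                   # fluctuations u' = U - <U>

print(u_prime.mean())                  # ~ 0 by construction of u'
```

By construction the fluctuations average to zero, which is what makes ⟨u′c′⟩ below a pure covariance (flux) term.
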

If we assume that the turbulent flux ⟨u′c′⟩ (where c′ = c − ⟨c⟩ and c is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:

J_{\text{turbulence}_x} = \langle u'c'\rangle \approx D_{T_x}\frac{\partial \langle c\rangle}{\partial x}.

The velocity autocovariance is defined as

\operatorname{K}_{XX} \equiv \langle u'(t_0)\, u'(t_0 + \tau)\rangle \quad \text{or} \quad \operatorname{K}_{XX} \equiv \langle u'(x_0)\, u'(x_0 + r)\rangle,

where τ is the lag time and r is the lag distance.

The turbulent diffusivity D_{T_x} can then be calculated from the velocity autocovariance.
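
One classical estimate (Taylor's result that the diffusivity equals the time integral of the velocity autocovariance; the article's own list of methods is not reproduced here) can be sketched as follows, using a synthetic AR(1) fluctuation series as an assumed stand-in for measured u′:

```python
import numpy as np

# Taylor-style estimate D_T ~= integral over tau of K_XX(tau).
# The AR(1) velocity-fluctuation series (phi = 0.8) is an assumption:
# its K(tau) = sigma_u^2 * phi**tau with sigma_u^2 = 1 / (1 - phi**2).
rng = np.random.default_rng(4)
phi, n = 0.8, 400_000
eps = rng.standard_normal(n)
u = np.empty(n)
u[0] = 0.0
for t in range(1, n):                  # u'_t = phi * u'_{t-1} + eps_t
    u[t] = phi * u[t - 1] + eps[t]

def K(u, tau):
    """Sample velocity autocovariance at lag tau (time average)."""
    mu = u.mean()
    return np.mean((u[:len(u) - tau] - mu) * (u[tau:] - mu))

# Trapezoid rule over lags 0..50 with unit spacing; K decays like phi**tau,
# so the truncated tail is negligible.
vals = np.array([K(u, tau) for tau in range(51)])
D_T = vals[0] / 2 + vals[1:].sum()
print(D_T)   # theory: (1/(1-phi**2)) * (1/2 + phi/(1-phi)) = 12.5
```
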

Auto-covariance of random vectors
