From Wikipedia, the free encyclopedia
In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.
Measure-theoretic definition

Uniform integrability is an extension of the notion of a family of functions being dominated in L 1 {\displaystyle L_{1}} , which is central to dominated convergence. Several textbooks on real analysis and measure theory use the following definition:[1][2]
Definition A: Let ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} be a positive measure space. A set Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L^{1}(\mu )} is called uniformly integrable if sup f ∈ Φ ‖ f ‖ L 1 ( μ ) < ∞ {\displaystyle \sup _{f\in \Phi }\|f\|_{L_{1}(\mu )}<\infty } , and to each ε > 0 {\displaystyle \varepsilon >0} there corresponds a δ > 0 {\displaystyle \delta >0} such that
- ∫ E | f | d μ < ε {\displaystyle \int _{E}|f|\,d\mu <\varepsilon }
whenever f ∈ Φ {\displaystyle f\in \Phi } and μ ( E ) < δ . {\displaystyle \mu (E)<\delta .}
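To make Definition A concrete, here is a small Python sketch (our own illustration, not part of the cited treatments) of the classic family f_n = n·1_{[0,1/n]} on [0, 1] with Lebesgue measure: each f_n has unit L1 norm, yet no single δ works for all n at once, so the family is not uniformly integrable. The helper name `integral_over_set` is ours.

```python
# Classic example: f_n = n * 1_[0, 1/n] on [0, 1] with Lebesgue measure.
# Each f_n has L1 norm 1, but the family fails Definition A: given any
# delta > 0, pick n with 1/n < delta; then E = [0, 1/n] has measure
# less than delta while the integral of |f_n| over E equals 1.

def integral_over_set(n, a, b):
    """Integral of f_n = n * 1_[0, 1/n] over the interval [a, b] within [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0 / n)
    return n * max(hi - lo, 0.0)

delta = 1e-3
n = 10_000                                   # 1/n = 1e-4 < delta
mass = integral_over_set(n, 0.0, 1.0 / n)    # set of measure 1/n < delta
print(mass)                                  # ≈ 1.0: no delta works uniformly
```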
Definition A is rather restrictive for infinite measure spaces. A more general definition[3] of uniform integrability, which works well on general measure spaces, was introduced by G. A. Hunt.
Definition H: Let ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} be a positive measure space. A set Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L^{1}(\mu )} is called uniformly integrable if and only if
- inf g ∈ L + 1 ( μ ) sup f ∈ Φ ∫ { | f | > g } | f | d μ = 0 {\displaystyle \inf _{g\in L_{+}^{1}(\mu )}\sup _{f\in \Phi }\int _{\{|f|>g\}}|f|\,d\mu =0}
where L + 1 ( μ ) = { g ∈ L 1 ( μ ) : g ≥ 0 } {\displaystyle L_{+}^{1}(\mu )=\{g\in L^{1}(\mu ):g\geq 0\}} .
Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.
The following result[4] provides another notion equivalent to Hunt's. This equivalence is sometimes taken as the definition of uniform integrability.
Theorem 1: If ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} is a (positive) finite measure space, then a set Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L^{1}(\mu )} is uniformly integrable if and only if
- inf g ∈ L + 1 ( μ ) sup f ∈ Φ ∫ ( | f | − g ) + d μ = 0 {\displaystyle \inf _{g\in L_{+}^{1}(\mu )}\sup _{f\in \Phi }\int (|f|-g)^{+}\,d\mu =0}
If in addition μ ( X ) < ∞ {\displaystyle \mu (X)<\infty } , then uniform integrability is equivalent to either of the following conditions
1. inf a > 0 sup f ∈ Φ ∫ ( | f | − a ) + d μ = 0 {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int (|f|-a)^{+}\,d\mu =0} .
2. inf a > 0 sup f ∈ Φ ∫ { | f | > a } | f | d μ = 0 {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int _{\{|f|>a\}}|f|\,d\mu =0}
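The equivalence of conditions 1 and 2 rests on two elementary pointwise bounds, (|f| − a)⁺ ≤ |f|·1_{|f|>a} and |f|·1_{|f|>2a} ≤ 2(|f| − a)⁺, which transfer smallness of one quantity to the other after integrating. A quick numerical sanity check of these bounds (an illustration of ours, not a proof):

```python
import random

def pos(t):
    """Positive part t^+ = max(t, 0)."""
    return max(t, 0.0)

random.seed(0)
for _ in range(10_000):
    f = random.uniform(-100.0, 100.0)   # a sample value; only |f| matters
    a = random.uniform(0.01, 50.0)
    lhs = pos(abs(f) - a)
    tail_a = abs(f) if abs(f) > a else 0.0
    tail_2a = abs(f) if abs(f) > 2 * a else 0.0
    # (|f| - a)^+ <= |f| * 1_{|f| > a}
    assert lhs <= tail_a + 1e-12
    # |f| * 1_{|f| > 2a} <= 2 (|f| - a)^+ , since |f| > 2a gives |f| - a > |f|/2
    assert tail_2a <= 2 * lhs + 1e-12
print("pointwise bounds verified")
```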
When the underlying space ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} is σ {\displaystyle \sigma } -finite, Hunt's definition is equivalent to the following:
Theorem 2: Let ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} be a σ {\displaystyle \sigma } -finite measure space, and h ∈ L 1 ( μ ) {\displaystyle h\in L^{1}(\mu )} be such that h > 0 {\displaystyle h>0} almost everywhere. A set Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L^{1}(\mu )} is uniformly integrable if and only if sup f ∈ Φ ‖ f ‖ L 1 ( μ ) < ∞ {\displaystyle \sup _{f\in \Phi }\|f\|_{L_{1}(\mu )}<\infty } , and for any ε > 0 {\displaystyle \varepsilon >0} , there exists δ > 0 {\displaystyle \delta >0} such that
- sup f ∈ Φ ∫ A | f | d μ < ε {\displaystyle \sup _{f\in \Phi }\int _{A}|f|\,d\mu <\varepsilon }
whenever ∫ A h d μ < δ {\displaystyle \int _{A}h\,d\mu <\delta } .
A consequence of Theorems 1 and 2 is the equivalence of Definitions A and H for finite measures. Indeed, the statement in Definition A is obtained by taking h ≡ 1 {\displaystyle h\equiv 1} in Theorem 2.
Probability definition

In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the notation of expectation of random variables,[5][6][7] that is,
1. A class C {\displaystyle {\mathcal {C}}} of random variables is called uniformly integrable if lim K → ∞ sup X ∈ C E ( | X | I | X | ≥ K ) = 0 {\displaystyle \lim _{K\to \infty }\sup _{X\in {\mathcal {C}}}\operatorname {E} (|X|I_{|X|\geq K})=0} ,
or alternatively
2. A class C {\displaystyle {\mathcal {C}}} of random variables is called uniformly integrable (UI) if for every ε > 0 {\displaystyle \varepsilon >0} there exists K ∈ [ 0 , ∞ ) {\displaystyle K\in [0,\infty )} such that E ( | X | I | X | ≥ K ) ≤ ε for all X ∈ C {\displaystyle \operatorname {E} (|X|I_{|X|\geq K})\leq \varepsilon \ {\text{ for all }}X\in {\mathcal {C}}} , where I | X | ≥ K {\displaystyle I_{|X|\geq K}} is the indicator function I | X | ≥ K = { 1 if | X | ≥ K , 0 if | X | < K . {\displaystyle I_{|X|\geq K}={\begin{cases}1&{\text{if }}|X|\geq K,\\0&{\text{if }}|X|<K.\end{cases}}} .
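As a numerical illustration of ours (not from the cited texts), for a single integrable random variable such as X ~ Exp(1), the tail expectation E[X·1_{X≥K}] has the closed form (K+1)e^{−K}, which vanishes as K → ∞; a family dominated by such an X inherits a uniform tail bound. A Monte Carlo check:

```python
import math
import random

def tail_expectation(K):
    """Exact E[X 1_{X >= K}] for X ~ Exp(1): integral of x e^{-x} over [K, inf)."""
    return (K + 1) * math.exp(-K)

random.seed(1)
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]
for K in (2.0, 5.0, 10.0):
    mc = sum(x for x in samples if x >= K) / n   # Monte Carlo tail expectation
    exact = tail_expectation(K)
    print(K, round(mc, 4), round(exact, 4))      # both shrink as K grows
```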
Tightness and uniform integrability

Another concept associated with uniform integrability is that of tightness. In this article, tightness is taken in a more general setting.
Definition: Let ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} be a measure space, and let K ⊂ M {\displaystyle {\mathcal {K}}\subset {\mathfrak {M}}} be a collection of sets of finite measure. A family Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L_{1}(\mu )} is tight with respect to K {\displaystyle {\mathcal {K}}} if
- inf K ∈ K sup f ∈ Φ ∫ X ∖ K | f | d μ = 0 {\displaystyle \inf _{K\in {\mathcal {K}}}\sup _{f\in \Phi }\int _{X\setminus K}|f|\,d\mu =0}
A family that is tight with respect to the collection of all measurable sets of finite measure is simply said to be tight.
When the measure space ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} is a metric space equipped with the Borel σ {\displaystyle \sigma } -algebra, μ {\displaystyle \mu } is a regular measure, and K {\displaystyle {\mathcal {K}}} is the collection of all compact subsets of X {\displaystyle X} , the notion of K {\displaystyle {\mathcal {K}}} -tightness discussed above coincides with the well-known concept of tightness used in the analysis of regular measures on metric spaces.
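A standard counterexample helps separate tightness from Definition A: the translated bumps f_n = 1_{[n,n+1]} on (ℝ, Lebesgue) satisfy Definition A (since ∫_E |f_n| dμ ≤ μ(E)) but are not tight, so on infinite measure spaces Definition A does not imply Definition H. A small sketch (the helper name is ours), using the candidate sets K = [−M, M]:

```python
# The translated bumps f_n = 1_{[n, n+1]} on (R, Lebesgue) are L1-bounded
# but not tight: for any bounded set K = [-M, M], a bump with n > M lies
# entirely outside K, so the mass outside K is 1 for some member of the family.

def mass_outside(n, M):
    """Integral of f_n = 1_{[n, n+1]} over the complement of [-M, M]."""
    inside = max(0.0, min(n + 1, M) - max(n, -M))  # length of [n, n+1] ∩ [-M, M]
    return 1.0 - inside

M = 1_000_000
print(mass_outside(M + 1, M))   # 1.0: some bump always escapes [-M, M]
```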
For σ {\displaystyle \sigma } -finite measure spaces, it can be shown that if a family Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L_{1}(\mu )} is uniformly integrable, then Φ {\displaystyle \Phi } is tight. This is captured by the following result, which is often used as the definition of uniform integrability in the analysis literature:
Theorem 3: Suppose ( X , M , μ ) {\displaystyle (X,{\mathfrak {M}},\mu )} is a σ {\displaystyle \sigma } -finite measure space. A family Φ ⊂ L 1 ( μ ) {\displaystyle \Phi \subset L_{1}(\mu )} is uniformly integrable if and only if
- sup f ∈ Φ ‖ f ‖ 1 < ∞ {\displaystyle \sup _{f\in \Phi }\|f\|_{1}<\infty } .
- inf a > 0 sup f ∈ Φ ∫ { | f | > a } | f | d μ = 0 {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int _{\{|f|>a\}}|f|\,d\mu =0}
- Φ {\displaystyle \Phi } is tight.
When μ ( X ) < ∞ {\displaystyle \mu (X)<\infty } , condition 3 is redundant (see Theorem 1 above).
Uniform absolute continuity

There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory, and which does not require random variables to have a finite integral.
Definition: Suppose ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} is a probability space. A class C {\displaystyle {\mathcal {C}}} of random variables is uniformly absolutely continuous with respect to P {\displaystyle P} if for any ε > 0 {\displaystyle \varepsilon >0} , there is δ > 0 {\displaystyle \delta >0} such that E [ | X | I A ] < ε {\displaystyle E[|X|I_{A}]<\varepsilon } whenever P ( A ) < δ {\displaystyle P(A)<\delta } .
It is equivalent to uniform integrability if the measure is finite and has no atoms.
The term "uniform absolute continuity" is not standard,[citation needed] but is used by some authors.[9][10]
The following results apply to the probabilistic definition. They remain valid in the general measure-theoretic framework, regardless of the finiteness of the measure, once the boundedness condition is imposed on the chosen subset of L 1 ( μ ) {\displaystyle L^{1}(\mu )} .
Uniform integrability and stochastic ordering

A family of random variables { X i } i ∈ I {\displaystyle \{X_{i}\}_{i\in I}} is uniformly integrable if and only if[16] there exists a random variable X {\displaystyle X} such that E X < ∞ {\displaystyle EX<\infty } and | X i | ≤ i c x X {\displaystyle |X_{i}|\leq _{\mathrm {icx} }X} for all i ∈ I {\displaystyle i\in I} , where ≤ i c x {\displaystyle \leq _{\mathrm {icx} }} denotes the increasing convex stochastic order defined by A ≤ i c x B {\displaystyle A\leq _{\mathrm {icx} }B} if E ϕ ( A ) ≤ E ϕ ( B ) {\displaystyle E\phi (A)\leq E\phi (B)} for all nondecreasing convex real functions ϕ {\displaystyle \phi } .
Relation to convergence of random variables

A sequence { X n } {\displaystyle \{X_{n}\}} converges to X {\displaystyle X} in the L 1 {\displaystyle L_{1}} norm if and only if it converges in measure to X {\displaystyle X} and is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in mean if and only if it is uniformly integrable.[17] This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
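The classical counterexample showing why uniform integrability cannot be dropped here is X_n = n·1_{U<1/n} for U uniform on (0, 1): X_n → 0 in probability while E|X_n| = 1 for every n, so L1 convergence fails. The exact values can be computed directly (an illustration of ours):

```python
# X_n = n * 1_{U < 1/n} for a single uniform U on (0, 1):
# P(X_n != 0) = 1/n -> 0, so X_n -> 0 in probability, yet
# E|X_n| = n * P(U < 1/n) = 1 for every n. The sequence is not
# uniformly integrable, and L1 convergence fails, as the theorem predicts.

def prob_nonzero(n):
    return 1.0 / n            # P(X_n != 0)

def expectation(n):
    return n * (1.0 / n)      # E|X_n|

for n in (10, 1_000, 100_000):
    print(n, prob_nonzero(n), expectation(n))   # probability shrinks, mean stays 1
```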