Lec 3

The document defines convergence in distribution and provides examples. It then proves that convergence in probability implies convergence in distribution. A central limit theorem for Poisson random variables is demonstrated using Lévy's continuity theorem. Finally, the strong law of large numbers and related concepts are introduced.


A. K. MAJEE

Definition 1.4 (Convergence in distribution). Let $X, X_1, X_2, \dots$ be real-valued random variables with distribution functions $F_X, F_{X_1}, F_{X_2}, \dots$ respectively. We say that $(X_n)$ converges to $X$ in distribution, denoted by $X_n \xrightarrow{d} X$, if
$$\lim_{n\to\infty} F_{X_n}(x) = F_X(x) \quad \text{for all continuity points } x \text{ of } F_X.$$
Remark 1.2. In the above definition, the random variables X, {Xn } need not be defined on
the same probability space.
Example 1.11. Let $X_n = \frac{1}{n}$ and $X = 0$. Then
$$F_{X_n}(x) = P(X_n \le x) = \begin{cases} 1, & x \ge \frac{1}{n},\\ 0, & \text{otherwise}, \end{cases} \qquad F_X(x) = \begin{cases} 1, & x \ge 0,\\ 0, & x < 0. \end{cases}$$
Observe that $0$ is the only discontinuity point of $F_X$, and $\lim_{n\to\infty} F_{X_n}(x) = F_X(x)$ for $x \neq 0$. Thus, $X_n \xrightarrow{d} 0$.
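The pointwise behaviour in Example 1.11 can be checked directly, since both distribution functions are explicit step functions. The helper names `F_Xn` and `F_X` below are ours, not from the notes; this is only a numeric illustration of the definition.

```python
# Illustration of Example 1.11: F_{X_n}(x) = 1 for x >= 1/n (else 0),
# and F_X is the distribution function of the constant 0.
def F_Xn(x, n):
    return 1.0 if x >= 1.0 / n else 0.0

def F_X(x):
    return 1.0 if x >= 0 else 0.0

# At any continuity point x != 0 of F_X, F_{X_n}(x) agrees with F_X(x)
# once n is large enough.
for x in (-0.5, 0.5):
    assert F_Xn(x, n=10**6) == F_X(x)

# At the discontinuity point x = 0 it does not: F_{X_n}(0) = 0 for every n,
# while F_X(0) = 1. Definition 1.4 does not require convergence there.
assert F_Xn(0.0, n=10**6) == 0.0 and F_X(0.0) == 1.0
```

This is why the definition restricts attention to continuity points: demanding convergence at $x = 0$ here would wrongly exclude this example.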
Example 1.12. Let $X$ be a real-valued random variable with distribution function $F$. Define $X_n = X + \frac{1}{n}$. Then
$$F_{X_n}(x) = P\Big(X + \tfrac{1}{n} \le x\Big) = F\Big(x - \tfrac{1}{n}\Big) \implies \lim_{n\to\infty} F_{X_n}(x) = F(x-) = F(x) \quad \text{for every continuity point } x \text{ of } F.$$
This implies that $X_n \xrightarrow{d} X$.
Theorem 1.12. $X_n \xrightarrow{P} X$ implies that $X_n \xrightarrow{d} X$.
Proof. Let $\varepsilon > 0$. Since $F_{X_n}(t) = P(X_n \le t)$, we have
$$\begin{aligned}
F_{X_n}(t) &= P(X_n \le t,\ |X_n - X| > \varepsilon) + P(X_n \le t,\ |X_n - X| \le \varepsilon)\\
&\le P(|X_n - X| > \varepsilon) + P(X_n \le t,\ |X_n - X| \le \varepsilon)\\
&\le P(|X_n - X| > \varepsilon) + P(X \le t + \varepsilon)\\
&= P(|X_n - X| > \varepsilon) + F_X(t + \varepsilon),
\end{aligned}$$
and
$$\begin{aligned}
F_X(t - \varepsilon) = P(X \le t - \varepsilon) &= P(X \le t - \varepsilon,\ |X_n - X| > \varepsilon) + P(X \le t - \varepsilon,\ |X_n - X| \le \varepsilon)\\
&\le P(|X_n - X| > \varepsilon) + P(X \le t - \varepsilon,\ |X_n - X| \le \varepsilon)\\
&\le P(|X_n - X| > \varepsilon) + P(X_n \le t)\\
&= P(|X_n - X| > \varepsilon) + F_{X_n}(t).
\end{aligned}$$
Thus, since $\lim_{n\to\infty} P(|X_n - X| > \varepsilon) = 0$, we obtain from the above inequalities
$$F_X(t - \varepsilon) \le \liminf_{n\to\infty} F_{X_n}(t) \le \limsup_{n\to\infty} F_{X_n}(t) \le F_X(t + \varepsilon).$$
Let $t$ be a continuity point of $F_X$. Then sending $\varepsilon \to 0$ in the above inequality, we get
$$\lim_{n\to\infty} F_{X_n}(t) = F_X(t), \quad \text{i.e., } X_n \xrightarrow{d} X. \qquad \square$$

The converse of this theorem is NOT true in general.
Example 1.13. Let $X \sim N(0,1)$. Define $X_n = -X$ for $n = 1, 2, 3, \dots$. Then $X_n \sim N(0,1)$ and hence $X_n \xrightarrow{d} X$. But
$$P(|X_n - X| > \varepsilon) = P(|2X| > \varepsilon) = P\Big(|X| > \frac{\varepsilon}{2}\Big) \not\to 0 \implies X_n \not\xrightarrow{P} X.$$
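A quick simulation makes Example 1.13 concrete: $X$ and $-X$ share the same law, yet $|X_n - X| = 2|X|$ has a fixed, nondegenerate distribution for every $n$. The sample size and tolerance below are our choices for illustration.

```python
# Example 1.13 by simulation: X ~ N(0,1), X_n = -X for all n.
import random

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Same distribution: the empirical mean of -X is exactly the negative of
# the empirical mean of X, and both laws are N(0,1).
mean_X = sum(samples) / len(samples)
mean_negX = sum(-x for x in samples) / len(samples)
assert abs(mean_X + mean_negX) < 1e-12

# No convergence in probability: P(|X_n - X| > 1) = P(|X| > 1/2) ≈ 0.617
# for every n, so it cannot tend to 0.
frac = sum(1 for x in samples if abs(2 * x) > 1.0) / len(samples)
assert 0.55 < frac < 0.68
```

The point is that convergence in distribution only compares laws; it says nothing about the joint behaviour of $X_n$ and $X$ on a common probability space.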
PROBABILITY AND STOCHASTIC PROCESS

Theorem 1.13 (Continuity theorem). Let $X, \{X_n\}$ be random variables having the characteristic functions $\phi_X, \{\phi_{X_n}\}$ respectively. Then the following are equivalent.
i) $X_n \xrightarrow{d} X$.
ii) $E(g(X_n)) \to E(g(X))$ for every bounded Lipschitz continuous function $g$.
iii) $\lim_{n\to\infty} \phi_{X_n}(t) = \phi_X(t)$ for all $t \in \mathbb{R}$.
Example 1.14. Let $\{X_n\}$ be a sequence of Poisson random variables with parameter $\lambda_n = n$. Define $Z_n = \frac{X_n - n}{\sqrt{n}}$. Then
$$Z_n \xrightarrow{d} Z, \quad \text{where } \mathcal{L}(Z) = N(0,1).$$
Solution: To see this, we use Lévy's continuity theorem. Let $\Phi_{Z_n} : \mathbb{R} \to \mathbb{C}$ be the characteristic function of $Z_n$. Then we have
$$\Phi_{Z_n}(u) = E\big[e^{iuZ_n}\big] = e^{-iu\sqrt{n}}\, E\big[e^{i\frac{u}{\sqrt{n}} X_n}\big] = e^{-iu\sqrt{n}}\, e^{n\big(e^{iu/\sqrt{n}} - 1\big)}.$$
Using Taylor's series expansion, we have
$$e^{\frac{iu}{\sqrt{n}}} - 1 = \frac{iu}{\sqrt{n}} - \frac{u^2}{2n} - \frac{iu^3}{6\, n^{3/2}} + \cdots$$
and hence we get
$$\Phi_{Z_n}(u) = e^{-iu\sqrt{n}}\, e^{n\big(\frac{iu}{\sqrt{n}} - \frac{u^2}{2n} - \frac{iu^3}{6 n^{3/2}} + \cdots\big)} = e^{-iu\sqrt{n}}\, e^{iu\sqrt{n}}\, e^{-\frac{u^2}{2}}\, e^{-\frac{h(u,n)}{\sqrt{n}}},$$
where $h(u,n)$ stays bounded in $n$ for each $u$ and hence $\lim_{n\to\infty} \frac{h(u,n)}{\sqrt{n}} = 0$. Therefore, we have
$$\lim_{n\to\infty} \Phi_{Z_n}(u) = \lim_{n\to\infty} e^{-iu\sqrt{n}}\, e^{iu\sqrt{n}}\, e^{-\frac{u^2}{2}}\, e^{-\frac{h(u,n)}{\sqrt{n}}} = e^{-\frac{u^2}{2}}.$$
Since $e^{-\frac{u^2}{2}}$ is the characteristic function of $N(0,1)$, we conclude that $Z_n \xrightarrow{d} Z$.
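The limit in Example 1.14 can be verified numerically: $\Phi_{Z_n}(u) = e^{-iu\sqrt{n}}\, e^{n(e^{iu/\sqrt{n}} - 1)}$ is an exact closed form, so we can evaluate it with complex arithmetic and watch it approach $e^{-u^2/2}$. This is a sanity check of the computation, not part of the proof; the chosen values of $u$ and $n$ are arbitrary.

```python
# Numeric check of the characteristic-function limit in Example 1.14.
import cmath, math

def phi_Zn(u, n):
    # Exact characteristic function of Z_n = (X_n - n)/sqrt(n), X_n ~ Poisson(n).
    s = math.sqrt(n)
    return cmath.exp(-1j * u * s) * cmath.exp(n * (cmath.exp(1j * u / s) - 1))

def phi_normal(u):
    # Characteristic function of N(0,1).
    return cmath.exp(-u * u / 2)

# The error shrinks as n grows, in line with the h(u,n)/sqrt(n) -> 0 estimate.
for u in (0.5, 1.0, 2.0):
    err_small_n = abs(phi_Zn(u, 10) - phi_normal(u))
    err_large_n = abs(phi_Zn(u, 10_000) - phi_normal(u))
    assert err_large_n < err_small_n and err_large_n < 1e-2
```

The observed error behaves like $|h(u,n)|/\sqrt{n}$, matching the remainder term isolated in the derivation above.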
Theorem 1.14 (Strong law of large numbers (SLLN)). Let $\{X_i\}$ be a sequence of i.i.d. random variables with finite mean $\mu$ and variance $\sigma^2$. Then
$$\frac{S_n}{n} \xrightarrow{a.s.} \mu, \quad \text{where } S_n = \sum_{i=1}^n X_i.$$
In the special case where a finite 4th-order moment is assumed, the above theorem is referred to as Borel's SLLN. Before proving the theorem, let us consider some examples.
Example 1.15. 1) Let $\{X_n\}$ be a sequence of i.i.d. random variables that are bounded, i.e., there exists $C < \infty$ such that $P(|X_1| \le C) = 1$. Then $\frac{S_n}{n} \xrightarrow{a.s.} E(X_1)$.
2) Let $\{X_n\}$ be a sequence of i.i.d. Bernoulli$(p)$ random variables. Then $\mu = p$ and $\sigma^2 = p(1-p)$. Hence by the SLLN, we have
$$\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n X_i = p \quad \text{with probability } 1.$$
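Part 2) of Example 1.15 is easy to watch in simulation: the running average of Bernoulli$(p)$ draws settles near $p$. The parameters below ($p = 0.3$, sample size, tolerance) are illustrative choices, and a single simulated path of course cannot prove almost-sure convergence.

```python
# Illustration of Example 1.15(2): S_n/n for i.i.d. Bernoulli(p) draws.
import random

random.seed(42)
p = 0.3
n = 200_000

# S_n counts successes among n independent Bernoulli(p) trials.
S_n = sum(1 if random.random() < p else 0 for _ in range(n))

# For this large n the sample mean sits close to p = mu.
assert abs(S_n / n - p) < 0.01
```

The tolerance is generous: the standard deviation of $S_n/n$ here is $\sqrt{p(1-p)/n} \approx 0.001$, so a deviation above $0.01$ would be extremely unlikely.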
To prove the theorem, we need the following lemma.
Lemma 1.15. Let $\{X_n\}$ be a sequence of random variables defined on a given probability space $(\Omega, \mathcal{F}, P)$.
i) If the $X_n$ are positive, then
$$E\Big[\sum_{n=1}^{\infty} X_n\Big] = \sum_{n=1}^{\infty} E[X_n]. \tag{1.1}$$
ii) If $\sum_{n=1}^{\infty} E[|X_n|] < \infty$, then $\sum_{n=1}^{\infty} X_n$ converges almost surely and (1.1) holds as well.
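A small Monte Carlo sanity check of (1.1), under an example of our own choosing: take $X_n = U/2^n$ with a single $U \sim \mathrm{Uniform}(0,1)$, so that $\sum_n X_n = U$ and both sides of (1.1) equal $\tfrac{1}{2}$. This only checks a truncated series numerically; it is not a proof of the lemma.

```python
# Numeric check of Lemma 1.15(i) for X_n = U / 2^n, U ~ Uniform(0,1).
import random

random.seed(1)
N_TERMS, N_SAMPLES = 30, 100_000

def partial_sum(u):
    # Truncated series sum_{n=1}^{N_TERMS} X_n with X_n = u / 2^n.
    return sum(u / 2**n for n in range(1, N_TERMS + 1))

# Left side of (1.1): Monte Carlo estimate of E[sum_n X_n].
lhs = sum(partial_sum(random.random()) for _ in range(N_SAMPLES)) / N_SAMPLES

# Right side of (1.1): sum_n E[X_n], with E[X_n] = (1/2) / 2^n.
rhs = sum(0.5 / 2**n for n in range(1, N_TERMS + 1))

assert abs(lhs - rhs) < 0.01  # both sides are approximately 1/2
```

Note that the $X_n$ here are highly dependent (they share the same $U$); part i) of the lemma needs only nonnegativity, not independence.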
