Stochastic Processes Assignment Help
For any help regarding Statistics Homework Help, visit https://www.statisticshomeworkhelper.com/, email info@statisticshomeworkhelper.com, or call us at +1 678 648 4277.
Problems

Problem 1. The following identity was used in the proof of Theorem 1 in Lecture 4:

$$\sup\{\theta > 0 : M(\theta) < \exp(C\theta)\} = \inf_{t>0} t\,I\!\left(C + \tfrac{1}{t}\right)$$

(see the proof for details). Establish this identity.

Hint: Establish the convexity of $\log M(\theta)$ in the region where $M(\theta)$ is finite. Letting $\theta^* = \sup\{\theta > 0 : M(\theta) < \exp(C\theta)\}$, use the convexity property above to argue that

$$\log M(\theta) \ge \theta^* C + \frac{d\log M(\theta)}{d\theta}\bigg|_{\theta=\theta^*}(\theta - \theta^*).$$

Use this property to finish the proof of the identity.
Problem 2. This problem concerns the rate of convergence to the limits for the large deviations bounds. Namely, how quickly does $n^{-1}\log P(n^{-1}S_n \in A)$ converge to $-\inf_{x\in A} I(x)$, where $S_n$ is the sum of $n$ i.i.d. random variables? Of course, the question is relevant only to the cases when this convergence takes place.

(a) Let $S_n$ be the sum of $n$ i.i.d. random variables $X_i$, $1 \le i \le n$, taking values in $\mathbb{R}$. Suppose the moment generating function $M(\theta) = E[\exp(\theta X)]$ is finite everywhere. Let $a \ge \mu = E[X]$. Recall that we have established in class that in this case the convergence $\lim_n n^{-1}\log P(n^{-1}S_n \ge a) = -I(a)$ takes place. Show that in fact there exists a constant $C$ such that

$$\left|n^{-1}\log P(n^{-1}S_n \ge a) + I(a)\right| \le \frac{C}{n}$$

for all sufficiently large $n$. Namely, the rate of convergence is at least as fast as $O(1/n)$.

Hint: Examine the lower bound part of Cramér's Theorem.

(b) Show that the rate $O(1/n)$ cannot be improved.

Hint: Consider the case $a = \mu$.
Problem 3. Let $X_n$ be an i.i.d. sequence with a Gaussian distribution $N(0,1)$ and let $Y_n$ be an i.i.d. sequence with a uniform distribution on $[-1,1]$. Prove that the limit

$$\lim_{n\to\infty} \frac{1}{n}\log P\!\left(\Big(\frac{1}{n}\sum_{1\le i\le n} X_i\Big)^2 + \Big(\frac{1}{n}\sum_{1\le i\le n} Y_i\Big)^2 \ge 1\right)$$

exists and compute it numerically. You may use MATLAB (or any other software of your choice) and an approximate numeric answer is acceptable.
Problem 4. Let $Z_n$ be the set of all length-$n$ sequences of 0s and 1s such that (a) 1 is always followed by 0 (so, for example, $001101\cdots$ is not a legitimate sequence, but $001010\cdots$ is); (b) the percentage of 0s in the sequence is at least 70%. (Namely, if $X_n(z)$ is the number of zeros in a sequence $z \in Z_n$, then $X_n(z)/n \ge 0.7$ for every $z \in Z_n$.) Assuming that the limit $\lim_{n\to\infty} \frac{1}{n}\log|Z_n|$ exists, compute it. You may use MATLAB (or any other software of your choice) to assist you with computations, and an approximate numerical answer is acceptable. You need to provide details of your reasoning.
Problem 5.
Problem 6.
Solutions

1 Problem 1

Proof. Lecture note 4 has shown that $\{\theta > 0 : M(\theta) < \exp(C\theta)\}$ is nonempty. Let

$$\theta^* := \sup\{\theta > 0 : M(\theta) < \exp(C\theta)\}.$$

If $\theta^* = \infty$, which implies that $M(\theta) < \exp(C\theta)$ holds for all $\theta > 0$, we have

$$\inf_{t>0} t\,I\!\left(C + \tfrac{1}{t}\right) = \inf_{t>0}\sup_{\theta\in\mathbb{R}}\{t(C\theta - \log M(\theta)) + \theta\} = \infty = \theta^*.$$

Consider the case in which $\theta^*$ is finite; note that, by continuity of $\log M$, we then have $\log M(\theta^*) = C\theta^*$. According to the definition of $I(C + \frac{1}{t})$, we have

$$I\!\left(C + \tfrac{1}{t}\right) \ge \theta^*\!\left(C + \tfrac{1}{t}\right) - \log M(\theta^*)$$

$$\Rightarrow\; \inf_{t>0} t\,I\!\left(C + \tfrac{1}{t}\right) \ge \inf_{t>0} t\left(\theta^*\!\left(C + \tfrac{1}{t}\right) - \log M(\theta^*)\right) = \inf_{t>0} t\big(\theta^* C - \log M(\theta^*)\big) + \theta^* \ge \theta^*, \quad (1)$$

since $\theta^* C - \log M(\theta^*) \ge 0$.

Next, we will establish the convexity of $\log M(\theta)$ on $\{\theta \in \mathbb{R} : M(\theta) < \infty\}$. For two $\theta_1, \theta_2 \in \{\theta \in \mathbb{R} : M(\theta) < \infty\}$ and $0 < \alpha < 1$, Hölder's inequality gives

$$E[\exp((\alpha\theta_1 + (1-\alpha)\theta_2)X)] \le E\big[(\exp(\alpha\theta_1 X))^{1/\alpha}\big]^{\alpha}\, E\big[(\exp((1-\alpha)\theta_2 X))^{1/(1-\alpha)}\big]^{1-\alpha}.$$

Taking logs on both sides gives

$$\log M(\alpha\theta_1 + (1-\alpha)\theta_2) \le \alpha\log M(\theta_1) + (1-\alpha)\log M(\theta_2).$$

By the convexity of $\log M(\theta)$, the tangent line at $\theta^*$ lies below the graph, i.e. $\log M(\theta) \ge C\theta^* + \frac{\dot M(\theta^*)}{M(\theta^*)}(\theta - \theta^*)$, and hence

$$\left(C + \tfrac{1}{t}\right)\theta - \log M(\theta) \le \left(C + \tfrac{1}{t}\right)\theta - \theta^* C - \frac{\dot M(\theta^*)}{M(\theta^*)}(\theta - \theta^*) = \left(C - \frac{\dot M(\theta^*)}{M(\theta^*)} + \tfrac{1}{t}\right)(\theta - \theta^*) + \frac{\theta^*}{t}.$$
Thus, we have

$$\inf_{t>0}\, t\sup_{\theta\in\mathbb{R}}\left[\left(C + \tfrac{1}{t}\right)\theta - \log M(\theta)\right] \le \inf_{t>0}\, t\sup_{\theta\in\mathbb{R}}\left[\left(C - \frac{\dot M(\theta^*)}{M(\theta^*)} + \tfrac{1}{t}\right)(\theta - \theta^*)\right] + \theta^*. \quad (2)$$

Then we will establish the fact that $\frac{\dot M(\theta^*)}{M(\theta^*)} \ge C$. If not, then for all sufficiently small $h > 0$,

$$\frac{\log M(\theta^* - h) - \log M(\theta^*)}{-h} < C,$$

which implies that

$$\log M(\theta^* - h) > \log M(\theta^*) - Ch \;\Rightarrow\; \log M(\theta^* - h) > C(\theta^* - h) \;\Rightarrow\; M(\theta^* - h) > \exp(C(\theta^* - h)),$$

which contradicts the definition of $\theta^*$. By the facts that

$$\inf_{t>0}\, t\sup_{\theta\in\mathbb{R}}\left[\left(C - \frac{\dot M(\theta^*)}{M(\theta^*)} + \tfrac{1}{t}\right)(\theta - \theta^*)\right] \ge 0 \qquad (\text{take } \theta = \theta^*)$$

and $\frac{\dot M(\theta^*)}{M(\theta^*)} \ge C$, we have that

$$\inf_{t>0}\, t\sup_{\theta\in\mathbb{R}}\left[\left(C - \frac{\dot M(\theta^*)}{M(\theta^*)} + \tfrac{1}{t}\right)(\theta - \theta^*)\right] = 0,$$

with the infimum attained at the $t^* > 0$ satisfying $C + \frac{1}{t^*} - \frac{\dot M(\theta^*)}{M(\theta^*)} = 0$. From (2), we have

$$\inf_{t>0}\, t\sup_{\theta\in\mathbb{R}}\left[\left(C + \tfrac{1}{t}\right)\theta - \log M(\theta)\right] \le \theta^* \;\Rightarrow\; \inf_{t>0}\, t\,I\!\left(C + \tfrac{1}{t}\right) \le \theta^*. \quad (3)$$

From (1) and (3), we have the result $\inf_{t>0} t\,I(C + \frac{1}{t}) = \theta^*$.
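As a quick numerical sanity check of the identity (not part of the proof), consider $X \sim N(0,1)$, where $M(\theta) = e^{\theta^2/2}$: the set $\{\theta > 0 : \theta^2/2 < C\theta\}$ is $(0, 2C)$, so both sides should equal $2C$. A minimal Python sketch, assuming `scipy` is available:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Check sup{theta>0: M(theta)<exp(C*theta)} = inf_{t>0} t*I(C+1/t)
# for X ~ N(0,1): M(theta) = exp(theta^2/2), hence I(x) = x^2/2.
C = 1.7
lhs = 2 * C  # theta^2/2 < C*theta holds exactly for 0 < theta < 2C

# Right-hand side: minimize t*I(C + 1/t) = t*(C + 1/t)^2 / 2 over t > 0.
f = lambda t: t * (C + 1.0 / t) ** 2 / 2.0
rhs = minimize_scalar(f, bounds=(1e-6, 100.0), method="bounded").fun

print(f"sup side = {lhs:.6f}, inf side = {rhs:.6f}")  # both ~ 3.400000
```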
2 Problem 2 (Based on Tetsuya Kaji's Solution)

(a). Let $\theta_0$ be the one satisfying $I(a) = \theta_0 a - \log M(\theta_0)$ and let $\delta$ be a small positive number. Following the proof of the lower bound of Cramér's theorem, we have

$$n^{-1}\log P(n^{-1}S_n \ge a) \ge n^{-1}\log P(n^{-1}S_n \in [a, a+\delta)) \ge -I(a) - \theta_0\delta + n^{-1}\log P(n^{-1}\tilde S_n - a \in [0, \delta)),$$

where $\tilde S_n = Y_1 + \cdots + Y_n$ and the $Y_i$ ($1 \le i \le n$) are i.i.d. random variables following the tilted distribution $P(Y_i \le z) = M(\theta_0)^{-1}\int_{-\infty}^{z}\exp(\theta_0 x)\,dP(x)$. Recall that

$$P(n^{-1}\tilde S_n - a \in [0, \delta)) = P\!\left(\frac{\sum_{i=1}^{n}(Y_i - a)}{\sqrt{n}} \in [0, \sqrt{n}\,\delta)\right).$$

By the CLT, setting $\delta = O(n^{-1/2})$ keeps $P(n^{-1}\tilde S_n - a \in [0, \delta))$ bounded away from zero, so the last term above is $O(1/n)$. Thus, we have

$$n^{-1}\log P(n^{-1}S_n \ge a) + I(a) \ge -\theta_0\delta + n^{-1}\log P(n^{-1}\tilde S_n - a \in [0, \delta)) = -O(n^{-1/2}).$$

Combining this with the upper bound $n^{-1}\log P(n^{-1}S_n \ge a) \le -I(a)$, we have

$$\left|n^{-1}\log P(n^{-1}S_n \ge a) + I(a)\right| \le \frac{C}{\sqrt{n}}.$$

(b). Take $a = \mu$. Obviously $P(n^{-1}S_n \ge \mu) \to \frac{1}{2}$ as $n \to \infty$. Recalling that $I(\mu) = 0$, we have

$$\left|n^{-1}\log P(n^{-1}S_n \ge \mu) + I(\mu)\right| \sim \frac{C}{n}.$$

Namely, this bound cannot be improved.
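For intuition (the problem itself needs no computation), the $\Theta(1/n)$ behavior at $a = \mu$ is exact in the Gaussian case: for $X_i \sim N(0,1)$ we have $\mu = 0$, $I(0) = 0$, and $P(n^{-1}S_n \ge 0) = 1/2$ for every $n$ by symmetry, so the error term is exactly $-(\log 2)/n$. A minimal sketch:

```python
import numpy as np

# X ~ N(0,1), a = mu = 0: P(S_n/n >= 0) = 1/2 exactly and I(0) = 0, so
# n^{-1} log P(S_n/n >= mu) + I(mu) = -(log 2)/n  -- exactly order 1/n.
for n in [10, 100, 1000, 10000]:
    err = np.log(0.5) / n
    print(f"n={n:6d}  error={err:+.7f}  n*error={n*err:+.6f}")  # n*error = -log 2
```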
3 Problem 3

For any $n \ge 0$, define a point $M_n$ in $\mathbb{R}^2$ by

$$x_{M_n} = \frac{1}{n}\sum_{i\le n} X_i \qquad\text{and}\qquad y_{M_n} = \frac{1}{n}\sum_{i\le n} Y_i.$$

Let $B_0(1)$ be the open ball of radius one in $\mathbb{R}^2$. From these definitions, we can rewrite

$$P\!\left(\Big(\frac{1}{n}\sum_{i=1}^{n} X_i\Big)^2 + \Big(\frac{1}{n}\sum_{i=1}^{n} Y_i\Big)^2 \ge 1\right) = P(M_n \notin B_0(1)).$$
We will apply Cramér's Theorem in $\mathbb{R}^2$:

$$\lim_{n\to\infty}\frac{1}{n}\log P\!\left(\Big(\frac{1}{n}\sum_{i=1}^{n} X_i\Big)^2 + \Big(\frac{1}{n}\sum_{i=1}^{n} Y_i\Big)^2 \ge 1\right) = -\inf_{(x,y)\in B_0(1)^c} I(x,y),$$

where

$$I(x,y) = \sup_{(\theta_1,\theta_2)\in\mathbb{R}^2}\big(\theta_1 x + \theta_2 y - \log M(\theta_1,\theta_2)\big)$$

with $M(\theta_1,\theta_2) = E[\exp(\theta_1 X + \theta_2 Y)]$. Note that since $X$ and $Y$ are presumed independent, $\log M(\theta_1,\theta_2) = \log M_X(\theta_1) + \log M_Y(\theta_2)$, with $M_X(\theta_1) = E[\exp(\theta_1 X)]$ and $M_Y(\theta_2) = E[\exp(\theta_2 Y)]$. We can easily compute that

$$M_X(\theta) = \exp\!\left(\frac{\theta^2}{2}\right)$$

and

$$M_Y(\theta) = E[e^{\theta Y}] = \frac{1}{2}\int_{-1}^{1} e^{\theta y}\,dy = \frac{1}{2\theta}\big[e^{\theta y}\big]_{y=-1}^{1} = \frac{e^{\theta} - e^{-\theta}}{2\theta}.$$

Since $\theta_1$ and $\theta_2$ are decoupled in the definition of $I(x,y)$, we obtain $I(x,y) = I_X(x) + I_Y(y)$ with

$$I_X(x) = \sup_{\theta_1} g_1(x,\theta_1) = \sup_{\theta_1}\left(\theta_1 x - \frac{\theta_1^2}{2}\right) = \frac{x^2}{2},$$

$$I_Y(y) = \sup_{\theta_2} g_2(y,\theta_2) = \sup_{\theta_2}\left(\theta_2 y - \log\!\left(\frac{e^{\theta_2} - e^{-\theta_2}}{2\theta_2}\right)\right).$$

Since $g_2(y,\theta_2) = g_2(-y,-\theta_2)$ for all $y$ and $\theta_2$, we have $I_Y(y) = I_Y(-y)$ for all $y$.
Since $I_X(x)$ is increasing in $|x|$ and $I_Y(y)$ is increasing in $|y|$, the infimum over $B_0(1)^c$ is attained on the circle $x^2 + y^2 = 1$, which can be reparametrized as a one-dimensional search over an angle $\phi$. Optimizing over $\phi$, we find that the minimum of $I(x,y)$ is attained at $x = \pm 1$, $y = 0$, and that the value is equal to $\frac{1}{2}$. We obtain

$$\lim_{n\to\infty}\frac{1}{n}\log P\!\left(\Big(\frac{1}{n}\sum_{i=1}^{n} X_i\Big)^2 + \Big(\frac{1}{n}\sum_{i=1}^{n} Y_i\Big)^2 \ge 1\right) = -\frac{1}{2}.$$
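The one-dimensional search over the angle is easy to reproduce numerically (the problem allows software). A sketch, assuming `scipy` is available and evaluating $I_Y$ by an inner one-dimensional maximization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_MY(t):
    """log MGF of Uniform[-1,1], i.e. log(sinh(t)/t), with the t=0 limit."""
    return np.log(np.sinh(t) / t) if abs(t) > 1e-8 else t * t / 6.0

def I_Y(y):
    """Legendre transform sup_t (t*y - log_MY(t)); infinite for |y| >= 1."""
    if abs(y) >= 1.0:
        return np.inf
    res = minimize_scalar(lambda t: -(t * y - log_MY(t)),
                          bounds=(-60.0, 60.0), method="bounded")
    return -res.fun

def I_on_circle(phi):
    x, y = np.cos(phi), np.sin(phi)
    return x * x / 2.0 + I_Y(y)  # I_X(x) + I_Y(y)

phis = np.linspace(0.0, np.pi / 2, 2001)  # one quadrant suffices by symmetry
vals = [I_on_circle(p) for p in phis]
k = int(np.argmin(vals))
print(f"min I on circle = {vals[k]:.4f} at phi = {phis[k]:.4f}")  # ~0.5000 at 0
```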
4 Problem 4

We denote by $Y_n$ the set of all length-$n$ sequences which satisfy condition (a). The first step of our method will be to construct a Markov chain with the following properties:

• For every $n \ge 0$, any sequence $(X_1, \ldots, X_n)$ generated by the Markov chain belongs to $Y_n$.

• For every $n \ge 0$, every $(x_1, \ldots, x_n) \in Y_n$ has positive probability, and all sequences of $Y_n$ are "almost" equally likely.

Consider a general Markov chain with two states $(0, 1)$ and general transition probabilities $(P_{00}, P_{01}; P_{10}, P_{11})$. We immediately realize that if $P_{11} > 0$, sequences with two consecutive ones don't have zero probability (in particular, for $n = 2$, the sequence $(1,1)$ has probability $\nu(1)P_{11}$). Therefore, we set $P_{11} = 0$ (and thus $P_{10} = 1$), and verify that this enforces the first condition.

Let now $P_{00} = p$, $P_{01} = 1 - p$, and let's find $p$ such that all sequences are almost equiprobable. What is the probability of a sequence $(X_1, \ldots, X_n)$? Every 1 in the sequence necessarily transited from a 0, with probability $1 - p$. Zeroes in the sequence can come either from another 0, in which case they contribute a factor $p$ to the joint probability, or from a 1, in which case they contribute a factor 1. Denote by $N_0$ and $N_1$ the numbers of 0s and 1s in the sequence. Since each 1 of the sequence transits to a 0 of the sequence, there are $N_1$ zeroes which contribute a factor of 1, and thus $N_0 - N_1$ zeroes contribute a factor of $p$. This is only "almost" correct, though, since we have to account for the initial state $X_1$ and the final state $X_n$. By choosing the initial distribution $\nu(0) = p$ and $\nu(1) = 1 - p$, the above reasoning applies correctly to $X_1$.
Our last problem is when the last state is 1, in which case that 1 does not give a 1-to-0 transition, and the number of zeroes contributing a factor of $p$ is therefore $N_0 - N_1 + 1$. In summary, under the assumptions given above, we have:

$$P(X_1, \ldots, X_n) = \begin{cases} (1-p)^{N_1} p^{N_0 - N_1}, & \text{when } X_n = 0 \\ (1-p)^{N_1} p^{N_0 - N_1 + 1}, & \text{when } X_n = 1 \end{cases}$$

Since $N_0 + N_1 = n$, we can rewrite $(1-p)^{N_1} p^{N_0 - N_1}$ as $(1-p)^{N_1} p^{n - 2N_1}$, or equivalently as $\left(\frac{1-p}{p^2}\right)^{N_1} p^n$. We conclude

$$P(X_1, \ldots, X_n) = \begin{cases} \left(\frac{1-p}{p^2}\right)^{N_1} p^n, & \text{when } X_n = 0 \\ \left(\frac{1-p}{p^2}\right)^{N_1} p^{n+1}, & \text{when } X_n = 1 \end{cases}$$

We conclude that if $\frac{1-p}{p^2} = 1$, sequences will be almost equally likely. This equation has positive solution $p = \frac{\sqrt{5}-1}{2} \approx 0.6180$, which we take in the rest of the problem (trivia: $1/p = \varphi$, the golden ratio). The steady-state distribution of the resulting Markov chain is easily computed to be $\pi = (\pi_0, \pi_1) = \left(\frac{1}{2-p}, \frac{1-p}{2-p}\right) \approx (0.7236, 0.2764)$. We also obtain the "almost" equiprobable condition:

$$P(X_1, \ldots, X_n) = \begin{cases} p^n, & \text{when } X_n = 0 \\ p^{n+1}, & \text{when } X_n = 1 \end{cases}$$
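A short numerical check of the choice of $p$ and of the stationary distribution quoted above (a sketch; any linear-algebra package works):

```python
import numpy as np

p = (np.sqrt(5) - 1) / 2                  # positive root of (1-p)/p^2 = 1
P = np.array([[p, 1 - p],
              [1.0, 0.0]])                # P11 = 0 forces P10 = 1
print((1 - p) / p**2)                     # ~1.0: the equiprobability condition

w, v = np.linalg.eig(P.T)                 # stationary pi solves pi P = pi
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)                                 # ~[0.7236, 0.2764]
```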
We now relate this Markov chain to the problem at hand. Note the following: $\log|Z_n| = \log|Y_n| + \log\frac{|Z_n|}{|Y_n|}$, and therefore

$$\lim_{n\to\infty}\frac{1}{n}\log|Z_n| = \lim_{n\to\infty}\frac{1}{n}\log|Y_n| + \lim_{n\to\infty}\frac{1}{n}\log\frac{|Z_n|}{|Y_n|}.$$

Let us first compute $\lim_{n\to\infty}\frac{1}{n}\log|Y_n|$. This is easily done using our Markov chain. Fix $n \ge 0$, and observe that since our Markov chain only generates sequences which belong to $Y_n$, we have

$$1 = \sum_{(X_1,\ldots,X_n)\in Y_n} P(X_1,\ldots,X_n).$$

Note that for any $(X_1,\ldots,X_n) \in Y_n$ we have $p^{n+1} \le P(X_1,\ldots,X_n) \le p^n$, and so we obtain

$$p^{n+1}|Y_n| \le 1 \le p^n|Y_n|$$
and so

$$\varphi^n \le |Y_n| \le \varphi^{n+1}, \qquad n\log\varphi \le \log|Y_n| \le (n+1)\log\varphi,$$

which gives $\lim_{n\to\infty}\frac{1}{n}\log|Y_n| = \log\varphi$.
We now consider the term $\frac{|Z_n|}{|Y_n|}$. The above reasoning shows that, intuitively, $\frac{1}{|Y_n|}$ is the probability of each of the (almost) equally likely sequences $(X_1,\ldots,X_n)$, and $|Z_n|$ is the number of such sequences with at least 70% zeroes; basic probability reasoning then gives that the ratio is, up to constants, the probability that a random sequence $(X_1,\ldots,X_n)$ has at least 70% zeroes. Let us first prove this formally, and then compute the said probability. Denote by $G(X_1,\ldots,X_n)$ the fraction of zeroes of the sequence $(X_1,\ldots,X_n)$. Then, for any $k \in [0,1]$,

$$P(G(X_1,\ldots,X_n) \ge k) = \sum_{(X_1,\ldots,X_n)\in Z_n} P(X_1,\ldots,X_n),$$

where (with a slight abuse of notation) $Z_n = Z_n(k)$ denotes the set of sequences of $Y_n$ with zero-fraction at least $k$. Reusing the same idea as previously,

$$P(G(X_1,\ldots,X_n) \ge k) \ge \sum_{(X_1,\ldots,X_n)\in Z_n} p^{n+1} = |Z_n|\,p^{n+1} \ge p\,\frac{|Z_n|}{|Y_n|},$$

and similarly

$$P(G(X_1,\ldots,X_n) \ge k) \le |Z_n|\,p^{n} \le \frac{1}{p}\,\frac{|Z_n|}{|Y_n|}.$$

Taking logs, we obtain

$$\lim_{n\to\infty}\frac{1}{n}\log P(G(X_1,\ldots,X_n) \ge k) = \lim_{n\to\infty}\frac{1}{n}\log\frac{|Z_n|}{|Y_n|}.$$
We will use large deviations for Markov chains to compute that probability. First note that $G(X_1,\ldots,X_n) \ge k$ is the same as $\sum_i F(X_i) \ge nk$, where $F(0) = 1$ and $F(1) = 0$. By Miller's Theorem, we obtain that for any $k$,

$$\lim_{n\to\infty}\frac{1}{n}\log P\!\left(\sum_i F(X_i) \ge nk\right) = -\inf_{x\ge k} I(x)$$

with

$$I(x) = \sup_{\theta}\big(\theta x - \log\lambda(\theta)\big),$$

where $\lambda(\theta)$ is the largest eigenvalue of the matrix

$$M(\theta) = \begin{pmatrix} p\exp(\theta) & 1-p \\ \exp(\theta) & 0 \end{pmatrix}.$$
The characteristic equation is $\lambda^2 - (p\exp(\theta))\lambda - (1-p)\exp(\theta) = 0$, whose largest solution is

$$\lambda(\theta) = \frac{p\exp(\theta) + \sqrt{p^2\exp(2\theta) + 4(1-p)\exp(\theta)}}{2}.$$

The rate function of the Markov chain is

$$I(x) = \sup_{\theta}\big(\theta x - \log\lambda(\theta)\big).$$

Since the mean of $F$ under the steady-state distribution $\pi$ is $\mu = \pi_0 \approx 0.7236$, which is above $0.7$, the minimum $\min_{x\ge 0.7} I(x) = I(\mu) = 0$. Thus $\lim_{n\to\infty}\frac{1}{n}\log\frac{|Z_n|}{|Y_n|} = 0$, and we conclude

$$\lim_{n\to\infty}\frac{1}{n}\log|Z_n| = \log\varphi \approx 0.4812.$$

In general, for $k \le \mu$ we will have

$$\lim_{n\to\infty}\frac{1}{n}\log|Z_n(k)| = \log\varphi \approx 0.4812,$$

and for $k > \mu$,

$$\lim_{n\to\infty}\frac{1}{n}\log|Z_n(k)| = \log\varphi - \sup_{\theta}\big(\theta k - \log\lambda(\theta)\big).$$
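As a numerical cross-check (the problem invites software), one can enumerate $Y_n$ and $Z_n$ by brute force for moderate $n$ and compare $\frac{1}{n}\log|Z_n|$ with $\log\varphi \approx 0.4812$; the convergence is slow but visible. A sketch:

```python
import itertools
import math

def counts(n, k=0.7):
    """|Y_n|: no '11' substring; |Z_n|: additionally >= k fraction of zeros."""
    y = z = 0
    for seq in itertools.product((0, 1), repeat=n):
        if any(a == b == 1 for a, b in zip(seq, seq[1:])):
            continue  # violates condition (a)
        y += 1
        if seq.count(0) >= k * n:
            z += 1    # also satisfies condition (b)
    return y, z

log_phi = math.log((1 + math.sqrt(5)) / 2)
for n in (10, 14, 18, 20):
    y, z = counts(n)
    print(f"n={n:2d}  |Y_n|={y:6d}  log|Z_n|/n={math.log(z)/n:.4f}"
          f"  log(phi)={log_phi:.4f}")
```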
5 Problem 5

5.1 1(i)
Consider a standard Brownian motion $B$, and let $U$ be a uniform random variable over $[1/2, 1]$. Let

$$W(t) = \begin{cases} B(t), & \text{when } t \ne U \\ 0, & \text{when } t = U \end{cases}$$

With probability 1, $B(U)$ is not zero, and therefore $\lim_{t\to U} W(t) = \lim_{t\to U} B(t) = B(U) \ne 0 = W(U)$, so $W$ is not continuous at $U$. For any finite collection of times $t = (t_1,\ldots,t_n)$ and real numbers $x = (x_1,\ldots,x_n)$, denote $W(t) = (W(t_1),\ldots,W(t_n))$. Then

$$P(W(t) \le x) = P(U \notin \{t_i, 1\le i\le n\})\,P(W(t) \le x \mid U \notin \{t_i, 1\le i\le n\}) + P(U \in \{t_i, 1\le i\le n\})\,P(W(t) \le x \mid U \in \{t_i, 1\le i\le n\}).$$

Note that $P(U \in \{t_i, 1\le i\le n\}) = 0$ and $P(W(t) \le x \mid U \notin \{t_i, i\le n\}) = P(B(t) \le x)$, and thus the process $W$ has exactly the same distributional properties as $B$ (a Gaussian process with independent and stationary increments of zero mean and variance proportional to the length of the interval).
5.2 1(ii)

Let $X$ be a Gaussian random variable (mean 0, standard deviation 1), and denote by $Q_X$ the set $\{q + X,\ q \in \mathbb{Q}\} \cap \mathbb{R}_+$, where $\mathbb{Q}$ is the set of rational numbers. Let

$$W(t) = \begin{cases} B(t), & \text{when } t \notin Q_X \setminus \{0\} \\ B(t) + 1, & \text{when } t \in Q_X \setminus \{0\} \end{cases}$$

Through the exact same argument as in 1(i), $W$ has the same distributional properties as $B$ (this is because $Q_X$, just like $\{t_i, 1 \le i \le n\}$, has measure zero for a random variable with density: for any fixed finite collection of times, the probability that one of them lies in the countable set $Q_X$ is zero).

However, note that for any $t > 0$,

$$\left| t - \left(X + \frac{\lceil (t - X)10^n \rceil}{10^n}\right) \right| \le 10^{-n},$$

proving that $\lim_n \left(X + \frac{\lceil (t-X)10^n \rceil}{10^n}\right) = t$. However, for any $n$, $X + \frac{\lceil (t-X)10^n \rceil}{10^n} \in Q_X$, and so

$$\lim_n W\!\left(X + \frac{\lceil (t-X)10^n \rceil}{10^n}\right) = B(t) + 1 \ne B(t).$$

This proves that $W$ is surely discontinuous everywhere.
5.3 2

Let $t \ge 0$ and $\epsilon > 0$, and consider the event $E_n = \{|B(t + \frac{1}{n}) - B(t)| > \epsilon\}$. Then, since $B(t + \frac{1}{n}) - B(t)$ is equal in distribution to $\frac{1}{\sqrt{n}}N$, where $N$ is a standard normal, by Chebyshev's inequality (applied to the fourth moment, $E[N^4] = 3$), we have

$$P(E_n) = P(n^{-1/2}|N| > \epsilon) = P(|N| > \epsilon n^{1/2}) = P(N^4 > \epsilon^4 n^2) \le \frac{3}{\epsilon^4 n^2}.$$

Since $\sum_n P(E_n) = \sum_n \frac{3}{\epsilon^4 n^2} < \infty$, by the Borel–Cantelli lemma, there almost surely exists $N_0$ such that for all $n \ge N_0$, $|B(t + 1/n) - B(t)| \le \epsilon$, proving $\lim_{n\to\infty} B(t + 1/n) = B(t)$ almost surely.
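A quick Monte Carlo sanity check of the fourth-moment bound used above (a sketch; the increment $B(t+1/n) - B(t)$ is simulated directly via its law $n^{-1/2}N$):

```python
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.5, 200_000
for n in (2, 5, 10, 20):
    incr = rng.standard_normal(trials) / np.sqrt(n)  # ~ B(t+1/n) - B(t)
    p_emp = np.mean(np.abs(incr) > eps)              # empirical P(E_n)
    bound = 3.0 / (eps**4 * n**2)                    # the Chebyshev bound
    print(f"n={n:2d}  P(E_n)~{p_emp:.5f}  bound={min(bound, 1.0):.5f}")
```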
6 Problem 6

The event $\{B \in A_R\}$ is included in the event $\{B(2) - B(1) = B(1) - B(0)\}$, and thus

$$P(B \in A_R) \le P(B(2) - B(1) = B(1) - B(0)) = 0,$$

since the probability that two atomless, independent random variables are equal is zero (easy to prove using conditional probabilities).