On discrete approximation of occupation time of diffusion processes with irregular sampling

Abstract. Let $X$ be a diffusion process and let $A$ be a Borel subset of $\mathbb{R}$. In this paper, we introduce an estimator for the occupation time $\Gamma(A)_t = \int_0^t \mathbb{I}_{\{X_s \in A\}}\,ds$ based on an irregular sample of $X$ and study its asymptotic behavior.

Keywords: Occupation time, diffusion processes, irregular sample.

JOURNAL OF SCIENCE OF HNUE, Interdisciplinary Science, 2014, Vol. 59, No. 5, pp. 3-16

Nguyen Thi Lan Huong, Ngo Hoang Long and Tran Quang Vinh
Faculty of Mathematics and Informatics, Hanoi National University of Education

Received December 25, 2013. Accepted June 26, 2014. Contact Nguyen Thi Lan Huong, e-mail address: nguyenhuong0011@gmail.com

1. Introduction

Let $X$ be a solution to the following stochastic differential equation
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x_0 \in \mathbb{R}, \tag{1.1}$$
where $b$ and $\sigma$ are measurable functions and $W$ is a standard Brownian motion defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$. For each set $A \in \mathcal{B}(\mathbb{R})$, the occupation time of $X$ in $A$ is defined by
$$\Gamma(A)_t = \int_0^t \mathbb{I}_{\{X_s \in A\}}\,ds.$$
The quantity $\Gamma(A)$ is the amount of time the diffusion $X$ spends in the set $A$. The problem of evaluating $\Gamma(A)$ is very important in many applied domains such as mathematical finance, queueing theory and biology. For example, in mathematical finance, these quantities are of great interest for the pricing of many derivatives, such as Parisian, corridor and Edokko options (see [1, 2, 9]).

In practice, one cannot observe the whole trajectory of $X$ during a fixed interval. In other words, we can only collect the values of $X$ at some discrete times, say $0 = t_1 < t_2 < \ldots$ Recently, Ngo and Ogawa [10] and Kohatsu-Higa et al. [7] introduced an estimator for $\Gamma(A)$ based on a Riemann sum and studied the rate of convergence of this approximation when $X$ is observed at regular points, i.e. $\{t_i = i/n,\ i \le [nt]\}$ for all $i \ge 0$ and any $n > 0$. However, in practice there are many reasons why we cannot observe $X$ at regular observation points. Thus, in this paper, we construct an estimation scheme for $\Gamma(A)$ based on an irregular sample $\{X_{t_i},\ i = 0, 1, \ldots\}$ of $X$ and study its asymptotic behavior. In particular, we first introduce an unbiased estimator for $\Gamma(A)$ when $X$ is a standard Brownian motion and provide a functional central limit theorem (Theorem 2.2) for the error process. It should be noted here that Assumption A, which is obviously satisfied for regular sampling, is the key to constructing the limit of the error process for irregular sampling. We then introduce an estimator for $\Gamma(A)$ for general diffusion processes and show that its error is of order 3/4.

2. Main results

Throughout this paper, we suppose that the coefficients $b$ and $\sigma$ satisfy the following conditions:
$$\text{(i) } \sigma \text{ is continuously differentiable and } \sigma(x) \ge \sigma_0 > 0 \text{ for all } x \in \mathbb{R}; \quad \text{(ii) } |b(x) - b(y)| + |\sigma(x) - \sigma(y)| \le C|x - y| \text{ for some constant } C > 0. \tag{2.1}$$
The above conditions on $b$ and $\sigma$ guarantee the continuity of the sample paths and of the marginal distributions of $X$ (see [11]). We note here that under more restrictive conditions on the smoothness and boundedness of $b$, $\sigma$ and their derivatives, Kohatsu-Higa et al. [7] have studied the strong rate of approximation of $\Gamma(A)$ via a Riemann sum such as the one defined in [10].

At the $n$th stage, we suppose that $X$ is observed at times $t^n_i$, $i = 0, 1, 2, \ldots$, satisfying $0 = t^n_0 < t^n_1 < t^n_2 < \cdots$, and that there exists a constant $k_0 > 0$ such that
$$\Delta_n \le k_0 \min_i \Delta^n_i, \qquad \forall n, \tag{2.2}$$
where $\Delta^n_i = t^n_i - t^n_{i-1}$ and $\Delta_n = \max_i \Delta^n_i$. We assume moreover that $\lim_{n\to\infty} \Delta_n = 0$. We denote $\eta_n(s) = t^n_i$ if $t^n_i \le s < t^n_{i+1}$.
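For concreteness, the short Python sketch below illustrates this sampling setup: it generates an irregular grid satisfying (2.2), simulates an Euler path of equation (1.1) on that grid, and evaluates the Riemann-sum approximation of $\Gamma(A)_t$ discussed in the Introduction. The coefficients $b$, $\sigma$, the horizon $T$ and the constant $k_0$ are arbitrary illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def irregular_grid(T=1.0, n=500, k0=2.0):
    """Partition 0 = t_0 < ... < t_n = T whose steps satisfy
    max_i dt_i <= k0 * min_i dt_i, i.e. condition (2.2)."""
    raw = rng.uniform(1.0, k0, size=n)       # step lengths in [1, k0], so their ratio is <= k0
    dt = raw * T / raw.sum()                 # rescale so the steps sum to T
    return np.concatenate(([0.0], np.cumsum(dt)))

def euler_path(t, b, sigma, x0=0.0):
    """Euler-Maruyama sample of dX = b(X)dt + sigma(X)dW on the grid t."""
    x = np.empty(len(t))
    x[0] = x0
    dt = np.diff(t)
    dW = rng.normal(0.0, np.sqrt(dt))
    for i in range(len(dt)):
        x[i + 1] = x[i] + b(x[i]) * dt[i] + sigma(x[i]) * dW[i]
    return x

# toy coefficients satisfying (2.1): sigma bounded away from 0, both Lipschitz
b = lambda x: -x
sigma = lambda x: 1.0 + 0.1 * np.sin(x)

t = irregular_grid()
X = euler_path(t, b, sigma)

# Riemann-sum approximation sum_i Delta_i * 1_{X_{t_i} in A} for A = [0, +inf)
Gamma_n = np.sum(np.diff(t) * (X[1:] >= 0.0))
print("Riemann-sum occupation-time approximation on [0, T]:", Gamma_n)
```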
2.1. Occupation time of Brownian motions

We first recall the concept of stable convergence. Let $(X_n)_{n \ge 0}$ be a sequence of random vectors with values in a Polish space $(E, \mathcal{E})$, all defined on the same probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$, and let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. We say that $X_n$ converges $\mathcal{G}$-stably in law to $X$, denoted $X_n \xrightarrow{\mathcal{G}\text{-st}} X$, if $X$ is an $E$-valued random vector defined on an extension $(\Omega', \mathcal{F}', \mathbb{P}')$ of the original probability space and
$$\lim_{n\to\infty} \mathbb{E}\big(g(X_n)Z\big) = \mathbb{E}'\big(g(X)Z\big)$$
for every bounded continuous function $g : E \to \mathbb{R}$ and all bounded $\mathcal{G}$-measurable random variables $Z$ (see [4, 5, 8]). When $\mathcal{G} = \mathcal{F}$ we write $X_n \xrightarrow{st} X$ instead of $X_n \xrightarrow{\mathcal{G}\text{-st}} X$.

We denote by $L_t(a)$ the local time of a standard Brownian motion $B$ at level $a$, up to and including time $t$, given by
$$L_t(a) = |B_t - a| - |a| - \int_0^t \mathrm{sign}(B_s - a)\,dB_s.$$
For each Borel function $g$ defined on $\mathbb{R}$ and $\gamma > 0$, we set
$$\beta_\gamma(g) = \int |x|^\gamma |g(x)|\,dx, \qquad \lambda(g) = \int g(x)\,dx.$$
In order to study the asymptotic behavior of the estimation error, we need the following assumption.

Assumption A: There exists a non-decreasing function $F_t(x)$ such that
$$\frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^{3/2}\, \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \xrightarrow{\ \mathbb{P}\ } F_t(x), \qquad \forall\, t > 0,\ x \in \mathbb{R}. \tag{2.3}$$

Theorem 2.1 (The approximation of local time). Suppose that $g$ satisfies the following conditions:
$$g(x) = o(x) \text{ as } x \to \infty, \qquad \beta_1(g) < \infty, \qquad \text{and} \qquad \lambda(|g|) < \infty. \tag{2.4}$$
Then for all $x \in \mathbb{R}$ it holds that
$$\sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\; g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \xrightarrow{\ \mathbb{P}\ } \lambda(g)\, L_t(x).$$

Now we proceed to state the functional central limit theorem for the error process. First, let us recall the definition of an $\mathcal{F}$-progressive conditional martingale (see [5] for more details). We call an extension of $\mathcal{B}$ another stochastic basis $\tilde{\mathcal{B}} = (\tilde\Omega, \tilde{\mathcal{F}}, (\tilde{\mathcal{F}}_t), \tilde{\mathbb{P}})$ constructed as follows: we have an auxiliary filtered space $(\Omega', \mathcal{F}', (\mathcal{F}'_t)_{t\ge 0})$ such that each $\sigma$-field $\mathcal{F}'_{t-}$ is separable, and a transition probability $Q_\omega(d\omega')$ from $(\Omega, \mathcal{F})$ into $(\Omega', \mathcal{F}')$, and we set
$$\tilde\Omega = \Omega \times \Omega', \qquad \tilde{\mathcal{F}} = \mathcal{F} \otimes \mathcal{F}', \qquad \tilde{\mathcal{F}}_t = \bigcap_{s > t} \mathcal{F}_s \otimes \mathcal{F}'_s, \qquad \tilde{\mathbb{P}}(d\omega, d\omega') = \mathbb{P}(d\omega)\, Q_\omega(d\omega').$$
A process $X$ on the extension $\tilde{\mathcal{B}}$ is called an $\mathcal{F}$-progressive conditional martingale if it is adapted to $(\tilde{\mathcal{F}}_t)$ and if for $\mathbb{P}$-almost all $\omega$ the process $X(\omega, \cdot)$ is a martingale on the basis $\mathcal{B}_\omega = (\Omega', \mathcal{F}', (\mathcal{F}'_t)_{t\ge 0}, Q_\omega)$.

Theorem 2.2. Suppose that $B$ is a standard Brownian motion defined on a filtered space $\mathcal{B} = (\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$. For each $n \ge 1$, $t > 0$ and $K \in \mathbb{R}$ we set
$$\tilde\Gamma(K)^n_t = \int_0^t \Phi\Big(\frac{B_{\eta_n(s)} - K}{\sqrt{s - \eta_n(s)}}\Big)\,ds,$$
where $\Phi$ is the standard normal distribution function. Then:
(i) $\tilde\Gamma(K)^n_t$ is an unbiased estimator for the occupation time $\Gamma([K,\infty))_t$;
(ii) $\tilde\Gamma(K)^n_t \xrightarrow{\ \mathbb{P}\ } \Gamma([K,\infty))_t$;
(iii) Moreover, suppose that Assumption A holds. Then there exist a good extension $\tilde{\mathcal{B}}$ of $\mathcal{B}$ and a continuous $\mathcal{F}$-progressive conditional martingale $X'$ with independent increments on this extension such that
$$\langle X', X' \rangle_t = \frac{9}{20\sqrt{2\pi}}\, F_t(K), \qquad \langle X', B \rangle = 0,$$
and
$$\frac{1}{(\Delta_n)^{3/4}} \Big( \tilde\Gamma(K)^n_t - \Gamma([K,\infty))_t \Big) \xrightarrow{\ st\ } X'.$$
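To make parts (i) and (ii) of Theorem 2.2 concrete, the following Monte Carlo sketch (Python) compares the estimator $\tilde\Gamma(K)^n_t$, with each inner integral $\int_{t^n_{i-1}}^{t^n_i} \Phi\big((B_{t^n_{i-1}}-K)/\sqrt{s-t^n_{i-1}}\big)\,ds$ evaluated by a simple midpoint rule, against a fine-grid proxy for $\Gamma([K,\infty))_t$ computed from the same Brownian path. The level $K$, the grids, the number of paths and the quadrature rule are arbitrary choices made only for illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))      # standard normal c.d.f.

def brownian_fine(T, n_fine):
    """Standard Brownian path sampled on a uniform fine grid of [0, T]."""
    dt = T / n_fine
    t = np.linspace(0.0, T, n_fine + 1)
    B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, sqrt(dt), n_fine))))
    return t, B

def gamma_tilde(t_obs, B_obs, K, m=50):
    """Estimator of Theorem 2.2; each inner integral in s is computed
    with an m-point midpoint rule on (t_{i-1}, t_i)."""
    total = 0.0
    for i in range(1, len(t_obs)):
        d = t_obs[i] - t_obs[i - 1]
        u = (np.arange(m) + 0.5) * d / m               # values of s - eta_n(s)
        total += d * np.mean([Phi((B_obs[i - 1] - K) / sqrt(ui)) for ui in u])
    return total

T, K, n_fine, n_paths = 1.0, 0.2, 20_000, 200
occ, est = [], []
for _ in range(n_paths):
    t, B = brownian_fine(T, n_fine)
    # irregular coarse observation times: blocks of 40 or 80 fine steps (mesh ratio about 2)
    idx = np.concatenate(([0], np.cumsum(rng.choice([40, 80], size=600))))
    idx = np.append(idx[idx < n_fine], n_fine)
    occ.append(np.sum(np.diff(t)[B[1:] >= K]))         # fine-grid proxy for Gamma([K, inf))_T
    est.append(gamma_tilde(t[idx], B[idx], K))

print("mean fine-grid occupation time :", np.mean(occ))
print("mean of the estimator          :", np.mean(est))    # should be close (unbiasedness)
print("mean absolute path-wise error  :", np.mean(np.abs(np.array(est) - np.array(occ))))
```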
Remark 2.1. Assumption A makes sense: it is obviously satisfied for regular sampling. The following condition is sufficient for (2.3):
$$\lim_{n\to\infty} \min_i \Delta^n_i\,(\Delta_n)^{-1} = 1. \tag{2.5}$$
Moreover, in this case we have $F_t(x) = L_t(x)$. Indeed, we set
$$\gamma_n = \frac{\min_i \Delta^n_i}{\max_i \Delta^n_i} \qquad \text{and} \qquad F^n_t(x) = \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^{3/2}\, \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big).$$
Hence
$$F^n_t(x) = \sum_{t^n_i \le t} \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) - S^n_t(x),$$
where
$$0 \le S^n_t(x) = \sum_{t^n_i \le t} \Big(1 - \frac{(\Delta^n_i)^{3/2}}{(\Delta_n)^{3/2}}\Big)\, \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \le \big(1 - \gamma_n^{3/2}\big) \sum_{t^n_i \le t} \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \xrightarrow{\ \mathbb{P}\ } 0,$$
since $\sum_{t^n_i \le t} \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \xrightarrow{\ \mathbb{P}\ } L_t(x)$ and $\gamma_n \to 1$. Thus $S^n_t(x) \xrightarrow{\ \mathbb{P}\ } 0$, and $F^n_t(x) \xrightarrow{\ \mathbb{P}\ } L_t(x)$ as $n \to \infty$.

Condition (2.5) can be localized a little as follows: suppose that there exists a sequence of fixed times $S_1 < S_2 < \ldots$, which does not depend on $n$, such that in each interval $(S_i, S_{i+1})$ condition (2.5) is satisfied. Then condition (2.3) also holds.

2.2. Occupation time of general diffusions

In order to study the rate of convergence, we recall the definition of C-tightness. First, we denote by $D(\mathbb{R})$ the Polish space of all càdlàg functions from $\mathbb{R}_+$ to $\mathbb{R}$, equipped with the Skorokhod topology. A sequence $(X_n)$ of $D(\mathbb{R})$-valued random vectors defined on $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ is tight if
$$\inf_K \sup_n \mathbb{P}(X_n \notin K) = 0,$$
where the infimum is taken over all compact sets $K$ in $D(\mathbb{R})$. The sequence $(X_n)$ of processes is called C-tight if it is tight and if all limit points of the sequence $\{\mathcal{L}(X_n)\}$ are laws of continuous processes (see [5]).

Denote $S(x) = \int_{x_0}^x \frac{1}{\sigma(u)}\,du$ and $Y_t = S(X_t)$. For each set $A \in \mathcal{B}(\mathbb{R})$ of the form $A = \bigcup_{i=0}^m [a_{2i}, a_{2i+1})$, where $-\infty \le a_0 < a_1 < \cdots < a_{2m+1} \le +\infty$, we introduce the following estimator for $\Gamma(A)_t$:
$$\tilde\Gamma(A)^n_t = \sum_{j=0}^m \int_0^t \bigg[ \Phi\Big(\frac{S(a_{2j+1}) - S(X_{\eta_n(s)})}{\sqrt{s - \eta_n(s)}}\Big) - \Phi\Big(\frac{S(a_{2j}) - S(X_{\eta_n(s)})}{\sqrt{s - \eta_n(s)}}\Big) \bigg]\,ds.$$
In particular, if $A = [K, +\infty)$ then a biased but consistent estimator for the occupation time $\int_0^t \mathbb{I}_{\{X_s \ge K\}}\,ds$ is defined by
$$\tilde\Gamma([K,\infty))^n_t = \int_0^t \Phi\Big(\frac{S(X_{\eta_n(s)}) - S(K)}{\sqrt{s - \eta_n(s)}}\Big)\,ds.$$

Theorem 2.3. For each set $A \in \mathcal{B}(\mathbb{R})$ of the form $A = \bigcup_{i=0}^m [a_{2i}, a_{2i+1})$, where $-\infty \le a_0 < a_1 < \cdots < a_{2m+1} \le +\infty$, the sequence of stochastic processes
$$\bigg( \frac{1}{(\Delta_n)^{3/4}} \Big( \tilde\Gamma(A)^n_t - \int_0^t \mathbb{I}_{\{X_s \in A\}}\,ds \Big) \bigg)_{t \ge 0}$$
is C-tight.
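As an illustration of how the estimator of this subsection can be evaluated in practice, the Python sketch below uses the toy diffusion $dX_t = -X_t\,dt + \sqrt{1 + X_t^2}\,dW_t$ (which satisfies (2.1), and for which $S(x) = \int_0^x du/\sigma(u) = \operatorname{arcsinh}(x)$ is available in closed form) and the set $A = [K, \infty)$. All numerical parameters, the coefficients and the quadrature rule are illustrative choices, not prescriptions from the paper.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))      # standard normal c.d.f.

# toy diffusion dX = -X dt + sqrt(1 + X^2) dW; sigma >= 1 and both coefficients are Lipschitz
b = lambda x: -x
sigma = lambda x: sqrt(1.0 + x * x)
S = np.arcsinh                                        # S(x) = int_0^x du / sigma(u), with x0 = 0

T, K, n_fine = 1.0, 0.3, 20_000
dt = T / n_fine
t = np.linspace(0.0, T, n_fine + 1)

# fine Euler path: ground-truth proxy for Gamma and the source of the coarse sample
X = np.empty(n_fine + 1)
X[0] = 0.0
dW = rng.normal(0.0, sqrt(dt), n_fine)
for i in range(n_fine):
    X[i + 1] = X[i] + b(X[i]) * dt + sigma(X[i]) * dW[i]

# irregular observation times: alternating blocks of 40 and 80 fine steps (mesh ratio 2)
idx = np.concatenate(([0], np.cumsum(np.tile([40, 80], 300))))
idx = np.append(idx[idx < n_fine], n_fine)

gamma_tilde = 0.0
for j in range(1, len(idx)):
    d = t[idx[j]] - t[idx[j - 1]]
    u = (np.arange(50) + 0.5) * d / 50                # midpoint rule in s - eta_n(s)
    gamma_tilde += d * np.mean([Phi((S(X[idx[j - 1]]) - S(K)) / sqrt(ui)) for ui in u])

occ_fine = np.sum(dt * (X[1:] >= K))                  # fine-grid proxy for Gamma([K, inf))_T
print("fine-grid occupation time      :", occ_fine)
print("estimator Gamma_tilde([K,inf)) :", gamma_tilde)
```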
3. Proofs

We denote by $(P_t)_{t > 0}$ the Brownian semigroup given by
$$P_t k(x) = \int k(x + y\sqrt{t})\,\rho(y)\,dy,$$
where $\rho(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}$ and $k$ is a Lebesgue integrable function.

3.1. Some preliminary estimates

Throughout this section we denote by $K$ a constant which may change from line to line. If $K$ depends on an additional parameter $\gamma$, we write $K_\gamma$. We first recall some estimates on the semigroup $(P_t)$.

Lemma 3.1 (Jacod [4]). Let $k : \mathbb{R} \to \mathbb{R}$ be an integrable function. If $t > s > 0$ and $\gamma > 0$ we have:
$$|P_t k(x)| \le \frac{K\lambda(|k|)}{\sqrt{t}}, \tag{3.1}$$
$$\Big| P_t k(x) - \frac{\lambda(k)}{\sqrt{2\pi t}}\, e^{-x^2/2t} \Big| \le \frac{K_\gamma}{t} \Big( \frac{\beta_1(k)}{1 + |x/\sqrt{t}|^\gamma} + \frac{\beta_{1+\gamma}(k)}{1 + |x|^\gamma} \Big), \tag{3.2}$$
$$\Big| P_t k(x) - \frac{\lambda(k)}{\sqrt{2\pi t}}\, e^{-x^2/2t} \Big| \le \frac{K}{t^{3/2}} \big( \beta_2(k) + \beta_1(k)|x| \big). \tag{3.3}$$

We will need the following estimate.

Lemma 3.2. Let $k : \mathbb{R} \to \mathbb{R}$ be an integrable function. Suppose that the sequence $\{t^n_i\}$ satisfies (2.2). Denote
$$\gamma_1(k, x)^n_t = \mathbb{E}\bigg( \sum_{t^n_i \le t,\ i \ge 2} (\Delta^n_i)^2\, k\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big) \bigg), \qquad \gamma_2(k, x)^n_t = \mathbb{E}\bigg( \sum_{t^n_i \le t,\ i \ge 2} \sqrt{\Delta^n_i}\, k\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big) \bigg).$$
Then
$$\text{(i)} \quad |\gamma_1(k, x)^n_t| \le K\lambda(|k|)(\Delta_n)^{3/2}\sqrt{t}, \tag{3.4}$$
$$\text{(ii)} \quad |\gamma_2(k, x)^n_t| \le K\lambda(|k|)\sqrt{t}. \tag{3.5}$$
Moreover, if $\lambda(k) = 0$ then
$$|\gamma_1(k, x)^n_t| \le K\beta_1(k)(\Delta_n)^2 k_0 \Big(1 + \log^+\Big(\frac{t k_0}{\Delta_n}\Big)\Big), \tag{3.6}$$
$$|\gamma_1(k, x)^n_t| \le K(\Delta_n)^2 \big(\beta_2(k) + \beta_1(k)|x|\big), \tag{3.7}$$
and
$$|\gamma_2(k, x)^n_t| \le K\beta_1(k)\sqrt{\Delta_n}\, k_0 \Big(1 + \log^+\Big(\frac{t k_0}{\Delta_n}\Big)\Big), \tag{3.8}$$
$$|\gamma_2(k, x)^n_t| \le K\sqrt{k_0}\,\sqrt{\Delta_n}\big(\beta_2(k) + \beta_1(k)|x|\big). \tag{3.9}$$

Proof. From (3.1) and estimates (4.1), (4.2), we obtain (3.4) and (3.6). Furthermore, from (3.3) in Lemma 3.1 we get
$$|\gamma_1(k, x)^n_t| \le \sum_{t^n_i \le t,\ i \ge 2} (\Delta^n_i)^2\, \frac{K}{(t^n_{i-1}/\Delta^n_i)^{3/2}} \big(\beta_2(k) + \beta_1(k)|x|\big) \le K k_0 (\Delta_n)^{5/2} \big(\beta_2(k) + \beta_1(k)|x|\big) \int_{\Delta_n/k_0}^t u^{-3/2}\,du \le K(\Delta_n)^2\big(\beta_2(k) + \beta_1(k)|x|\big).$$
By using analogous arguments, we obtain (3.5), (3.8) and (3.9).

Lemma 3.3. Assume that $\lambda(g) = 0$ and that $g$ satisfies (2.4). Then
$$\text{(i)} \quad \frac{1}{(\Delta_n)^3}\, \mathbb{E}\bigg( \Big[ \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \Big]^2 \bigg) \xrightarrow{\ n\to\infty\ } 0,$$
$$\text{(ii)} \quad \mathbb{E}\bigg( \Big[ \sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \Big]^2 \bigg) \xrightarrow{\ n\to\infty\ } 0.$$

Proof. We first note that condition (2.4) implies $\lambda(g^2) < \infty$. We write
$$\frac{1}{(\Delta_n)^3}\, \mathbb{E}\bigg( \Big[ \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \Big]^2 \bigg) = \frac{1}{(\Delta_n)^3} \sum_{t^n_i \le t} \mathbb{E}\bigg( (t^n_i - t^n_{i-1})^4\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big)^2 \bigg) + \frac{2}{(\Delta_n)^3} \sum_{i:\, t^n_i < t^n_{i+1} \le t} \mathbb{E}\bigg( (\Delta^n_i)^2\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big) \sum_{j:\, t^n_i < t^n_j \le t} (\Delta^n_j)^2\, g\Big(\frac{x + B_{t^n_{j-1}}}{\sqrt{\Delta^n_j}}\Big) \bigg). \tag{3.10}$$
Using (3.4) and (2.4), the first term of (3.10) is bounded by
$$\Delta_n\, g\Big(\frac{x}{\sqrt{\Delta^n_1}}\Big)^2 + (\Delta_n)^{1/2} K \lambda(g^2)\sqrt{t} \to 0 \quad \text{as } n \to \infty.$$
Using (3.6), we have
$$\mathbb{E}\bigg( \sum_{j:\, t^n_i < t^n_j \le t} (\Delta^n_j)^2\, g\Big(\frac{x + B_{t^n_{j-1}}}{\sqrt{\Delta^n_j}}\Big)\, \Big|\, \mathcal{F}_{t^n_{i-1}} \bigg) = \mathbb{E}\bigg( \sum_{t^n_i < t^n_j \le t} (\Delta^n_j)^2\, g\Big(\frac{y + B_{t^n_{j-1} - t^n_{i-1}}}{\sqrt{\Delta^n_j}}\Big) \bigg)\bigg|_{y = x + B_{t^n_{i-1}}} \le K\beta_1(g)(\Delta_n)^2 k_0 \Big(1 + \log^+\Big(\frac{(t - t^n_{i-1})k_0}{\Delta_n}\Big)\Big) \le K\beta_1(g)(\Delta_n)^2 k_0 \big(1 + \log^+(k_0 n)\big).$$
Thus the second term of (3.10) is bounded by
$$\frac{2}{(\Delta_n)^3} \sum_{i:\, t^n_i < t^n_{i+1} \le t} \mathbb{E}\bigg( (\Delta^n_i)^2\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big)\, \mathbb{E}\Big( \sum_{j:\, t^n_i < t^n_j \le t} (\Delta^n_j)^2\, g\Big(\frac{x + B_{t^n_{j-1}}}{\sqrt{\Delta^n_j}}\Big) \Big|\, \mathcal{F}_{t^n_{i-1}} \Big) \bigg) \le K k_0 \beta_1(g) (\Delta_n)^{-1} \big(1 + \log^+(k_0 n)\big) \sum_{i:\, t^n_i < t^n_{i+1} \le t} \mathbb{E}\bigg( (\Delta^n_i)^2\, g\Big(\frac{x + B_{t^n_{i-1}}}{\sqrt{\Delta^n_i}}\Big) \bigg) \le K k_0 \beta_1(g) (\Delta_n)^{-1} \big(1 + \log^+(k_0 n)\big) \bigg( (\Delta^n_1)^2\, g\Big(\frac{x}{\sqrt{\Delta^n_1}}\Big) + K\beta_1(g)(\Delta_n)^2 k_0 \big(1 + \log^+(k_0 n)\big) \bigg),$$
which tends to $0$ as $n \to \infty$ because of condition (2.4). This proves part (i). In an analogous manner, applying (3.5), (3.8) and (2.4), we obtain (ii).

For each set $A \in \mathcal{B}(\mathbb{R})$, where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra on $\mathbb{R}$, we denote
$$\Gamma(A)^n_t = \sum_{t^n_i \le t} \Delta^n_i\, \mathbb{I}_{\{X_{t^n_i} \in A\}}.$$

Lemma 3.4. Suppose that condition (2.2) holds. Then for each set $A \in \mathcal{B}(\mathbb{R})$ satisfying $\int_{\partial A} dx = 0$, we have $\Gamma(A)^n_t \xrightarrow{\ a.s.\ } \Gamma(A)_t$.

The proof is similar to that of Proposition 2.1 in [10] and is omitted.

Lemma 3.5. Assume that condition (2.3) holds and that the function $g$ satisfies (2.4). Then for all $x \in \mathbb{R}$ it holds that
$$\frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \xrightarrow{\ \mathbb{P}\ } \lambda(g)\, F_t(x). \tag{3.11}$$

Proof. We set $\hat g(x) = \mathbb{E}(|x + B_1| - |x|)$. Applying condition (2.3), we write
$$\frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, \hat g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) = \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^{3/2}\, \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \xrightarrow{\ \mathbb{P}\ } F_t(x).$$
Set $g' = g - \lambda(g)\hat g$; then $\lambda(g') = 0$ (note that $\lambda(\hat g) = 1$). It follows from Lemma 3.3 and condition (2.3) that
$$\frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) = \lambda(g)\, \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, \hat g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) + \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (t^n_i - t^n_{i-1})^2\, g'\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \xrightarrow{\ \mathbb{P}\ } \lambda(g)\, F_t(x).$$

3.2. Proof of Theorem 2.1

We denote $\hat g(x)$ as in Lemma 3.5. From the definition of $L_t$ we have
$$\mathbb{E}\big(|B_{t^n_i} - x| - |B_{t^n_{i-1}} - x|\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) = \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \quad \text{for all } x \in \mathbb{R}.$$
On the other hand, since $B_{t^n_i} - B_{t^n_{i-1}}$ is independent of $\mathcal{F}_{t^n_{i-1}}$ and has the same distribution as $\sqrt{\Delta^n_i}\, B_1$, we have
$$\mathbb{E}\big(|B_{t^n_i} - x| - |B_{t^n_{i-1}} - x|\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) = \mathbb{E}\big(|B_{t^n_{i-1}} + B_{t^n_i} - B_{t^n_{i-1}} - x| - |B_{t^n_{i-1}} - x|\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) = \mathbb{E}\big(|y + \sqrt{\Delta^n_i}\, B_1| - |y|\big)\Big|_{y = B_{t^n_{i-1}} - x} = \sqrt{\Delta^n_i}\, \hat g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{\Delta^n_i}}\Big).$$
Hence, it follows from Lemma 2.14 of [3] that
$$\sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\, \hat g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) = \sum_{t^n_i \le t} \mathbb{E}\big(L_{t^n_i}(x) - L_{t^n_{i-1}}(x)\,\big|\,\mathcal{F}_{t^n_{i-1}}\big) \xrightarrow{\ \mathbb{P}\ } L_t(x).$$
Set $g' = g - \lambda(g)\hat g$; then $\lambda(g') = 0$. From Lemma 3.3 (ii) one gets
$$\sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\, g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) = \lambda(g) \sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\, \hat g\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) + \sum_{t^n_i \le t} \sqrt{t^n_i - t^n_{i-1}}\, g'\Big(\frac{B_{t^n_{i-1}} - x}{\sqrt{t^n_i - t^n_{i-1}}}\Big) \xrightarrow{\ \mathbb{P}\ } \lambda(g)\, L_t(x).$$
This concludes the proof of Theorem 2.1.
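As a rough numerical illustration of Theorem 2.1 (not part of the proof), the sketch below takes $g$ to be the standard Gaussian density, so that $\lambda(g) = 1$ and (2.4) holds, and compares the statistic $\sum_{t^n_i \le T} \sqrt{\Delta^n_i}\, g\big((B_{t^n_{i-1}} - x)/\sqrt{\Delta^n_i}\big)$ computed from an irregular sample with an occupation-density proxy for the local time $L_T(x)$ computed on a fine grid. The grids, the number of paths and the bandwidth $\varepsilon$ are arbitrary illustrative choices, and for a fixed $n$ the agreement is only approximate.

```python
import numpy as np
from math import sqrt, pi

rng = np.random.default_rng(3)

T, x_level, n_fine, eps = 1.0, 0.0, 200_000, 0.01
dt = T / n_fine
g = lambda y: np.exp(-y * y / 2.0) / sqrt(2.0 * pi)    # lambda(g) = 1, beta_1(g) < infinity

for path in range(3):
    B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, sqrt(dt), n_fine))))

    # irregular coarse sample: alternating blocks of 40 and 80 fine steps (mesh ratio 2)
    idx = np.concatenate(([0], np.cumsum(np.tile([40, 80], 3000))))
    idx = np.append(idx[idx < n_fine], n_fine)
    d = np.diff(idx) * dt                               # the step sizes Delta_i^n
    stat = np.sum(np.sqrt(d) * g((B[idx[:-1]] - x_level) / np.sqrt(d)))

    # occupation-density proxy for the local time L_T(x) on the fine grid
    L_T = np.sum(dt * (np.abs(B[1:] - x_level) < eps)) / (2.0 * eps)

    print(f"path {path}: statistic = {stat:.3f}, local-time proxy = {L_T:.3f}")
```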
Lemma 3.6. We denote $N^n_t = \sum_{t^n_i \le t} N_{i,n}$ and $M^n_t = \sum_{t^n_i \le t} M_{i,n}$, where
$$N_{i,n} = \frac{1}{(\Delta_n)^{3/4}} \bigg( (t^n_i - t^n_{i-1})\, \mathbb{I}_{[K,\infty)}(B_{t^n_{i-1}}) - \int_{t^n_{i-1}}^{t^n_i} \mathbb{I}_{[K,\infty)}(B_s)\,ds \bigg), \qquad M_{i,n} = N_{i,n} - \mathbb{E}(N_{i,n}\,|\,\mathcal{F}_{t^n_{i-1}}).$$
Then the sequence $(M^n)$ converges stably to a continuous process defined on an extension of the original probability space. In particular, the sequence $(M^n)$ is C-tight under the probability measure $\mathbb{P}$.

Proof. We prove the lemma in the following steps.

Step 1. A simple calculation using properties of Brownian motion yields
$$\mathbb{E}(N_{i,n}\,|\,\mathcal{F}_{t^n_{i-1}}) = \frac{1}{(\Delta_n)^{3/4}} \bigg( \Delta^n_i\, \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} - \int_0^{\Delta^n_i} \Phi\Big(\frac{B_{t^n_{i-1}} - K}{\sqrt{u}}\Big)\,du \bigg) = \frac{\Delta^n_i}{(\Delta_n)^{3/4}}\, \frac{1}{\sqrt{2\pi}} \int_{\frac{B_{t^n_{i-1}} - K}{\sqrt{\Delta^n_i}}}^{+\infty} \Big( 1 - \frac{(B_{t^n_{i-1}} - K)^2}{\Delta^n_i\, t^2} \Big) e^{-t^2/2}\,dt\; \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} - \frac{\Delta^n_i}{(\Delta_n)^{3/4}}\, \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{B_{t^n_{i-1}} - K}{\sqrt{\Delta^n_i}}} \Big( 1 - \frac{(B_{t^n_{i-1}} - K)^2}{\Delta^n_i\, t^2} \Big) e^{-t^2/2}\,dt\; \mathbb{I}_{\{B_{t^n_{i-1}} < K\}}.$$
We set
$$g_1(x) = \bigg( \int_x^\infty \Big(1 - \frac{x^2}{t^2}\Big) e^{-t^2/2}\,dt\; \mathbb{I}_{\{x > 0\}} - \int_{-\infty}^x \Big(1 - \frac{x^2}{t^2}\Big) e^{-t^2/2}\,dt\; \mathbb{I}_{\{x < 0\}} \bigg)^2.$$
We have $\int_{\mathbb{R}} g_1(x)\,dx = \frac{7\sqrt{2\pi}}{20}$ and $g_1(x) \le \min\{\frac{\pi}{2},\, x^{-2}e^{-x^2}\}$ for any $x \in \mathbb{R}$. Hence it follows from Lemma 3.5 that
$$\sum_{t^n_i \le t} \big( \mathbb{E}(N_{i,n}\,|\,\mathcal{F}_{t^n_{i-1}}) \big)^2 = \frac{1}{2\pi}\, \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (\Delta^n_i)^2\, g_1\Big(\frac{B_{t^n_{i-1}} - K}{\sqrt{\Delta^n_i}}\Big) \xrightarrow{\ \mathbb{P}\ } \frac{7}{20\sqrt{2\pi}}\, F_t(K). \tag{3.12}$$

Step 2. Next, by the Markov property and Fubini's theorem, we have
$$\mathbb{E}\bigg[ \Big( \int_{t^n_{i-1}}^{t^n_i} \mathbb{I}_{\{B_s \ge K\}}\,ds \Big)^2 \,\Big|\, \mathcal{F}_{t^n_{i-1}} \bigg] = \int_0^{\Delta^n_i} \int_0^{\Delta^n_i} \mathbb{E}\big( \mathbb{I}_{\{B_s \ge -r\}} \mathbb{I}_{\{B_u \ge -r\}} \big)\,du\,ds\, \Big|_{r = B_{t^n_{i-1}} - K}.$$
A direct calculation of the expectation $\mathbb{E}\big( \mathbb{I}_{\{B_s \ge -r\}} \mathbb{I}_{\{B_u \ge -r\}} \big)$ yields that if $r \le 0$ then
$$\mathbb{E}\Big( \int_0^{\Delta^n_i} \mathbb{I}_{\{B_s \ge -r\}}\,ds \Big)^2 = \frac{(\Delta^n_i)^2}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{r^2}{2\Delta^n_i (1-z)} \Big)\,dz,$$
and if $r > 0$ then
$$\mathbb{E}\Big( \int_0^{\Delta^n_i} \mathbb{I}_{\{B_s \ge -r\}}\,ds \Big)^2 = (\Delta^n_i)^2 \bigg( 1 - \int_0^1 \frac{1}{\pi\sqrt{z(1-z)}} \exp\Big( -\frac{r^2}{2 z \Delta^n_i} \Big)\,dz \bigg) + \frac{(\Delta^n_i)^2}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{r^2}{2\Delta^n_i (1-z)} \Big)\,dz.$$
We have
$$\mathbb{E}(N_{i,n}^2\,|\,\mathcal{F}_{t^n_{i-1}}) = \frac{(\Delta^n_i)^2}{(\Delta_n)^{3/2}}\, \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} - \frac{2\Delta^n_i}{(\Delta_n)^{3/2}}\, \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}}\, \mathbb{E}\Big( \int_{t^n_{i-1}}^{t^n_i} \mathbb{I}_{\{B_s \ge K\}}\,ds \,\Big|\, \mathcal{F}_{t^n_{i-1}} \Big) + \frac{1}{(\Delta_n)^{3/2}}\, \mathbb{E}\bigg( \Big( \int_{t^n_{i-1}}^{t^n_i} \mathbb{I}_{\{B_s \ge K\}}\,ds \Big)^2 \,\Big|\, \mathcal{F}_{t^n_{i-1}} \bigg).$$
Hence
$$\mathbb{E}(N_{i,n}^2\,|\,\mathcal{F}_{t^n_{i-1}}) = \frac{(\Delta^n_i)^2}{(\Delta_n)^{3/2}} \bigg\{ \frac{1}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{(B_{t^n_{i-1}} - K)^2}{2\Delta^n_i (1-z)} \Big)\,dz\; \mathbb{I}_{\{B_{t^n_{i-1}} < K\}} + 2\int_0^1 \Big( 1 - \Phi\Big( \frac{B_{t^n_{i-1}} - K}{\sqrt{\Delta^n_i}\sqrt{u}} \Big) \Big)\,du\; \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} + \frac{1}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{(B_{t^n_{i-1}} - K)^2}{2\Delta^n_i z} \Big)\,dz\; \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} - \frac{1}{\pi} \int_0^1 \frac{1}{\sqrt{z(1-z)}} \exp\Big( -\frac{(B_{t^n_{i-1}} - K)^2}{2\Delta^n_i z} \Big)\,dz\; \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} \bigg\}.$$
Set
$$g_2(x) = \frac{1}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{x^2}{2(1-z)} \Big)\,dz\; \mathbb{I}_{\{x < 0\}} + \bigg\{ 2\int_0^1 \Big(1 - \Phi\Big(\frac{x}{\sqrt{u}}\Big)\Big)\,du + \frac{1}{\pi} \int_0^1 \frac{z^{3/2}}{\sqrt{1-z}} \exp\Big( -\frac{x^2}{2z} \Big)\,dz - \frac{1}{\pi} \int_0^1 \frac{1}{\sqrt{z(1-z)}} \exp\Big( -\frac{x^2}{2z} \Big)\,dz \bigg\}\, \mathbb{I}_{\{x > 0\}}.$$
We have $|g_2(x)| \le K \min\{1,\, |x|^{-1} e^{-x^2/2} + x^{-2}\}$ for all $x \in \mathbb{R}$ and $\int_{-\infty}^{+\infty} g_2(x)\,dx = \frac{2\sqrt{2}}{5\sqrt{\pi}}$. Therefore, applying Lemma 3.5 we get
$$\sum_{t^n_i \le t} \mathbb{E}(N_{i,n}^2\,|\,\mathcal{F}_{t^n_{i-1}}) = \frac{1}{(\Delta_n)^{3/2}} \sum_{t^n_i \le t} (\Delta^n_i)^2\, g_2\Big(\frac{B_{t^n_{i-1}} - K}{\sqrt{\Delta^n_i}}\Big) \xrightarrow{\ \mathbb{P}\ } \frac{2\sqrt{2}}{5\sqrt{\pi}}\, F_t(K). \tag{3.13}$$

Step 3. It follows from Steps 1 and 2 that
$$\sum_{t^n_i \le t} \mathbb{E}(M_{i,n}^2\,|\,\mathcal{F}_{t^n_{i-1}}) = \sum_{t^n_i \le t} \Big( \mathbb{E}(N_{i,n}^2\,|\,\mathcal{F}_{t^n_{i-1}}) - \big(\mathbb{E}(N_{i,n}\,|\,\mathcal{F}_{t^n_{i-1}})\big)^2 \Big) \xrightarrow{\ \mathbb{P}\ } \frac{9}{20\sqrt{2\pi}}\, F_t(K). \tag{3.14}$$

Step 4. We have
$$\mathbb{E}\big( M_{i,n}(B_{t^n_i} - B_{t^n_{i-1}})\,\big|\,\mathcal{F}_{t^n_{i-1}} \big) = -\frac{1}{(\Delta_n)^{3/4}} \int_{t^n_{i-1}}^{t^n_i} \mathbb{E}\big( (B_s - B_{t^n_{i-1}})\, \mathbb{I}_{\{B_s \ge K\}} \,\big|\, \mathcal{F}_{t^n_{i-1}} \big)\,ds.$$
From the Markov property, we get
$$\mathbb{E}\bigg( \sum_{t^n_i \le t} \Big| \mathbb{E}\big( M_{i,n}(B_{t^n_i} - B_{t^n_{i-1}})\,\big|\,\mathcal{F}_{t^n_{i-1}} \big) \Big| \bigg) \le \frac{1}{(\Delta_n)^{3/4}} \sum_{t^n_i \le t} \int_0^{\Delta^n_i} \frac{\sqrt{z}}{\sqrt{2\pi}}\, \mathbb{E} \exp\Big( -\frac{(B_{t^n_{i-1}} - K)^2}{2z} \Big)\,dz \le \frac{1}{(\Delta_n)^{3/4}} \int_0^{\Delta^n_1} \frac{\sqrt{z}}{\sqrt{2\pi}}\,dz + \frac{1}{(\Delta_n)^{3/4}} \sum_{t^n_i \le t,\ i \ge 2} \int_0^{\Delta^n_i} dz \int_{-\infty}^{+\infty} \frac{\sqrt{z}}{2\pi\sqrt{t^n_{i-1}}} \exp\Big( -\frac{(x - K)^2}{2z} \Big)\,dx \le (\Delta_n)^{3/4} + \frac{\Delta_n}{(\Delta_n)^{3/4}} \sum_{t^n_i \le t,\ i \ge 2} \frac{\Delta^n_i}{\sqrt{t^n_{i-1}}} \le (\Delta_n)^{3/4} + (\Delta_n)^{1/4}\sqrt{t}.$$
Therefore,
$$\sum_{t^n_i \le t} \mathbb{E}\big( M_{i,n}(B_{t^n_i} - B_{t^n_{i-1}})\,\big|\,\mathcal{F}_{t^n_{i-1}} \big) \xrightarrow{\ \mathbb{P}\ } 0. \tag{3.15}$$
Step 5. We have $\sum_{t^n_i \le t} \mathbb{E}(M_{i,n}^4\,|\,\mathcal{F}_{t^n_{i-1}}) \le 16 \sum_{t^n_i \le t} \mathbb{E}(N_{i,n}^4\,|\,\mathcal{F}_{t^n_{i-1}})$. Moreover, the Markov property yields
$$\sum_{t^n_i \le t} \mathbb{E}(M_{i,n}^4\,|\,\mathcal{F}_{t^n_{i-1}}) \le \frac{16}{(\Delta_n)^3} \sum_{t^n_i \le t} \mathbb{E}\bigg( \Delta^n_i\, \mathbb{I}_{\{r \ge 0\}} - \int_0^{\Delta^n_i} \mathbb{I}_{\{B_s \ge -r\}}\,ds \bigg)^4 \bigg|_{r = B_{t^n_{i-1}} - K} \le \frac{16}{(\Delta_n)^3} \sum_{t^n_i \le t} \mathbb{E}\bigg( \int_0^{\Delta^n_i} \mathbb{I}_{\{B_s \ge r\}}\,ds \bigg)^4 \bigg|_{r = |B_{t^n_{i-1}} - K|} \le 16 \sum_{t^n_i \le t} \int_0^{\Delta^n_i} \mathbb{P}(B_s \ge r)\,ds \bigg|_{r = |B_{t^n_{i-1}} - K|}.$$
Hence
$$\mathbb{E}\bigg| \sum_{t^n_i \le t} \mathbb{E}(M_{i,n}^4\,|\,\mathcal{F}_{t^n_{i-1}}) \bigg| \le 16 \sum_{t^n_i \le t} \int_0^{\Delta^n_i} \mathbb{E}\bigg( 1 - \Phi\Big( \frac{|B_{t^n_{i-1}} - K|}{\sqrt{s}} \Big) \bigg)\,ds \le 16\Delta_n + \frac{128}{3}\sqrt{\Delta_n}\sqrt{t}.$$
Therefore,
$$\sum_{t^n_i \le t} \mathbb{E}(M_{i,n}^4\,|\,\mathcal{F}_{t^n_{i-1}}) \xrightarrow{\ \mathbb{P}\ } 0 \quad \text{as } n \to \infty. \tag{3.16}$$

Step 6. The process $M^n$ is a martingale with respect to the filtration $(\mathcal{F}_{t^n_i})_i$, and under the probability measure $\mathbb{P}$ any martingale with respect to $(\mathcal{F}_t)$ which is orthogonal to $B$ is constant. Hence, from (3.14), (3.15), (3.16) and applying Theorem IX.7.28 of [5], we obtain that $M^n$ converges stably to a continuous process defined on an extension of the original probability space. In particular, $(M^n)$ is C-tight under the probability measure $\mathbb{P}$.

3.3. Proof of Theorem 2.2

(i) Since
$$\tilde\Gamma(K)^n_t = \int_0^t \Phi\Big(\frac{B_{\eta_n(s)} - K}{\sqrt{s - \eta_n(s)}}\Big)\,ds = \sum_{i \ge 1} \mathbb{E}\bigg( \int_{t^n_{i-1}\wedge t}^{t^n_i \wedge t} \mathbb{I}_{\{B_s \ge K\}}\,ds \,\bigg|\, B_{t^n_{i-1}} \bigg),$$
we have
$$\mathbb{E}\big(\tilde\Gamma(K)^n_t\big) = \mathbb{E}\Big( \int_0^t \mathbb{I}_{\{B_s \ge K\}}\,ds \Big) = \mathbb{E}\big(\Gamma([K,+\infty))_t\big).$$
Thus $\tilde\Gamma(K)^n_t$ is an unbiased estimator of $\Gamma([K,+\infty))_t$.

(ii) Moreover, we have
$$\Gamma([K,+\infty))_t - \tilde\Gamma(K)^n_t = \sum_{t^n_i \le t} \bigg( \Delta^n_i\, \mathbb{I}_{\{B_{t^n_{i-1}} \ge K\}} - \mathbb{E}\Big( \int_{t^n_{i-1}}^{t^n_i} \cdots$$