1 Introduction
Simulation of random processes is a broad area nowadays, and many simulation methods are available (see, e.g., [9, 10]). There is one substantial problem: for most traditional simulation methods, it is difficult to measure the quality of approximation of a process by its model in terms of a “distance” between the paths of the process and the corresponding paths of the model. Therefore, models for which such a distance can be estimated are of particular interest.
Simulation by such models is known as simulation with given accuracy and reliability; it is considered, for example, in [7, 4, 6, 11, 12].
Simulation with given accuracy and reliability can be described in the following way. Suppose that an approximation X̃(t) of a random process X(t) is constructed; the process X̃(t) is called a model of X(t). A model depends on certain parameters. The rate of convergence of a model to a process is given by a statement of the following type: if numbers δ (accuracy) and ε (1 − ε is called reliability) are given and the parameters of the model satisfy certain restrictions (for instance, they are not less than certain lower bounds), then

P{‖X − X̃‖ > δ} ≤ ε. (1)
Many such results have been proved for the cases where the norm in (1) is the Lp norm or the uniform norm. But simulation with given accuracy and reliability has so far been developed mostly for processes whose one-dimensional distributions have tails not heavier than Gaussian tails (e.g., for sub-Gaussian processes), and such simulation for processes with tails heavier than Gaussian tails deserves attention.
We consider a random process Y(t) = exp{X(t)} and an f-wavelet ϕ(x) with the corresponding m-wavelet ψ(x), where X(t) is a centered second-order process whose correlation function R(t, s) can be represented as

R(t, s) = ∫_ℝ u(t, λ) \overline{u(s, λ)} dλ.
We prove that

Y(t) = ∏_{k∈ℤ} exp{ξ0k a0k(t)} · ∏_{j=0}^{∞} ∏_{l∈ℤ} exp{ηjl bjl(t)},

where ξ0k, ηjl are random variables, and a0k(t), bjl(t) are functions that depend on X(t) and the wavelet.
Let us consider the case where X(t) is a strictly sub-Gaussian process. Note that the class of processes Y(t) = exp{X(t)}, where X(t) is a strictly sub-Gaussian process, is a rich class that includes many processes with one-dimensional distributions having tails heavier than Gaussian tails; for example, when X(t) is a Gaussian process, one-dimensional distributions of Y(t) are lognormal.
We describe the rate of convergence of the model Ŷ(t) to the process Y(t) in C([0, T]) as follows: if ε ∈ (0; 1) and δ > 0 are given and the parameters N0, N, Mj are large enough, then

P{ sup_{t∈[0,T]} |Y(t) − Ŷ(t)| / |Y(t)| > δ } ≤ ε. (2)
A similar statement characterizing the rate of convergence of the model to Y(t) in Lp([0, T]) is also proved for the case where (2) is replaced by the inequality

P{ ( ∫_0^T |Y(t) − Ŷ(t)|^p dt )^{1/p} > δ } ≤ ε.
If the process X(t) = ln Y(t) is Gaussian, then the model can be used for computer simulation of Y(t).
One of the merits of our model is its simplicity. Besides, it can be used for simulation of processes with one-dimensional distributions having tails heavier than Gaussian tails.
2 Auxiliary facts
A random variable ξ is called sub-Gaussian if there exists a constant a ≥ 0 such that

E exp{λξ} ≤ exp{a²λ²/2} for all λ ∈ ℝ.
The class of all sub-Gaussian random variables on a standard probability space {Ω, ℬ, P} is a Banach space with respect to the norm

τ(ξ) = inf{ a ≥ 0 : E exp{λξ} ≤ exp{a²λ²/2} for all λ ∈ ℝ }.
A centered Gaussian random variable and a random variable uniformly distributed on [−b, b] are examples of sub-Gaussian random variables.
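The uniform example can be checked numerically: for ξ uniform on [−b, b] the moment generating function is E exp{λξ} = sinh(λb)/(λb), and the defining inequality holds with a² = b²/3 (the variance of ξ). A minimal numerical sketch; the closed-form mgf and the choice of a are standard facts about the uniform distribution, not taken from this paper:

```python
import numpy as np

def mgf_uniform(lam, b):
    """Exact mgf of xi ~ Uniform[-b, b]: E exp(lam*xi) = sinh(lam*b)/(lam*b)."""
    x = np.asarray(lam, dtype=float) * b
    out = np.ones_like(x)            # limit value 1 at lam = 0
    nz = x != 0
    out[nz] = np.sinh(x[nz]) / x[nz]
    return out

b = 2.0
a2 = b**2 / 3.0                      # candidate a^2 = Var(xi)
lam = np.linspace(-10.0, 10.0, 2001)
bound = np.exp(a2 * lam**2 / 2.0)    # sub-Gaussian bound exp(a^2 lam^2 / 2)
assert np.all(mgf_uniform(lam, b) <= bound + 1e-12)
```

The check passes with a² equal to the variance, which is consistent with ξ being not just sub-Gaussian but strictly sub-Gaussian.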
A family Δ of sub-Gaussian random variables is called strictly sub-Gaussian if for any finite or countable set I of random variables ξi ∈ Δ and for any λi ∈ ℝ,

τ²( Σ_{i∈I} λi ξi ) = E( Σ_{i∈I} λi ξi )².
A stochastic process X = {X(t), t ∈ T} is called sub-Gaussian if all the random variables X(t), t ∈ T, are sub-Gaussian and sup_{t∈T} τ(X(t)) < ∞. We call a sub-Gaussian stochastic process X = {X(t), t ∈ T} strictly sub-Gaussian if the family {X(t), t ∈ T} is strictly sub-Gaussian. Any centered Gaussian process X = {X(t), t ∈ T} for which sup_{t∈T} E X²(t) < ∞ is strictly sub-Gaussian.
Details about sub-Gaussian random variables and sub-Gaussian and strictly sub-Gaussian random processes can be found in [1] and [3].
We will use wavelets (see [2] for details) for an expansion of a stochastic process. Namely, we use a father wavelet ϕ(x) and the corresponding mother wavelet ψ(x) (we will use the terms “f-wavelet” and “m-wavelet” instead of the terms “father wavelet” and “mother wavelet,” respectively). Set

ϕ0k(x) = ϕ(x − k), ψjl(x) = 2^{j/2} ψ(2^j x − l), k, l ∈ ℤ, j = 0, 1, 2, ….

Note that {ϕ0k, k ∈ ℤ; ψjl, j = 0, 1, 2, …, l ∈ ℤ} is an orthonormal basis in L2(ℝ). We will further consider only wavelets for which both ϕ(x) and ψ(x) are real-valued.
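For the Haar wavelet, ϕ and ψ have explicit formulas, and the orthonormality of the system {ϕ0k; ψjl} can be verified by numerical integration. A small sketch (the Haar pair is chosen only as a concrete illustration; any real-valued wavelet would serve):

```python
import numpy as np

def phi(x):
    """Haar f-wavelet (father): indicator of [0, 1)."""
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def psi(x):
    """Haar m-wavelet (mother): +1 on [0, 1/2), -1 on [1/2, 1)."""
    return phi(2 * x) - phi(2 * x - 1)

def phi_0k(x, k):
    """phi_{0k}(x) = phi(x - k)."""
    return phi(x - k)

def psi_jl(x, j, l):
    """psi_{jl}(x) = 2^{j/2} psi(2^j x - l)."""
    return 2.0**(j / 2) * psi(2.0**j * x - l)

x = np.linspace(-4.0, 4.0, 800001)       # fine grid for numerical inner products
dx = x[1] - x[0]
ip = lambda f, g: np.sum(f * g) * dx     # approximate L2(R) inner product

assert abs(ip(phi_0k(x, 0), phi_0k(x, 0)) - 1.0) < 1e-3   # unit norms
assert abs(ip(psi_jl(x, 1, 0), psi_jl(x, 1, 0)) - 1.0) < 1e-3
assert abs(ip(phi_0k(x, 0), phi_0k(x, 1))) < 1e-3         # orthogonal translates
assert abs(ip(phi_0k(x, 0), psi_jl(x, 2, 1))) < 1e-3      # phi orthogonal to psi_{jl}
```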
The following statement is crucial for us.
Theorem 1. ([5]) Let X = {X(t), t ∈ ℝ} be a centered random process such that E|X(t)|² < ∞ for all t ∈ ℝ. Let R(t, s) = E X(t)X(s), and suppose that there exists a Borel function u(t, λ), t ∈ ℝ, λ ∈ ℝ, such that

∫_ℝ |u(t, λ)|² dλ < ∞ for all t ∈ ℝ

and

R(t, s) = ∫_ℝ u(t, λ) \overline{u(s, λ)} dλ. (4)

Let ϕ(x) be an f-wavelet, and ψ(x) the corresponding m-wavelet. Then the process X(t) can be presented as the following series, which converges for any t ∈ ℝ in L2(Ω):

X(t) = Σ_{k∈ℤ} ξ0k a0k(t) + Σ_{j=0}^{∞} Σ_{l∈ℤ} ηjl bjl(t), (5)

where a0k(t), bjl(t) are defined by Eqs. (6) and (7), and ξ0k, ηjl are centered random variables such that E ξ0k ξ0m = δ_{km}, E ηjl ηin = δ_{ji} δ_{ln}, E ξ0k ηjl = 0 (δ denotes the Kronecker delta).
Remark 1. Explicit formulae have been obtained for the random variables ξ0k, ηjl in an expansion more general than (5) under certain restrictions on the process X(t) (see [8], Theorem 2.1). It seems that obtaining explicit formulae for ξ0k and ηjl in the general case is either impossible or quite nontrivial.
Definition. Condition RC holds for a stochastic process X(t) if it satisfies the conditions of Theorem 1, u(t, ·) ∈ L1(ℝ) ∩ L2(ℝ), and the inverse Fourier transform of the function u(t, x) with respect to x is a real function.
Remark 2. Condition RC guarantees that the coefficients a0k(t), bjl(t) of expansion (5) are real. Indeed, this follows from formulae (6) and (7).
Suppose that X(t) is a process that satisfies the conditions of Theorem 1. Let us consider the following approximation (or model) of X(t):

X̂(t) = Σ_{k=−N0}^{N0} ξ0k a0k(t) + Σ_{j=0}^{N−1} Σ_{l=−Mj}^{Mj} ηjl bjl(t), (8)

where ξ0k, ηjl, a0k(t), bjl(t) are defined in Theorem 1.
Remark 3. If X(t) is a Gaussian process, then we can take as ξ0k, ηjl in (8) independent random variables with distribution N(0; 1).
3 A multiplicative representation
We will obtain a multiplicative representation for a wide class of stochastic processes.
Theorem 2.
Suppose that a random process Y(t) can be represented as Y(t) = exp{X(t)}, where the process X(t) satisfies the conditions of Theorem 1. Then the equality

Y(t) = ∏_{k∈ℤ} exp{ξ0k a0k(t)} · ∏_{j=0}^{∞} ∏_{l∈ℤ} exp{ηjl bjl(t)} (9)

holds, where product (9) converges in probability for any fixed t, and ξ0k, ηjl, a0k(t), bjl(t) are defined in Theorem 1.
The statement of the theorem immediately follows from Theorem 1.
Remark 4. It was shown in [5] that any centered second-order wide-sense stationary process X(t) that has a spectral density satisfies the conditions of Theorem 1. The process Y(t) = exp{X(t)} can then be represented as product (9), and therefore the class of processes that satisfy the conditions of Theorem 2 is rather wide.
It is natural to approximate a stochastic process Y(t) = exp{X(t)} that satisfies the conditions of Theorem 2 by the model

Ŷ(t) = exp{X̂(t)} = ∏_{k=−N0}^{N0} exp{ξ0k a0k(t)} · ∏_{j=0}^{N−1} ∏_{l=−Mj}^{Mj} exp{ηjl bjl(t)}, (10)

where X̂(t) is defined by (8).
Remark 5. If X(t) = ln Y(t) is a Gaussian process, then we can use model (10) for computer simulation of Y(t), taking as ξ0k, ηjl in (10) independent random variables with distribution N(0; 1).
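In the Gaussian case this remark amounts to a concrete recipe: draw independent N(0, 1) variables ξ0k, ηjl, form the truncated sum (8), and exponentiate to obtain model (10). A minimal sketch; the coefficient functions a0k(t), bjl(t) below are hypothetical placeholders (in practice they must be computed from formulae (6) and (7) for a concrete u(t, λ) and wavelet):

```python
import numpy as np

def simulate_Y(t, a, b, N0, N, M, rng):
    """Model (10): hat-Y(t) = exp{hat-X(t)}, where hat-X(t) is the truncated
    expansion (8) with independent N(0, 1) coefficients (Gaussian case)."""
    t = np.asarray(t, dtype=float)
    X = np.zeros_like(t)
    for k in range(-N0, N0 + 1):              # terms xi_{0k} a_{0k}(t)
        X += rng.standard_normal() * a(k, t)
    for j in range(N):                        # terms eta_{jl} b_{jl}(t)
        for l in range(-M[j], M[j] + 1):
            X += rng.standard_normal() * b(j, l, t)
    return np.exp(X)

# Hypothetical, rapidly decaying coefficient functions (placeholders only;
# real a_{0k}, b_{jl} come from formulae (6) and (7)).
a = lambda k, t: np.cos(k * t) / (1.0 + k**2)
b = lambda j, l, t: 2.0**(-j) * np.sin((l + 1) * t) / (1.0 + l**2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)
path = simulate_Y(t, a, b, N0=5, N=4, M=[8, 8, 4, 4], rng=rng)
assert path.shape == t.shape and np.all(path > 0)   # Y(t) = exp{X(t)} > 0
```

The parameters N0, N, Mj control the truncation and are exactly the quantities that the accuracy-and-reliability results below place lower bounds on.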
4 Simulation with given relative accuracy and reliability in C([0, T])
Let us study the rate of convergence in C([0, T]) of model (10) to a process Y(t). We will need several auxiliary facts.
Lemma 1. ([13]) Let X = {X(t), t ∈ ℝ} be a centered stochastic process that satisfies the requirements of Theorem 1, T > 0, ϕ be an f-wavelet, ψ be the corresponding m-wavelet, the function
be absolutely continuous on any interval, the function u(t, y) be absolutely continuous with respect to y for any fixed t, there exist the derivatives
, the inequalities
hold,
and
Let the process defined by (8) be given, and let δ > 0. If N0, N, Mj (j = 0, 1, …, N − 1) satisfy the inequalities
then
Lemma 2. ([13]) Let X = {X(t), t ∈ ℝ} be a centered stochastic process satisfying the requirements of Theorem 1, T > 0, ϕ be an f-wavelet, ψ be the corresponding m-wavelet,
. Suppose that ϕ(y), u(t, λ), S(y), Sϕ(y) satisfy the following conditions: the function u(t, y) is absolutely continuous with respect to y, the function
is absolutely continuous,
there exist functions v(y) and w(y) such that
and
a0k(t) and bjl(t) are defined by Eqs. (6) and (7),
Remark 6. It is easy to see that the functions a0k(t) and bjl(t) are continuous under the conditions of Lemma 2.
We omit the proof due to its simplicity.
Definition. We say that a model Ŷ(t) approximates a stochastic process Y(t) with given relative accuracy δ and reliability 1 − ε (where ε ∈ (0; 1)) in C([0, T]) if

P{ sup_{t∈[0,T]} |Y(t) − Ŷ(t)| / |Y(t)| > δ } ≤ ε.
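In Monte Carlo terms, the definition requires that the proportion of sample paths whose worst-case relative error exceeds δ be at most ε. A toy empirical illustration, in which the “true” process is itself a finite exponential expansion and the model is its truncation; all coefficient functions are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
# Placeholder coefficient functions with fast decay (not from the paper).
coef = [np.cos(k * t) / (1.0 + k**2) for k in range(12)]

def path_Y(xi, n_terms):
    """Y(t) = exp{ sum_{k < n_terms} xi_k c_k(t) }."""
    X = sum(x * c for x, c in zip(xi[:n_terms], coef[:n_terms]))
    return np.exp(X)

delta, n_mc, exceed = 0.05, 2000, 0
for _ in range(n_mc):
    xi = rng.standard_normal(len(coef))
    Y, Y_hat = path_Y(xi, len(coef)), path_Y(xi, 6)    # "true" process vs truncation
    if np.max(np.abs(Y - Y_hat) / np.abs(Y)) > delta:  # sup_t relative error
        exceed += 1
eps_hat = exceed / n_mc   # empirical P{ sup relative error > delta }
assert 0.0 <= eps_hat <= 1.0
```

Theorem 3 below gives the non-asymptotic counterpart of this experiment: explicit lower bounds on the truncation parameters that guarantee the exceedance probability is at most ε.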
Now we can state a result on the rate of convergence in C([0, T]).
Theorem 3.
Suppose that a random process Y = {Y(t), t ∈ ℝ} can be represented as Y(t) = exp{X(t)}, where a separable strictly sub-Gaussian random process X = {X(t), t ∈ ℝ} is mean-square continuous and satisfies condition RC and the conditions of Lemmas 1 and 2 together with an f-wavelet ϕ and the corresponding m-wavelet ψ; the random variables ξ0k, ηjl in expansion (5) of the process X(t) are independent strictly sub-Gaussian; the model of X(t) is defined by (8), and the model of Y(t) is defined by (10); θ ∈ (0; 1), δ > 0, ε ∈ (0; 1), T > 0; and the numbers A(1), B(0), B(1), E2, F1, F2 are defined in Lemmas 1 and 2.
If
then the model defined by (10) approximates the process Y(t) with given relative accuracy δ and reliability 1 − ε in C([0, T]).
Let us note that ρU is a pseudometric. Let N(u) be the metric massiveness of [0, T] with respect to ρU, that is, the minimum number of closed balls in the space ([0, T], ρU) with diameters at most 2u needed to cover [0, T].
We will denote the norm in L2(Ω) by ||·||2.
We will prove that
First, let us estimate E|U(t)|2 for t ∈ [0, T].
It follows from (4) that
First, we will find an upper bound for N(u). For this, we will prove that
where
with CΔX defined in Lemma 2.
Using (20) and the Cauchy–Schwarz inequality, we have:
Applying (4), we obtain
Using inequality (25), simple properties of metric entropy (see [1], Lemma 3.2.1, p. 88), and the inequality
(where
is the entropy of [0, T] with respect to the Euclidean metric), we have
Example. Let us consider the function u(t, λ) = t/(1 + t² + λ²)⁴ and an arbitrary Daubechies wavelet with the corresponding f-wavelet ϕ and m-wavelet ψ. We will use the notations
and consider the stochastic process
where ξ0k, ηjl (k, l ∈ ℤ, j = 0, 1, …) are independent uniformly distributed random variables. It can be checked that the process Y(t) = exp{X(t)} and the Daubechies wavelet satisfy the conditions of Theorem 3.
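The square integrability in λ required by Theorem 1 is easy to confirm numerically for this u, since |u(t, λ)|² = t²/(1 + t² + λ²)⁸ decays like λ^(−16). A quick sketch using plain trapezoidal quadrature; the truncation of the integration range to [−50, 50] is an assumption justified by the fast decay:

```python
import numpy as np

def u(t, lam):
    return t / (1.0 + t**2 + lam**2)**4

# |u(t, lam)|^2 decays like lam^(-16), so truncating the integral to
# [-50, 50] introduces only a negligible error.
lam = np.linspace(-50.0, 50.0, 200001)
for t in (0.5, 1.0, 5.0):
    y = u(t, lam)**2
    integral = float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(lam)))  # trapezoidal rule
    assert np.isfinite(integral) and integral > 0.0   # finite L2-norm in lam
```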
5 Simulation with given accuracy and reliability in Lp([0, T])
Now we will consider the rate of convergence in Lp([0, T]) of model (10) to a process Y(t).
Lemma 4.
Suppose that a centered stochastic process X = {X(t), t ∈ ℝ} satisfies the conditions of Theorem 1, ϕ is an f-wavelet, ψ is the corresponding m-wavelet,
and
are the Fourier transforms of ϕ and ψ, respectively,
is absolutely continuous, u(t, y) is defined in Theorem 1 and is absolutely continuous with respect to y for any fixed t, there exist the derivatives
and
, Eqs. (11) and (12) hold,
Then the following inequalities hold for the coefficients a0k(t), bjl(t) in expansion (5) of the process X(t):
The proof of inequalities (31)–(34) is analogous to the proof of similar inequalities for the coefficients of expansion (5) of a stationary process in [5].
Lemma 5.
Suppose that a random process X = {X(t), t ∈ ℝ} satisfies the conditions of Theorem 1, an f-wavelet ϕ and the corresponding m-wavelet ψ together with the process X(t) satisfy the conditions of Lemma 4, C, Q1, Q2, S1, S2, u1(y) are defined in Lemma 4, T > 0, p ≥ 1, δ ∈ (0; 1), ε > 0,
Definition. We say that a model Ŷ(t) approximates a stochastic process Y(t) with given accuracy δ and reliability 1 − ε (where ε ∈ (0; 1)) in Lp([0, T]) if

P{ ‖Y − Ŷ‖p > δ } ≤ ε, where ‖f‖p = ( ∫_0^T |f(t)|^p dt )^{1/p}.
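The same Monte Carlo reading applies in Lp([0, T]): one estimates the proportion of paths whose Lp-distance from the model exceeds δ. A toy sketch with a hypothetical finite expansion as the “true” process (the coefficients are placeholders; the Lp norm is computed by the trapezoidal rule):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, delta = 1.0, 2.0, 0.05
t = np.linspace(0.0, T, 200)
coef = [np.cos(k * t) / (1.0 + k**2) for k in range(12)]   # placeholder coefficients

def lp_norm(f, t, p):
    """(integral_0^T |f(t)|^p dt)^{1/p} via the trapezoidal rule."""
    g = np.abs(f)**p
    return float(np.sum((g[1:] + g[:-1]) * 0.5 * np.diff(t)))**(1.0 / p)

exceed, n_mc = 0, 2000
for _ in range(n_mc):
    xi = rng.standard_normal(len(coef))
    Y = np.exp(sum(x * c for x, c in zip(xi, coef)))              # "true" process
    Y_hat = np.exp(sum(x * c for x, c in zip(xi[:6], coef[:6])))  # truncated model
    if lp_norm(Y - Y_hat, t, p) > delta:
        exceed += 1
eps_hat = exceed / n_mc   # empirical P{ ||Y - hat-Y||_p > delta }
assert 0.0 <= eps_hat <= 1.0
```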
Theorem 4.
Suppose that a random process Y = {Y(t), t ∈ ℝ} can be represented as Y(t) = exp{X(t)}, where a separable strictly sub-Gaussian random process X = {X(t), t ∈ ℝ} is mean-square continuous and satisfies condition RC and the conditions of Lemma 5 together with an f-wavelet ϕ and the corresponding m-wavelet ψ; the random variables ξ0k, ηjl in expansion (5) of the process X(t) are independent strictly sub-Gaussian; the model of X(t) is defined by (8), and the model of Y(t) is defined by (10); D, Q1, Q2, S1, S2 are defined in Lemmas 4 and 5; δ > 0, ε ∈ (0; 1), p ≥ 1, T > 0.
If
then the model defined by (10) approximates Y(t) with given accuracy δ and reliability 1 − ε in Lp([0, T]).
We will denote the norm in Lp([0, T]) by ||·||p.
We will need two auxiliary inequalities. Using the power-mean inequality

((a + b)/2)^r ≤ (a^r + b^r)/2, a, b ≥ 0,

where r ≥ 1, and setting a = e^c and b = 1, we obtain

(e^c + 1)^r ≤ 2^{r−1}(e^{rc} + 1). (40)
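This elementary step can be sanity-checked numerically: the power-mean inequality with a = e^c and b = 1 gives (e^c + 1)^r ≤ 2^(r−1)(e^(rc) + 1) for r ≥ 1. A quick grid check:

```python
import numpy as np

r = np.linspace(1.0, 6.0, 51)[:, None]     # exponents r >= 1
c = np.linspace(-5.0, 5.0, 101)[None, :]   # arbitrary reals c
lhs = (np.exp(c) + 1.0)**r                 # (e^c + 1)^r  (a = e^c, b = 1)
rhs = 2.0**(r - 1.0) * (np.exp(r * c) + 1.0)
assert np.all(lhs <= rhs * (1.0 + 1e-12))  # power-mean consequence holds on the grid
```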
It follows from (20) that, for q ≥ 0, inequality (41) holds.
Now let us estimate E|1 − exp{ΔX(t)}|^{2p}, where t ∈ [0, T], using (41):
Applying (40), we obtain: