Open access peer-reviewed chapter - ONLINE FIRST
Written By
Pavel Loskot
Submitted: 12 February 2024 Reviewed: 20 February 2024 Published: 21 June 2024
DOI: 10.5772/intechopen.1005516
From the Edited Volume
Exploring the Benefits of Numerical Simulation and Modelling [Working Title]
Dr. Mykhaylo I. Andriychuk
Abstract
Random processes are a key component in modeling stochastic systems. They are often defined by stochastic differential or, in discrete time, difference equations (SDEs), which describe precisely how the instantaneous, time-dependent (and, for multidimensional processes, also location-dependent) values vary. The solutions of SDEs are Markov processes, which is important in their analysis and simulation. The aim of this chapter is to examine various SDE models of random processes and their specific cases, for example, the random processes that arise in statistical filtering of random signals. The problems of discretizing continuous processes and linearizing nonlinear processes are also considered. Since the general Langevin equation is often difficult to solve analytically, in many scenarios it is sufficient as well as useful to solve the corresponding Fokker-Planck equation or master equation in order to obtain the time-evolving distribution of the system states. After outlining the most common random processes in continuous and discrete time, such as Bernoulli and Poisson sequences, the telegraph signal, and Wiener and Gaussian processes, numerical methods for solving SDEs are discussed. The chapter concludes by presenting a general method for generating random processes that are defined by their probability distribution and the auto-covariance or auto-correlation function (ACF).
Keywords
- dynamic system
- Fokker-Planck equation
- Kalman filter
- Langevin equation
- master equation
- stochastic differential equation
- random process
Author Information
Pavel Loskot*
- ZJU-UIUC Institute, Haining, Zhejiang, China
*Address all correspondence to: pavelloskot@intl.zju.edu.cn
1. Introduction
Random processes have been studied in different forms for decades. They are an integral component of describing stochastic systems and their dynamics. They frequently appear in many applications of statistical signal processing, and they are also used to model phenomena in engineering, computer science, statistical physics, the social sciences, and, more recently, computational biology and medicine. Consequently, there exists a very rich literature and, in many cases, a good theoretical understanding of their models and properties. In practical settings, the choice of a random process is dictated by the specific application as well as by the tractability of mathematical analysis and the feasibility of producing numerical results.
Among different models of random processes, the processes described by stochastic differential (in continuous time) or difference (in discrete time) equations (SDEs) are probably the most common. The SDE models are sufficiently versatile and in many cases offer good theoretical frameworks and efficient numerical algorithms. More importantly, the solutions of these SDEs are Markov processes, which are more amenable to analysis and simulations.
This chapter explores different forms of SDEs that are used to define various random processes. It is clear that this is a very large topic with many results, which has been evolving for decades. The aim is to highlight the key ideas, while detailed descriptions can be found in the cited literature. Advanced theoretical concepts are kept to a minimum in order to make the discussion accessible to typical graduate students in engineering. The references consist only of textbooks and monographs on stochastic processes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. In particular, reference [4] is a good introductory textbook. The book [17] is recommended reading for gaining more advanced theoretical foundations. The fundamental concepts of statistical signal processing are provided in [14], and those of estimation theory in [11]. Aspects of computer simulations involving random processes are considered in [16].
The topics that are not covered, but which may nevertheless be highly relevant to stochastic processes, include boundary conditions, absorbing processes, the first-passage time and similar metrics, the many methods and software tools for numerically solving SDEs and integro-differential equations, analytical methods for solving SDEs having a specific structure, working with SDEs involving spatial diffusion, defining and generating complex-valued random processes, and others. Some examples of more recent research topics in stochastic processes that are not covered in this chapter include their use in machine learning models and algorithms, stochastic processes in complex topological spaces such as manifolds, stochastic processes on graphs, multivariate stochastic processes involving mixed distributions, partially defined stochastic processes, and so on.
This chapter also assumes that the readers are already familiar with the basic concepts of stochastic processes, such as the probability distributions of different orders, stationarity, ergodicity, the definition of the ACF, and the Markovian property, understand linear auto-regressive modeling of stochastic processes, and know why the Kalman filter has been introduced. It is an advantage to also have a working knowledge of differential equations and the strategies for obtaining their solutions.
The rest of this chapter is organized as follows. Section 2 outlines various linear and nonlinear SDE models in continuous and discrete time and also considers the problems of discretization and linearization. Section 3 defines the Langevin, Fokker-Planck and master equations. The common random processes are presented in Section 4. Numerical methods for solving SDEs are discussed in Section 5. Section 6 provides a general method for generating random processes with the defined distribution and the ACF. Section 7 concludes the chapter.
The following mathematical notation is used in this chapter. A scalar, vector, and matrix process, respectively, is denoted as , , and , in continuous time, , and as , and , in discrete time, . The first time-derivative is denoted as , whereas the -th derivatives, , are denoted as . Other symbols used: denotes the absolute value, denotes the expectation, denotes the probability, denotes the Dirac delta function, the Kronecker delta function, , if , and otherwise, is an identity matrix, is an all-zeros vector or matrix, denotes the matrix transpose, is the factorial, and indicates an approximation. Moreover, for simplicity, all signals and values are assumed to be real-valued.
2. Modeling random processes as SDEs
Random processes can be conveniently modeled by SDEs. These models can be linear or nonlinear and defined in continuous time or in discrete time. The continuous-time nonlinear SDEs are preferred for developing theoretical foundations of stochastic processes, since they allow using differential (and integral) calculus. The discrete-time linear SDEs are normally used in generating the samples of stochastic processes in computer simulations and in implementing the methods of digital signal processing.
2.1 Linear SDE models
In continuous time, a linear random process, , is defined by the linear differential equation
E1
where as well as are other defined random processes, or can be random constants. It is customary to assume that the driving process (input noise), , of model (1) is generated as a linear combination of higher-order derivatives, i.e.,
E2
where and are other defined random processes. Typically, are deterministic functions or constants, whereas is a stationary, zero-mean, unit-variance white noise, i.e., its ACF is .
In discrete time, a linear random process, , is defined by the linear difference equation
E3
with the driving (process) noise,
E4
Similar assumptions about the processes or functions, , , and , are usually adopted as for continuous-time random processes. Since the descriptions in continuous and discrete time are nearly identical, in the sequel of this section, the expressions are mostly provided assuming continuous-time models.
It is common to express the higher-order SDE as the equivalent first-order vector differential equation
E5
In the discrete time, the equivalent first-order vector difference equation is written as
E6
assuming that the discrete-time indices, .
In the literature, model (5) is generalized as ([17], (1.89)),
E7
Importantly, the derivatives in (7) may not be defined, if the underlying processes are not differentiable everywhere; in such a case, the differential Eq. (7) must be formulated with differentials, , and, . Then, conditioned on matrix, , a solution of the homogeneous equation is the matrix, , such that and is an identity matrix. Denoting the initial value as , the overall solution is obtained by the integration, i.e.,
E8
Note that the mean process , and the mean of its derivative . These solutions can be further specialized by conditioning also on the matrix, . Thus, if matrices, and , are time-invariant and known, the solution, , can be substituted into Eq. (8).
Given constant matrices, and , the SDE (7) defines different random processes for specific cases of the input process, . In particular, provided that is a white Gaussian noise, the random process (8) represents a multi-dimensional Brownian motion, i.e., a Gaussian process with stationary, independent increments. The Ornstein-Uhlenbeck random process is obtained for being a Wiener process, which is a continuous Gaussian process with independent increments.
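As an illustration of these special cases, a scalar Ornstein-Uhlenbeck process can be simulated by discretizing its SDE with a small time step. This is only a sketch: the mean-reversion rate `theta` and noise scale `sigma` below are hypothetical values chosen for the demonstration, and the known stationary variance, sigma^2/(2 theta), serves as a sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: mean-reversion rate theta, noise scale sigma
theta, sigma = 1.0, 0.5
dt, n_steps = 1e-3, 100_000

x = np.empty(n_steps)
x[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n_steps - 1)   # Wiener increments
for k in range(n_steps - 1):
    # Euler-Maruyama step for dx = -theta * x dt + sigma dW
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * dW[k]

# The stationary variance of the OU process is sigma**2 / (2 * theta)
print(x.var(), sigma**2 / (2 * theta))
```

The sample variance of a long trajectory approaches the theoretical stationary variance, which is a quick check that the discretization step is small enough.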
There is also an integral representation of stationary random processes in the frequency domain. It is known as the Cramér spectral representation, and it is written as the inverse Fourier transform. Thus, a random process, , can be generated from the random increments, , as [7],
E9
The random increment process, , has zero mean, and its ACF defined in the frequency domain is also the spectral density, i.e., .
2.2 Nonlinear SDE models
An effective way to obtain nonlinear models of random processes is to consider a nonlinear transformation, , of the linear random process, . Hence, define a random observation process
E10
where represents a measurement noise, which is usually assumed to be zero-mean, stationary, and white.
A common strategy for processing nonlinear signals is to linearize the model about a suitable value, , at time . This strategy is neither optimum nor the most effective, but it is often used in practice. In particular, the first-order Taylor approximation of the process model (10) is written as
E11
The value of determines whether the linearization error can be neglected relative to the strength of the measurement noise, , so it must be updated at every time instant, . The practical choice is to set to the predicted value of using the past estimated values, , , , or simply to let, , if the variance, , is small.
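The role of the linearization point can be illustrated with a minimal numerical sketch. The measurement nonlinearity h(x) = x^2 and the values below are hypothetical choices, not taken from the chapter; the residual of the first-order Taylor approximation is exactly the neglected quadratic term.

```python
# Hypothetical nonlinear measurement h(x) = x**2 and its derivative
h = lambda x: x**2
dh = lambda x: 2 * x

x0 = 1.0       # linearization point (e.g., a predicted state value)
x = 1.05       # true state close to x0

# First-order Taylor approximation of h about x0
h_lin = h(x0) + dh(x0) * (x - x0)

# The residual equals the neglected quadratic term (x - x0)**2
error = abs(h(x) - h_lin)
print(error)   # 0.0025 = (0.05)**2
```

Whether this residual is negligible depends on how it compares with the measurement noise variance, which is why the linearization point must track the state.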
A nonlinear form of SDE model (7) can be written as ([17], eq. (4.1)),
E12
where the vector function, , and the elements of the matrix, are the multivariate processes, . The corresponding stochastic integral representation is
E13
In general, the integration of a random process can be performed in the Itô, Stratonovich, backward, or other senses, including a path integral ([17], Section 4.1). These methods lead to specific numerical algorithms and have different levels of convergence and stability. The choice of the appropriate method depends mainly on the ACF of the random process. In this chapter, the SDE models of random processes are presented assuming the Itô integration (calculus).
More importantly, it can be shown that the solutions of the integral equation (13) are Markovian processes. It is then not surprising that many examples of Markovian processes can be found in the literature as well as in practical applications. Moreover, if the values of a Markov process are chosen from a discrete, finite or countably infinite set, such a process is referred to as a Markov chain. Some examples of Markov chain processes include a one-dimensional random walk, queuing systems, inventory modeling, branching processes, and games involving success runs [10].
A scalar Markovian process solving (13) is referred to as a diffusion process, provided that
E14
are continuous and deterministic functions, which are referred to as the drift and diffusion coefficients, respectively. The diffusion process can be defined similarly in higher dimensions by assuming a drift vector and a diffusion matrix. Moreover, a diffusion process becomes classical Brownian motion when it has zero drift and a constant diffusion.
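Since the displayed definitions are not reproduced above, one standard convention for the drift and diffusion coefficients of a scalar Itô diffusion (the symbols a, b, and X are assumed here, not taken from the chapter; some texts include an extra factor of 1/2 in the diffusion) is:

```latex
a(x,t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\,
         \mathbb{E}\!\left[\,X(t+\Delta t)-X(t) \mid X(t)=x\,\right],
\qquad
b(x,t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\,
         \mathbb{E}\!\left[\,\bigl(X(t+\Delta t)-X(t)\bigr)^{2} \mid X(t)=x\,\right].
```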
2.3 Filtering random processes
Consider the random process defined by Eq. (8). Denote , and , where the means and . With these substitutions, the random process (8) can be rewritten as
E15
Eq. (15) now represents a random process generated from another random process, . More importantly, provided that (cf. (10)),
E16
the random process (15) is a linear estimate of the process, , in an additive measurement noise, , from the observations, , using a linear and generally non-stationary filter with the impulse response, . For example, the minimum mean square error (MMSE) filter must satisfy the integral equation,
E17
where is the covariance matrix of the observations, , and is the matrix of cross-covariances between the processes, and . Analogous expressions can also be defined in discrete time.
In the sequel of this subsection, a general model of random processes is presented that is used to obtain the discrete-time Kalman filter. The actual Kalman filter is derived, for example, in [11]. The Kalman filter is known to be the best linear unbiased estimator (BLUE) of a linear random process under the following conditions:
the observations are linear;
the measurement noise is additive, even though possibly non-stationary;
the random process to be estimated can be also non-stationary, but it is generated by linear filtering of a white process (driving) noise;
the process (driving) noise, and hence also the estimated random process, is uncorrelated with the measurement noise.
Let us assume the model of measurements in the form,
E18
where , , , and are known (deterministic) functions. The measurement noise, , has zero mean, the covariance matrix, , and it is uncorrelated with the estimated random process, , i.e., is a zero-mean white noise, which is uncorrelated with the estimated process noise, . The estimated process noise is modeled after (6), i.e.,
E19
where and are known (deterministic) functions, and is a zero-mean, white noise having the covariance matrix, , and it is uncorrelated with the measurement noise, .
If the noises are correlated, the measurement noise becomes predictable, so it can be added to the estimated process. The extended vector of parameters becomes
E20
and the corresponding measurements
E21
The random processes (20) and (21) are then used to obtain the Kalman filter.
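The structure of the resulting filter can be shown with a minimal scalar sketch. The random-walk state model, the direct noisy observation, and the variances q and r below are hypothetical simplifications chosen for the demonstration, not the chapter's general model (18)-(19).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar model: x[n] = x[n-1] + w[n], y[n] = x[n] + v[n]
q, r = 0.01, 1.0      # process and measurement noise variances
n = 500
x = np.cumsum(rng.normal(0, np.sqrt(q), n))   # latent random-walk state
y = x + rng.normal(0, np.sqrt(r), n)          # noisy observations

x_hat, p = 0.0, 1.0   # initial estimate and its variance
estimates = []
for yn in y:
    p += q                        # predict: variance grows by q
    k = p / (p + r)               # Kalman gain
    x_hat += k * (yn - x_hat)     # update with the innovation
    p *= (1 - k)                  # posterior variance
    estimates.append(x_hat)

estimates = np.asarray(estimates)
# The filtered error should be well below the raw measurement error
print(np.mean((estimates - x)**2), np.mean((y - x)**2))
```

The mean square error of the filtered estimate is far below that of the raw observations, which is the practical benefit of the predict-update recursion.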
2.4 Discretization of continuous-time models
Digital signal processing requires representing continuous-time random processes as discrete sequences of random samples. In addition, it is desirable that these samples are statistically uncorrelated. The procedure for obtaining uncorrelated samples is known in the literature as the Karhunen-Loève expansion. In particular, given a finite set of orthonormal continuous real-valued functions, , over a finite time interval, , such that
E22
the task is to decompose the random process, , into the sum
E23
such that
E24
Consequently, the mean process and the ACF, respectively, are defined as
E25
In addition to being orthonormal, the basis functions, , must satisfy the Fredholm integral equation,
E26
The Fredholm integral equations are often encountered in the design of statistical signal processing methods. They are difficult to solve analytically unless the underlying random process is stationary. In practice, it is sufficient to show that the adopted set of orthonormal functions satisfies condition (26), rather than finding the eigenvalues and eigenfunctions of the ACF.
It should be noted that ordinary Nyquist sampling of random signals may not yield samples that are uncorrelated. In such a case, the Karhunen-Loève expansion can be used to obtain uncorrelated samples, and the set of orthonormal functions, , must be chosen so that the discrete-time ACF can be computed as
E27
for all the indices, and , from a finite subset of integers. Unlike in continuous time, the values, , can be computed from the variance matrix, , where is a column vector of samples, and . It can be shown that are the roots of the characteristic polynomial defined as the determinant,
E28
The vectors, , are then computed so that they lie in the null space of the matrix,
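In the discrete-time setting, this eigen-decomposition can be sketched numerically: the eigenvectors of the covariance matrix decorrelate the sample vectors (a discrete Karhunen-Loève transform). The exponential correlation model below is a hypothetical choice made only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate correlated Gaussian sample vectors with an assumed ACF 0.8**|i-j|
n, m = 8, 200_000   # vector length, number of realizations
C = np.array([[0.8 ** abs(i - j) for j in range(n)] for i in range(n)])
L = np.linalg.cholesky(C)
X = L @ rng.normal(size=(n, m))   # columns are correlated sample vectors

# Discrete Karhunen-Loeve transform: eigenvectors of the covariance matrix
eigvals, V = np.linalg.eigh(C)
Y = V.T @ X                       # transformed (decorrelated) samples

C_Y = (Y @ Y.T) / m               # sample covariance of the transform
off_diag = C_Y - np.diag(np.diag(C_Y))
print(np.max(np.abs(off_diag)))   # close to zero: samples are uncorrelated
```

The sample covariance of the transformed vectors is nearly diagonal, with the diagonal entries approximating the eigenvalues of the original covariance matrix.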
Furthermore, there are situations, for example, in describing the critical dynamics of a system, when the model of a stochastic process must involve both deterministic and random effects. The random variables represent local, microscopic random fluctuations, whereas the deterministic variables model the collective, macroscopic, slowly varying effects. The corresponding SDEs are the so-called Langevin equations. They are generally difficult to solve; however, the evolution of the underlying probability density can often be obtained by solving the corresponding Fokker-Planck equation (FPE), as will be shown in the next section.
A general Langevin equation in one dimension has the SDE form (12), i.e.,
E29
where represents the Langevin random force, which is assumed to be a zero-mean white Gaussian noise. If the coefficient, , does not depend on , then the noise term in (29) is said to be additive; otherwise, the noise is multiplicative. Multiplicative noise allows describing more complex behaviors, including non-equilibrium dynamics.
In -dimensions, the set of general Langevin equations is defined as
E30
where are the zero-mean Langevin random processes having the cross-covariances, .
For example, a one-dimensional zero-mean Gaussian random process with the ACF, , is described by the Langevin SDE,
E31
where is a zero-mean white Gaussian noise. This random process is normally referred to as the Ornstein-Uhlenbeck process. Note also that, in the limit of very large , the process, , becomes white.
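Since the displayed SDE is not reproduced above, a commonly used form of the Ornstein-Uhlenbeck Langevin equation, written with a rate γ and noise intensity D (symbols assumed here, not taken from the chapter), is:

```latex
\dot{x}(t) = -\gamma\,x(t) + \sqrt{2D}\,\xi(t),
\qquad
\langle \xi(t)\,\xi(t') \rangle = \delta(t - t'),
\qquad
R_x(\tau) = \frac{D}{\gamma}\,e^{-\gamma|\tau|}.
```

In this form, the exponential auto-covariance makes the whitening limit explicit: as γ grows, the correlation time 1/γ shrinks.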
3. Evolution of probability density
By definition, at any given time instant, the value of a random process is a random variable, which is described by a particular probability density function (PDF). It is, therefore, of interest to investigate how such a PDF evolves in time along the realizations of a random process.
Consider a general nonlinear random process described by model (12). As stated previously, the solutions of this SDE are always Markov processes defined by their transition probability density, , where, for notational convenience, , and, . The evolution of the probability density of the random variable, , is described by the forward Kolmogorov equation, which is better known as the FPE. It assumes that , and the initial condition . On the other hand, the evolution from the final (terminal) state, , is described by a similar partial differential equation, i.e., the backward Kolmogorov equation, assuming .
The FPE can be effectively formulated using the Fokker-Planck operator. Hence, assuming the PDF, , define the operator
E32
where the subscripts, and , denote the components of vectors or the elements of the process matrix, . The FPE can be then written as
E33
Note that, in the limit,
Omitting the dependence on the initial state, , at time , the FPE of a
E34
where is the drift vector, and the diffusion coefficients are computed as .
The one-dimensional Ornstein-Uhlenbeck process, i.e., , (cf. (31)) is described by the FPE,
E35
A Brownian motion process in one dimension is a special case of the diffusion having zero drift and the diffusion coefficient equal to . In such a case, , where is a Wiener process, and the corresponding FPE is
E36
Assuming the initial condition, Eq. (36) has the solution, , i.e., the process is non-stationary and Gaussian.
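For reference, a standard way of writing this special case (with a diffusion constant σ² assumed here for concreteness) is the heat-equation form of the FPE together with its spreading Gaussian solution:

```latex
\frac{\partial p(x,t)}{\partial t}
  = \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
\qquad p(x,0) = \delta(x - x_{0}),
\qquad
p(x,t) = \frac{1}{\sqrt{2\pi\sigma^{2} t}}
         \exp\!\left(-\frac{(x - x_{0})^{2}}{2\sigma^{2} t}\right).
```

The variance grows linearly in time, which is exactly the non-stationarity noted above.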
In general, the FPE can be solved analytically only in special cases, such as when the coefficients of the SDE are time-independent. The stationary solution is obtained by assuming that , for some . The solution of a non-stationary FPE can be obtained by separation or transformation of variables, a variational approach, eigenvalue expansion, numerical integration, and other methods ([12], Ch. 3).
Furthermore, the dynamics of many physical and chemical systems are often described by a so-called master equation, which describes the distribution of discrete states over time. For these systems, it is desirable to model how likely the system is to be in any given state at different time instances. Switching between the states occurs randomly, at random time instances, with a certain rate. This can be described by a continuous-time Markov chain.
Specifically, the master equation defines the change of the probability distribution, , of the system state at time, . It is the first-order matrix SDE,
E37
where denotes the transition matrix. Its elements, , , are non-negative, whereas the diagonal elements are set so that the rows of sum to zero.
The matrix, , is the Laplacian of a directed weighted graph representing the state transitions of a Markov chain, with the weights being equal to the rates of the state transitions. In case the elements, , of the transition matrix reflect the probabilities of a homogeneous Markov chain to jump from the state,
E38
In discrete time, it follows that , so the master equation can be expressed as
E39
Importantly, this indicates that the master equation does not depend on the diagonal elements of the transition matrix, . At equilibrium, the detailed balance requires that
E40
The corresponding probability distribution of states, , is referred to as the equilibrium distribution. The equilibrium describes the time-reversibility of the microscopic dynamics, which is different from a steady state; the latter occurs when, , for , and .
The master equation can be used to model a one-dimensional birth-death process. In particular, if the population size, , is of interest, it can increase by one to , due to a birth event, or decrease by one to , when a death occurs, while for . Assuming that the transitions are Markovian, the time-dependent transition probabilities are defined as
E41
For small , and the current population size, , it can be assumed that at most one birth or death has occurred, so that,
E42
The coefficients, and , in (42) represent the probabilities that the corresponding events occur per unit of time. Denoting the probability distribution of the population sizes at time as , the resulting master equation is written as
E43
The stationary solution of (43) is obtained assuming , , i.e., when and, , ; in such a case, the population eventually goes extinct. On the other hand, if , the population either becomes extinct with a certain probability or grows without bound. More precisely, the derived steady-state probabilities are computed as
E44
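The birth-death dynamics above can also be sampled directly by simulating the underlying continuous-time Markov chain: exponential waiting times between events, and a birth or death chosen in proportion to the rates. The per-capita rates lam and mu below are hypothetical values, with mu > lam so that extinction is expected.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-capita rates: birth lam, death mu (mu > lam => extinction)
lam, mu = 0.5, 1.0

def simulate(n0=20, t_max=200.0):
    """Gillespie-style simulation of a one-dimensional birth-death chain."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        rate = (lam + mu) * n                  # total event rate
        t += rng.exponential(1.0 / rate)       # exponential waiting time
        if rng.random() < lam / (lam + mu):    # birth with prob lam/(lam+mu)
            n += 1
        else:                                  # otherwise a death
            n -= 1
    return n

# With mu > lam, virtually all runs reach extinction well before t_max
final = [simulate() for _ in range(200)]
print(sum(f == 0 for f in final) / len(final))   # fraction of extinct runs
```

This stochastic simulation approach complements the master equation: the empirical state distribution over many runs approximates the solution of (43).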
4. Common random processes
In simulations and modeling of various systems, some random processes are encountered more often than others. The random processes that are preferred in practical applications are those that can be succinctly described, have well-defined properties, and can be readily generated or fitted to measured data. In this section, the most common random processes are outlined starting from the random processes that are defined in discrete time, which are often referred to as random sequences, before introducing the random processes defined in continuous time.
The random sequence, , is a martingale, if its increments are independent and have zero mean, i.e., ([18], Ch. 6)
E45
More generally, a sequence, , is a martingale with respect to a sequence, , provided that
E46
For example, if the samples, , are drawn independently from the same probability distribution, , the likelihood ratio of the estimated parameter, , from observed samples, is
E47
The mean value of the likelihood ratio conditioned on the observations is
E48
so the sequence of likelihood ratios is a martingale.
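This martingale property can be checked numerically. In the sketch below, the samples are drawn under a standard Gaussian, the alternative distribution is a hypothetical shifted Gaussian, and the empirical mean of the likelihood ratio stays near one regardless of the sequence length.

```python
import numpy as np

rng = np.random.default_rng(4)

# Samples drawn under N(0, 1); likelihood ratio against a shifted N(0.1, 1)
n, m = 20, 200_000
x = rng.normal(0.0, 1.0, size=(m, n))

def log_pdf(x, mean):
    # Unnormalized Gaussian log-density; the constant cancels in the ratio
    return -0.5 * (x - mean) ** 2

# Likelihood ratio L_n = prod p1(x_i) / p0(x_i), computed in the log domain
log_L = np.sum(log_pdf(x, 0.1) - log_pdf(x, 0.0), axis=1)
L = np.exp(log_L)

# Martingale property under p0: E[L_n] = 1 for every n
print(L.mean())
```

Working in the log domain avoids numerical underflow for long sequences, which is the usual practice when accumulating likelihood ratios.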
In continuous time, the martingale process is defined as
E49
or conditioned on another process as
E50
The Wiener process is an example of the martingale in continuous time.
The conditional probability of samples in a Markov sequence satisfies
E51
Higher-order Markov sequences can also be defined; however, they are rarely assumed in practical problems. Markov sequences attaining a finite set of values are often referred to as Markov chains, and their values are referred to as states.
The Markov chain is irreducible, if any state can be reached from any other state in a finite number of steps, i.e.,
E52
Using the Chapman-Kolmogorov equation, the transition probability over steps is computed recursively from the transition probabilities over steps as
E53
If the chain is homogeneous, and the one-step transition probabilities are arranged in a matrix,
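With the one-step probabilities arranged in a matrix P (the three-state chain below is a hypothetical example), the Chapman-Kolmogorov recursion reduces to matrix powers, and for an irreducible aperiodic chain the rows of P^n converge to the stationary distribution.

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to one)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# Chapman-Kolmogorov: n-step transition probabilities are entries of P^n
P2 = P @ P                            # two-step probabilities
P10 = np.linalg.matrix_power(P, 10)   # ten-step probabilities

# Rows of P^n converge to the stationary distribution for an
# irreducible, aperiodic chain
P100 = np.linalg.matrix_power(P, 100)
pi = P100[0]
print(pi, pi @ P)                     # pi satisfies pi = pi P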
The properties of a Markov process in continuous time are analogous to those of a Markov chain, provided that the number of states is finite or countably infinite; such a process is referred to as a continuous-time Markov chain. For example, if the process, , can be constructed from independent increments, then it is a Markov process.
A Bernoulli sequence is formed by binary random variables having a fixed probability of each outcome. The number of successes in a given number of trials is binomially distributed, whereas the number of trials required to achieve a given number of successes has a negative binomial distribution. The Bernoulli sequence can be generalized to more than two outcomes; it is then called a Bernoulli scheme sequence. These sequences can be used to define other random sequences. Consider, for example, the sum
E54
where , which defines a Bernoulli random walk. Its mean can be either increasing or decreasing over time, depending on . The variance, , is always increasing over time. The ACF can be derived as . It should be noted that the random walk can also be created assuming the birth-death process discussed above.
The case, , in (54) defines a binomial counting sequence with the mean value, , and the variance, . The ACF is now obtained as .
The binomial counting process can be generalized to a continuous-time Poisson counting process having independent and stationary increments and the initial value . In the literature, these processes are sometimes referred to as renewal processes [10]. The Poisson counting process has the probability of successes occurring over a time duration, , given by the Poisson distribution, i.e.,
E55
where denotes the average rate of successes per unit time. If is time-dependent, such a process is called non-homogeneous. The inter-arrival time between two consecutive successes is a random variable, which can be shown to be exponentially distributed with the same parameter,
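These two facts suggest a simple way to generate sample paths of a homogeneous Poisson counting process: accumulate exponential inter-arrival times. The rate and horizon below are hypothetical values, used only to check that the counts have the Poisson mean and variance.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical rate of successes per unit time, horizon, number of runs
lam, t_max, m = 2.0, 10.0, 20_000

counts = []
for _ in range(m):
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)   # exponential inter-arrival time
        if t > t_max:
            break
        n += 1
    counts.append(n)

counts = np.asarray(counts)
# Counts over (0, t_max] are Poisson with mean and variance lam * t_max
print(counts.mean(), counts.var())        # both close to 20
```

Equality of the empirical mean and variance is a characteristic signature of the Poisson distribution and a convenient sanity check for the generator.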
The Poisson process is an example of the general class of Lévy random processes. Another example of a Lévy random process is the gamma process, whose inter-arrival times are gamma distributed. The Poisson process can be generalized to the Hawkes random process, a marked point process which models random intensity changes in the arrival times as a self-excitation phenomenon ([2], Ch. 11).
The random telegraph signal, , can be obtained as a transformation of the Poisson process, , i.e.,
E56
Thus, is a continuous-time Markov jump process having only two states. The probability of states can be derived to be
E57
Note that the rate constant, , could be assumed to be different in the transitions to and value, respectively. The probability distribution of is
E58
with the mean, , and the variance, . Moreover, the telegraph signal can be randomized by assuming the product, , where are independent and equally likely values, or , generated whenever the process, , changes its value.
The Wiener process is best known as a model of Brownian motion of particles in a fluid. It is a solution of the second-order partial differential equation
E59
where denotes the rate of particle collisions. Solving (59), is the Gaussian distribution with zero mean and the variance equal to , i.e., this process is non-stationary. The ACF is derived as . More generally, the Wiener process can be also defined assuming the time increments rather than the absolute times.
A multivariate Gaussian process is probably the most frequently assumed random process. Since these processes have been extensively described and studied in the literature, they are mentioned only briefly here. A Gaussian process is fully defined by the vector of means and the covariance matrix; the covariance matrix also defines the spatial ACF of the process. The Gaussian distribution has the largest entropy (i.e., it is the worst-case noise) among all continuous distributions with a given variance. Its widespread occurrence in many practical scenarios and systems is justified by the central limit theorem. It is also preserved by linear transformations, and it is the distribution for which uncorrelated variables are equivalent to independent variables. Gaussian processes play an important role in estimating parameter values. Gaussian mixture models add another level of flexibility in modeling random processes and in defining the prior and posterior distributions in Bayesian as well as non-Bayesian estimation of parameter values.
More complicated random processes can be built from other random processes. For example, Archimedean survival process is a vector of differences between gamma bridge processes, which are themselves created by scaling the gamma random processes by other random variables ([2], Ch. 10).
5. Numerical methods for solving SDEs
The random processes considered in this chapter are defined by SDEs. Since many practical SDEs cannot be solved analytically, they must be solved numerically by computer simulations. The caveat is that the existing algorithms for solving ordinary differential equations (ODEs) may not be directly applicable or efficient for solving SDEs. The notable exception is SDEs whose driving noise is generated independently of the random process values; for these processes, the algorithms developed for ODEs can be used directly.
In general, the algorithms for solving SDEs should provide fast convergence, good accuracy, and also stability. The accuracy is affected by discretization, whereas the lack of stability may cause a sudden divergence of computations. Therefore, numerical solvers for SDEs require formal mathematical frameworks and derivations in order to satisfy these requirements.
As an example, consider the integral form of a one-dimensional time-homogeneous SDE
X_t = X_0 + ∫_0^t a(X_s) ds + ∫_0^t b(X_s) dW_s (60)
Using Itô’s formula, the Euler method approximates integral equation (60) as
X_{t+Δt} ≈ X_t + a(X_t) Δt + b(X_t) (W_{t+Δt} − W_t) (61)
A more accurate, higher-order approximation can be obtained in the form
E62
where the function operator, , is introduced to denote .
Assuming the process, X_t, is evaluated at the discrete-time instances, t_n = n Δt, for n = 0, 1, …, so that the time step, Δt = t_{n+1} − t_n, and denoting the Wiener process increments as ΔW_n = W_{t_{n+1}} − W_{t_n}, the Euler algorithm computes one of the following iterations depending on the assumed approximation, i.e.,
E63
where the function operator, , is defined as . Better stability can be obtained by assuming and , or, and , in (63). However, these latter values do not guarantee convergence to the correct continuous-time solution even when the time step, , goes to zero. The convergence problem can be rectified by using and in (63), where the filtered sequence, . For , the algorithm to generate the samples, , of the random process must periodically find the root, , of the equation
E64
The root mean square error, as a measure of the algorithm convergence between the approximately computed values, X_n, and the exact values, X_{t_n}, is of the order between Δt^{1/2} and Δt. Thus, the approximation accuracy improves by reducing the discretization step size. There are other algorithms, for example, the Milstein algorithm, that were proposed in the literature to improve the accuracy.
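As a minimal sketch of the basic discretization described above, the explicit Euler iteration can be written in a few lines of Python. The drift and diffusion functions, the Ornstein-Uhlenbeck-type coefficients, and the step size used below are illustrative assumptions, not values taken from this chapter.

```python
import math
import random

def euler_maruyama(a, b, x0, dt, n_steps, seed=0):
    """Simulate dX = a(X) dt + b(X) dW with the explicit Euler scheme.

    The drift a and diffusion b are caller-supplied functions; each
    Wiener increment dW is drawn as a zero-mean Gaussian with variance dt.
    """
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dw
        path.append(x)
    return path

# Illustrative choice: an Ornstein-Uhlenbeck-type process with additive noise
path = euler_maruyama(a=lambda x: -0.5 * x, b=lambda x: 1.0,
                      x0=1.0, dt=0.01, n_steps=1000)
```

Since the additive noise here does not depend on the process value, this is one of the cases where the simple scheme is directly applicable without stability refinements.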
There is a notion of additive and multiplicative noise, depending on whether the scaling coefficient of the driving noise (usually a Wiener process with independent Gaussian increments) is independent of the random process values or not. This is important, for example, when deriving the FPE for a Langevin equation. If the noise is additive, then the Itô and Stratonovich interpretations of the Langevin equation lead to the same FPE; this is not the case when the noise is multiplicative.
Furthermore, similar discretization methods can be used to generate samples of vector random processes, as explained in ([5], Section 10.5). Accordingly, vector counterparts of the Euler and Milstein algorithms exist.
In addition to low-order approximations by the Euler and Milstein algorithms, other algorithms relying on higher-order approximations are available. However, the increased accuracy of these algorithms usually comes at the cost of increased computational complexity and possible issues with convergence. In general, a good strategy is to exploit the structure of the specific SDE in order to avoid many of the computational problems. For example, the noise can be quantized to a small number of levels or even be generated as a binary sequence of values.
Substantial computational problems arise in simulating spatially resolved partial SDEs, since they have to be discretized not only in time, but also in 2D or even 3D space. This may prevent considering any higher-order approximations and the corresponding algorithms. In the literature, the algorithms for spatial SDEs employ, for example, fast Fourier transforms, and there is also the interaction picture method inspired by Hamiltonians in quantum mechanics.
5.1 Example
Consider the one-dimensional SDE,
dX_t = a X_t dt + b X_t dW_t (65)
with the constant drift coefficient, a, and the constant diffusion coefficient, b, driven by the Wiener process, W_t. In this case, the solution can be obtained in the closed form, i.e.,
X_t = X_0 exp((a − b²/2) t + b W_t) (66)
which can be verified by substituting into (65) and using the Itô calculus.
The exact solution (66) can be approximated by computing the approximate values, X_n, of X_{t_n}, at the chosen discrete-time instances, t_n = n Δt. Specifically, using the Euler algorithm,
X_{n+1} = X_n + a X_n Δt + b X_n ΔW_n (67)
The samples of the Wiener process increments are generated as ΔW_n = √Δt ξ_n, where the time increments, Δt = t_{n+1} − t_n, and ξ_n is a zero-mean, unit-variance Gaussian random variable.
The Milstein algorithm approximates the SDE (65) as
X_{n+1} = X_n + a X_n Δt + b X_n ΔW_n + (b²/2) X_n (ΔW_n² − Δt) (68)
so it introduces the extra term compared to the Euler approximation (67).
It can be shown that both the Euler and Milstein algorithms converge in the mean to the exact solution as the step size, Δt, is reduced, i.e.,
E[|X_n − X_{t_n}|] ≤ C (Δt)^γ (69)
where C is a constant, and γ represents the order of convergence. For M random realizations of the values, X_n^(i), the expectation can be evaluated as
E[|X_n − X_{t_n}|] ≈ (1/M) Σ_{i=1}^{M} |X_n^(i) − X_{t_n}^(i)| (70)
The order, γ, is 1/2, for the Euler algorithm, and 1, for the Milstein algorithm, respectively. Consequently, in order to reduce the approximation error to one half, the step size, Δt, must be reduced to Δt/4, for the Euler algorithm, but only to Δt/2, for the Milstein algorithm.
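The difference between the two convergence orders can be checked numerically. The sketch below assumes, for illustration, the linear SDE dX = a X dt + b X dW with the closed-form solution X_t = X_0 exp((a − b²/2) t + b W_t); the parameter values, path count, and seed are illustrative assumptions, and both schemes are driven by the same Brownian path so their endpoint errors are directly comparable.

```python
import math
import random

def strong_errors(dt, a=2.0, b=1.0, x0=1.0, t_end=1.0, n_paths=2000, seed=1):
    """Mean absolute endpoint error of the Euler and Milstein schemes for
    dX = a X dt + b X dW, compared against the exact closed-form solution."""
    rng = random.Random(seed)
    n = int(round(t_end / dt))
    err_euler = err_milstein = 0.0
    for _ in range(n_paths):
        xe = xm = x0
        w = 0.0  # accumulated Brownian path, shared by both schemes
        for _ in range(n):
            dw = rng.gauss(0.0, math.sqrt(dt))
            w += dw
            xe += a * xe * dt + b * xe * dw
            xm += a * xm * dt + b * xm * dw + 0.5 * b * b * xm * (dw * dw - dt)
        exact = x0 * math.exp((a - 0.5 * b * b) * t_end + b * w)
        err_euler += abs(xe - exact)
        err_milstein += abs(xm - exact)
    return err_euler / n_paths, err_milstein / n_paths

coarse = strong_errors(0.01)
fine = strong_errors(0.005)
```

Halving the step size should shrink the Euler error by roughly √2 and the Milstein error by roughly 2, consistent with the orders 1/2 and 1.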
The approximations (67) and (68) of the exact solution (66) are numerically validated in Figure 1 (top), assuming a given initial value, given parameter values, and a range of time steps. The relative approximation error is expressed as
E71
In addition, Figure 1 (bottom) shows the absolute mean error as a function of the number of time steps in a given interval. These curves indicate that the absolute mean error decreases as Δt^{1/2}, for the Euler algorithm, and as Δt, for the Milstein algorithm.
6. Generating random processes
The previous section discussed how to simulate random processes that are defined by a given SDE. This section considers the converse problem of generating random processes with given properties, i.e., the problem of finding the corresponding SDE model that can be used to generate samples of the random process.
In most practical scenarios, the random process is described by the probability distribution of its samples, which also fully defines all statistical moments at any given time instant, and by the second-order pairwise moments between the samples at any two time instances; these moments are referred to as the ACF or the auto-covariance function.
Importantly, the generated random processes are assumed to be ergodic and stationary, so that the statistical mean is equal to the average value in time, i.e.,
E[X_t] = lim_{T→∞} (1/T) ∫_0^T x(t) dt (72)
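The equality of the statistical mean and the time average can be checked numerically with a simple stationary process. The sketch below uses an AR(1) recursion as an illustrative ergodic process; the coefficient, noise scale, and sequence length are assumptions made for this example only.

```python
import random

def ar1_time_average(phi=0.9, sigma=1.0, n=200_000, seed=7):
    """Time average of a stationary AR(1) process X[k] = phi X[k-1] + sigma Z[k].

    For an ergodic process, the time average converges to the ensemble
    mean, which is zero here since the driving noise Z[k] is zero-mean.
    """
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + sigma * rng.gauss(0.0, 1.0)
        total += x
    return total / n

avg = ar1_time_average()  # should be close to the ensemble mean, zero
```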
Non-ergodic processes are rarely encountered in practical applications. A non-stationary process can be generated from a stationary process using a transformation involving deterministic functions as
E73
Note that transformation (73) is nonlinear, unless , for . Moreover, the robust signal processing can be achieved by assuming the values, or , instead of or .
The simplest, but very often considered, random process is correlated (colored) Gaussian noise. In continuous time, the uncorrelated (white) Gaussian process, Z(t), is correlated by a linear, not necessarily stationary, filter with the impulse response, h(t, τ), i.e.,
X(t) = ∫_{−∞}^{∞} h(t, τ) Z(τ) dτ (74)
The mean and the ACF of the process, X(t), can be computed as
E[X(t)] = 0, R_X(t1, t2) = ∫_{−∞}^{∞} h(t1, τ) h(t2, τ) dτ (75)
where the white noise, Z(t), has zero mean and the ACF, E[Z(τ1) Z(τ2)] = δ(τ1 − τ2).
Assuming a vector of N mutually uncorrelated Gaussian processes, x(t), where each element can be individually correlated in time, it can be transformed into a vector of mutually correlated processes, y(t), by the linear transformation
y(t) = A x(t) (76)
Then, the correlation matrix, R_y = A R_x Aᵀ. If the number of processes, N, is large, the elements of A can be efficiently calculated as
E77
assuming, for simplicity, that all processes have zero mean, and that the desired pairwise correlations are given. This procedure can also be used for a discrete-time signal represented by a column vector. For a very large number of samples, the auto-regressive moving average (ARMA) model in continuous [cf. (1)] or discrete time should be used. The initial values of the generated random process can be discarded until the process becomes stationary. Alternatively, a non-stationary ARMA model can be used initially before switching to model (77) in order to speed up the convergence to stationary samples.
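The linear transformation (76) can be sketched numerically. One common way to obtain the transformation matrix is the Cholesky factor of the desired correlation matrix; the correlation values, sample count, and seed below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired pairwise correlation matrix (an illustrative, positive-definite choice)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

A = np.linalg.cholesky(R)               # linear transformation y = A x
x = rng.standard_normal((3, 100_000))   # uncorrelated unit-variance samples
y = A @ x                               # correlated samples with R_y = A Aᵀ = R

R_hat = np.corrcoef(y)                  # empirical correlations approach R
```

The Cholesky factor is only one possible square root of R; any A with A Aᵀ = R produces the same second-order statistics.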
The above methods involving linear filtering are well justified when the samples are Gaussian. There are, however, many situations when the random process is specified by a general probability distribution and its ACF. This is not a complete statistical description, so different generation methods can yield different higher-order moments. The most straightforward, although in practice rather complicated, method is to generate uncorrelated samples with a certain distribution, so that linear filtering of these samples yields both the desired distribution and the desired ACF.
A much more tractable method first generates Gaussian samples with given correlations and then uses a memoryless nonlinear transformation to obtain the desired distribution and ACF. In particular, let the zero-mean Gaussian noise have a given ACF and pairwise PDF. The transformed random process then has the one-dimensional and two-dimensional PDFs, respectively [14],
E78
The corresponding mean and the ACF, respectively, can be computed as
E79
The generation procedure then proceeds in these steps:
find a memoryless nonlinearity so that, for the target output distribution, the input distribution is Gaussian;
given this nonlinearity, find the correlations (covariances) of the Gaussian input, so that the output process has the desired ACF;
given these correlations, obtain a linear filter to generate such an ACF from the uncorrelated Gaussian samples.
This procedure is general and can be used to generate different processes with a given distribution and ACF. However, in some cases, when the nature of a random process has been motivated by a specific physical or other phenomenon, this phenomenon can directly indicate how to generate the underlying process. For example, a continuous-time Poisson process can be generated by a random sequence of Dirac delta pulses with exponentially distributed inter-arrival times, which are then filtered with an integrator. It is, therefore, advisable to check the relevant literature to see whether this might be the case for a given random process.
6.1 Example
Consider the problem of generating a zero-mean stationary binary random sequence, X_n, of values ±1 with a given ACF. Such a process can be generated, for example, from a discrete-time Gaussian process, W_n, using the nonlinearity,
X_n = sgn(W_n) (80)
The ACF, R_X(k), of the output process, X_n, can be expressed in terms of the ACF, R_W(k), of the input process, W_n, as
R_X(k) = (2/π) arcsin(R_W(k)/R_W(0)) (81)
For example, assume that , , and , otherwise. In such a case, the sequence, , has a zero mean and the unit variance, and by inverting the ACF (81), the required ACF is , , and , otherwise. The Gaussian process with this ACF can be obtained by filtering the uncorrelated Gaussian samples, , with a discrete-time linear filter having the impulse response, , , and , otherwise. Therefore, the process, , with the desired ACF is generated from the uncorrelated Gaussian samples as
E82
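This construction can be sketched end to end: filter white Gaussian samples with a short FIR filter, hard-limit the result, and check the output ACF against the arcsine relation between the two ACFs. The filter taps and sequence length below are illustrative assumptions, not the values used in this example.

```python
import math
import random

def binary_from_gaussian(h, n=200_000, seed=3):
    """Generate a +/-1 sequence by hard-limiting a filtered Gaussian sequence.

    h is the FIR impulse response used to correlate the white Gaussian
    input (an illustrative choice; the chapter derives it from the
    target ACF by inverting the arcsine relation).
    """
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n + len(h) - 1)]
    x = []
    for k in range(n):
        w = sum(h[i] * z[k + i] for i in range(len(h)))  # colored Gaussian
        x.append(1.0 if w >= 0.0 else -1.0)              # memoryless sign
    return x

h = [math.sqrt(0.5), math.sqrt(0.5)]  # unit-power filter, lag-1 correlation 0.5
x = binary_from_gaussian(h)
acf1 = sum(x[k] * x[k + 1] for k in range(len(x) - 1)) / (len(x) - 1)
# the arcsine law predicts acf1 close to (2/pi) * arcsin(0.5) = 1/3
```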
7. Conclusions
This chapter discussed random processes that are usually modeled by SDEs. The resulting SDEs can be linear or nonlinear, and defined in continuous or discrete time. Higher-order linear differential and difference equations can be equivalently represented by first-order vector equations. Brownian motion and the Ornstein-Uhlenbeck process are special cases of a linear SDE. There is also the Cramér integral representation of random processes in the frequency domain. Nonlinear models can be obtained by introducing a nonlinear measurement equation with added measurement noise.
Filtering and estimating random processes often give rise to random processes defined by integral equations. The Langevin equation allows incorporating both deterministic and random effects. In practical applications, it is usually easier to solve the corresponding Fokker-Planck equation to obtain the time evolution of the state distribution. In modeling physical and chemical system dynamics, the FPE is referred to as the master equation.
Abbreviations
auto-correlation function | ACF
auto-regressive moving average | ARMA
best linear unbiased estimator | BLUE
Fokker-Planck equation | FPE
minimum mean square error | MMSE
ordinary differential equation | ODE
probability density function | PDF
stochastic differential equation | SDE
References
- 1.
Amir A. Thinking Probabilistically, Stochastic Processes, Disordered Systems, and Their Applications. Cambridge, UK: Cambridge University Press; 2021 - 2.
Bielecki TR, Jakubowski J, Niewȩgłowski M. Structured Dependence between Stochastic Processes. Cambridge, UK: Cambridge University Press; 2020 - 3.
Leon-Garcia A. Probability and Random Processes for Electrical Engineering. 2nd ed. New York, USA: Addison-Wesley; 1994 - 4.
Gardner WA. Introduction to Random Processes With Applications. 2nd ed. New York, USA: McGraw-Hill; 1990 - 5.
Gardiner CW. Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. 3rd ed. New York, NY: Springer; 2004 - 6.
Grimmett GR, Stirzaker DR. Probability and Random Processes. 3rd ed. Oxford, UK: Oxford University Press; 2001 - 7.
Haykin S. Adaptive Filter Theory. 5th ed. Harlow, UK: Pearson Education Limited; 2014 - 8.
Jazwinski AH. Stochastic Processes and Filtering Theory. New York, USA: Academic Press; 1970 - 9.
van Kampen NG. Stochastic Processes in Physics and Chemistry. 3rd ed. Amsterdam, Netherlands: Elsevier; 2007 - 10.
Karlin S, Taylor HM. A First Course in Stochastic Processes. 2nd ed. New York, USA: Academic Press Inc.; 1975 - 11.
Kay SM. Fundamentals of Statistical Signal Processing: Estimation Theory. Vol. I. Upper Saddle River, New Jersey: Prentice Hall; 1993 - 12.
Kwok SF. Langevin and Fokker-Planck Equations and their Generalizations. Singapore: World Scientific Publishing Co. Pte. Ltd.; 2018 - 13.
Miller SL, Childers D. Probability and Random Processes With Applications to Signal Processing and Communications. 2nd ed. Waltham, MA, USA: Academic Press; 2012 - 14.
Papoulis A, Pillai SU. Probability, Random Variables, and Stochastic Processes. 4th ed. New York, NY: McGraw-Hill; 2002 - 15.
Pavliotis GA. Stochastic Processes and Applications. New York, USA: Springer Science+Business Media; 2014 - 16.
Ross SM. Stochastic Processes. 2nd ed. New York, USA: Wiley; 1996 - 17.
Schuss Z. Theory and Applications of Stochastic Processes, An Analytical Approach. New York, NY: Springer; 2010 - 18.
Shynk JJ. Probability, Random Variables and Random Processes. Hoboken, New Jersey: John Wiley & Sons; 2013
© The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.