The topics are
\begin{equation}{\label{a}}\mbox{}\tag{A}\end{equation}
Approach I.
This approach follows Klebaner \cite{kle}. In the construction of the stochastic integral \(\int_{0}^{t}H_{s}dX_{s}\), the processes \(H\) and \(X\) are taken to be adapted to a filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). The requirement that \(H\) be adapted is too weak: it fails to ensure the measurability of some basic constructions. A stronger condition, that of a {\bf progressive} process, is introduced. \(H\) is progressive if it is \({\cal B}([0,t])\times {\cal F}_{t}\)-measurable as a map on \([0,t]\times\Omega\) for every \(t>0\). Every adapted right-continuous or left-continuous process is progressive, and clearly any progressive process is adapted. However, roughly speaking, to define integrals with respect to a semimartingale \(X\), \(H\) must satisfy an even stronger condition: its value at a stopping time \(T\) must be determined by \({\cal F}_{T-}\). In fact, \(H\) must be predictable. The formal definition of predictable processes is rather technical (cf.\ (\ref{chueq*51}): the predictable \(\sigma\)-field \({\cal P}\) is generated by the adapted left-continuous processes, and a process is called predictable if it is \({\cal P}\)-measurable). For our purposes it is enough to describe a subclass of predictable processes which can be defined
constructively. \(H\) is predictable if it is one of the following:
- a left-continuous adapted process;
- a limit (almost surely or in probability) of left-continuous adapted processes;
- a RCLL process such that \(H_{T}\) is \({\cal F}_{T-}\)-measurable for any stopping time \(T\);
- a Borel measurable function of a predictable process.
Due to the representation (\ref{kleeq81}), the integral with respect to the semimartingale \(X\) is the sum of two integrals, one with respect to a local martingale \(M\) and the other with respect to a finite variation process \(A\). The integral with respect to \(A\) can be taken path by path as a Stieltjes integral, since \(A\) is of finite variation. So \(H\) should be integrable with respect to \(A\). A sufficient condition for that is
\begin{equation}{\label{kleeq88}}\tag{1}
\int_{0}^{t}|H_{s}|dV_{A}(s)<\infty
\end{equation}
where \(V_{A}(s)\) is the variation process of \(A\).
Now we consider the stochastic integral with respect to martingales. For a simple predictable process \(H\) given by
\[H_{s}=H_{0}\cdot 1_{\{0\}}(s)+\sum_{i=0}^{n-1}H_{i}\cdot 1_{(T_{i},T_{i+1}]}(s),\]
where \(0=T_{0}<T_{1}<\cdots <T_{n}\leq t\) are stopping times and \(H_{i}\)’s are \({\cal F}_{T_{i}}\)-measurable, the stochastic integral is defined as the sum
\[\int_{0}^{t}H_{s}dM_{s}=\sum_{i=0}^{n-1}H_{i}\cdot\left (M_{T_{i+1}}-M_{T_{i}}\right ).\]
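The defining sum above can be computed directly along a single path. The following Python sketch is illustrative only (the function name and the deterministic sanity check are my own, not from the text); it evaluates \(\sum_{i}H_{i}\,(M_{T_{i+1}}-M_{T_{i}})\) given the values \(H_{i}\), the times \(T_{i}\), and a path of \(M\):

```python
def simple_stochastic_integral(H, M, T):
    """Sum_i H[i] * (M(T[i+1]) - M(T[i])) for a simple predictable process.

    H : values H_0, ..., H_{n-1}, with H_i known at time T_i
    M : callable t -> M_t along one fixed path
    T : times 0 = T_0 < T_1 < ... < T_n
    """
    return sum(H[i] * (M(T[i + 1]) - M(T[i])) for i in range(len(H)))

# Deterministic sanity check with the finite-variation path M_t = t:
# 2*(1-0) + (-1)*(2-1) + 0.5*(3-2) = 1.5
T = [0.0, 1.0, 2.0, 3.0]
H = [2.0, -1.0, 0.5]
val = simple_stochastic_integral(H, lambda t: t, T)
```

For a martingale integrator one would pass a sampled path of \(M\) instead of the deterministic path used in the check.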
If \(M\) is a locally square integrable martingale, then by the \(L^{2}\)-theory (Hilbert space theory) one can extend the stochastic integral from simple predictable processes to the class of predictable processes \(H\)
such that
\[\sqrt{\int_{0}^{t}H_{s}^{2}d\langle M\rangle_{s}}\]
is locally integrable. If \(M\) is a continuous local martingale, then the integral is defined for a wider class of predictable processes \(H\) satisfying
\begin{equation}{\label{kleeq812}}\tag{2}
\int_{0}^{t}H_{s}^{2}d\langle M\rangle_{s}<\infty\mbox{ a.s.}
\end{equation}
Let \(X\) be a semimartingale with representation \(X_{t}=X_{0}+M_{t}+A_{t}\), where \(M\) is a local martingale and \(A\) is a finite variation process. Let \(H\) be a predictable process such that conditions (\ref{kleeq88}) and (\ref{kleeq812}) hold. Then the stochastic integral is defined as
\[\int_{0}^{t}H_{s}dX_{s}=\int_{0}^{t}H_{s}dM_{s}+\int_{0}^{t}H_{s}dA_{s}.\]
Since the representation of a semimartingale is not unique, one should check that the stochastic integral does not depend on the representation used. Indeed, if \(X_{t}=X_{0}+N_{t}+B_{t}\) is another representation, then \(M_{t}-N_{t}=B_{t}-A_{t}\), so that \(M_{t}-N_{t}\) is a local martingale of finite variation. But for such martingales stochastic and Stieltjes integrals coincide, and it follows that
\[\int_{0}^{t}H_{s}dN_{s}+\int_{0}^{t}H_{s}dB_{s}=\int_{0}^{t}H_{s}dM_{s}+\int_{0}^{t}H_{s}dA_{s}=\int_{0}^{t}H_{s}dX_{s}.\]
Since the integral with respect to a local martingale is a local martingale, and the integral with respect to a finite variation process is a process of finite variation, it follows that the stochastic integral with respect to a semimartingale is a semimartingale.
\begin{equation}{\label{b}}\mbox{}\tag{B}\end{equation}
Approach II.
This approach follows Protter \cite{pro}. Let \({\bf D}\) denote the space of adapted processes with RCLL paths (right continuous with left limits); let \({\bf L}\) denote the space of adapted processes with LCRL paths (left continuous with right limits); and let \({\bf bL}\) denote the processes in \({\bf L}\) with bounded paths. Let \({\bf S}_{u}\) be the space of simple predictable processes endowed with the topology of uniform convergence (recall the definition of \({\bf S}\) in Definition \ref{prod*12}), and let \({\bf L}^{0}\) be the space of finite-valued random variables topologized by convergence in probability. We need to consider a third type of convergence.
Definition. A sequence of processes \(\{X^{(n)}\}_{n\in {\bf N}}\) converges to a process \(X\) {\bf uniformly on compacts in probability} (abbreviated {\bf ucp}) if, for each \(t>0\), \(\sup_{0\leq s\leq t}|X_{s}^{(n)}-X_{s}|\) converges to \(0\) in probability. \(\sharp\)
We write \(X_{t}^{*}=\sup_{0\leq s\leq t}|X_{s}|\). Then if \(Y^{(n)}\in {\bf D}\), we have \(Y^{(n)}\rightarrow Y\) in ucp if \((Y^{(n)}-Y)_{t}^{*}\) converges to \(0\) in probability for each \(t>0\). We write \({\bf D}_{ucp},{\bf L}_{ucp},{\bf S}_{ucp}\) to denote the respective spaces endowed with the ucp topology. We observe that \({\bf D}_{ucp}\) is a metrizable space; indeed, a compatible metric is given, for \(X,Y\in {\bf D}\), by
\[d(X,Y)=\sum_{n=1}^{\infty}\frac{1}{2^{n}}\cdot E\left [\min\{1,(X-Y)_{n}^{*}\}\right ].\]
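The series defining \(d(X,Y)\) can be approximated from sampled paths by truncating it and replacing the expectation by an empirical mean. The sketch below is my own illustration (the helper name, truncation level, and sanity check are assumptions, not from the text):

```python
import numpy as np

def ucp_metric(paths_X, paths_Y, t_grid, n_terms=10):
    """Estimate d(X,Y) = sum_{n>=1} 2^{-n} E[min(1, (X-Y)*_n)],
    truncating the series at n_terms.

    paths_X, paths_Y : arrays of shape (n_paths, len(t_grid))
    t_grid           : time grid covering [0, n_terms]
    """
    d = 0.0
    for n in range(1, n_terms + 1):
        mask = t_grid <= n
        # (X - Y)*_n : running supremum of |X - Y| over [0, n], per path
        sup_n = np.abs(paths_X[:, mask] - paths_Y[:, mask]).max(axis=1)
        d += 2.0 ** (-n) * np.minimum(1.0, sup_n).mean()
    return d

# Sanity check: if X - Y is the constant 1/4, then (X-Y)*_n = 1/4 for all n,
# so the truncated metric equals (1/4) * sum_{n=1}^{10} 2^{-n} = (1/4)(1 - 2^{-10}).
t = np.linspace(0.0, 10.0, 1001)
X = np.zeros((100, t.size))
Y = X + 0.25
d = ucp_metric(X, Y, t)
```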
The above metric space \({\bf D}_{ucp}\) is complete. For a semimartingale \(M\) and a process \(X\in {\bf S}\), we have defined the appropriate notion \(I_{M}(X)\) of a stochastic integral in (\ref{proeq*14}). The next result is key to extending this definition.
Theorem. The space \({\bf S}\) is dense in \({\bf L}\) under the ucp topology. \(\sharp\)
We defined a semimartingale \(M\) as a process that induces a continuous operator \(I_{M}\) from \({\bf S}_{u}\) into \({\bf L}^{0}\) in Definitions \ref{prod*13} and \ref{prod*14}. This \(I_{M}\) maps processes into random variables. We next define an operator (the stochastic integral operator), induced by \(M\), that maps processes into processes.
Definition. For \(X\in {\bf S}\) and \(M\) a RCLL process, define the linear mapping \(J_{M}:{\bf S}\rightarrow {\bf D}\) by
\[J_{M}(X)=X_{0}\cdot M_{0}+\sum_{i=1}^{n}X_{i}\cdot (M^{T_{i+1}}-M^{T_{i}})\]
for \(X\) in \({\bf S}\) with the representation
\[X=X_{0}\cdot 1_{\{0\}}+\sum_{i=1}^{n}X_{i}\cdot 1_{(T_{i},T_{i+1}]},\]
where \(X_{i}\) is \({\cal F}_{T_{i}}\)-measurable and \(0=T_{0}\leq T_{1}\leq\cdots\leq T_{n+1}<\infty\) are stopping times. \(\sharp\)
Definition. For \(X\in {\bf S}\) and \(M\) an adapted RCLL process, we call \(J_{M}(X)\) the stochastic integral of \(X\) with respect to \(M\). \(\sharp\)
We use interchangeably three notations for the stochastic integral
\[J_{M}(X)=\int X_{s}dM_{s}=X\bullet M.\]
Observe that
\[(J_{M}(X))_{t}=I_{M^{t}}(X).\]
Indeed, \(I_{M}\) plays the role of a definite integral. For \(X\in {\bf S}\), \(I_{M}(X)=\int_{0}^{\infty} X_{s}dM_{s}\).
\begin{equation}{\label{prot211}}\mbox{}\end{equation}
Theorem \ref{prot211}. Let \(M\) be a semimartingale. Then the mapping \(J_{M}:{\bf S}_{ucp}\rightarrow {\bf D}_{ucp}\) is continuous. \(\sharp\)
We have seen that when \(M\) is a semimartingale, the integration operator \(J_{M}\) is continuous on \({\bf S}_{ucp}\), and also that \({\bf S}_{ucp}\) is dense in \({\bf L}_{ucp}\). Hence we are able to extend the linear operator \(J_{M}\) from \({\bf S}\) to \({\bf L}\) by continuity, since \({\bf D}_{ucp}\) is a complete metric space.
Definition. Let \(M\) be a semimartingale. The continuous linear mapping \(J_{M}:{\bf L}_{ucp}\rightarrow {\bf D}_{ucp}\) obtained as the extension of \(J_{M}:{\bf S}\rightarrow {\bf D}\) is called the stochastic integral. \(\sharp\)
The above definition can be seen briefly as follows. Let \(X\in {\bf L}\). Since \({\bf S}_{ucp}\) is dense in \({\bf L}_{ucp}\), there exists a sequence \(\{X^{(n)}\}_{n\in {\bf N}}\) in \({\bf S}\) such that \(X^{(n)}\) converges to \(X\) in ucp; in particular, \(\{X^{(n)}\}_{n\in {\bf N}}\) is a Cauchy sequence in the ucp topology. By Theorem \ref{prot211}, \(J_{M}(X^{(n)})\) is then a Cauchy sequence in \({\bf D}_{ucp}\). Since \({\bf D}_{ucp}\) is a complete metric space, \(J_{M}(X^{(n)})\) converges to an element of \({\bf D}\), which is the stochastic integral.
Recall that if a process \(\{A_{t}\}_{t\geq 0}\) has continuous paths of finite variation with \(A_{0}=0\), then the Riemann-Stieltjes integral \(\int_{0}^{t} A_{s}dA_{s}\) yields the formula
\[\int_{0}^{t}A_{s}dA_{s}=\frac{1}{2}A_{t}^{2}\]
by Theorem \ref{prot150}. Let us now consider a standard Brownian motion \(W=\{W_{t}\}_{t\geq 0}\) with \(W_{0}=0\). The process \(W\) does not have paths of finite variation on compacts, but it is a semimartingale. Let \(\{\pi_{n}\}\) be a sequence of partitions of \([0,\infty )\) with \(\lim_{n\rightarrow\infty}\parallel\pi_{n}\parallel =0\). Let
\[W_{t}^{(n)}=\sum_{t_{k}\in\pi_{n}}W_{t_{k}}\cdot 1_{(t_{k},t_{k+1}]}.\]
Then \(W^{(n)}\in {\bf L}\) for each \(n\). Moreover, \(W^{(n)}\) converges to \(W\) in ucp. Fix \(t\geq 0\) and assume that \(t\) is a partition point of each \(\pi_{n}\). Then
\[(J_{W}(W^{(n)}))_{t}=\sum_{t_{k}\in\pi_{n},t_{k}<t}W_{t_{k}}\cdot (W_{t_{k+1}}-W_{t_{k}})\]
and
\begin{align*}
(J_{W}(W))_{t} & =\lim_{n\rightarrow\infty} (J_{W}(W^{(n)}))_{t}\\
& =\lim_{n\rightarrow\infty}\sum_{t_{k}\in\pi_{n},t_{k}<t}W_{t_{k}}\cdot (W_{t_{k+1}}-W_{t_{k}})\\
& =\lim_{n\rightarrow\infty}\sum_{t_{k}\in\pi_{n},t_{k}<t}\left [\frac{1}{2}(W_{t_{k+1}}+W_{t_{k}})(W_{t_{k+1}}-W_{t_{k}})-\frac{1}{2}(W_{t_{k+1}}-W_{t_{k}})^{2}\right ]\\
& =\frac{1}{2}W_{t}^{2}-\frac{1}{2}\lim_{n\rightarrow\infty}\sum_{t_{k}\in\pi_{n},t_{k}<t}(W_{t_{k+1}}-W_{t_{k}})^{2}.
\end{align*}
The last term on the right converges to \(t\) in probability: it is the quadratic variation of \(W\) over \([0,t]\). We therefore conclude that
\[\int_{0}^{t}W_{s}dW_{s}=\frac{1}{2}W_{t}^{2}-\frac{1}{2}t.\]
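This limit is easy to observe numerically. The following Monte Carlo sketch (my own illustration; the path counts and step sizes are arbitrary choices) forms the left-point sums \(\sum_{k}W_{t_{k}}(W_{t_{k+1}}-W_{t_{k}})\) over a fine partition of \([0,1]\) and compares them pathwise with \(\frac{1}{2}W_{1}^{2}-\frac{1}{2}\):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate Brownian increments on a fine grid over [0, 1].
n_paths, n_steps, t = 2000, 4000, 1.0
dt = t / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

# Left endpoints W_{t_k} (with W_0 = 0), as in the approximating sums above.
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
ito_sums = np.sum(W_left * dW, axis=1)

# Closed form (1/2) W_1^2 - (1/2) * 1 from the Ito computation.
closed_form = 0.5 * W[:, -1] ** 2 - 0.5 * t
rmse = np.sqrt(np.mean((ito_sums - closed_form) ** 2))  # shrinks as n_steps grows
```

The residual is \(\frac{1}{2}(t-\sum_{k}(\Delta W_{k})^{2})\), so the root-mean-square error decays like \(n_{\mbox{steps}}^{-1/2}\).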
Now \(M\) will denote a semimartingale and \(X\) will denote an element of \({\bf L}\). Recall that the stochastic integral will be denoted by three notations \(J_{M}(X)=X\bullet M=\int X_{s}dM_{s}\). Evaluating these processes at \(t\), we have
\[(X\bullet M)_{t}=\int_{0}^{t}X_{s}dM_{s}=\int_{[0,t]}X_{s}dM_{s}.\]
To exclude \(0\) in the integral we write
\[\int_{0+}^{t}X_{s}dM_{s}=\int_{(0,t]}X_{s}dM_{s}.\]
The integral \(\int_{0}^{\infty}X_{s}dM_{s}\) is defined to be \(\lim_{t\rightarrow\infty}\int_{0}^{t}X_{s}dM_{s}\) when the limit exists. Note that \(\int_{0}^{t}X_{s}dM_{s}=X_{0}\cdot M_{0}+\int_{0+}^{t}X_{s}dM_{s}\). For a process \(Y\in {\bf D}\), we recall that \(\Delta Y_{t}=Y_{t}-Y_{t-}\), the jump at \(t\). Also, since \(Y_{0-}=0\), we have \(\Delta Y_{0}=Y_{0}\). Recall further that for a process \(Z\) and a stopping time \(T\), we let \(Z^{T}\) denote the stopped process \(Z_{t}^{T}=Z_{t\wedge T}\). Two processes \(X\) and \(Y\) are {\bf indistinguishable} if
\[P\{\omega :t\mapsto X_{t}(\omega )\mbox{ and }t\mapsto Y_{t}(\omega )\mbox{ are the same functions}\}=1.\]
\begin{equation}{\label{prot212}}\mbox{}\end{equation}
Proposition \ref{prot212}. Let \(T\) be a stopping time. Then \((X\bullet M)^{T}=(X\cdot 1_{[0,T]})\bullet M=X\bullet (M^{T})\). \(\sharp\)
Proposition. The jump process \(\Delta (X\bullet M)_{s}\) is indistinguishable from \(X_{s}\cdot\Delta M_{s}\).
Proof. Both properties are clear when \(X\in {\bf S}\), and they follow when \(X\in {\bf L}\) by passing to the limit using the convergence in ucp. \(\blacksquare\)
Let \(\mathbb{Q}\) denote another probability measure, and let \(X_{\mathbb{Q}}\bullet M\) denote the stochastic integral of \(X\) with respect to \(M\) computed under \(\mathbb{Q}\).
Proposition. Let \(\mathbb{Q}\ll\mathbb{P}\). Then \(X_{\mathbb{Q}}\bullet M\) is \(\mathbb{Q}\)-indistinguishable from \(X\bullet M\).
Proof. Note that by Proposition \ref{prot22}, \(M\) is a \(\mathbb{Q}\)-semimartingale. The result is clear if \(X\in {\bf S}\), and it follows for \(X\in {\bf L}\) by passage to the limit in the ucp topology, since convergence in \(\mathbb{P}\)-probability implies convergence in \(\mathbb{Q}\)-probability. \(\blacksquare\)
\begin{equation}{\label{prot217}}\mbox{}\end{equation}
Proposition \ref{prot217}. If the semimartingale \(M\) has paths of finite variation on compacts, then \(X\bullet M\) is indistinguishable from the Lebesgue-Stieltjes integral, computed path by path.
Proof. The result is evident for \(X\in {\bf S}\). Let \(X^{(n)}\in {\bf S}\) converge to \(X\) in ucp. Then there exists a subsequence \(\{n_{k}\}\) such that \(\lim_{k\rightarrow\infty}(X^{(n_{k})}-X)_{t}^{*}=0\) a.s., and the result follows by interchanging limits, justified by the uniform a.s. convergence. \(\blacksquare\)
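For integrators of finite variation, the path-by-path Stieltjes computation can be checked numerically against the formula \(\int_{0}^{t}A_{s}dA_{s}=\frac{1}{2}A_{t}^{2}\) recalled earlier for continuous finite-variation paths with \(A_{0}=0\). The sketch below is my own illustration, using the smooth (hence finite-variation) test path \(A_{t}=t^{2}\):

```python
import numpy as np

# Left-point Riemann-Stieltjes sums for int_0^t A_s dA_s along the smooth,
# finite-variation path A_t = t^2 (so A_0 = 0), compared with (1/2) A_t^2.
t_grid = np.linspace(0.0, 2.0, 200001)
A = t_grid ** 2
left_sum = np.sum(A[:-1] * np.diff(A))   # sum_k A_{t_k} (A_{t_{k+1}} - A_{t_k})
exact = 0.5 * A[-1] ** 2                 # (1/2) A_t^2 at t = 2, i.e. 8
err = abs(left_sum - exact)
```

Unlike the Brownian case, no Ito correction term appears here: the discrepancy is of order \(\sum_{k}(\Delta A_{k})^{2}\), which vanishes as the grid is refined because \(A\) has finite variation and is continuous.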
Proposition. Let \(M,\tilde{M}\) be two semimartingales, and let \(X,\tilde{X}\in {\bf L}\). Let
\[A=\{\omega :X_{\cdot}(\omega )=\tilde{X}_{\cdot}(\omega )\mbox{ and }M_{\cdot}(\omega )=\tilde{M}_{\cdot}(\omega )\}\]
and
\[B=\{\omega :t\mapsto M_{t}(\omega )\mbox{ is of finite variation on compacts}\}.\]
Then \(X\bullet M=\tilde{X}\bullet\tilde{M}\) on \(A\), and \(X\bullet M\) is equal to a path-by-path Lebesgue-Stieltjes integral on \(B\). \(\sharp\)
\begin{equation}{\label{prot219}}\mbox{}\end{equation}
Proposition \ref{prot219}. (Associativity). The stochastic integral process \(N=X\bullet M\) is itself a semimartingale, and for \(Y\in {\bf L}\) we have
\[Y\bullet N=Y\bullet (X\bullet M)=(XY)\bullet M.\]
Proof. Suppose we know that \(N=X\bullet M\) is a semimartingale. Then \(Y\bullet N=J_{N}(Y)\). If \(X,Y\) are in \({\bf S}\), then it is clear that \(J_{N}(Y)=J_{M}(XY)\). The associativity then extends to \({\bf L}\) by continuity. It remains to show that \(N=X\bullet M\) is a semimartingale. Let \(\{X^{(n)}\}\) be in \({\bf S}\) converging to \(X\) in ucp. Then \(X^{(n)}\bullet M\) converges to \(X\bullet M\) in ucp, so there exists a subsequence \(\{n_{k}\}\) such that \(X^{(n_{k})}\bullet M\) converges to \(X\bullet M\) a.s. Let \(Y\in {\bf S}\) and let \(N^{(n_{k})}=X^{(n_{k})}\bullet M\), \(N=X\bullet M\). The \(N^{(n_{k})}\) are semimartingales converging pointwise to the process \(N\). For \(Y\in {\bf S}\), \(J_{N}(Y)\) is defined for any process \(N\); so we have
\[J_{N}(Y)=Y\bullet N=\lim_{n_{k}\rightarrow\infty}Y\bullet N^{(n_{k})}=\lim_{n_{k}\rightarrow\infty}Y\bullet (X^{(n_{k})}\bullet M)=\lim_{n_{k}\rightarrow\infty} (YX^{(n_{k})})\bullet M\]
which equals \(\lim_{n_{k}\rightarrow\infty}J_{M}(YX^{(n_{k})})=J_{M}(XY)\), since \(M\) is a semimartingale. Therefore \(J_{N}(Y)=J_{M}(XY)\) for \(Y\in {\bf S}\). Let \(Y^{(n)}\) converge to \(Y\) in \({\bf S}_{u}\). Then \(Y^{(n)}X\) converges to \(XY\) in \({\bf L}_{ucp}\), and since \(M\) is a semimartingale,
\[\lim_{n\rightarrow\infty}J_{N}(Y^{(n)})=\lim_{n\rightarrow\infty}J_{M}(Y^{(n)}X)=J_{M}(XY)=J_{N}(Y).\]
This implies \(N^{t}\) is a total semimartingale since \((J_{N}(Y))_{t}=I_{N^{t}}(Y)\), and so \(N=X\bullet M\) is a semimartingale. \(\blacksquare\)
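For simple integrands the associativity \(Y\bullet (X\bullet M)=(XY)\bullet M\) is an algebraic identity of the defining sums, and this can be seen concretely on a fixed grid. The sketch below is my own illustration (the left-point "grid integral" is a discrete stand-in for the integrals in the proof, not notation from the text):

```python
import numpy as np

rng = np.random.default_rng(7)

def grid_integral(H, M):
    """Left-point sum integral on a fixed grid:
    (H . M)_j = sum_{k<j} H_k (M_{k+1} - M_k), with (H . M)_0 = 0."""
    out = np.zeros_like(M)
    out[1:] = np.cumsum(H[:-1] * np.diff(M))
    return out

n = 1000
M = np.cumsum(rng.normal(size=n))    # an arbitrary discrete integrator path
X = rng.normal(size=n)
Y = rng.normal(size=n)

N = grid_integral(X, M)
lhs = grid_integral(Y, N)            # Y . (X . M)
rhs = grid_integral(X * Y, M)        # (XY) . M
max_gap = np.max(np.abs(lhs - rhs))  # zero up to floating-point roundoff
```

The identity holds exactly on the grid because \(\Delta N_{k}=X_{k}\,\Delta M_{k}\), so \(Y_{k}\,\Delta N_{k}=(X_{k}Y_{k})\,\Delta M_{k}\) term by term; passage to the ucp limit is what the proof above supplies.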
The above proposition shows that the property of being a semimartingale is preserved by stochastic integration. Also, by Proposition \ref{prot217}, if the semimartingale \(M\) is an FV process, then the stochastic integral agrees with the Lebesgue-Stieltjes integral, and by the theory of Lebesgue-Stieltjes integration we are able to conclude that the stochastic integral is an FV process also; that is, the property of being an FV process is preserved by stochastic integration.
A classical result from the theory of Lebesgue measure and integration on \(\mathbb{R}\) is that a bounded, measurable function \(f\) mapping an interval \([a,b]\) to \(\mathbb{R}\) is Riemann integrable if and only if the set of discontinuities of \(f\) has Lebesgue measure zero. Therefore we cannot hope to express the stochastic integral as a limit of sums unless the integrands have reasonably smooth sample paths. The spaces \({\bf D}\) and \({\bf L}\) consist of processes which jump at most countably often. As we will see in Proposition \ref{prot221}, this is smooth enough.
\begin{equation}{\label{prod*7}}\mbox{}\end{equation}
Definition \ref{prod*7}. Let \(\sigma\) denote a finite sequence of finite stopping times
\[0=T_{0}\leq T_{1}\leq\cdots \leq T_{k}<\infty .\]
The sequence \(\sigma\) is called a {\bf random partition}. A sequence of random partitions \(\sigma_{n}\)
\[\sigma_{n}:T_{0}^{(n)}\leq T_{1}^{(n)}\leq\cdots\leq T_{k_{n}}^{(n)}\]
is said to tend to the identity if
- \(\lim_{n\rightarrow\infty}\sup_{k}T_{k}^{(n)}=\infty\) a.s.
- \(\parallel\sigma_{n}\parallel =\sup_{k}|T_{k+1}^{(n)}-T_{k}^{(n)}|\) converges to \(0\) a.s. \(\sharp\)
Let \(X\) be a process and let \(\sigma\) be a random partition. We define the process \(X\) sampled at \(\sigma\) to be
\[X^{\sigma}\equiv X_{0}\cdot 1_{\{0\}}+\sum_{k}X_{T_{k}}\cdot 1_{(T_{k},T_{k+1}]}.\]
It is easy to check that
\[\int X_{s}^{\sigma}dM_{s}=X_{0}\cdot M_{0}+\sum_{i}X_{T_{i}}\cdot (M^{T_{i+1}}-M^{T_{i}})\]
for any semimartingale \(M\) and any process \(X\) in \({\bf S}\), \({\bf D}\), or \({\bf L}\).
\begin{equation}{\label{prot221}}\mbox{}\end{equation}
Proposition \ref{prot221}. Let \(M\) be a semimartingale, and let \(X\) be a process in \({\bf D}\) or in \({\bf L}\). Let \(\{\sigma_{n}\}\) be a sequence of random partitions tending to the identity. Then the process
\[\int_{0+}X_{s}^{\sigma_{n}}dM_{s}=\sum_{i}X_{T_{i}^{(n)}}\cdot (M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}})\]
tends to the integral \((X_{-})\bullet M\) in ucp. \(\sharp\)


