Stochastic Calculus for Local Martingales



This approach follows Chung and Williams \cite{chu}. We shall define stochastic integrals of the form \(\int_{[0,t]}XdM\), where \(M\) is a right-continuous local \(L^{2}\)-martingale and \(X\) is a process satisfying certain measurability and integrability assumptions, such that the family of stochastic integrals \(\left\{\int_{[0,t]}XdM\right\}_{t\geq 0}\) is a right-continuous local \(L^{2}\)-martingale.

We provide the following outline of the several stages in the definition of the stochastic integral. The measurability conditions on \(X\) will be specified first. In doing this, we regard \(X\) as a function on \(\mathbb{R}_{+}\times\Omega\) and require it to be measurable with respect to a \(\sigma\)-field \({\cal P}\) generated by a simple class \({\cal R}\) of “predictable rectangles”. After a discussion of the \(\sigma\)-field \({\cal P}\), we shall consider the case where \(M\) is a right-continuous \(L^{2}\)-martingale. A measure \(\mu_{M}\) associated with \(M\) will be defined on \({\cal P}\) and then we shall define the integral \(\int_{[0,t]}XdM\) in the following three steps.

  • \(\int XdM\) will be defined for any \({\cal R}\)-simple process \(X\) in such a way that the following isometry holds:
    \[E\left [\left (\int XdM\right )^{2}\right ]=\int_{\mathbb{R}_{+}\times\Omega} X^{2}d\mu_{M}.\]
  • This isometry will then be used to extend the definition of \(\int XdM\) to any \(X\in {\cal L}^{2}=L^{2}(\mathbb{R}_{+}\times\Omega ,{\cal P},\mu_{M})\).
  • For any process \(X\) satisfying \(1_{[0,t]}\cdot X\in {\cal L}^{2}\) for each \(t\geq 0\), it will be shown that there is a version of \(\left\{\int 1_{[0,t]}XdM\right\}_{t\geq 0}\) which is a right-continuous \(L^{2}\)-martingale, to be denoted by \(\left\{\int_{[0,t]}XdM\right\}_{t\geq 0}\).

Finally, the extension to the case where \(M\) is a right-continuous local \(L^{2}\)-martingale and \(X\) is “locally” in \({\cal L}^{2}\) will be achieved using a sequence of stopping times tending to \(\infty\). The above definition of the stochastic integral will apply to the processes obtained by stopping \(M-M_{0}\) and \(X\) at any one of these times, and then the integral for \(M\) and \(X\) will be defined as the almost sure limit of these integrals, as the stopping times tend to \(\infty\).

\begin{equation}{\label{a}}\mbox{}\tag{A}\end{equation}

Stochastic Integrals for Martingales.

We now begin the above program with the definition of the \(\sigma\)-field \({\cal P}\). The family of subsets of \(\mathbb{R}_{+}\times\Omega\) containing all sets of the form \(\{0\}\times F_{0}\) and \((s,t]\times F\), where \(F_{0}\in {\cal F}_{0}\) and \(F\in {\cal F}_{s}\) for \(s<t\), is called the class of predictable rectangles and we denote it by \({\cal R}\). The ring \({\cal A}\) generated by \({\cal R}\) is the smallest family of subsets of \(\mathbb{R}_{+}\times\Omega\) which contains \({\cal R}\) and is such that if \(A_{1}\) and \(A_{2}\) are in the ring \({\cal A}\), then so too are their union \(A_{1}\cup A_{2}\) and difference \(A_{1}\setminus A_{2}\). Then \(A_{1}\cap A_{2}\) is also in \({\cal A}\). Indeed, it can be verified that the ring \({\cal A}\) consists of the empty set \(\emptyset\) and all finite unions of disjoint
rectangles in \({\cal R}\). The \(\sigma\)-field \({\cal P}\) of subsets of \(\mathbb{R}_{+}\times\Omega\) generated by \({\cal R}\) is called the predictable \(\sigma\)-field and sets in \({\cal P}\) are called predictable sets. A process \(X:\mathbb{R}_{+}\times\Omega\rightarrow \mathbb{R}\) is called predictable if \(X\) is \({\cal P}\)-measurable. If \(A\) is a set in \({\cal R}\), then \(1_{A}(t,\cdot )\) is \({\cal F}_{t}\)-measurable for each \(t\). Consequently, \(1_{A}\) is an adapted process. It follows by forming finite linear combinations that the same is true for any \(A\) in \({\cal A}\). Then by a monotone class theorem, any real-valued \({\cal P}\)-measurable process is adapted. A real-valued \({\cal P}\)-measurable process will be referred to as a predictable process.

We shall see that, for any stopping time \(T\),
\[[0,T]=\{(t,\omega )\in \mathbb{R}_{+}\times\Omega :0\leq t\leq T(\omega )\}\]
is a predictable set (Proposition \ref{chul221} below). For stopping times (optional times) \(S\) and \(T\), the set
\[[S,T]=\{(t,\omega )\in \mathbb{R}_{+}\times\Omega :S(\omega )\leq t\leq T(\omega )\}\]

is called a stochastic interval. Three other stochastic intervals \((S,T]\), \([S,T)\) and \((S,T)\) are defined similarly. Note that stochastic intervals are subsets of \(\mathbb{R}_{+}\times\Omega\) not \(\bar{\mathbb{R}}_{+}\times\Omega\); consequently \((\infty ,\omega )\) is never a member of such a set, even if \(T(\omega )=\infty\). We see that \(1_{[0,t]}\) means the indicator function of the stochastic interval \([0,t]\times\Omega\). Also, we have not specified that \(S\leq T\), but by definition the intersection of \([S,T]\) with \(\mathbb{R}_{+}\times\{\omega :S(\omega)> T(\omega )\}\) is the empty set. The \(\sigma\)-field of subsets of \(\mathbb{R}_{+}\times\Omega\) generated by the class of stochastic intervals is called the optional \(\sigma\)-field and is denoted by \({\cal O}\). The graph of a stopping time \(T\), denoted by
\[[T]=[0,T]\setminus [0,T)=\{(t,\omega )\in \mathbb{R}_{+}\times\Omega :T(\omega )=t\}\]

is in \({\cal O}\). A process \(X:\mathbb{R}_{+}\times\Omega\rightarrow \mathbb{R}\) will be called optional if and only if \(X\) is \({\cal O}\)-measurable. If \(A\) is a stochastic interval, then \(1_{A}(t,\cdot )\) is \({\cal F}_{t}\)-measurable for each \(t\), by the optionality of the end-points of \(A\). Then it follows as for predictable functions that any optional function is an adapted process, and we shall refer to it as an optional process.

We now investigate the relationship between \({\cal P}\) and \({\cal O}\). Each predictable rectangle of the form \((s,t]\times F\) where \(F\in {\cal F}_{s}\) and \(s<t\) is a stochastic interval of the form \((S,T]\) with \(S(\omega )=s\) and
\[T(\omega )=\left\{\begin{array}{ll}
s & \mbox{on \(\Omega\setminus F\)}\\
t & \mbox{on \(F\)}.
\end{array}\right .\]
Also, for \(F_{0}\in {\cal F}_{0}\), \(\{0\}\times F_{0}=\bigcap_{n}[0,T_{n})\), where
\[T_{n}(\omega )=\left\{\begin{array}{ll}
1/n & \mbox{on \(F_{0}\)}\\ 0 & \mbox{on \(\Omega\setminus F_{0}\)}
\end{array}\right .\]
is optional for each \(n\). It follows that \({\cal R}\subset {\cal O}\) and hence, since \({\cal R}\) generates \({\cal P}\), we have \({\cal P}\subset {\cal O}\).

\begin{equation}{\label{chul221}}\tag{1}\mbox{}\end{equation}
Proposition \ref{chul221}. Stochastic intervals of the form \([0,T]\) and \((S,T]\) are predictable. \(\sharp\)

Stochastic intervals, other than those mentioned in Proposition \ref{chul221}, are not in general predictable without further restriction on the end-points. An \({\cal F}\)-measurable function \(T:\Omega\rightarrow\bar{\mathbb{R}}_{+}\) is called a predictable time if there is a sequence of stopping times \(\{T_{n}\}\) which increases to \(T\) such that each \(T_{n}\) is strictly less than \(T\) on \(\{T\neq 0\}\). Such a sequence \(\{T_{n}\}\) is called an announcing sequence for \(T\). It can be verified that a predictable time is a stopping time and as a partial converse, if \(T\) is a stopping time then \(T+t\) is predictable for each constant \(t>0\). Intuitively speaking, if \(T>0\) is the first time some random event occurs, then \(T\) is predictable if this event cannot take us by surprise because we are forewarned by a sequence of prior events, occurring at times \(T_{n}\).

Proposition. We have the following properties.

(i) If \(T\) is a predictable time, then \([T,\infty )\) is predictable.

(ii) All stochastic intervals of the following forms are predictable: \((S,T]\) where \(S\) and \(T\) are stopping times, \([S,T]\) and \((S,T)\) where \(S\) is predictable and \(T\) is a stopping time, \([S,T)\) where \(S\) and \(T\) are both predictable.

(iii) The predictable \(\sigma\)-field \({\cal P}\) is generated by the class of stochastic intervals of the form \([T,\infty )\), where \(T\) is a predictable time.

(iv) The optional \(\sigma\)-field \({\cal O}\) is generated by the class of stochastic intervals of the form \([T,\infty )\) where \(T\) is a stopping time. \(\sharp\)

Proposition. Every stopping time is predictable if and only if every (local) martingale (adapted to \({\cal F}_{t}\)) has a continuous version. \(\sharp\)

Suppose that \(\{Z_{t}\}_{t\geq 0}\) is a real-valued process adapted to the standard filtration \(\{{\cal F}_{t}\}_{t\geq 0}\), and \(Z_{t}\in L^{1}\) for each \(t\geq 0\). We define a set function \(\lambda_{Z}\) on \({\cal R}\) by
\[\lambda_{Z}((s,t]\times F)=E[1_{F}\cdot (Z_{t}-Z_{s})]=\int_{F}(Z_{t}-Z_{s})dP\mbox{ for \(F\in {\cal F}_{s}\) and \(s<t\)}\]
\[\lambda_{Z}(\{0\}\times F_{0})=0\mbox{ for }F_{0}\in {\cal F}_{0}.\]
We extend \(\lambda_{Z}\) to be finitely additive set function on the ring \({\cal A}\) generated by \({\cal R}\) by defining
\[\lambda_{Z}(A)=\sum_{j=1}^{n}\lambda_{Z}(R_{j})\]
for any \(A=\bigcup_{j=1}^{n}R_{j}\), where \(\{R_{j}\}_{1\leq j\leq n}\) is a finite collection of disjoint sets in \({\cal R}\). The value of \(\lambda_{Z}(A)\) is the same for all representations of \(A\) as a finite disjoint union of sets in \({\cal R}\). We call \(\lambda_{Z}\) a content if \(\lambda_{Z}\geq 0\) on \({\cal R}\) and hence on \({\cal A}\).

It is clear that if \(Z\) is a martingale then \(\lambda_{Z}=0\), and if \(Z\) is a submartingale then \(\lambda_{Z}\geq 0\). Now let \(M\) be an \(L^{2}\)-martingale. For \(s<t\) and any bounded real-valued \({\cal F}_{s}\)-measurable function \(Y\),
\begin{equation}{\label {chueq223}}\tag{2}
\begin{array}{lll}
E[Y\cdot (M_{t}-M_{s})^{2}] & = & E[Y\cdot (M_{t}^{2}-2M_{t}M_{s}+M_{s}^{2})]\\
& = & E[Y\cdot (M_{t}^{2}+M_{s}^{2})]-2E[Y\cdot M_{s}\cdot
E[M_{t}|{\cal F}_{s}]]\\
& = & E[Y\cdot (M_{t}^{2}+M_{s}^{2})]-2E[Y\cdot M_{s}^{2}]\\
& = & E[Y\cdot (M_{t}^{2}-M_{s}^{2})].
\end{array}
\end{equation}
The martingale property of \(M\) was used to obtain the third equality above. Therefore, for \(F\in {\cal F}_{s}\) and \(s<t\), we have
\begin{equation}{\label {chueq222}}\tag{3}
\lambda_{M^{2}}((s,t]\times F)=E[1_{F}\cdot (M_{t}^{2}-M_{s}^{2})]= E[1_{F}\cdot (M_{t}-M_{s})^{2}]
\end{equation}
from (\ref{chueq223}).
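The identity (3) can be checked numerically; the following sketch (not in the text) uses the compensated Poisson process \(M_{t}=N_{t}-t\), a right-continuous \(L^{2}\)-martingale, with an arbitrarily chosen event \(F\in {\cal F}_{s}\) and sample size.

```python
# Monte Carlo sanity check of (3): E[1_F (M_t^2 - M_s^2)] = E[1_F (M_t - M_s)^2]
# for F in F_s and s < t, illustrated with the compensated Poisson martingale
# M_t = N_t - t. The event F = {N_s = 0} is F_s-measurable with P(F) = e^{-s}.
import numpy as np

rng = np.random.default_rng(0)
n, s, t = 1_000_000, 1.0, 2.0

N_s = rng.poisson(s, size=n)           # N_s ~ Poisson(s)
incr = rng.poisson(t - s, size=n)      # N_t - N_s, independent of F_s
M_s = N_s - s
M_t = (N_s + incr) - t

F = (N_s == 0)                         # an event in F_s

lhs = np.mean(F * (M_t**2 - M_s**2))   # lambda_{M^2}((s,t] x F)
rhs = np.mean(F * (M_t - M_s)**2)      # E[1_F (M_t - M_s)^2]
# Both estimate P(F) * (t - s) = e^{-1} here.
```

Both sample means approximate \(P(F)\,(t-s)\), in agreement with (3).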

We are interested in \(L^{2}\)-martingales \(M\) for which \(\lambda_{M^{2}}\) can be extended to a measure on \({\cal P}\). It can be shown that if \(Z\) is a right-continuous positive submartingale, then the content \(\lambda_{Z}\) can be uniquely extended to a measure on \({\cal P}\), and the measure is \(\sigma\)-finite. Setting \(Z=M^{2}\) (\(Z\) is a positive submartingale, by Jensen's inequality, since \(M\) is a martingale), we see that for a right-continuous \(L^{2}\)-martingale, there is a unique extension of \(\lambda_{M^{2}}\) to a \(\sigma\)-finite measure on \({\cal P}\). Until stated otherwise, we suppose that \(\{M_{t}\}_{t\geq 0}\) is a right-continuous \(L^{2}\)-martingale. We use \(\mu_{M}\) to denote the unique measure on \({\cal P}\) which extends \(\lambda_{M^{2}}\). This measure has been called the {\bf Dol\'{e}ans measure} of \(M\). We use \({\cal L}^{2}\) to denote \(L^{2}(\mathbb{R}_{+}\times\Omega , {\cal P},\mu_{M})\).

Theorem. Let \(Z\) be a right-continuous positive submartingale. Then the content \(\lambda_{Z}\) defined before can be uniquely extended to a measure on \({\cal P}\), and this measure is \(\sigma\)-finite. \(\sharp\)

Corollary. If \(M\) is a right-continuous \(L^{2}\)-martingale, then \(\lambda_{M^{2}}\) has a unique extension to a measure on \({\cal P}\), and this measure is \(\sigma\)-finite. \(\sharp\)

\begin{equation}{\label{chue1*}}\tag{4}\mbox{}\end{equation}

Example \ref{chue1*}. Consider a Brownian motion \(W\) in \(\mathbb{R}\) with \(W_{0}\in L^{2}\) and let \(\{{\cal F}_{t}\}_{t\geq 0}\) denote its associated standard filtration. Then \(\{W_{t}\}_{t\geq 0}\) is a continuous \(L^{2}\)-martingale with respect to the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). The following calculation shows that \(\mu_{W}\) is the product measure \(\lambda\times P\), where \(\lambda\) is the Lebesgue measure on \(\mathbb{R}_{+}\). For \(s<t\) and \(F\in {\cal F}_{s}\), we have, from (\ref{chueq222})
\begin{align*}
\lambda_{W^{2}}((s,t]\times F) & = E[1_{F}\cdot (W_{t}-W_{s})^{2}]\\
& = E[(W_{t}-W_{s})^{2}]\cdot E[1_{F}]\\
& = (t-s)\cdot P(F)\\
& = (\lambda\times P)((s,t]\times F).
\end{align*}
The second equality above follows because \(W_{t}-W_{s}\) is independent of \({\cal F}_{s}\), a consequence of the independence of the increments of \(W\). The third equality follows because \(W_{t}-W_{s}\) has mean zero and variance \(t-s\). For \(F_{0}\in {\cal F}_{0}\),
\[\lambda_{W^{2}}(\{0\}\times F_{0})=0=(\lambda\times P)(\{0\}\times F_{0}).\]
Thus, \(\lambda_{W^{2}}\) agrees with \(\lambda\times P\) on \({\cal R}\) and hence on \({\cal A}\). Since \(\lambda\times P\) is a measure on \({\cal B}\times {\cal F}\supset {\cal P}\), we have \(\mu_{W}=\lambda\times P\) on \({\cal P}\), by the uniqueness of the extension of \(\lambda_{W^{2}}\) on \({\cal A}\) to \(\mu_{W}\) on \({\cal P}\). \(\sharp\)
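The computation in the example can be illustrated numerically; this is a Monte Carlo sketch (not in the text), with \(W_{0}=0\), an arbitrarily chosen \(F=\{W_{s}>0\}\in {\cal F}_{s}\), and arbitrary times \(s<t\).

```python
# Monte Carlo illustration that mu_W = lambda x P for Brownian motion:
# lambda_{W^2}((s,t] x F) = E[1_F (W_t - W_s)^2] should match (t - s) P(F).
import numpy as np

rng = np.random.default_rng(1)
n, s, t = 1_000_000, 1.0, 3.0

W_s = np.sqrt(s) * rng.standard_normal(n)        # W_s ~ N(0, s), taking W_0 = 0
incr = np.sqrt(t - s) * rng.standard_normal(n)   # W_t - W_s, independent of F_s

F = (W_s > 0)                                    # an event in F_s, P(F) = 1/2
est = np.mean(F * incr**2)                       # lambda_{W^2}((s,t] x F)
target = (t - s) * np.mean(F)                    # (lambda x P)((s,t] x F)
```

With \(s=1\), \(t=3\) and \(P(F)=1/2\), both quantities are close to \(1\).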

First we define the stochastic integral \(\int XdM\) when \(X\) is an \({\cal R}\)-simple process and show that the map \(X\rightarrow\int XdM\) is an isometry from a subspace of \({\cal L}^{2}\) into \(L^{2}\). When \(X\) is the indicator function of a predictable rectangle, the integral \(\int XdM\) is defined as follows. For \(s<t\) and \(F\in {\cal F}_{s}\),
\[\int 1_{(s,t]\times F}dM=1_{F}\cdot (M_{t}-M_{s})\]
and for \(F_{0}\in {\cal F}_{0}\),
\[\int 1_{\{0\}\times F_{0}}dM=0.\]
Let \({\cal E}\) denote the class of all processes \(X:\mathbb{R}_{+}\times\Omega \rightarrow \mathbb{R}\) that are finite linear combinations of indicator functions of predictable rectangles. Such a process will be called an \({\cal R}\)-simple process. Thus, \(X\in {\cal E}\) can be expressed in the form
\begin{equation}{\label {chueq226}}\tag{5}
X=\sum_{j=1}^{n}c_{j}\cdot 1_{(s_{j},t_{j}]\times F_{j}}+c_{0}\cdot 1_{\{0\}\times F_{0}}
\end{equation}
where \(c_{j}\in \mathbb{R}\), \(F_{j}\in {\cal F}_{s_{j}}\) for \(s_{j}<t_{j}\), \(j=1,\cdots ,n\), \(c_{0}\in \mathbb{R}\) and \(F_{0}\in {\cal F}_{0}\). This representation, although not unique, can always be chosen such that the predictable rectangles \((s_{j},t_{j}]\times F_{j}\) for \(j=1,\cdots ,n\) are disjoint. The integral \(\int XdM\) for \(X\in {\cal E}\) is defined by linearity. Thus, for \(X\) of the form (\ref{chueq226}) we have
\[\int XdM=\sum_{j=1}^{n}c_{j}\cdot 1_{F_{j}}\cdot (M_{t_{j}}-M_{s_{j}}).\]
It can be verified that the value of the integral does not depend on the representation chosen for \(X\). Since \(1_{R}\in {\cal L}^{2}\) for any predictable rectangle \(R\), it follows that \({\cal E}\) is a subspace of \({\cal L}^{2}\); and since \(M_{t}\in L^{2}\) for each \(t\), \(\int XdM\) is in \(L^{2}\) for each \(X\in {\cal E}\). The following result shows that the linear map \(X\rightarrow\int XdM\) is an isometry from \({\cal E}\subset {\cal L}^{2}\) onto its image in \(L^{2}\).
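The definition of \(\int XdM\) for an \({\cal R}\)-simple \(X\) can be made concrete on a single discretized path; the following is a sketch (not in the text), assuming \(M=W\) is a Brownian path simulated on a grid, with arbitrarily chosen rectangles and coefficients. On a grid containing the \(s_{j}\) and \(t_{j}\), the closed-form sum agrees exactly with the grid sum \(\sum_{k}X(u_{k+1})(W_{u_{k+1}}-W_{u_{k}})\).

```python
# Stochastic integral of an R-simple process against a simulated Brownian path:
# int X dW = sum_j c_j 1_{F_j} (W_{t_j} - W_{s_j}), as in the text,
# cross-checked against the telescoping grid sum.
import numpy as np

rng = np.random.default_rng(2)
n_steps, dt = 1000, 1e-3
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))])
k = np.arange(n_steps + 1)        # grid times u_k = k * dt

# X = 2 * 1_{(0.2,0.5] x F1} - 1 * 1_{(0.6,0.9] x F2}, as in (5), with
# F1 = {W_{0.2} <= 0} in F_{0.2} and F2 = {W_{0.6} > 0} in F_{0.6};
# times are stored as grid indices (200 <-> 0.2, etc.).
rects = [(2.0, 200, 500, W[200] <= 0.0), (-1.0, 600, 900, W[600] > 0.0)]

closed_form = sum(c * ind * (W[jt] - W[js]) for c, js, jt, ind in rects)

X = np.zeros(n_steps + 1)         # X sampled at the grid times
for c, js, jt, ind in rects:
    X[(k > js) & (k <= jt)] += c * ind

grid_sum = np.sum(X[1:] * np.diff(W))   # sum_k X(u_{k+1}) (W_{u_{k+1}} - W_{u_k})
```

The two values coincide up to floating-point rounding, since the grid sum telescopes over each interval \((s_{j},t_{j}]\).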

Proposition. For \(X\in {\cal E}\) we have the isometry
\begin{equation}{\label {chueq228}}\tag{6}
E\left [\left (\int XdM\right )^{2}\right ]=\int_{\mathbb{R}_{+}\times\Omega} X^{2}d\mu_{M}. \sharp
\end{equation}
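The isometry (6) can also be checked by Monte Carlo; this sketch (not in the text) takes \(M=W\) a Brownian motion started at zero and an \({\cal R}\)-simple \(X\) with disjoint time intervals, so that \(\int X^{2}d\mu_{M}=\sum_{j}c_{j}^{2}(t_{j}-s_{j})P(F_{j})\) and the cross terms in \(E[(\int XdW)^{2}]\) vanish because later increments are independent with mean zero.

```python
# Monte Carlo check of the isometry (6) for Brownian motion:
# E[(int X dW)^2] = sum_j c_j^2 (t_j - s_j) P(F_j) for X simple with
# disjoint time intervals (0.25,0.5] and (0.75,1].
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

W25 = np.sqrt(0.25) * rng.standard_normal(n)          # W_{0.25}
W50 = W25 + np.sqrt(0.25) * rng.standard_normal(n)    # W_{0.50}
W75 = W50 + np.sqrt(0.25) * rng.standard_normal(n)    # W_{0.75}
W100 = W75 + np.sqrt(0.25) * rng.standard_normal(n)   # W_{1.00}

F1 = (W25 > 0)          # an event in F_{0.25}
F2 = (W75 < 0.5)        # an event in F_{0.75}
c1, c2 = 2.0, -1.0

I = c1 * F1 * (W50 - W25) + c2 * F2 * (W100 - W75)    # int X dW
lhs = np.mean(I**2)                                   # E[(int X dW)^2]
rhs = c1**2 * 0.25 * np.mean(F1) + c2**2 * 0.25 * np.mean(F2)
```

Both sides agree to within Monte Carlo error.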

The extension of the definition of \(\int XdM\) from integrands \(X\) in \({\cal E}\) to those in \({\cal L}^{2}\) is based on the isometry (\ref{chueq228}) and the fact that \({\cal E}\) is dense in the Hilbert space \({\cal L}^{2}\).

Proposition. The set \({\cal E}\) of \({\cal R}\)-simple processes is dense in the Hilbert space \({\cal L}^{2}\). \(\sharp\)

If we regard \({\cal L}^{2}\) and \(L^{2}\) as Hilbert spaces, then the map \(X\rightarrow\int XdM\) is a linear isometry from the dense subspace \({\cal E}\) of \({\cal L}^{2}\) into \(L^{2}\), and hence can be uniquely extended to a linear isometry from \({\cal L}^{2}\) into \(L^{2}\) (see Taylor \cite [p.99]{tay}). For \(X\in {\cal L}^{2}\), we define \(\int XdM\) as the image of \(X\) under this isometry.

Let \(\Lambda^{2}({\cal P},M)\) denote the space of all \(X\) that are \({\cal P}\)-measurable such that \(1_{[0,t]}\cdot X\in {\cal L}^{2}\) for each \(t\geq 0\). Here \(1_{[0,t]}\cdot X\) denotes the process defined by
\[(1_{[0,t]}\cdot X)(s,\omega )=1_{[0,t]}(s)\cdot X_{s}(\omega )\mbox{ for all }(s,\omega )\in \mathbb{R}_{+}\times\Omega .\]
Let \(X\in\Lambda^{2}({\cal P},M)\). For each \(t\), \(\int 1_{[0,t]}\cdot XdM\) is well defined and has the isometry property
\begin{equation}{\label {chueq2210}}\tag{7}
E\left [\left (\int 1_{[0,t]}\cdot XdM\right )^{2}\right ]=\int_{[0,t]\times\Omega} X^{2}d\mu_{M}.
\end{equation}
By definition, \(\mu_{M}(\{0\}\times\Omega )=0\), hence by (\ref{chueq2210}) we have
\[\int 1_{\{0\}\times\Omega}\cdot XdM=0\mbox{ a.s.}\]
If \(X\in {\cal E}\) and (\ref{chueq226}) is a representation for \(X\), then for each \(t\), \(1_{[0,t]}\cdot X\) is in \({\cal E}\) and
\begin{equation}{\label {chueq2212}}\tag{8}
\int 1_{[0,t]}\cdot XdM=\sum_{j=1}^{n}c_{j}\cdot 1_{F_{j}}\cdot (M_{t_{j}\wedge t}-M_{s_{j}\wedge t}).
\end{equation}
Here the right member of (\ref{chueq2212}) is a right-continuous \(L^{2}\)-martingale indexed by \(t\). By using the isometry, we shall extend this to prove for \(X\in\Lambda^{2}({\cal P},M)\) that \(\left\{\int 1_{[0,t]}\cdot XdM\right\}_{t\geq 0}\) is an \(L^{2}\)-martingale which has a right-continuous version.
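The stopped increments appearing in (8) can be traced on a single discretized path; this is a sketch (not in the text), with \(M=W\) a simulated Brownian path and the same arbitrary rectangles as before, times stored as grid indices.

```python
# The right member of (8) as a function of t: for R-simple X,
# t -> sum_j c_j 1_{F_j} (W_{t_j ^ t} - W_{s_j ^ t}), built from stopped
# increments of a simulated Brownian path W.
import numpy as np

rng = np.random.default_rng(4)
n_steps, dt = 1000, 1e-3
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))])

# X = 2 * 1_{(0.2,0.5] x F1} - 1 * 1_{(0.6,0.9] x F2}; index j <-> time j*dt
rects = [(2.0, 200, 500, W[200] <= 0.0), (-1.0, 600, 900, W[600] > 0.0)]

def Y(j):
    """Right member of (8) at time t = j*dt, via stopped increments of W."""
    return sum(c * ind * (W[min(jt, j)] - W[min(js, j)])
               for c, js, jt, ind in rects)

# The path vanishes before the first rectangle opens (t <= 0.2) and is
# constant after the last one closes (t >= 0.9).
```

Evaluating \(Y\) at a few times confirms the stopped-increment behaviour described in the text.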

\begin{equation}{\label{chut225}}\tag{9}\mbox{}\end{equation}

Proposition \ref{chut225}. Let \(X\in\Lambda^{2}({\cal P},M)\) and for each \(t\), let \(Y_{t}=\int 1_{[0,t]}\cdot XdM\). Then \(\{Y_{t}\}_{t\geq 0}\) is a zero-mean \(L^{2}\)-martingale and there is a version of \(Y\) with all paths right-continuous. \(\sharp\)

Proposition. Suppose the hypotheses of Proposition~\ref{chut225} hold and \(M\) has continuous paths. Then there is a version of \(Y\) with continuous paths. \(\sharp\)

We shall use the notation \(\left\{\int_{[0,t]}XdM\right\}_{t\geq 0}\) to denote a right-continuous version of \(\left\{\int 1_{[0,t]}\cdot XdM\right\}_{t\geq 0}\) and \(\int_{(s,t]}XdM\) to denote \(\int_{[0,t]}XdM-\int_{[0,s]}XdM\) for \(s<t\). If \(M\) is known to be continuous, we shall use \(\left\{\int_{0}^{t}XdM\right\}_{t\geq 0}\) to denote a continuous version of \(\left\{\int 1_{[0,t]}\cdot XdM\right\}_{t\geq 0}\) and \(\int_{s}^{t}XdM\) to denote \(\int_{0}^{t}XdM-\int_{0}^{s}XdM\) for \(s<t\). We now list some properties of the stochastic integral \(\int_{[0,t]}XdM\).

Proposition. Let \(X\in\Lambda^{2}({\cal P},M)\) and let \(Y\) denote the right-continuous stochastic integral process \(\left\{\int_{[0,t]}XdM\right\}_{t\geq 0}\). Then, we have the following properties.

(i) For \(s<t\) and any bounded \({\cal F}_{s}\)-random variable \(Z\), we have that \(1_{(s,t]}\cdot Z\) is \({\cal P}\)-measurable, \(1_{(s,t]}\cdot ZX\in\Lambda^{2}({\cal P},M)\), and
\[\int 1_{(s,t]}\cdot ZXdM=Z\cdot\int_{(s,t]}XdM\mbox{ a.s.}\]

(ii) The measure \(\mu_{Y}\) associated with the right-continuous \(L^{2}\)-martingale \(Y\) has density \(X^{2}\) with respect to \(\mu_{M}\), i.e. for any \(A\in {\cal P}\),
\[\mu_{Y}(A)=\int_{A}X^{2}d\mu_{M}.\]

(iii) For any bounded stopping time \(T\),
\[Y_{T}\equiv\int_{[0,T]}XdM=\int 1_{[0,T]}\cdot XdM\mbox{ a.s.}\]
where for each \(\omega\), \(Y_{T}(\omega )\) is the value of \(Y_{t}(\omega )\) at \(t=T(\omega )\); whereas the integral on the far right is a random variable defined via the \(L^{2}\)-isometry. \(\sharp\)

Corollary. Let \(s<t\), \(F\in {\cal F}_{s}\), and \(T\) be a stopping time. Then we have
\[\int 1_{[0,T]}\cdot 1_{(s,t]\times F}dM=1_{F}\cdot (M_{t\wedge T}-M_{s\wedge T})\mbox{ a.s.}\sharp\]

The definition of the measure \(\mu_{M}\) and the stochastic integral \(\int_{[0,t]}XdM\) only involved the increments of \(M\). Hence the values of these quantities would remain unchanged if we replaced \(M\) by \(M-M_{0}\) in the definitions (see the next section).

\begin{equation}{\label{b}}\mbox{}\tag{B}\end{equation}

Stochastic Integrals for Local Martingales.

So far we have considered stochastic integrals \(\int_{[0,t]}XdM\) where the integrator is a right-continuous \(L^{2}\)-martingale and the integrand is in \(\Lambda^{2}({\cal P},M)\). As a final extension we shall define the stochastic integral for integrators and integrands which only possess these properties in a local sense. Consequently, we shall no longer assume that \(M\) is a right-continuous \(L^{2}\)-martingale. Instead, we suppose that \(M\) is a right-continuous local \(L^{2}\)-martingale. If \(\{T_{n}\}\) is a localizing sequence for \(M\), we use \(M^{(n)}\) to denote the right-continuous \(L^{2}\)-martingale \(\{M_{t\wedge T_{n}}-M_{0}\}_{t\geq 0}\) for each \(n\).

Let \(\Lambda ({\cal P},M)\) denote the class of all stochastic processes \(X\) for which there is a localizing sequence \(\{T_{n}\}\) for \(M\) such that \(M^{(n)}\) is an \(L^{2}\)-martingale and \(1_{[0,T_{n}]}\cdot X\in\Lambda^{2}({\cal P},M^{(n)})\) for each \(n\). Such a sequence will be called a localizing sequence for \((X,M)\). Let \(X\in\Lambda ({\cal P},M)\) and \(\{T_{n}\}\) be a localizing sequence for \((X,M)\). Then
\[Y^{(n)}\equiv\left\{\int_{[0,t]}1_{[0,T_{n}]}\cdot XdM^{(n)}\right\}_{t\geq 0}\]
is a right-continuous \(L^{2}\)-martingale for each \(n\). We shall define \(Y=\left\{\int_{[0,t]}XdM\right\}_{t\geq 0}\) as the a.s. limit of the \(Y^{(n)}\)'s; the difference from the earlier construction is that here we use random truncation times \(T_{n}\), whereas constant times \(n\) were used before. To validate this procedure, we need to verify the following two assertions:

  • for each \(n\), for almost every \(\omega\),
    \begin{equation}{\label {chueq2223}}\tag{10}
    Y_{t}^{(m)}(\omega )=Y_{t}^{(n)}(\omega )\mbox{ for all \(t\in [0,T_{n}]\) and \(m\geq n\);}
    \end{equation}
  • the definition of \(Y\) is independent (up to indistinguishability) of the choice of a localizing sequence for \((X,M)\).

These assertions are intuitively plausible, but their proofs require some care. They follow from the two lemmas below.
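The consistency condition (10) can be seen concretely on a discretized path; this sketch (not in the text) uses \(M=W\) a simulated Brownian path, \(X=W\) as a continuous adapted integrand, and first-passage truncation times \(T_{n}=\inf\{t:|W_{t}|\geq b_{n}\}\) with levels \(b_{1}<b_{2}\) chosen arbitrarily.

```python
# Discretized illustration of the consistency condition (10): truncated
# integrals Y^{(n)}_t = sum_{u_k < t ^ T_n} X_{u_k} (W_{u_{k+1}} - W_{u_k})
# agree on [0, T_n] when the truncation level increases.
import numpy as np

rng = np.random.default_rng(5)
n_steps, dt = 2000, 1e-3
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))])
X = W.copy()                          # a continuous adapted integrand, e.g. X = W

def hit_index(b):
    """Grid index of T = inf{t : |W_t| >= b} (last index if never hit)."""
    hits = np.nonzero(np.abs(W) >= b)[0]
    return hits[0] if len(hits) else len(W) - 1

def trunc_integral(stop_idx):
    """Path of the truncated integral, one value per grid time."""
    dW = np.diff(W)
    dW[stop_idx:] = 0.0               # stop the integrator at T_n
    return np.concatenate([[0.0], np.cumsum(X[:-1] * dW)])

i1, i2 = hit_index(0.5), hit_index(1.0)    # T_1 <= T_2 since b_1 < b_2
Y1, Y2 = trunc_integral(i1), trunc_integral(i2)
# Consistency: Y1 and Y2 coincide at every grid time up to T_1.
```

On the grid, the two truncated integrals are identical up to the smaller truncation time, mirroring (10).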

\begin{equation}{\label{chul229}}\tag{11}\mbox{}\end{equation}

Lemma \ref{chul229}. Let \(S\) and \(T\) be stopping times such that \(M^{T}=\{M_{t\wedge T}-M_{0}\}_{t\geq 0}\) and \(M^{S}=\{M_{t\wedge S}-M_{0}\}_{t\geq 0}\) are right-continuous \(L^{2}\)-martingales. Let \(\mu_{M^{T}}\) and \(\mu_{M^{S}}\) be the measures on \({\cal P}\) associated respectively with \(M^{T}\) and \(M^{S}\). Then \(\mu_{M^{T}}\) and \(\mu_{M^{S}}\) induce the same measure on the stochastic interval \([0,S\wedge T]\), i.e., for each \(A\in {\cal P}\), \(\mu_{M^{T}}(A\cap [0,S\wedge T])=\mu_{M^{S}}(A\cap [0,S\wedge T])\). \(\sharp\)

\begin{equation}{\label{chul2210}}\tag{12}\mbox{}\end{equation}

Lemma \ref{chul2210}. Let \(S\) and \(T\) be stopping times such that \(M^{T}=\{M_{t\wedge T}-M_{0}\}_{t\geq 0}\) and \(M^{S}=\{M_{t\wedge S}-M_{0}\}_{t\geq 0}\) are right-continuous \(L^{2}\)-martingales, and \(1_{[0,T]}\cdot X\in\Lambda^{2}({\cal P},M^{T})\) and \(1_{[0,S]}\cdot X\in\Lambda^{2}({\cal P},M^{S})\). Let \(Y^{T}\) and \(Y^{S}\) respectively denote the right-continuous \(L^{2}\)-martingales \(\left\{\int_{[0,t]}1_{[0,T]}\cdot XdM^{T}\right\}_{t\geq 0}\) and \(\left\{\int_{[0,t]}1_{[0,S]}\cdot XdM^{S}\right\}_{t\geq 0}\). Then \(\mathbb{P}\left\{Y_{t}^{T}=Y_{t}^{S}\mbox{ for }0\leq t\leq (T\wedge S)\right\}=1\). \(\sharp\)

By setting \(T=T_{m}\) and \(S=T_{n}\) in Lemma~\ref{chul2210}, we obtain (\ref{chueq2223}). Consequently, there is a set \(\Omega_{0}\) of probability one such that for each \(\omega\in\Omega_{0}\), \(\lim_{m\rightarrow\infty}Y_{t}^{(m)}(\omega )\) exists and is finite for each \(t\), and for each \(n\) and \(t\in [0,T_{n}]\) this limit equals \(Y_{t}^{(n)}(\omega )\). We denote this limit by \(Y_{t}(\omega )\).
Then \(Y_{t}(\omega )\) is right-continuous for each \(\omega\in\Omega_{0}\) and can easily be defined so that it is right-continuous for \(\omega\in\Omega\setminus\Omega_{0}\). Moreover, for each \(n\), almost surely \(Y_{t\wedge T_{n}}=Y_{t}^{(n)}\) for all \(t\). Hence \(Y\) is a right-continuous local \(L^{2}\)-martingale with localizing sequence \(\{T_{n}\}\). We shall denote \(Y_{t}\) by \(\int_{[0,t]}XdM\) and \(Y_{t}-Y_{s}\) by \(\int_{(s,t]}XdM\). If \(M\) is actually continuous, then so is \(Y\), and \(Y_{t}\) will be denoted by \(\int_{0}^{t}XdM\) and \(Y_{t}-Y_{s}\) by \(\int_{s}^{t}XdM\).

The fact that the definition of \(Y\) is independent (up to indistinguishability) of the choice of a localizing sequence for \((X,M)\) is an easy consequence of Lemma~\ref{chul2210}. Recall that a continuous local martingale is automatically a local \(L^{2}\)-martingale.

Proposition. Let \(M\) be a continuous local martingale and \(X\) be a continuous adapted process. Then \(X\in\Lambda ({\cal P},M)\) and \(\left\{\int_{0}^{t}XdM\right\}_{t\geq 0}\) is a continuous local martingale. \(\sharp\)

If \(M\) is a right-continuous \(L^{2}\)-martingale and \(X\in\Lambda^{2}({\cal P},M)\), the above definition of \(\int_{[0,t]}XdM\) is consistent with the previous definition because the integrals are unchanged if \(M\) is replaced by \(M-M_{0}\).

Proposition. (Substitution Formula). Let \(M\) be a right-continuous local martingale, \(X\in\Lambda ({\cal P},M)\) and \(Y_{t}=\int_{[0,t]}XdM\) for all \(t\geq 0\). Suppose \(Z\in\Lambda ({\cal P},Y)\). Then \(XZ\in\Lambda ({\cal P},M)\) and
\[\int_{[0,t]}ZdY=\int_{[0,t]}XZdM\mbox{ a.s.}\]
for all \(t\geq 0\). \(\sharp\)

\begin{equation}{\label{c}}\mbox{}\tag{C}\end{equation}

Extension of the Predictable Integrands.

In the sequel, we shall show that the definition of the stochastic integral can be extended to a larger class of integrands than the predictable ones, when either a mild condition on the Dol\'{e}ans measure \(\mu_{M}\) is satisfied or \(M\) is continuous. We use \(\sigma (C)\) to denote the \(\sigma\)-field of subsets of \(\mathbb{R}_{+}\times\Omega\) generated by the continuous adapted processes, that is, \(\sigma (C)\) is the smallest \(\sigma\)-field of subsets of \(\mathbb{R}_{+}\times\Omega\) with respect to which every continuous adapted process is \(\sigma (C)\)-measurable. Similarly, we use \(\sigma (RC)\) to denote the \(\sigma\)-field generated by the right-continuous adapted processes, \(\sigma (LC)\) to denote the \(\sigma\)-field generated by the left-continuous adapted processes, \(\sigma (LCRL)\) to denote the \(\sigma\)-field generated by the left-continuous with right limits adapted processes, and \(\sigma (RCLL)\) to denote the \(\sigma\)-field generated by the right-continuous with left limits adapted processes.

A process \(Z:\mathbb{R}_{+}\times\Omega\rightarrow \mathbb{R}\) is a progressively measurable process if and only if for each \(t\) the restriction of \(Z\) to \([0,t]\times\Omega\) is \({\cal B}_{t}\times {\cal F}_{t}\)-measurable, where \({\cal B}_{t}\) denotes the Borel \(\sigma\)-field of \([0,t]\). The \(\sigma\)-field of subsets of \(\mathbb{R}_{+}\times\Omega\) generated by the progressively measurable processes will be denoted by \({\cal M}\). Each progressively measurable process is clearly \({\cal B}\times {\cal F}\)-measurable and by Fubini's theorem, each is adapted. All optional processes are progressively measurable. In summary, we have the following relationships between the \(\sigma\)-fields
\begin{equation}{\label {chueq*51}}\tag{13}
{\cal P}=\sigma (C)=\sigma (LCRL)=\sigma (LC)\subset {\cal O}=\sigma (RCLL)=\sigma (RC)\subset {\cal M}\subset {\cal B}\times {\cal F}.
\end{equation}

Now we discuss two conditions, under either of which the definition of the stochastic integral can be extended to an augmented class of predictable integrands. For this we shall assume that \(M\) is a right-continuous \(L^{2}\)-martingale. The extension (by localization) to a right-continuous local \(L^{2}\)-martingale is similar to the discussions above. The first condition requires the Dol\'{e}ans measure \(\mu_{M}\) to be absolutely continuous with respect to the product measure \(\lambda\times P\). In this case, the augmented class includes suitably integrable \({\cal B}\times {\cal F}\)-measurable adapted processes. The alternative condition is that \(M\) be continuous. In this case, the augmented class includes suitably integrable progressively measurable processes. If \(M\) is a Brownian motion, both conditions are satisfied and the extensions coincide. For consideration of the first condition, we make the following assumption.

  • Assumption A1. Suppose \(M\) is a right-continuous \(L^{2}\)-martingale such that its Dol\'{e}ans measure \(\mu_{M}\) is absolutely continuous with respect to \(\lambda\times P\) on \({\cal P}\), i.e., \(\mu_{M}\ll\lambda\times P\).

The extension under this assumption can also be applied to a right-continuous local \(L^{2}\)-martingale \(M\) such that \(M-M_{0}\) satisfies the assumption, because stochastic integrals with respect to \(M\) are the same as those with respect to \(M-M_{0}\).

Example. From Example \ref{chue1*}, a Brownian motion \(W\) in \(\mathbb{R}\) that starts from zero satisfies the above assumption with \(\mu_{W}=\lambda\times P\). For a general Brownian motion in \(\mathbb{R}\), \(W-W_{0}\) satisfies the above assumption. \(\sharp\)

Since \(\mu_{M}\ll\lambda\times P\) on \({\cal P}\), by the Radon-Nikodym theorem, there is a \({\cal P}\)-measurable function \(f\) such that \(0\leq f<\infty\) and for each \(A\in {\cal P}\)
\begin{equation}{\label {chueq335}}\tag{14}
\mu_{M}(A)=\int_{A}fd(\lambda\times P).
\end{equation}
The right member of (\ref{chueq335}) actually defines a measure on sets \(A\) in \({\cal B}\times {\cal F}\). This extension of \(\mu_{M}\) will be denoted by \(\tilde{\mu}_{M}\).

Definition. Let \({\cal N}^{*}\) denote the class of all \(\lambda\times P\)-null sets in \({\cal B}\times {\cal F}\). The augmentation of \({\cal P}\) with respect to \(({\cal B}\times {\cal F},\lambda\times P)\) is the \(\sigma\)-field of subsets of \(\mathbb{R}_{+}\times\Omega\) generated by \({\cal P}\) and \({\cal N}^{*}\). We denote it by \({\cal P}^{*}\). Similarly, we define \(\tilde{{\cal N}}\) to be the class of \(\tilde{\mu}_{M}\)-null sets in \({\cal B}\times {\cal F}\) and \(\tilde{{\cal P}}\) to be the \(\sigma\)-field generated by \({\cal P}\) and \(\tilde{{\cal N}}\). \(\sharp\)

Note that \({\cal N}^{*}\subset\tilde{{\cal N}}\) and hence \({\cal P}^{*}\subset\tilde{{\cal P}}\). For the extension of the stochastic integral under Assumption A1, we only need the larger \(\sigma\)-field \(\tilde{{\cal P}}\). However, to elucidate the measurability properties of certain integrands, we also consider the \(\sigma\)-field \({\cal P}^{*}\). Note that the definition of this \(\sigma\)-field does not depend on \(M\).

\begin{equation}{\label{chul335}}\tag{15}\mbox{}\end{equation}

Proposition \ref{chul335}. We have the following properties.

(i) \({\cal P}^{*}=\{A\in {\cal B}\times {\cal F}:A\Delta A_{1}\in {\cal N}^{*}\mbox{ for some }A_{1}\in {\cal P}\}\).

(ii) Suppose \(Z:\mathbb{R}_{+}\times\Omega\rightarrow\mathbb{R}\) is \({\cal B}\times {\cal F}\)-measurable. Then \(Z\) is \({\cal P}^{*}\)-measurable if and only if there exists a predictable process \(X\) satisfying \((\lambda\times P)(\{X\neq Z\})=0\).

(iii) The results (i) and (ii) also hold with \(\tilde{{\cal N}}\), \(\tilde{{\cal P}}\) and \(\tilde{\mu}_{M}\) in place of \({\cal N}^{*}\), \({\cal P}^{*}\) and \(\lambda\times P\), respectively. \(\sharp\)

Proposition. Any \({\cal B}\times {\cal F}\)-measurable adapted process is \({\cal P}^{*}\)-measurable. \(\sharp\)

To summarize the measurability properties under Assumption A1, let \({\cal V}\) denote the \(\sigma\)-field generated by the \({\cal B}\times {\cal F}\)-measurable adapted processes. Then we have
\[{\cal P}\subset {\cal O}\subset {\cal M}\subset {\cal V}\subset{\cal P}^{*}\subset\tilde{{\cal P}}\subset {\cal B}\times {\cal F}.\]

We now show how to extend stochastic integrals with respect to \(M\) to suitable \(\tilde{{\cal P}}\)-measurable integrands. For any sub-$\sigma$-field \({\cal W}\) of \({\cal B}\times {\cal F}\), we shall use \({\cal L}^{2}_{{\cal W}}\) to denote \(L^{2}(\mathbb{R}_{+}\times\Omega ,{\cal W},\tilde{\mu}_{M})\) and \(\Lambda ({\cal W},M)\) to denote the set of \({\cal W}\)-measurable processes \(X\) for which there is a sequence of stopping times \(\{T_{n}\}\) such that \(T_{n}\uparrow\infty\) a.s. and \(1_{[0,t\wedge T_{n}]}\cdot X\in {\cal L}^{2}_{{\cal W}}\) for each \(t\) and \(n\). Such a sequence \(\{T_{n}\}\) will be called a localizing sequence for \((X,M)\). Since \({\cal P}\subset\tilde{{\cal P}}\), we have \({\cal L}^{2}_{{\cal P}}\subset {\cal L}^{2}_{\tilde{{\cal P}}}\). Conversely, by Proposition~\ref{chul335}\,(iii), for any \(Z\in {\cal L}^{2}_{\tilde{{\cal P}}}\) there is an \(X\in {\cal L}^{2}_{{\cal P}}\) such that \(X=Z\), \(\tilde{\mu}_{M}\)-a.s. It follows that, as Hilbert spaces, \({\cal L}^{2}_{\tilde{{\cal P}}}\) and \({\cal L}^{2}_{{\cal P}}\) are the same. For \(X\in {\cal L}^{2}_{{\cal P}}\), \(\int XdM\) was defined by the isometry between \({\cal L}^{2}_{{\cal P}}\) and \(L^{2}\); consequently, \(\int ZdM\) is defined by this isometry for any \(Z\in {\cal L}^{2}_{\tilde{{\cal P}}}\). Then \(\int_{[0,t]}ZdM\) can be defined by the usual extension procedure for any \(Z\in\Lambda (\tilde{{\cal P}},M)\). Indeed, for \(Z\in\Lambda (\tilde{{\cal P}},M)\) and any predictable process \(X\) such that \(\tilde{\mu}_{M}(\{X\neq Z\})=0\), we have almost surely for all \(t\)
\[\int_{[0,t]}ZdM=\int_{[0,t]}XdM.\]
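To make the notion of a localizing sequence concrete, the following sketch (a purely numerical illustration, not part of the theory above; the integrand \(Z_{s}=e^{W_{s}}\), the grid, and all parameters are our own choices) computes the hitting times \(T_{n}=\inf\{t:\int_{0}^{t}Z_{s}^{2}ds\geq n\}\) on a single discretized path. These times are nondecreasing in \(n\) and, since the running integral is finite for each finite \(t\), they tend to \(\infty\); the stopped integrand \(1_{[0,t\wedge T_{n}]}\cdot Z\) then has integral at most \(n\).

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, T = 1000, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# One discretized path of a hypothetical integrand Z_s = exp(W_s)
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])
Z = np.exp(W)

# Running integral int_0^t Z_s^2 ds, approximated by a left-endpoint sum
running = np.concatenate([[0.0], np.cumsum(Z[:-1] ** 2) * dt])

def T_n(n):
    """Hitting time T_n = inf{t : int_0^t Z_s^2 ds >= n} on this grid."""
    hit = np.nonzero(running >= n)[0]
    return t[hit[0]] if hit.size else np.inf  # inf if level n is never reached on [0, T]

print([T_n(n) for n in (1, 2, 3)])  # nondecreasing in n
```

On \([0,T]\) the larger levels may never be hit, in which case the corresponding \(T_{n}\) exceeds the horizon (reported as `inf` here); over an infinite horizon, \(T_{n}\uparrow\infty\) a.s.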

Proposition. Suppose \(Z\) is a \({\cal B}\times {\cal F}\)-measurable adapted process such that, almost surely, \(\int_{0}^{t}Z_{s}^{2}ds<\infty\) for all \(t\). If \(W\) is a Brownian motion in \(\mathbb{R}\) that starts from zero, then \(Z\in\Lambda (\tilde{{\cal P}},W)\). \(\sharp\)
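For a Brownian integrand of this kind, the defining isometry can be checked by simulation. The sketch below (our own illustration; the choice \(Z=W\) and all numerical parameters are assumptions) approximates \(\int ZdW\) by left-endpoint sums, mirroring the definition via \({\cal R}\)-simple processes, and compares \(E[(\int ZdW)^{2}]\) with \(\int Z^{2}d\mu_{W}=E[\int_{0}^{T}Z_{s}^{2}ds]\).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 5000, 500, 1.0
dt = T / n_steps

# Brownian increments and paths started from zero
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

# Integrand Z_s = W_s, evaluated at the left endpoint of each subinterval,
# as in the definition via R-simple (predictable) processes
Z = W[:, :-1]

# Stochastic integral approximated by the left-point Riemann sum
stoch_int = np.sum(Z * dW, axis=1)

lhs = np.mean(stoch_int ** 2)               # E[(int Z dW)^2]
rhs = np.mean(np.sum(Z ** 2, axis=1) * dt)  # E[int_0^T Z_s^2 ds] = int Z^2 d(mu_W)
print(lhs, rhs)  # both approximate T^2/2 = 0.5 for Z = W on [0, 1]
```

For \(Z=W\) on \([0,1]\), both sides approximate \(\int_{0}^{1}E[W_{s}^{2}]\,ds=1/2\); the agreement of the two Monte Carlo estimates reflects the isometry.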

We now assume that the following holds in place of Assumption A1.

  • Assumption A2. Suppose that \(M\) is a continuous \(L^{2}\)-martingale.

For the discussion of this case, we need the quadratic variation process associated with \(M\), which is defined below. However, for our purposes here, we only need the following proposition.

Proposition. Let \(M\) be a continuous \(L^{2}\)-martingale. Then the quadratic variation process \(\langle M\rangle\) of \(M\) is a continuous, adapted increasing process satisfying
\begin{equation}{\label {chueq338}}\tag{16}
\mu_{M}(A)=E\left [\int_{0}^{\infty}1_{A}(s,\omega )d\langle M\rangle_{s}(\omega )\right ],\end{equation}
for each \(A\in {\cal P}\). \(\sharp\)

Here, for each \(\omega\in\Omega\),
\[\int_{0}^{\infty}1_{A}(s,\omega )d\langle M\rangle_{s}(\omega )\]
is defined as a Lebesgue-Stieltjes integral. If \(W\) is a Brownian motion, then \(\langle W\rangle_{t}=t\) for all \(t\geq 0\). The right-hand side of (\ref{chueq338}) defines a measure on all sets \(A\in {\cal B}\times {\cal F}\). We shall denote this extension of \(\mu_{M}\) by \(\widehat{\mu}_{M}\). The augmentation of \({\cal P}\) and the associated extension of the stochastic integral are then defined in a manner analogous to that under Assumption A1. Specifically, let \(\widehat{{\cal N}}\) denote the collection of \(\widehat{\mu}_{M}\)-null sets in \({\cal B}\times {\cal F}\), and define \(\widehat{{\cal P}}={\cal P}\vee \widehat{{\cal N}}\). Then, as in the previous discussions, \(\int_{[0,t]}ZdM\) can be defined for all \(Z\in\Lambda (\widehat{{\cal P}},M)\). The following result indicates that progressively measurable processes are in the augmentation \(\widehat{{\cal P}}\) of \({\cal P}\).
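The identity \(\langle W\rangle_{t}=t\) can be illustrated numerically: the sums of squared increments of a Brownian path over refining partitions of \([0,T]\) concentrate around \(T\). A minimal sketch (the horizon \(T=2\) and the partition sizes are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2.0

# Sum of squared Brownian increments over finer and finer partitions of [0, T];
# this sum approximates the quadratic variation <W>_T
for n_steps in (10, 100, 10000):
    dW = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    qv = np.sum(dW ** 2)
    print(n_steps, qv)  # concentrates around <W>_T = T = 2.0 as n_steps grows
```

The fluctuation of the sum around \(T\) has variance \(2T^{2}/n\), so the approximation tightens as the mesh is refined, consistent with convergence in probability to \(\langle W\rangle_{T}=T\).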

Proposition. Any progressively measurable process is \(\widehat{{\cal P}}\)-measurable. \(\sharp\)

Summarizing the measurability under Assumption A2, we have
\[{\cal P}\subset {\cal O}\subset {\cal M}\subset\widehat{{\cal P}}\subset {\cal B}\times {\cal F}.\]
Now, \({\cal M}\subset {\cal V}\) (recall that \({\cal V}\) is generated by the \({\cal B}\times {\cal F}\)-measurable adapted processes), and when \(M\) is a Brownian motion, \({\cal V}\subset\widehat{{\cal P}}\). However, the latter relation does not hold in general for a continuous \(L^{2}\)-martingale \(M\).

Hsien-Chung Wu