The topics are
- Conditional Probabilities and Expectations
- Martingales
- Brownian Motion
- Doob-Meyer Decomposition
- Local Martingales
- Semimartingales
\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}
Conditional Probabilities and Expectations.
Let \((\Omega ,2^{\Omega},\mathbb{P})\) be a finite probability space, and \({\cal G}\) be a field generated by a partition of \(\Omega\), \(\{D_{1},\cdots ,D_{k}\}\), such that \(D_{i}\cap D_{j}=\emptyset\) for \(i\neq j\) and \(\bigcup_{i}D_{i}=\Omega\). Recall that the conditional probability of an event \(A\) given the event \(D\) is defined by
\[\mathbb{P}(A|D)=\frac{\mathbb{P}(A\cap D)}{\mathbb{P}(D)}.\]
Suppose that all the sets \(D_{i}\) in the partition have a positive probability. The conditional probability of the event \(A\) given the field \({\cal G}\) is the random variable that takes values \(\mathbb{P}(A|D_{i})\) on \(D_{i}\) for \(i=1,\cdots ,k\). Let \(I_{D}\) denote the indicator function of the set \(D\). The conditional probability can be written as
\[\mathbb{P}(A|{\cal G})(\omega )=\sum_{i=1}^{k}\mathbb{P}(A|D_{i})\cdot I_{D_{i}}(\omega ).\]
For example, if \({\cal G}=\{\emptyset ,\Omega\}\) is the trivial field, then
\[\mathbb{P}(A|{\cal G})=\mathbb{P}(A|\Omega )=\mathbb{P}(A).\]
Now let \(Y\) be a random variable that takes values \(y_{1},\cdots ,y_{k}\); then the sets \(D_{i}=\{\omega :Y(\omega )=y_{i}\}\), for \(i=1,\cdots ,k\), form a partition of \(\Omega\). If \({\cal F}_{Y}\) denotes the field generated by \(Y\), then the conditional probability given \({\cal F}_{Y}\) is denoted by
\[\mathbb{P}(A|{\cal F}_{Y})=\mathbb{P}(A|Y).\]
Let \(X\) take values \(x_{1},\cdots ,x_{p}\) and \(A_{1}=\{\omega :X(\omega )=x_{1}\},\cdots ,A_{p}=\{\omega :X(\omega )=x_{p}\}\). Let the field \({\cal G}\) be generated by a partition \(\{D_{1},\cdots ,D_{k}\}\) of \(\Omega\). Then the conditional expectation of \(X\) given \({\cal G}\) is defined by
\[\mathbb{E}[X|{\cal G}]=\sum_{i=1}^{p}x_{i}\cdot P(A_{i}|{\cal G}).\]
Note that \(\mathbb{E}[X|{\cal G}]\) is a linear combination of random variables, so that it is a random variable. It is clear that \(\mathbb{P}(A|{\cal G})=\mathbb{E}[I_{A}|{\cal G}]\) and \(\mathbb{E}[X|{\cal F}_{0}]=\mathbb{E}[X]\), where \({\cal F}_{0}=\{\emptyset ,\Omega\}\) is the trivial field.
If \(X\) and \(Y\) are random variables both taking a finite number of values, then \(\mathbb{E}[X|Y]\) is defined as \(\mathbb{E}[X|{\cal F}_{Y}]\). In other words, \(\mathbb{E}[X|Y]\) is a random variable, which takes values
\[\mathbb{E}[X|Y=y_{i}]=\sum_{j=1}^{p}x_{j}\cdot P(X=x_{j}|Y=y_{i})\]
for \(i=1,\cdots ,k\).
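On a finite probability space these definitions can be computed directly. The following Python sketch is illustrative (the two-coin space, the variables, and the helper \(\mbox{cond\_exp}\) are choices made here, not part of the text): \(\Omega\) is two fair coin tosses, \(X\) the number of heads, and \(Y\) the indicator of the first toss.

```python
import itertools
from fractions import Fraction

# Illustrative finite space: Omega = {HH, HT, TH, TT}, all equally likely,
# X = number of heads, Y = 1 if the first toss is heads, else 0.
omega = list(itertools.product("HT", repeat=2))
prob = {w: Fraction(1, 4) for w in omega}
X = {w: sum(1 for c in w if c == "H") for w in omega}
Y = {w: 1 if w[0] == "H" else 0 for w in omega}

def cond_exp(X, Y, prob):
    """E[X|Y](w) = E[X 1_{Y=y}] / P(Y=y) on the Y-atom containing w."""
    out = {}
    for y in set(Y.values()):
        atom = [w for w in prob if Y[w] == y]    # the atom {Y = y}
        p_atom = sum(prob[w] for w in atom)      # P(Y = y)
        e = sum(X[w] * prob[w] for w in atom) / p_atom
        for w in atom:
            out[w] = e
    return out

E_X_given_Y = cond_exp(X, Y, prob)
# On {Y=1}: 1 + 1/2 = 3/2 heads on average; on {Y=0}: 1/2.
assert E_X_given_Y[("H", "T")] == Fraction(3, 2)
assert E_X_given_Y[("T", "H")] == Fraction(1, 2)
# Averaging the conditional expectation recovers E[X] = 1.
assert sum(E_X_given_Y[w] * prob[w] for w in omega) == 1
```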
Suppose that \((\Omega ,{\cal F},P)\) is a probability space and that \(X\) and \(Z\) are random variables, \(X\) taking the distinct values \(x_{1},x_{2},\cdots ,x_{m}\) and \(Z\) taking the distinct values \(z_{1},z_{2},\cdots ,z_{n}\). Elementary conditional probability says
\[P\{X=x_{i}|Z=z_{j}\}\equiv\frac{P\{X=x_{i},Z=z_{j}\}}{P\{Z=z_{j}\}}\]
and elementary conditional expectation says that
\[\mathbb{E}[X|Z=z_{j}]=\sum x_{i}\cdot P\{X=x_{i}|Z=z_{j}\}.\]
The random variable \(Y=\mathbb{E}[X|Z]\), the conditional expectation of \(X\) given \(Z\), is defined as follows
\begin{equation}{\label{wila}}\tag{1}
\mbox{if \(Z(\omega )=z_{j}\), then \(Y(\omega )\equiv \mathbb{E}[X|Z=z_{j}]\equiv y_{j}\)}.
\end{equation}
It proves to be very advantageous to look at this idea in a new way. “Reporting to us the value of \(Z(\omega )\)” amounts to partitioning \(\Omega\) into “$Z$-atoms” on which \(Z\) is constant. The \(\sigma\)-field \({\cal G}=\sigma (Z)\) generated by \(Z\) consists of the sets \(\{Z\in B\}\) for \(B\in {\cal B}\), and therefore consists precisely of the \(2^{n}\) possible unions of the \(n\) \(Z\)-atoms. It is clear from (\ref{wila}) that \(Y\) is constant on \(Z\)-atoms, or, to put it better,
\begin{equation}{\label{wilb}}\tag{2}
\mbox{$Y$ is \({\cal G}\)-measurable.}
\end{equation}
Next, since \(Y\) takes the constant value \(y_{j}\) on the \(Z\)-atom \(\{Z=z_{j}\}\), we have
\begin{align*}
\int_{\{Z=z_{j}\}} YdP & = y_{j}\cdot P\{Z=z_{j}\}\\
& =\sum_{i} x_{i}\cdot P\{X=x_{i}|Z=z_{j}\}\cdot P\{Z=z_{j}\}\\
& =\sum_{i} x_{i}\cdot P\{X=x_{i},Z=z_{j}\}\\
& =\int_{\{Z=z_{j}\}}XdP.
\end{align*}
If we write \(G_{j}=\{Z=z_{j}\}\), this says \(\mathbb{E}[Y\cdot 1_{G_{j}}]=\mathbb{E}[X\cdot 1_{G_{j}}]\). Since for every \(G\in {\cal G}\), \(1_{G}\) is the sum of \(1_{G_{j}}\)’s, we have \(\mathbb{E}[Y\cdot 1_{G}]=\mathbb{E}[X\cdot 1_{G}]\), or
\begin{equation}{\label{wilc}}\tag{3}
\int_{G}YdP=\int_{G}XdP\mbox{ for all }G\in {\cal G}.
\end{equation}
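The identity (\ref{wilc}) can be verified exactly on a small example. In the Python sketch below (a fair die roll with \(Z\) the parity of the roll; the setup is an illustrative choice), \(\int_{G}YdP=\int_{G}XdP\) is checked for every one of the \(2^{2}\) sets \(G\in\sigma (Z)\).

```python
import itertools
from fractions import Fraction

# Illustrative check of the partial-averaging property (3): a fair die roll,
# Z = parity of the roll, X = the roll itself, Y = E[X|Z] built atom by atom.
omega = [1, 2, 3, 4, 5, 6]
prob = {w: Fraction(1, 6) for w in omega}
Z = {w: w % 2 for w in omega}
X = {w: w for w in omega}

Y = {}
for z in (0, 1):
    atom = [w for w in omega if Z[w] == z]
    e = sum(X[w] * prob[w] for w in atom) / sum(prob[w] for w in atom)
    for w in atom:
        Y[w] = e
assert Y[1] == 3 and Y[2] == 4   # means over the odd and even atoms

# sigma(Z) consists of the 2^2 unions of the two Z-atoms; check (3) on each.
atoms = [[w for w in omega if Z[w] == z] for z in (0, 1)]
for r in range(3):
    for sel in itertools.combinations(range(2), r):
        G = [w for i in sel for w in atoms[i]]
        assert sum(Y[w] * prob[w] for w in G) == sum(X[w] * prob[w] for w in G)
print("int_G Y dP = int_G X dP for every G in sigma(Z)")
```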
Results (\ref{wilb}) and (\ref{wilc}) suggest the central definition of modern probability.
Proposition. Let \((\Omega ,{\cal F},P)\) be a probability space, and \(X\) a random variable with \(\mathbb{E}[|X|]<\infty\). Let \({\cal G}\) be a sub-$\sigma$-field of \({\cal F}\). Then there exists a random variable \(Y\) such that (i) \(Y\) is \({\cal G}\)-measurable, (ii) \(\mathbb{E}[|Y|]<\infty\), and (iii) for every set \(G\) in \({\cal G}\),
\[\int_{G}YdP=\int_{G}XdP.\]
Moreover if \(\bar{Y}\) is another random variable with these properties then \(\bar{Y}=Y\) a.s., that is \(\mathbb{P}\{{\bar Y}=Y\}=1\). \(\sharp\)
A random variable \(Y\) with the above properties (i)-(iii) is called a version of the conditional expectation \(\mathbb{E}[X|{\cal G}]\) of \(X\) given \({\cal G}\), and we write \(Y=\mathbb{E}[X|{\cal G}]\) a.s. Two versions agree a.s., and when one has become familiar with the concept, one identifies different versions and speaks of the conditional expectation \(\mathbb{E}[X|{\cal G}]\). But we should think about the “a.s.” throughout this course. We often write \(\mathbb{E}[X|Z]\) for \(\mathbb{E}[X|\sigma (Z)]\) and \(\mathbb{E}[X|Z_{1},Z_{2},\cdots ]\) for \(\mathbb{E}[X|\sigma (Z_{1},Z_{2},\cdots )]\).
The intuitive meaning of the conditional expectation can be described as follows. An experiment has been performed. The only information available regarding which sample point \(\omega\) has been chosen is the set of values \(Z(\omega )\) for every \({\cal G}\)-measurable random variable \(Z\). Then \(Y(\omega )=\mathbb{E}[X|{\cal G}](\omega )\) is the expected value given this information. The “a.s.” ambiguity in the definition is something one has to live with in general, but it is sometimes possible to choose a canonical version of \(\mathbb{E}[X|{\cal G}]\). Note that if \({\cal G}\) is the trivial \(\sigma\)-field \(\{\emptyset ,\Omega\}\) (which contains no information), then \(\mathbb{E}[X|{\cal G}](\omega )=\mathbb{E}[X]\) for all \(\omega\).
Proposition. Let \(X\) satisfy \(\mathbb{E}[|X|]<\infty\), and let \({\cal G}\) and \({\cal H}\) denote sub-$\sigma$-fields of \({\cal F}\). Then, we have the following properties.
(i) If \(Y\) is any version of \(\mathbb{E}[X|{\cal G}]\), then \(\mathbb{E}[Y]=\mathbb{E}[X]\); that is, \(\mathbb{E}[\mathbb{E}[X|{\cal G}]]=\mathbb{E}[X]\).
(ii) If \(X\) is \({\cal G}\)-measurable, then \(\mathbb{E}[X|{\cal G}]=X\) a.s.
(iii) \(\mathbb{E}[a_{1}X_{1}+a_{2}X_{2}|{\cal G}]=a_{1}\mathbb{E}[X_{1}|{\cal G}]+a_{2}\mathbb{E}[X_{2}|{\cal G}]\) a.s. That is to say, if \(Y_{1}\) is a version of \(\mathbb{E}[X_{1}|{\cal G}]\) and \(Y_{2}\) is a version of \(\mathbb{E}[X_{2}|{\cal G}]\), then \(a_{1}Y_{1}+a_{2}Y_{2}\) is a version of \(\mathbb{E}[a_{1}X_{1}+a_{2}X_{2}|{\cal G}]\).
(iv) If \(X\geq 0\), then \(\mathbb{E}[X|{\cal G}]\geq 0\) a.s.
(v) If \(0\leq X_{n}\uparrow X\), then \(\mathbb{E}[X_{n}|{\cal G}]\uparrow\mathbb{E}[X|{\cal G}]\) a.s.
(vi) If \(X_{n}\geq 0\), then
\[\mathbb{E}\left [\left .\liminf_{n\rightarrow\infty}X_{n}\right |{\cal G}\right ]\leq\liminf_{n\rightarrow\infty} \mathbb{E}[X_{n}|{\cal G}]\mbox{ a.s.}\]
(vii) If \(|X_{n}(\omega )|\leq V(\omega )\) for all \(n\), \(\mathbb{E}[V]<\infty\), and \(X_{n}\rightarrow X\) a.s., then \(\mathbb{E}[X_{n}|{\cal G}]\rightarrow\mathbb{E}[X|{\cal G}]\) a.s.
(viii) If \(c:\mathbb{R}\rightarrow \mathbb{R}\) is convex, and \(\mathbb{E}[|c(X)|]< \infty\), then \(\mathbb{E}[c(X)|{\cal G}]\geq c\left (\mathbb{E}[X|{\cal G}]\right )\) a.s. We also have \(\parallel \mathbb{E}[X|{\cal G}]\parallel_{p}\leq\parallel X\parallel_{p}\) for \(p\geq 1\).
(ix) If \({\cal H}\) is a sub-$\sigma$-field of \({\cal G}\), then \(\mathbb{E}[\mathbb{E}[X|{\cal G}]|{\cal H}]=\mathbb{E}[X|{\cal H}]\) a.s.
(x) If \(Z\) is \({\cal G}\)-measurable and bounded, then \(\mathbb{E}[Z\cdot X|{\cal G}]=Z\cdot \mathbb{E}[X|{\cal G}]\) a.s. If \(p>1\), \(1/p+1/q=1\), \(X\in {\cal L}^{p}(\Omega ,{\cal F},P)\) and
$Z\in {\cal L}^{q}(\Omega ,{\cal F},P)$, then the equality \(\mathbb{E}[Z\cdot X|{\cal G}]=Z\cdot \mathbb{E}[X|{\cal G}]\) a.s. again holds.
(xi) If \({\cal H}\) is independent of \(\sigma (\sigma (X),{\cal G})\), then \(\mathbb{E}[X|\sigma ({\cal G},{\cal H})]=\mathbb{E}[X|{\cal G}]\) a.s. In particular, if \(X\) is independent of \({\cal H}\), then \(\mathbb{E}[X|{\cal H}]=\mathbb{E}[X]\) a.s. \(\sharp\)
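Properties such as the tower rule (ix) can be verified exactly on a finite space. The Python sketch below is illustrative (three fair coin tosses, with \({\cal H}\) generated by the first toss and \({\cal G}\) by the first two, so \({\cal H}\subset{\cal G}\)); it checks \(\mathbb{E}[\mathbb{E}[X|{\cal G}]|{\cal H}]=\mathbb{E}[X|{\cal H}]\) with exact arithmetic.

```python
import itertools
from fractions import Fraction

# Illustrative space: three fair coin tosses, X = number of heads,
# H = sigma(first toss) is a sub-sigma-field of G = sigma(first two tosses).
omega = list(itertools.product("HT", repeat=3))
prob = {w: Fraction(1, 8) for w in omega}
X = {w: sum(1 for c in w if c == "H") for w in omega}

def cond_exp(X, key, prob):
    """Condition on the field generated by the partition {w : key(w) = k}."""
    out = {}
    for k in {key(w) for w in prob}:
        atom = [w for w in prob if key(w) == k]
        e = sum(X[w] * prob[w] for w in atom) / sum(prob[w] for w in atom)
        for w in atom:
            out[w] = e
    return out

EXG = cond_exp(X, lambda w: w[:2], prob)     # E[X | G]
lhs = cond_exp(EXG, lambda w: w[:1], prob)   # E[ E[X|G] | H ]
rhs = cond_exp(X, lambda w: w[:1], prob)     # E[X | H]
assert lhs == rhs                            # the tower property, exactly
assert rhs[("H", "H", "H")] == 2             # 1 head + half of the other two
```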
\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}
Martingales.
Discrete-Time Martingales.
Definition. A process \(X=\{X_{n}\}_{n\in {\bf N}}\) is called a martingale relative to \((\{{\cal F}_{n}\}_{n\in {\bf N}},P)\) when
(a) \(X\) is adapted to \(\{{\cal F}_{n}\}_{n\in {\bf N}}\);
(b) \(\mathbb{E}[|X_{n}|]<\infty\) for all \(n\);
(c) \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]=X_{n-1}\) \(\mathbb{P}\)-a.s. for all \(n\).
$X$ is a supermartingale if (c) is replaced by \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]\leq X_{n-1}\) \(\mathbb{P}\)-a.s. for all \(n\geq 1\), and \(X\) is a submartingale if (c) is replaced by \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]\geq X_{n-1}\) \(\mathbb{P}\)-a.s. for all \(n\). \(\sharp\)
$X$ is a submartingale (resp. supermartingale) if and only if \(-X\) is a supermartingale (resp. submartingale). \(X\) is a martingale if and only if it is both a supermartingale and submartingale. \(\{X_{n}\}_{n\in {\bf N}}\) is a martingale if and only if \(\{X_{n}-X_{0}\}_{n\in {\bf N}}\) is a martingale. So we may without loss of generality take \(X_{0}=0\) when convenient. If \(X\) is a martingale, then for \(m<n\) using the iterated conditional expectation and the martingale property repeatedly
\[\mathbb{E}[X_{n}|{\cal F}_{m}]=\mathbb{E}[\mathbb{E}[X_{n}|{\cal F}_{n-1}]|{\cal F}_{m}]=
\mathbb{E}[X_{n-1}|{\cal F}_{m}]=\cdots =\mathbb{E}[X_{m}|{\cal F}_{m}]=X_{m},\]
and similarly for supermartingales and submartingales. We call a process \(X=\{X_{n}\}_{n\in {\bf N}}\) predictable (or previsible) when $X_{n}$ is \({\cal F}_{n-1}\)-measurable for all \(n\geq 1\).
Proposition. Let \(\{X_{n}\}_{n\in {\bf N}}\) be an adapted process with each \(X_{n}\in L^{1}\). Then \(X_{n}\) has an \((\)essentially unique$)$ Doob decomposition
\[X_{n}=X_{0}+M_{n}+A_{n}\mbox{ for all }n\]
with \(\{M_{n}\}_{n\in {\bf N}}\) a martingale null at zero, \(\{A_{n}\}_{n\in {\bf N}}\) a predictable process null at zero. If \(\{X_{n}\}_{n\in {\bf N}}\) is also a submartingale, then \(\{A_{n}\}_{n\in {\bf N}}\) is increasing, i.e., \(A_{n}\leq A_{n+1}\) a.s. for all \(n\). \(\sharp\)
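For a concrete instance of the decomposition, take \(X_{n}=S_{n}^{2}\) with \(S_{n}\) a simple symmetric random walk (an illustrative choice, not from the text): the compensator increments \(\mathbb{E}[X_{k}-X_{k-1}|{\cal F}_{k-1}]\) all equal \(1\), so \(A_{n}=n\) and \(M_{n}=S_{n}^{2}-n\) is the martingale part. A Python sketch verifying this along every path:

```python
import itertools
from fractions import Fraction

# Illustrative Doob decomposition of X_n = S_n^2, S_n a simple symmetric
# random walk.  The compensator increment given the path up to k-1 is
#   ((S_{k-1}+1)^2 + (S_{k-1}-1)^2)/2 - S_{k-1}^2 = 1,
# so A_n = n (predictable, increasing) and M_n = S_n^2 - n is a martingale.
N = 5
for eps in itertools.product([-1, 1], repeat=N):
    S = [0]
    for e in eps:
        S.append(S[-1] + e)
    A = [Fraction(0)]
    for k in range(1, N + 1):
        incr = Fraction((S[k-1] + 1)**2 + (S[k-1] - 1)**2, 2) - S[k-1]**2
        A.append(A[-1] + incr)
    assert all(A[n] == n for n in range(N + 1))
print("A_n = n, hence M_n = S_n^2 - n is a martingale null at zero")
```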
Now think of a gambling game in discrete time. There is no play at time \(0\); there are plays at times \(n=1,2,\cdots ,\) and \(\Delta X_{n}=X_{n}-X_{n-1}\) represents the net winnings per unit stake at play \(n\). Thus if \(\{X_{n}\}_{n\in {\bf N}}\) is a martingale, the game is “fair on average”. Think of \(C_{n}\) as the stake on play \(n\). Predictability says that we have to decide how much to stake on play \(n\) based on the history before time \(n\) (i.e. up to and including play \(n-1\)). The winnings on game \(n\) are \(C_{n}\cdot\Delta X_{n}=C_{n}\cdot (X_{n}-X_{n-1})\). The total winnings up to time \(n\) are
\[Y_{n}=\sum_{k=1}^{n}C_{k}\cdot\Delta X_{k}=\sum_{k=1}^{n}C_{k}\cdot (X_{k}-X_{k-1}).\]
We write \(Y=C\bullet X\), \(Y_{n}=(C\bullet X)_{n}\), and \(\Delta Y_{n}=C_{n}\cdot\Delta X_{n}\). We then call \(C\bullet X\) the martingale transform of \(X\) by \(C\).
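The “fair game” interpretation can be tested by exact enumeration. In the sketch below the stake \(C_{k}\) (bet \(1\) when the walk is below \(0\), else \(2\), an arbitrary illustrative rule) is predictable, since it uses only the history before play \(k\), and the transform has mean zero.

```python
import itertools
from fractions import Fraction

# Sketch: a martingale transform has mean zero.  X is a simple symmetric
# random walk; the illustrative stake C_k = 1 if X_{k-1} < 0 else 2 is
# predictable: it is decided before step k is revealed.
N = 6
total = Fraction(0)
for eps in itertools.product([-1, 1], repeat=N):
    X = [0]
    for e in eps:
        X.append(X[-1] + e)
    Y_n = Fraction(0)                   # (C . X)_N along this path
    for k in range(1, N + 1):
        C_k = 1 if X[k-1] < 0 else 2    # depends only on the past
        Y_n += C_k * (X[k] - X[k-1])
    total += Y_n * Fraction(1, 2**N)    # each path has probability 2^-N
assert total == 0                       # E[(C . X)_N] = 0: the game stays fair
```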
\begin{equation}{\label{binp18}}\tag{4}\mbox{}\end{equation}
Proposition \ref{binp18}. We have the following properties.
(i) If \(C\) is a bounded nonnegative predictable process and \(X\) is a supermartingale, \(C\bullet X\) is a supermartingale null at zero.
(ii) If \(C\) is bounded and predictable and \(X\) is a martingale, then \(C\bullet X\) is a martingale null at zero. \(\sharp\)
Proposition. An adapted sequence of real integrable random variables \(\{M_{n}\}_{n\in {\bf N}}\) is a martingale if and only if for any bounded predictable sequence \(\{H_{n}\}_{n\in {\bf N}}\)
\[\mathbb{E}\left [\sum_{k=1}^{n}H_{k}\cdot\Delta M_{k}\right ]=0\mbox{ for }n=1,2,\cdots . \sharp\]
Theorem. (Doob’s Optional Stopping Theorem). Let \(T\) be a stopping time, \(\{X_{n}\}_{n\in {\bf N}}\) be a supermartingale, and assume that one of the following holds:
(a) \(T\) is bounded \((T(\omega )\leq K\) for some constant \(K\) and all \(\omega\in\Omega )\);
(b) \(\{X_{n}\}_{n\in {\bf N}}\) is bounded \((|X_{n}(\omega )|\leq K\) for some \(K\) and all \(n,\omega )\);
(c) \(\mathbb{E}[T]<\infty\) and \((X_{n}-X_{n-1})\) is bounded.
Then \(X_{T}\) is integrable and \(\mathbb{E}[X_{T}]\leq \mathbb{E}[X_{0}]\). If \(\{X_{n}\}_{n\in {\bf N}}\) is a martingale, then \(\mathbb{E}[X_{T}]=\mathbb{E}[X_{0}]\). \(\sharp\)
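Case (a) can be checked exactly on a toy example (the walk, the barrier, and the horizon \(K\) below are illustrative choices): a simple symmetric random walk stopped when it first hits \(\pm 2\), or at time \(K\) at the latest, so that \(T\) is bounded.

```python
import itertools
from fractions import Fraction

# Sketch of case (a), T bounded: X is a simple symmetric random walk and
# T = min(first time |X| = 2, K).  Enumerating all 2^K equally likely paths
# gives E[X_T] = E[X_0] = 0 exactly.
K = 7
EXT = Fraction(0)
for eps in itertools.product([-1, 1], repeat=K):
    X = [0]
    for e in eps:
        X.append(X[-1] + e)
    T = next((n for n in range(K + 1) if abs(X[n]) == 2), K)
    EXT += X[T] * Fraction(1, 2**K)
assert EXT == 0   # optional stopping for the bounded stopping time T
```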
We write \(X_{n}^{T}\equiv X_{n\wedge T}\) for the sequence \(\{X_{n}\}_{n\in {\bf N}}\) stopped at time \(T\).
Proposition. We have the following properties.
(i) If \(\{X_{n}\}_{n\in {\bf N}}\) is adapted and \(T\) is a stopping time, the stopped sequence \(\{X_{n\wedge T}\}_{n\in {\bf N}}\) is adapted.
(ii) If \(\{X_{n}\}_{n\in {\bf N}}\) is a martingale (resp. supermartingale) and \(T\) is a stopping time, \(\{X_{n\wedge T}\}_{n\in {\bf N}}\) is a martingale (resp. supermartingale). \(\sharp\)
Continuous-Time Martingales.
Recall that we will always assume as given a filtered, complete probability space \((\Omega ,{\cal F},\{{\cal F}_{t}\}_{t\geq 0},P)\), where the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) is assumed to be right-continuous.
Definition. A stochastic process \(\{M_{t}\}_{t\geq 0}\) is a martingale with respect to the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) when
(a) \(\{M_{t}\}_{t\geq 0}\) is adapted and \(\mathbb{E}[|M_{t}|]<\infty\) for all \(t\geq 0\);
(b) \(\mathbb{E}[M_{t}|{\cal F}_{s}]=M_{s}\) \(\mathbb{P}\)-a.s. for \(0\leq s\leq t\),
and similarly for submartingales and supermartingales when the condition (b) is replaced by \(\mathbb{E}[M_{t}|{\cal F}_{s}]\geq M_{s}\) and \(\mathbb{E}[M_{t}|{\cal F}_{s}]\leq M_{s}\) \(\mathbb{P}\)-a.s. for \(0\leq s\leq t\), respectively. \(\sharp\)
Note that the mean of a supermartingale is nonincreasing in \(t\), the mean of a submartingale is nondecreasing in \(t\), and the mean of a martingale is constant in \(t\) (if \(M\) is a martingale, taking expectations in the equality \(\mathbb{E}[M_{t}|{\cal F}_{s}]=M_{s}\) gives \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{s}]\)).
\begin{equation}{\label{klet73}}\tag{5}\mbox{}\end{equation}
Proposition \ref{klet73}. A supermartingale \(\{M_{s}\}_{0\leq s\leq t}\) is a martingale if and only if \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]\).
Proof. If \(M\) is a martingale, then \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]\) follows by the martingale property. Conversely, suppose \(M\) is a supermartingale and \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]\). If for some \(t_{1}<t_{2}\) we have a strict inequality, \(\mathbb{E}[M_{t_{2}}|{\cal F}_{t_{1}}]<M_{t_{1}}\) on a set of positive probability, then by taking expectations, we obtain \(\mathbb{E}[M_{t_{2}}]<\mathbb{E}[M_{t_{1}}]\). Since the expectation of a supermartingale is nonincreasing, \(\mathbb{E}[M_{t}]\leq \mathbb{E}[M_{t_{2}}]<\mathbb{E}[M_{t_{1}}]\leq \mathbb{E}[M_{0}]\). But this contradicts the assumption \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]\). Thus for all \(t_{1}<t_{2}\) the inequality \(\mathbb{E}[M_{t_{2}}|{\cal F}_{t_{1}}]\leq M_{t_{1}}\) must be an equality a.s. Thus \(M\) is a martingale. \(\blacksquare\)
Consider a martingale \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) and an integrable \({\cal F}_{\infty}\)-measurable random variable \(X_{\infty}\); we recall that \({\cal F}_{\infty}=\sigma (\bigcup_{t\geq 0}{\cal F}_{t})\). If we have, for every \(0\leq t<\infty\),
\[\mathbb{E}[X_{\infty}|{\cal F}_{t}]=X_{t}\mbox{ \(\mathbb{P}\)-a.s.},\]
then we say that \(\{(X_{t},{\cal F}_{t})\}_{0\leq t\leq\infty}\) is a martingale with last element \(X_{\infty}\). We adopt a similar convention in the supermartingale and submartingale cases (also refer to the following Proposition \ref{chut115}).
Proposition (Submartingale Convergence). Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a right-continuous submartingale and assume \(\sup_{t\geq 0}\mathbb{E}[X_{t}^{+}]<\infty\). Then \(X_{\infty}(\omega )=\lim_{t\rightarrow\infty} X_{t}(\omega )\) exists for a.e. \(\omega\in\Omega\), and \(\mathbb{E}[|X_{\infty}|]<\infty\). \(\sharp\)
In other words, a martingale is an adapted family of integrable random variables satisfying
\[\int_{A}X_{s}dP=\int_{A}X_{t}dP\]
for every pair \(s,t\) with \(s<t\) and \(A\in {\cal F}_{s}\). Note that martingales are only defined on \([0,\infty )\); that is, for finite \(t\) and not \(t=\infty\). It is often possible to extend the definition to \(t=\infty\). Obviously, the set of martingales with respect to a given filtration is a vector space.
Proposition. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a martingale \((\)resp. submartingale$)$, and \(\phi :\mathbb{R}\rightarrow \mathbb{R}\) be a convex (resp. convex nondecreasing) function such that \(\mathbb{E}[|\phi (X_{t})|]< \infty\) holds for every \(t\geq 0\). Then \(\{(\phi (X_{t}),{\cal F}_{t})\}_{t\geq 0}\) is a submartingale. \(\sharp\)
For \(p\in [1,\infty )\), \(M\) is called an \(L^{p}\)-martingale if and only if it is a martingale and \(M_{t}\in L^{p}\) for each \(t\). If \(\sup_{t\geq 0} \mathbb{E}[|M_{t}|^{p}]<\infty\), we say \(M\) is \(L^{p}\)-bounded.
Example. Let \(\{W_{t}\}_{t\geq 0}\) be a Brownian motion with \(W_{0}\in L^{p}\) for some \(p\in [1,\infty )\) and let \(\{{\cal F}_{t}\}_{t\geq 0}\) be the standard filtration associated with \(W\). Then \(\{W_{t}\}_{t\geq 0}\) is a continuous \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). Moreover, if \(p\geq 2\), then \(\{W_{t}^{2}-t\}_{t\geq 0}\) is a continuous \(L^{p/2}\)-martingale. \(\sharp\)
Proposition. Let \(p\in [1,\infty )\) and \(M\) be a right-continuous \(L^{p}\)-martingale. Then for each \(t\) and \(c\geq 0\),
\[c^{p}\cdot P\left\{\sup_{0\leq s\leq t}|M_{s}|\geq c\right\}\leq\int_{\{\sup_{0\leq s\leq t}|M_{s}|\geq c\}}|M_{t}|^{p}dP.\]
If \(p>1\), then for each \(t\), \(\sup_{0\leq s\leq t}|M_{s}|\in L^{p}\) and
\begin{equation}{\label{chueq114}}\tag{6}
\left |\!\left |\sup_{0\leq s\leq t}|M_{s}|\right |\!\right |_{p}\leq q\cdot\parallel M_{t}\parallel_{p}
\end{equation}
where \(1/p+1/q=1\). \(\sharp\)
Inequality (\ref{chueq114}) will be called Doob’s inequality.
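Doob’s inequality can be checked exactly on a toy example. The sketch below (a simple symmetric random walk over all \(2^{N}\) paths, with \(p=q=2\); the walk and the horizon \(N\) are illustrative choices) verifies \(\mathbb{E}\left [(\max_{0\leq k\leq N}|M_{k}|)^{2}\right ]\leq 4\,\mathbb{E}[M_{N}^{2}]\), which is the square of (\ref{chueq114}) for this discrete martingale.

```python
import itertools
from fractions import Fraction

# Sketch: Doob's L^2 inequality for the simple symmetric random walk M,
# computed exactly over all 2^N equally likely paths:
#     E[ (max_{0<=k<=N} |M_k|)^2 ] <= 4 * E[ M_N^2 ].
N = 8
lhs = Fraction(0)   # E[(sup |M|)^2]
rhs = Fraction(0)   # E[M_N^2]
p = Fraction(1, 2**N)
for eps in itertools.product([-1, 1], repeat=N):
    M = [0]
    for e in eps:
        M.append(M[-1] + e)
    lhs += max(abs(m) for m in M)**2 * p
    rhs += M[-1]**2 * p
assert rhs == N          # E[M_N^2] = N for the simple random walk
assert lhs <= 4 * rhs    # Doob's inequality with p = q = 2
```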
Definition. A martingale \(X\) is said to be {\bf closed} by a random variable \(Y\) if \(\mathbb{E}[|Y|]<\infty\) and \(X_{t}=\mathbb{E}[Y|{\cal F}_{t}]\) for \(0\leq t<\infty\). \(\sharp\)
A random variable \(Y\) closing a martingale is not necessarily unique.
\begin{equation}{\label{prot19}}\tag{7}\mbox{}\end{equation}
Proposition \ref{prot19}. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a supermartingale, and assume the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) satisfies the usual conditions. Then the function \(t\mapsto \mathbb{E}[X_{t}]\) is right-continuous if and only if there exists a modification \(Y\) of \(X\) which is RCLL. Such a modification is unique. \(\sharp\)
By uniqueness we mean up to indistinguishability. Note that the process \(Y\) is also a supermartingale. If \(X\) is a martingale then \(t\mapsto \mathbb{E}[X_{t}]\) is constant, and hence it has a right-continuous modification.
Proposition. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a submartingale, and assume the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) satisfies the usual conditions. Then the process \(X\) has a right-continuous modification if and only if the function \(t\mapsto \mathbb{E}[X_{t}]\) from \([0,\infty )\) to \(\mathbb{R}\) is right-continuous. If this right-continuous modification exists, it can be chosen so as to be RCLL and adapted to \(\{{\cal F}_{t}\}_{t\geq 0}\), hence a submartingale with respect to \(\{{\cal F}_{t}\}_{t\geq 0}\). \(\sharp\)
\begin{equation}{\label{proc11}}\tag{8}\mbox{}\end{equation}
Corollary \ref{proc11}. If \(X\) is a martingale then there exists a unique modification \(Y\) of \(X\) which is RCLL. \(\sharp\)
Since all martingales have right-continuous modifications, we will always assume that we are taking the right-continuous version without any special mention. Note that it follows from Corollary \ref{proc11} that a right-continuous martingale is RCLL. A random variable \(X\) is called integrable if \(\mathbb{E}[|X|]<\infty\). It is easy to see that this holds if and only if
\begin{equation}{\label{kleeq72}}\tag{9}
\lim_{n\rightarrow\infty}\mathbb{E}\left [|X|\cdot 1_{\{|X|>n\}}\right ]=\lim_{n\rightarrow\infty}\int_{|X|>n}|X|dP=0.
\end{equation}
Indeed, if \(X\) is integrable then (\ref{kleeq72}) holds by dominated convergence, since \(\lim_{n\rightarrow\infty}|X|\cdot 1_{\{|X|>n\}}=0\) and \(|X|\cdot 1_{\{|X|>n\}}\leq |X|\). Conversely, if (\ref{kleeq72}) holds, take \(n\) large enough that \(\mathbb{E}\left [|X|\cdot 1_{\{|X|>n\}}\right ]\leq 1\). Clearly, \(\mathbb{E}\left [|X|\cdot 1_{\{|X|\leq n\}}\right ]\leq n\). Thus \(\mathbb{E}[|X|]=\mathbb{E}\left [|X|\cdot 1_{\{|X|>n\}}\right ]+\mathbb{E}\left [|X|\cdot 1_{\{|X|\leq n\}}\right ]<\infty\).
Definition. A family of random variables \(\{U_{\alpha}\}_{\alpha\in A}\) is uniformly integrable when
\[\lim_{n\rightarrow\infty}\sup_{\alpha\in A}\int_{\{|U_{\alpha}|\geq n\}}|U_{\alpha}|dP=0. \sharp\]
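The definition is best digested through the classical non-example (an illustrative choice, not from the text): on \(([0,1],\mbox{Lebesgue measure})\), let \(U_{m}=m\cdot 1_{[0,1/m]}\). Each \(\mathbb{E}[U_{m}]=1\), so the family is bounded in \(L^{1}\), yet the tail expectations do not vanish uniformly; the sketch below computes them exactly from the definition.

```python
from fractions import Fraction

# Classical non-example: on ([0,1], Lebesgue) let U_m = m on [0, 1/m] and
# 0 elsewhere.  E[U_m] = 1 for every m (L^1-bounded), but the family is NOT
# uniformly integrable: E[U_m ; U_m >= n] = m * P([0,1/m]) = 1 whenever
# m >= n, so the sup over the family never decays.
def tail(m, n):
    # E[ U_m * 1_{U_m >= n} ] = 1 if m >= n, else 0
    return Fraction(1) if m >= n else Fraction(0)

for n in (1, 10, 100):
    sup_tail = max(tail(m, n) for m in range(1, 1000))
    assert sup_tail == 1   # does not tend to 0 as n -> infinity
print("bounded in L^1, yet not uniformly integrable")
```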
\begin{equation}{\label{klet76}}\tag{10}\mbox{}\end{equation}
Proposition \ref{klet76}. If a process \(\{X_{t}\}_{t\geq 0}\) is dominated by an integrable random variable, that is, \(|X_{t}|\leq Y\) for all \(t\geq 0\) and \(\mathbb{E}[|Y|]<\infty\), then \(\{X_{t}\}_{t\geq 0}\) is uniformly integrable. In particular, if \(\mathbb{E}\left [\sup_{t\geq 0}|X_{t}|\right ]<\infty\), then \(\{X_{t}\}_{t\geq 0}\) is uniformly integrable.
Proof. \(\mathbb{E}\left [|X_{t}|\cdot 1_{\{|X_{t}|>n\}}\right ]\leq\mathbb{E}\left [|Y|\cdot 1_{\{|Y|>n\}}\right ]\rightarrow 0\) as \(n\rightarrow\infty\), uniformly in \(t\). \(\blacksquare\)
\begin{equation}{\label{prop*23}}\tag{11}\mbox{}\end{equation}
Proposition \ref{prop*23}. Let \(\{U_{\alpha}\}_{\alpha\in A}\) be a subset of \(L^{1}\). The following statements are equivalent
(a) \(\{U_{\alpha}\}_{\alpha\in A}\) is uniformly integrable.
(b) \(\sup_{\alpha\in A}\mathbb{E}[|U_{\alpha}|]<\infty\), and for every \(\epsilon >0\) there exists \(\delta >0\) such that \(\Gamma\in {\cal F}\), \(\mathbb{P}(\Gamma )\leq\delta\), imply \(\mathbb{E}[|U_{\alpha}|\cdot I_{\Gamma}]<\epsilon\) for all \(\alpha\in A\).
(c) There exists a positive, increasing, convex function \(G(x)\) defined on \([0,\infty )\) satisfying
\[\lim_{x\rightarrow\infty}\frac{G(x)}{x}=+\infty\mbox{ and }\sup_{\alpha\in A}\mathbb{E}[G(|U_{\alpha}|)]<\infty . \sharp\]
Note that the assumption that \(G\) is convex is not needed for the implications (c)$\Rightarrow$(b) and (c)$\Rightarrow$(a). In practice, statement (c) in Proposition \ref{prop*23} is used with \(G(x)=x^{r}\) for \(r>1\), and uniform integrability is checked by using moments. For second moments \((r=2)\), boundedness in \(L^{2}\) implies uniform integrability.
Corollary. If \(\{X_{t}\}_{t\geq 0}\) is bounded in \(L^{2}\), that is, \(\sup_{t\geq 0}\mathbb{E}[X^{2}_{t}]<\infty\), then \(\{X_{t}\}_{t\geq 0}\) is uniformly integrable. \(\sharp\)
Proposition. (Doob’s martingale). Let \(Y\) be an integrable random variable, that is, \(\mathbb{E}[|Y|]<\infty\), and define \(M_{t}=\mathbb{E}[Y|{\cal F}_{t}]\). Then \(\{M_{t}\}_{t\geq 0}\) is a uniformly integrable martingale. \(\sharp\)
\begin{equation}{\label{chut115}}\tag{12}\mbox{}\end{equation}
Proposition \ref{chut115}. (Martingale Convergence Theorem). Let \(p\in [1,\infty )\) and \(M\) be a right-continuous \(L^{p}\)-bounded martingale. Then there is a random variable \(M_{\infty}\in L^{p}\) satisfying \(\lim_{t\rightarrow\infty}M_{t}=M_{\infty}\) a.s. Furthermore if either \(p=1\) and \(\{M_{t}\}_{t\geq 0}\) is uniformly integrable, or \(p>1\), then \(M_{t}\rightarrow M_{\infty}\) in \(L^{p}\) as \(t\rightarrow\infty\), \(\{M_{t}\}_{0\leq t\leq\infty}\) is an \(L^{p}\)-martingale where \({\cal F}_{\infty}=\bigvee_{t\geq 0}{\cal F}_{t}\),
\[\mathbb{E}\left [|M_{\infty}|^{p}\right ]=\lim_{t\rightarrow\infty}\mathbb{E}\left [|M_{t}|^{p}\right ]=\sup_{t\geq 0}\mathbb{E}\left [|M_{t}|^{p}\right ],\]
and \((\)\ref{chueq114}$)$ holds with \(t=\infty\). \(\sharp\)
Proposition. (Martingale Convergence Theorem). Let \(X\) be a right-continuous supermartingale with \(\sup_{t\geq 0}\mathbb{E}[|X_{t}|]<\infty\). Then the random variable \(Y=\lim_{t\rightarrow\infty}X_{t}\) exists a.s., and \(\mathbb{E}[|Y|]<\infty\). Moreover, if \(X\) is a martingale closed by a random variable \(Z\), then \(Y\) also closes \(X\) and \(Y=\mathbb{E}[Z|\bigvee_{t\geq 0}{\cal F}_{t}]\), where \(\bigvee_{t\geq 0}{\cal F}_{t}\) denotes the smallest \(\sigma\)-field containing every \({\cal F}_{t}\). \(\sharp\)
Proposition. Let \(X\) be a right-continuous martingale which is uniformly integrable. Then \(Y=\lim_{t\rightarrow\infty}X_{t}\) exists a.s., \(\mathbb{E}[|Y|]<\infty\), and \(Y\) closes \(X\) as a martingale. \(\sharp\)
\begin{equation}{\label{prot113}}\tag{13}\mbox{}\end{equation}
Proposition \ref{prot113}. Let \(X\) be a \((\)right-continuous$)$ martingale. Then \(\{X_{t}\}_{t\geq 0}\) is uniformly integrable if and only if \(Y=\lim_{t\rightarrow\infty}X_{t}\) exists a.s., \(\mathbb{E}[|Y|]<\infty\), and \(\{X_{t}\}_{0\leq t\leq\infty}\) is a martingale, where \(X_{\infty}=Y\). \(\sharp\)
If \(X\) is a uniformly integrable martingale, then \(X_{t}\) converges to \(X_{\infty}=Y\) in \(L^{1}\) as well as almost surely.
Theorem (Doob’s Stopping Theorem). Let \(p\in [1,\infty )\) and \(M\) be a right-continuous \(L^{p}\)-bounded martingale. If \(p=1\), suppose \(M\) is also uniformly integrable. Let \(M_{\infty}\) be as in Proposition \ref{chut115}. Suppose \(\Gamma\subset\mathbb{R}_{+}\) and \(\{T_{t}\}_{t\in\Gamma}\) is an increasing family of stopping times. Then \(\{M_{T_{t}}\}_{t\in\Gamma}\) is an \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{T_{t}}\}_{t\in\Gamma}\) and \(\left\{|M_{T_{t}}|^{p}\right\}_{t\in\Gamma}\) is uniformly integrable. \(\sharp\)
Corollary. Let \(p\in [1,\infty )\) and \(M\) be a right-continuous \(L^{p}\)-martingale. Then, we have the following properties.
(i) If \(\Gamma\subset \mathbb{R}_{+}\) and \(\{T_{t}\}_{t\in\Gamma}\) is an increasing family of stopping times satisfying \(\sup_{t\in\Gamma}T_{t}\leq k\) for some \(k\in \mathbb{R}_{+}\), then \(\{M_{T_{t}}\}_{t\in\Gamma}\) is an \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{T_{t}}\}_{t\in\Gamma}\) and \(\left\{|M_{T_{t}}|^{p}\right\}_{t\in\Gamma}\) is uniformly integrable.
(ii) If \(T\) is a stopping time, then \(\{M_{t\wedge T}\}_{t\geq 0}\) is an \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{t\wedge T}\}_{t\geq 0}\). Moreover, if \(T\) is bounded, then \(\left\{|M_{t\wedge T}|^{p}\right\}_{t\geq 0}\) is uniformly integrable. \(\sharp\)
\begin{equation}{\label{klet711}}\tag{14}\mbox{}\end{equation}
Proposition \ref{klet711}. If \(\{M_{t}\}_{t\geq 0}\) is a martingale and \(T\) is a stopping time, then the stopped process \(\{M_{T\wedge t}\}_{t\geq 0}\) is a martingale, in particular, \(\mathbb{E}\left [M_{T\wedge t}\right ]=\mathbb{E}[M_{0}]\). \(\sharp\)
We stress that in the above proposition, \(\{M_{T\wedge t}\}_{t\geq 0}\) is a martingale with respect to the original filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). Since it is adapted to \(\{{\cal F}_{T\wedge t}\}_{t\geq 0}\), it is also an \({\cal F}_{T\wedge t}\)-martingale. Under some additional assumptions on the martingale or on the stopping times, random stopping does not alter the expected value. The following result gives the most frequently used sufficient conditions for optional stopping to hold. It shows that if a martingale represents a betting strategy, then on average no loss or gain is made at the moment we stop playing, even if clever stopping rules are used, as long as the stopping times are bounded.
Proposition (Optional Stopping). Let \(\{M_{t}\}_{t\geq 0}\) be a martingale. Then, we have the following properties.
(i) If \(T\leq c<\infty\) is a bounded stopping time then \(\mathbb{E}\left [M_{T}\right ]=\mathbb{E}[M_{0}]\).
(ii) If \(M\) is uniformly integrable, then for any stopping time \(T\), \(\mathbb{E}\left [M_{T}\right ]=\mathbb{E}[M_{0}]\). \(\sharp\)
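The boundedness and uniform-integrability assumptions above cannot simply be dropped. A classical illustration (stated in discrete time for ease of enumeration; the walk and all names in the Python sketch are illustrative) is the simple symmetric random walk \(M\) with \(T\) the first hitting time of \(+1\): \(T<\infty\) a.s. and \(M_{T}=1\), so \(\mathbb{E}[M_{T}]=1\neq 0=\mathbb{E}[M_{0}]\), even though every stopped mean \(\mathbb{E}[M_{T\wedge n}]\) vanishes.

```python
import itertools
from fractions import Fraction

# Sketch: why a condition on T is needed.  For the simple symmetric random
# walk M and T = first hitting time of +1, T < infinity a.s. and M_T = 1,
# so E[M_T] = 1 != 0 = E[M_0].  Yet the stopped process still satisfies
# E[M_{T ^ n}] = 0 for every n; the deficit sits on the event {T > n}.
def stopped_mean_and_hit_prob(n):
    mean, hit = Fraction(0), Fraction(0)
    for eps in itertools.product([-1, 1], repeat=n):
        M = [0]
        for e in eps:
            M.append(M[-1] + e)
        try:
            T = M.index(1)              # first hit of +1, if it happens
            hit += Fraction(1, 2**n)
        except ValueError:
            T = n                       # truncate: T ^ n
        mean += M[T] * Fraction(1, 2**n)
    return mean, hit

for n in (4, 8, 12):
    mean, hit = stopped_mean_and_hit_prob(n)
    assert mean == 0                    # E[M_{T ^ n}] = E[M_0] = 0
print("stopped means all 0, while M_T = 1 on {T < infinity}")
```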
Proposition. Let \(\{M_{t}\}_{t\geq 0}\) be a martingale and let \(T\) be a stopping time satisfying \(\mathbb{P}\{T<\infty\}=1\). If \(\mathbb{E}\left [|M_{T}|\right ]<\infty\) and \(\lim_{t\rightarrow\infty}\mathbb{E}\left [M_{t}\cdot 1_{\{T>t\}}\right ]=0\), then \(\mathbb{E}\left [M_{T}\right ]=\mathbb{E}[M_{0}]\).
Proof. \(\{M_{T\wedge t}\}_{t\geq 0}\) is a martingale by Proposition \ref{klet711}, and it can be written as
\begin{equation}{\label{kleeq77}}\tag{15}
M_{T\wedge t}=M_{t}\cdot 1_{\{t<T\}}+M_{T}\cdot 1_{\{t\geq T\}}.
\end{equation}
Since \(\mathbb{E}\left [M_{T\wedge t}\right ]=\mathbb{E}[M_{0}]\), taking expectations in (\ref{kleeq77}), we have
\begin{equation}{\label{kleeq78}}\tag{16}
\mathbb{E}[M_{0}]=\mathbb{E}\left [M_{t}\cdot 1_{\{t<T\}}\right ]+\mathbb{E}\left [M_{T}\cdot 1_{\{t\geq T\}}\right ].
\end{equation}
Take limits in (\ref{kleeq78}). Since \(T\) is finite, \(\mathbb{E}\left [M_{T}\cdot 1_{\{t\geq T\}}\right ]\rightarrow \mathbb{E}\left [M_{T}\right ]\) by dominated convergence. It is assumed that \(\mathbb{E}\left [M_{t}\cdot 1_{\{t<T\}}\right ]\rightarrow 0\) as \(t\rightarrow\infty\), and the result follows. \(\blacksquare\)
The following result is in some sense the converse to the Optional Stopping Theorem.
Proposition. Let \(\{M_{t}\}_{t\geq 0}\) be such that for any bounded stopping time \(T\), \(M_{T}\) is integrable and \(\mathbb{E}\left [M_{T}\right ]=\mathbb{E}[M_{0}]\). Then \(\{M_{t}\}_{t\geq 0}\) is a martingale. \(\sharp\)
\begin{equation}{\label{prot116}}\tag{17}\mbox{}\end{equation}
Theorem \ref{prot116} (Doob’s Optional Sampling Theorem). Let \(X\) be a right-continuous martingale, which is closed by a random variable \(X_{\infty}\). Let \(S\) and \(T\) be two bounded stopping times such that \(S\leq T\) a.s. Then \(X_{S}\) and \(X_{T}\) are integrable and \(X_{S}=\mathbb{E}\left [\left .X_{T}\right |{\cal F}_{S}\right ]\) a.s. \(\sharp\)
We also have a similar version for supermartingales.
Theorem. Let \(X\) be a right-continuous supermartingale and let \(S\) and \(T\) be two bounded stopping times such that \(S\leq T\) a.s. Then \(X_{S}\) and \(X_{T}\) are integrable and \(X_{S}\geq \mathbb{E}\left [\left .X_{T}\right |{\cal F}_{S}\right ]\) a.s., with equality when \(X\) is a martingale. \(\sharp\)
Theorem (Optional Sampling). Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a right-continuous submartingale with a last element \(X_{\infty}\), and let \(S\leq T\) be two stopping times of the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). We have \(\mathbb{E}[X_{T}|{\cal F}_{S}]\geq X_{S}\) \(\mathbb{P}\)-a.s. In particular, \(\mathbb{E}[X_{T}]\geq \mathbb{E}[X_{0}]\), and for a martingale with a last element, we have \(\mathbb{E}[X_{T}]=\mathbb{E}[X_{0}]\). \(\sharp\)
If \(T\) is a stopping time, then so is \(t\wedge T=\min\{t,T\}\) for each \(t\geq 0\).
Definition. Let \(X\) be a stochastic process and let \(T\) be a stopping time. \(X^{T}\) is said to be the process stopped at \(T\) if \(X_{t}^{T}=X_{t\wedge T}\). \(\sharp\)
Note that if \(X\) is adapted and RCLL and if \(T\) is a stopping time, then
\[X_{t}^{T}=X_{t\wedge T}=X_{t}\cdot 1_{\{t<T\}}+X_{T}\cdot 1_{\{t\geq T\}}\]
is also adapted. By Proposition \ref{prot*1}, the family of \(\sigma\)-fields \(\{{\cal F}_{t\wedge T}\}_{t\geq 0}\) is a filtration and we have the following result.
Proposition. If \(X\) is progressively measurable, then \(X^{T}\) is progressively measurable with respect to the filtration \(\{{\cal F}_{t\wedge T}\}_{t\geq 0}\). \(\sharp\)
A martingale stopped at a stopping time is still a martingale, as the next result shows.
\begin{equation}{\label{prot118}}\tag{18}\mbox{}\end{equation}
Proposition \ref{prot118}. Let \(X\) be a uniformly integrable right-continuous martingale, and let \(T\) be a stopping time. Then \(X^{T}=\{X_{t\wedge T}\}_{0\leq t\leq\infty}\) is also a uniformly integrable right-continuous martingale.
Proof. \(X^{T}\) is clearly right-continuous. By Theorem \ref{prot116}
\begin{align*}
X_{t\wedge T} & =\mathbb{E}[X_{T}|{\cal F}_{t\wedge T}]\\
& =\mathbb{E}[X_{T}\cdot 1_{\{T<t\}}+X_{t}\cdot 1_{\{T\geq t\}}|{\cal F}_{t\wedge T}]\\
& =X_{T}\cdot 1_{\{T<t\}}+\mathbb{E}[X_{t}\cdot 1_{\{T\geq t\}}|{\cal F}_{t\wedge T}]
\end{align*}
However for \(H\in {\cal F}_{t}\) we have \(H\cdot 1_{\{T\geq t\}}\in {\cal F}_{T}\). Thus
\[X_{t\wedge T}=X_{T}\cdot 1_{\{T<t\}}+\mathbb{E}[X_{t}|{\cal F}_{t}]\cdot 1_{\{T\geq t\}}.\]
Therefore
\begin{equation}{\label{proeq1}}\tag{19}
X_{t\wedge T}=X_{T}\cdot 1_{\{T<t\}}+\mathbb{E}[X_{t}|{\cal F}_{t}]\cdot 1_{\{T\geq t\}}=\mathbb{E}[X_{T}|{\cal F}_{t}]
\end{equation}
since \(X_{T}\cdot 1_{\{T<t\}}\) is \({\cal F}_{t}\)-measurable. Thus \(X^{T}\) is a uniformly integrable \({\cal F}_{t}\)-martingale by Proposition \ref{prot113}. \(\blacksquare\)
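The identity \(\mathbb{E}[X_{t\wedge T}]=\mathbb{E}[X_{0}]\) for a stopped martingale can be checked numerically. The sketch below is a discrete-time analogue with an illustrative symmetric random walk and exit time (the barriers, sample sizes, and names are our choices, not from the text):

```python
import random

random.seed(0)

# Symmetric random walk M started at 0, stopped on first exit from (A, B).
# The walk is a martingale, so E[M_{n ∧ T}] should equal E[M_0] = 0 for every n.
A, B = -3, 5

def stopped_value(n_steps):
    m = 0
    for _ in range(n_steps):
        if m <= A or m >= B:          # already stopped: path frozen at M_T
            break
        m += random.choice((-1, 1))
    return m

trials = 20000
estimates = {n: sum(stopped_value(n) for _ in range(trials)) / trials
             for n in (5, 20, 80)}
for n, est in estimates.items():
    print(n, round(est, 3))           # all estimates should be near 0
```

Here \(T\) is the first exit time from \((-3,5)\); since \(\mathbb{P}\{T_{-3}<T_{5}\}=5/8\), the optional stopping identity \((-3)\cdot\frac{5}{8}+5\cdot\frac{3}{8}=0\) is what the Monte Carlo averages reproduce.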
Observe that the difficulty in Proposition \ref{prot118} is to show that \(X^{T}\) is a martingale for the filtration \(\{{\cal F}_{t}\}_{0\leq t\leq\infty}\). It is a trivial consequence of Theorem \ref{prot116} that \(X^{T}=X_{t\wedge T}\) is a martingale for the filtration \(\{{\cal G}_{t}\}_{0\leq t\leq\infty}\) given by \({\cal G}_{t}={\cal F}_{t\wedge T}\).
Corollary. Let \(Y\) be an integrable random variable and let \(S\) and \(T\) be stopping times. Then
\[\mathbb{E}[\mathbb{E}[Y|{\cal F}_{S}]|{\cal F}_{T}]=\mathbb{E}[\mathbb{E}[Y|{\cal F}_{T}]|{\cal F}_{S}]=\mathbb{E}[Y|{\cal F}_{S\wedge T}]=Y_{S\wedge T},\]
where \(Y_{S\wedge T}\) is defined in the proof.
Proof. Let \(Y_{t}=\mathbb{E}[Y|{\cal F}_{t}]\). Then \(Y^{T}\) is a uniformly integrable martingale. From (\ref{proeq1}), we have
\[Y_{S\wedge T}=Y_{S}^{T}=\mathbb{E}[Y_{T}|{\cal F}_{S}]=\mathbb{E}[\mathbb{E}[Y|{\cal F}_{T}]|{\cal F}_{S}] .\]
Interchanging the roles of \(T\) and \(S\) yields
\[Y_{S\wedge T}=Y_{T}^{S}=\mathbb{E}[Y_{S}|{\cal F}_{T}]=\mathbb{E}[\mathbb{E}[Y|{\cal F}_{S}]|{\cal F}_{T}] .\]
Finally, \(\mathbb{E}[Y|{\cal F}_{S\wedge T}]=Y_{S\wedge T}\). \(\blacksquare\)
Proposition (Jensen’s Inequality). Let \(\phi:\mathbb{R}\rightarrow \mathbb{R}\) be convex, and let \(X\) and \(\phi(X)\) be integrable random variables. For any \(\sigma\)-field \({\cal G}\), \(\phi\left (\mathbb{E}[X|{\cal G}]\right )\leq \mathbb{E}[\phi (X)|{\cal G}]\). \(\sharp\)
Corollary. Let \(X\) be a martingale, and let \(\phi\) be convex such that \(\phi (X_{t})\) is integrable for \(0\leq t<\infty\). Then \(\phi (X)\) is a submartingale. In particular, if \(M\) is a martingale, then \(|M|\) is a submartingale. \(\sharp\)
Corollary. Let \(X\) be a submartingale, and let \(\phi\) be convex, nondecreasing, and such that \(\{\phi (X_{t})\}_{t\geq 0}\) is integrable. Then \(\phi (X)\) is also a submartingale. \(\sharp\)
Bayes Formula.
\begin{equation}{\label{binch4p1}}\tag{20}\mbox{}\end{equation}
Proposition \ref{binch4p1} (Bayes Formula). Assume \(\bar{P}\) is absolutely continuous with respect to \(\mathbb{P}\) with Radon-Nikodym derivative \(Z\), and set \(Z_{t}=E_{P}[Z|{\cal F}_{t}]\). If \(Y\) is \(\bar{P}\)-integrable and \({\cal F}_{t}\)-measurable, then
\[E_{\bar{P}}[Y|{\cal F}_{s}]=\frac{1}{Z_{s}}\cdot E_{P}[Y\cdot Z_{t}|{\cal F}_{s}]\mbox{ a.s., }s\leq t. \sharp\]
Let \(\mathbb{P}\) and \(Q\) be two mutually equivalent probability measures defined on a common measurable space \((\Omega ,{\cal F})\). Suppose that the Radon-Nikodym derivative of \(Q\) with respect to \(\mathbb{P}\) equals
\begin{equation}{\label{museqa8}}\tag{21}
\frac{dQ}{dP}=\eta\mbox{ \(\mathbb{P}\)-a.s.}
\end{equation}
Note that the random variable \(\eta\) is strictly positive \(\mathbb{P}\)-a.s.; moreover, \(\eta\) is \(\mathbb{P}\)-integrable with \(E_{P}[\eta ]=1\). By virtue of (\ref{museqa8}), it is clear that the equality \(E_{Q}[\psi ]=E_{P}[\psi\eta ]\) holds for any \(Q\)-integrable random variable \(\psi\).
\begin{equation}{\label{musa04}}\tag{22}\mbox{}\end{equation}
Proposition \ref{musa04} (Bayes Formula). Let \({\cal G}\) be a sub-$\sigma$-field of the \(\sigma\)-field \({\cal F}\), and let \(\psi\) be a random variable integrable with respect to \(Q\). Then the following version of the Bayes formula holds:
\[\mathbb{E}_{Q}[\psi |{\cal G}]=\frac{\mathbb{E}_{P}[\psi\eta |{\cal G}]}{\mathbb{E}_{P}[\eta |{\cal G}]}. \sharp\]
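On a finite probability space the abstract Bayes formula reduces to weighted averages over the blocks of the partition generating \({\cal G}\), so it can be verified directly. A minimal sketch (the measure \(P\), density \(\eta\), random variable \(\psi\), and partition below are illustrative choices of ours):

```python
# Finite sample space {0,...,5}; G is generated by the partition {0,1,2} | {3,4,5}.
P    = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1]              # reference measure P
eta  = [0.5, 1.5, 1.0, 1.2, 0.8, 1.0]              # candidate density dQ/dP
s    = sum(p * e for p, e in zip(P, eta))
eta  = [e / s for e in eta]                        # normalise so E_P[eta] = 1
Q    = [p * e for p, e in zip(P, eta)]             # Q(w) = eta(w) * P(w)
psi  = [2.0, -1.0, 0.5, 3.0, 0.0, 1.0]

def cond_exp(measure, f, block):
    """E_measure[f | block] = sum_{w in block} f(w) measure(w) / measure(block)."""
    mass = sum(measure[w] for w in block)
    return sum(f[w] * measure[w] for w in block) / mass

checks = []
for block in ([0, 1, 2], [3, 4, 5]):
    lhs = cond_exp(Q, psi, block)                              # E_Q[psi | G] on block
    rhs = (cond_exp(P, [f * e for f, e in zip(psi, eta)], block)
           / cond_exp(P, eta, block))                          # Bayes formula
    checks.append((lhs, rhs))
    print(round(lhs, 6), round(rhs, 6))
```

On each block the two sides agree exactly, since both equal \(\sum_{\omega\in D}\psi(\omega)Q(\omega)/Q(D)\).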
\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}
Brownian Motion.
The Scottish botanist Robert Brown observed pollen particles in suspension under a microscope in 1828 and 1829, and noted that they were in constant irregular motion. In 1900 L. Bachelier considered Brownian motion as a possible model for stock market prices. In 1905 Albert Einstein considered Brownian motion as a model of particles in suspension, and used it to estimate Avogadro’s number ($N\sim 6\times 10^{23}$) based on the diffusion coefficient \(D\) in the Einstein relation \(Var(X_{t})=Dt\) for \(t>0\). In 1923 Norbert Wiener defined and constructed Brownian motion rigorously for the first time. The resulting process is often called the Wiener process in his honour, and its probability measure is called Wiener measure. Recall that we are assuming as given a filtered probability space \((\Omega ,{\cal F},\{{\cal F}_{t}\}_{t\geq 0},P)\) that satisfies the usual conditions.
Definition. A stochastic process \(\{W_{t}\}_{t\geq 0}\) is a standard (one-dimensional) Brownian motion on some probability space \((\Omega ,{\cal F},P)\) if
- \(W_{0}=0\) a.s.,
- \(W_{t+u}-W_{t}\) is independent of \(\sigma (W_{s},s\leq t)\) for \(u\geq 0\) (independent increments),
- the law of \(W_{t+u}-W_{t}\) depends only on \(u\) and is normally distributed with mean \(0\) and variance \(u\) (stationary increments),
- \(W_{t}\) is a continuous function of \(t\), i.e., \(t\mapsto W_{t}(\omega )\) is continuous in \(t\) for all \(\omega\in\Omega\). \(\sharp\)
We have \(\mathbb{E}[W_{t}]=0\), \(Var[W_{t}]=\mathbb{E}[W^{2}_{t}]=t\) and \(Var[W^{2}_{t}]=2t^{2}\). Now we consider the covariance function \(Cov(W_{s},W_{t})\). If \(t<s\) then \(W_{s}=W_{t}+W_{s}-W_{t}\), and
\begin{align*}
Cov(W_{s},W_{t}) & =\mathbb{E}[W_{s}W_{t}]-\mathbb{E}[W_{s}]\cdot \mathbb{E}[W_{t}]=\mathbb{E}[W_{s}W_{t}]\\
& =\mathbb{E}[W_{t}^{2}]+\mathbb{E}[W_{t}\cdot (W_{s}-W_{t})]=\mathbb{E}[W_{t}^{2}]=t,
\end{align*}
where we used the independent increments property. Similarly, if \(t>s\), \(\mathbb{E}[W_{t}W_{s}]=s\). Therefore we conclude
\[\mathbb{E}[W_{t}W_{s}]=\min\{t,s\}.\]
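The identity \(\mathbb{E}[W_{t}W_{s}]=\min\{t,s\}\) is easy to check by Monte Carlo, sampling the pair \((W_{s},W_{t})\) from the independent-increments description (the particular \(s\), \(t\), seed, and sample size are arbitrary choices of ours):

```python
import math
import random

random.seed(1)

s, t, trials = 0.5, 2.0, 40000
acc = 0.0
for _ in range(trials):
    ws = random.gauss(0.0, math.sqrt(s))            # W_s ~ N(0, s)
    wt = ws + random.gauss(0.0, math.sqrt(t - s))   # add independent N(0, t-s) increment
    acc += ws * wt
cov_est = acc / trials
print(round(cov_est, 3))                            # should be near min(s, t) = 0.5
```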
The quadratic variation of Brownian motion \(\langle W\rangle_{t}\) is defined as
\[\langle W\rangle_{t}=\lim_{n\rightarrow\infty}\sum_{i=1}^{N_{n}}\left (W_{t_{i}^{(n)}}-W_{t_{i-1}^{(n)}}\right )^{2},\]
where, for each \(n\), \(\pi_{n}=\{t_{i}^{(n)}\}_{i=0}^{N_{n}}\) with \(0=t_{0}^{(n)}<t_{1}^{(n)}<\cdots <t_{N_{n}}^{(n)}=t\) is a partition of \([0,t]\), and the limit is taken over all partitions with mesh \(\parallel\pi_{n}\parallel =\max_{i}\left\{t_{i}^{(n)}-t_{i-1}^{(n)}\right\} \rightarrow 0\) as \(n\rightarrow\infty\). Then
\[\sum_{i=1}^{N_{n}}\left (W_{t_{i}^{(n)}}-W_{t_{i-1}^{(n)}}\right )^{2}\rightarrow t\mbox{ in probability}.\]
We also have the following result.
Theorem. The quadratic variation of a Brownian motion over \([0,t]\) exists and equals \(t\).
Proof. We give the proof for a sequence of partitions for which \(\sum_{n}\parallel\pi_{n}\parallel <\infty\). An example is the sequence of dyadic partitions: the interval is divided in two, then each subinterval is divided in two, and so on. Let
\[V_{n}=\sum_{i=1}^{N_{n}}\left (W_{t_{i}^{(n)}}-W_{t_{i-1}^{(n)}}\right )^{2}.\]
It is easy to see that
\[\mathbb{E}[V_{n}]=\sum_{i=1}^{N_{n}}\mathbb{E}\left [\left (W_{t_{i}^{(n)}}-W_{t_{i-1}^{(n)}}\right )^{2}\right ]=\sum_{i=1}^{N_{n}}(t_{i}^{(n)}-t_{i-1}^{(n)})=t-0=t\]
and, by using the fourth moment of the normal distribution (\(Var[X^{2}]=2\sigma^{4}\) for \(X\sim N(0,\sigma^{2})\)), we obtain
\begin{align*}
\mbox{Var}[V_{n}] & =\sum_{i=1}^{N_{n}}Var\left [\left (W_{t_{i}^{(n)}}-W_{t_{i-1}^{(n)}}\right )^{2}\right ]\\
& =\sum_{i=1}^{N_{n}}2(t_{i}^{(n)}-t_{i-1}^{(n)})^{2}\leq 2\cdot\max_{i}\left\{t_{i}^{(n)}-t_{i-1}^{(n)}\right\}\cdot t=2t\cdot\parallel\pi_{n}\parallel .
\end{align*}
Therefore \(\sum_{n=1}^{\infty}Var[V_{n}]<\infty\). Using the monotone convergence theorem, we find \(\mathbb{E}\left [\sum_{n=1}^{\infty}(V_{n}-\mathbb{E}[V_{n}])^{2}\right ]<\infty\). This implies that the series inside the expectation converges almost surely, hence its terms converge to zero: \(V_{n}-\mathbb{E}[V_{n}]\rightarrow 0\) a.s., and consequently \(V_{n}\rightarrow t\) a.s. It is possible to show the stronger statement that for any sequence of partitions which are successive refinements and satisfy \(\parallel\pi_{n}\parallel\rightarrow 0\) as \(n\rightarrow\infty\), the limit of the corresponding sums \(V_{n}\) is \(t\) a.s. \(\blacksquare\)
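The dyadic-refinement argument is easy to reproduce numerically: simulate one Brownian path on a fine dyadic grid and sum squared increments over coarser dyadic partitions (the grid depth and seed below are arbitrary choices of ours):

```python
import math
import random

random.seed(2)

t, levels = 1.0, 12                 # finest partition has 2**12 subintervals
n = 2 ** levels
dt = t / n
w = [0.0]
for _ in range(n):                  # one Brownian path sampled on the fine grid
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))

def quad_var(level):
    """Sum of squared increments over the dyadic partition with 2**level pieces."""
    step = 2 ** (levels - level)
    pts = w[::step]
    return sum((b - a) ** 2 for a, b in zip(pts, pts[1:]))

for lv in (4, 8, 12):
    print(lv, round(quad_var(lv), 3))    # approaches t = 1 as the mesh shrinks
```

The variance bound from the proof, \(Var[V_{n}]\leq 2t\parallel\pi_{n}\parallel\), explains why the printed values tighten around \(1\) as the level grows.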
If we increase \(t\) by a small amount to \(t+dt\), the increase in the quadratic variation can be written symbolically as \((dW(t))^{2}\), and the increase in \(t\) is \(dt\). So, formally we may summarise the theorem as \((dW(t))^{2}=dt\). We see that if \(W\) is a Brownian motion then \(W\) is a martingale, since
\begin{align*}
\mathbb{E}[W_{t+s}|{\cal F}_{t}] & =\mathbb{E}[W_{t}+(W_{t+s}-W_{t})|{\cal F}_{t}]\\
& =\mathbb{E}[W_{t}|{\cal F}_{t}]+\mathbb{E}[W_{t+s}-W_{t}|{\cal F}_{t}]\\
& =W_{t}+\mathbb{E}[W_{t+s}-W_{t}]=W_{t},
\end{align*}
where we used the property of independent increment.
Proposition. Let \(W\) be a one-dimensional standard Brownian motion with \(W_{0}=0\). Then \(M_{t}=W_{t}^{2}-t\) is a martingale.
Proof. \(\mathbb{E}[M_{t}]=\mathbb{E}[W_{t}^{2}-t]=0\). Also
\[\mathbb{E}[M_{t}-M_{s}|{\cal F}_{s}]=\mathbb{E}[W_{t}^{2}-W_{s}^{2}-(t-s)|{\cal F}_{s}],\]
and
\[\mathbb{E}[W_{t}\cdot W_{s}|{\cal F}_{s}]=W_{s}\cdot \mathbb{E}[W_{t}|{\cal F}_{s}]=W_{s}^{2}\]
since \(W\) is a martingale with \(W_{s},W_{t}\in L^{2}\). Therefore
\begin{align*}
\mathbb{E}[M_{t}-M_{s}|{\cal F}_{s}] & =\mathbb{E}[W_{t}^{2}-2W_{t}W_{s}+W_{s}^{2}-(t-s)|{\cal F}_{s}]\\
& =\mathbb{E}[(W_{t}-W_{s})^{2}-(t-s)|{\cal F}_{s}]\\
& =\mathbb{E}[(W_{t}-W_{s})^{2}]-(t-s)=0
\end{align*}
due to the independence of the increments from the past. \(\blacksquare\)
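The martingale property of \(M_{t}=W_{t}^{2}-t\) can be illustrated by conditioning on a fixed value of \(W_{s}\): given \(W_{s}=w\), it asserts \(\mathbb{E}[W_{t}^{2}-t\,|\,W_{s}=w]=w^{2}-s\). A Monte Carlo sketch (the values of \(s\), \(t\), \(w\), and the sample size are arbitrary choices of ours):

```python
import math
import random

random.seed(3)

s, t, ws = 1.0, 3.0, 1.3        # condition on W_s = 1.3
trials = 50000
acc = 0.0
for _ in range(trials):
    wt = ws + random.gauss(0.0, math.sqrt(t - s))   # W_t given W_s = ws
    acc += wt ** 2 - t
est = acc / trials
target = ws ** 2 - s            # martingale property predicts M_s = W_s^2 - s
print(round(est, 3), round(target, 3))
```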
Proposition. For almost all \(\omega\), the sample paths \(t\mapsto W_{t}(\omega )\) of a standard Brownian motion \(W\) are of unbounded variation on any interval. \(\sharp\)
Definition. Let \({\cal F}_{t}^{X}\) denote the \(\sigma\)-field generated by the process \(X\) up to time \(t\). \(X\) is a {\bf Markov process} if for any \(t\) and \(s>0\), the conditional distribution of \(X_{t+s}\) given \({\cal F}_{t}^{X}\) is the same as the conditional distribution of \(X_{t+s}\) given \(X_{t}\), that is,
\[\mathbb{P}\left .\left\{X_{t+s}\leq y\right |{\cal F}_{t}^{X}\right\}=\mathbb{P}\left .\left\{X_{t+s}\leq y\right |X_{t}\right\}\mbox{ a.s. } \sharp\]
Proposition. Brownian motion \(W\) possesses the Markov property.
Proof. It is easy to see by using the moment generating function that the
conditional distribution of \(W_{t+s}\) given \({\cal F}_{t}\) is the same as that given \(W_{t}\). Indeed,
\begin{align*}
\mathbb{E}\left .\left [e^{u\cdot W_{t+s}}\right |{\cal F}_{t}^{W}\right ] & =e^{u\cdot W_{t}}\cdot \mathbb{E}\left .\left [e^{u\cdot (W_{t+s}-W_{t})}\right |{\cal F}_{t}^{W}\right ]\\
& =e^{u\cdot W_{t}}\cdot \mathbb{E}\left [e^{u\cdot (W_{t+s}-W_{t})}\right ]\mbox{ (by independent increments of \(W\))}\\
& =e^{u\cdot W_{t}}\cdot e^{u^{2}s/2}\mbox{ (since \(W_{t+s}-W_{t}\) is \(N(0,s)\))}\\
& =e^{u\cdot W_{t}}\cdot \mathbb{E}\left .\left [e^{u\cdot (W_{t+s}-W_{t})}\right |W_{t}\right ]=\mathbb{E}\left .\left [e^{u\cdot W_{t+s}}\right |W_{t}\right ].
\end{align*}
This completes the proof. \(\blacksquare\)
The transition probability function of a Markov process \(X\) is defined as
\[P(y,t,x,s)=\mathbb{P}\left .\left\{X_{t}\leq y\right |X_{s}=x\right\}\]
the conditional distribution function of the process at time \(t\), given that it is at point \(x\) at time \(s<t\). In the case of Brownian motion it is given by the distribution function of the normal \(N(x,t-s)\) distribution
\[P(y,t,x,s)=\int_{-\infty}^{y}\frac{1}{\sqrt{2\pi\cdot (t-s)}}\cdot\exp\left (-\frac{(u-x)^{2}}{2(t-s)}\right )du.\]
The transition probability function of Brownian motion satisfies \(P(y,t,x,s)=P(y,t-s,x,0)\). In other words,
\begin{equation}{\label{kleeq312}}\tag{23}
\mathbb{P}\left .\left\{W_{t}\leq y\right |W_{s}=x\right\}=\mathbb{P}\left .\left\{W_{t-s}\leq y\right |W_{0}=x\right\}.
\end{equation}
For fixed \(x\) and \(t\), \(P(y,t,x,0)\) has the density \(p_{t}(x,y)\) given by
\[p_{t}(x,y)=\frac{1}{\sqrt{2\pi t}}\cdot\exp\left (-\frac{(y-x)^{2}}{2t}\right ).\]
The property (\ref{kleeq312}) states that Brownian motion is time-homogeneous, that is, its distribution functions do not change with a shift in time. For example, the distribution of \(W_{t}\) given \(W_{s}=x\) is the same as that of \(W_{t-s}\) given \(W_{0}=x\).
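Time-homogeneity is easy to test numerically: sample \(W_{t}\) given \(W_{s}=x\) and compare the empirical distribution function with the \(N(x,t-s)\) one (the parameters below are arbitrary choices of ours):

```python
import math
import random

random.seed(4)

x, s, t, y = 0.7, 0.5, 2.0, 1.5
trials = 40000
# W_t given W_s = x is x plus an independent N(0, t-s) increment
hits = sum((x + random.gauss(0.0, math.sqrt(t - s))) <= y for _ in range(trials))
mc = hits / trials
# distribution function of N(x, t - s) evaluated at y
exact = 0.5 * (1 + math.erf((y - x) / math.sqrt(2 * (t - s))))
print(round(mc, 3), round(exact, 3))
```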
The strong Markov property is similar to the Markov property, except that in the definition a fixed time is replaced by a stopping time \(T\).
Proposition. Brownian motion \(W\) has the strong Markov property: for a finite stopping time \(T\), we have
\[\mathbb{P}\left .\left\{W_{t+T}\leq y\right |{\cal F}_{T}^{W}\right\}=\mathbb{P}\left .\left\{W_{t+T}\leq y\right |W_{T}\right\}\mbox{ a.s.} \sharp\]
Let \(T_{x}\) denote the first time \(W_{t}\) hits \(x\), i.e., \(T_{x}=\inf\{t>0:W_{t}=x\}\). Then \(T_{x}\) is a stopping time. Let \(\mathbb{P}_{x}\) denote the conditional probability given \(W_{0}=x\).
Proposition. Let \(a<x<b\), and \(\tau =\min\{T_{a},T_{b}\}\). Then \(\mathbb{P}_{x}\{\tau <\infty\}=1\) and \(\mathbb{E}_{x}[\tau ]<\infty\). \(\sharp\)
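For standard Brownian motion one even has the explicit value \(\mathbb{E}_{x}[\tau ]=(x-a)(b-x)\), a classical identity (solve \(\frac{1}{2}u''=-1\) with \(u(a)=u(b)=0\)). A crude Euler sketch (the interval, step size, and sample size are our choices; the time discretization introduces a small positive bias):

```python
import math
import random

random.seed(7)

a, b, x = -1.0, 2.0, 0.5
dt, trials = 0.001, 1500
acc = 0.0
for _ in range(trials):
    w, elapsed = x, 0.0
    while a < w < b:                        # run until tau = min(T_a, T_b)
        w += random.gauss(0.0, math.sqrt(dt))
        elapsed += dt
    acc += elapsed
mean_tau = acc / trials
print(round(mean_tau, 2))                   # near (x - a)(b - x) = 1.5 * 1.5 = 2.25
```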
Definition. An adapted process \({\bf W}=\{{\bf W}_{t}\}_{t\geq 0}\) taking values in \(\mathbb{R}^{n}\) is called an \(n\)-dimensional Brownian motion when
- for \(0\leq s<t<\infty\), \({\bf W}_{t}-{\bf W}_{s}\) is independent of \(\sigma ({\bf W}_{u},u\leq s)\);
- for \(0<s<t\), \({\bf W}_{t}-{\bf W}_{s}\) is a Gaussian random variable with mean zero and variance matrix \((t-s)\cdot C\), for a given nonrandom matrix \(C\). \(\sharp\)
It is simple to check that a Brownian motion is a martingale as long as \(\mathbb{E}[|{\bf W}_{0}|]<\infty\). Therefore by Proposition \ref{prot19} there exists a version which has right-continuous paths a.s.
Proposition. Let \({\bf W}\) be a Brownian motion. Then there exists a modification of \({\bf W}\) which has continuous paths a.s. \(\sharp\)
We will always assume that we are using the version of Brownian motion with continuous paths, and that \(C\) is the identity matrix; a Brownian motion \({\bf W}\) with continuous paths and \(C=I\) is called a standard Brownian motion. The Brownian motion starts at \({\bf x}\) if \(\mathbb{P}\{{\bf W}_{0}={\bf x}\}=1\). Note that for an \(n\)-dimensional standard Brownian motion \({\bf W}\), writing \({\bf W}_{t}=(W^{(1)}_{t},\cdots ,W^{(n)}_{t})\), each \(W^{(i)}\) is a one-dimensional Brownian motion with continuous paths, and the \(W^{(i)}\)’s are independent.
Reflection Principle.
Given a one-dimensional standard Brownian motion \(W\), let us denote by \(M_{t}^{W}\) and \(m_{t}^{W}\) the running maximum and minimum, respectively. More explicitly,
\[M_{t}^{W}=\sup_{u\in [0,t]}W_{u}\mbox{ and }m_{t}^{W}=\inf_{u\in [0,t]}W_{u}.\]
It is well known that for every \(t>0\), we have
\begin{equation}{\label{museqb28}}\tag{24}
\mathbb{P}\left\{M_{t}^{W}>0\right\}=1\mbox{ and }\mathbb{P}\left\{m_{t}^{W}<0\right\}=1.
\end{equation}
The following well-known result is commonly referred to as the reflection principle.
Theorem. (Reflection Principle). Let \(T\) be a stopping time. Define \(\hat{W}_{t}=W_{t}\) for \(t\leq T\), and \(\hat{W}_{t}=2W_{T}-W_{t}\) for \(t\geq T\). Then \(\hat{W}\) is also a Brownian motion. \(\sharp\)
Note that \(\hat{W}_{t}\) defined above coincides with \(W_{t}\) for \(t\leq T\), and then for \(t\geq T\) it is the path reflected about the horizontal line passing through \((T,W_{T})\), that is, \(\hat{W}_{t}-W_{T}=-(W_{t}-W_{T})\), which gives the name to the result.
Proposition. The formula
\[\mathbb{P}\left\{W_{t}\leq x,M_{t}^{W}\geq y\right\}=\mathbb{P}\left\{W_{t}\geq 2y-x\right\}=\mathbb{P}\left\{W_{t}\leq x-2y\right\}\]
is valid for every \(t>0\), \(y>0\) and \(x\leq y\). \(\sharp\)
Theorem. (Reflection Principle for Brownian Motion). Let \(W=\{W_{t}\}_{t\geq 0}\) be standard Brownian motion with \(W_{0}=0\) a.s. and \(S_{t}=\sup_{0\leq s\leq t}W_{s}\), the Brownian maximum process. For \(y\geq 0\), \(z>0\),
\[\mathbb{P}\{W_{t}<z-y,S_{t}\geq z\}=\mathbb{P}\{W_{t}>y+z\}. \sharp\]
Corollary. Let \(W=\{W_{t}\}_{t\geq 0}\) be standard Brownian motion with \(W_{0}=0\) a.s. and \(S_{t}=\sup_{0<s\leq t}W_{s}\). For \(z>0\),
\[\mathbb{P}\{S_{t}>z\}=2\mathbb{P}\{W_{t}>z\}. \sharp\]
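The corollary \(\mathbb{P}\{S_{t}>z\}=2\mathbb{P}\{W_{t}>z\}\) can be checked by simulating discrete approximations of the path and its running maximum (grid and sample sizes below are our choices; the discrete maximum slightly undershoots the true supremum, so the match is only approximate):

```python
import math
import random

random.seed(5)

t, z = 1.0, 0.8
n_steps, trials = 300, 10000
dt = t / n_steps
sup_hits = tail_hits = 0
for _ in range(trials):
    w = peak = 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, math.sqrt(dt))
        peak = max(peak, w)                 # discrete running maximum
    sup_hits += peak > z
    tail_hits += w > z
p_sup = sup_hits / trials
p_tail = tail_hits / trials
print(round(p_sup, 3), round(2 * p_tail, 3))   # P{S_t > z} vs 2 P{W_t > z}
```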
We need to examine the case of a slightly more general process; namely, a Brownian motion with non-zero drift. Consider the process \(X\) that equals \(X_{t}=\sigma\cdot W_{t}+\nu\cdot t\), where \(W\) is a standard Brownian motion under \(\mathbb{P}\), and \(\sigma >0\), \(\nu\) are real numbers. We write
\[M_{t}^{X}=\sup_{u\in [0,t]} X_{u}\mbox{ and }m_{t}^{X}=\inf_{u\in [0,t]} X_{u}.\]
By virtue of Girsanov’s theorem, the process \(X\) is a Brownian motion under an equivalent probability measure, and thus (cf. (\ref{museqb28}))
\[\mathbb{P}\left\{M_{t}^{X}>0\right\}=1\mbox{ and }\mathbb{P}\left\{m_{t}^{X}<0\right\}=1\]
for every \(t>0\).
\begin{equation}{\label{muslb32}}\tag{25}\mbox{}\end{equation}
Proposition \ref{muslb32}. For every \(t>0\), the joint distribution of \(X_{t}\) and \(M_{t}^{X}\) is given by the formula
\[\mathbb{P}\left\{X_{t}\leq x,M_{t}^{X}\geq y\right\}=e^{2\nu y/\sigma^{2}}\cdot\mathbb{P}\left\{X_{t}\geq 2y-x+2\nu\cdot t\right\}\]
for every \(x,y\in\mathbb{R}\) such that \(y\geq 0\) and \(x\leq y\). \(\sharp\)
It is worthwhile to observe that
\[\mathbb{P}\left\{X_{t}\leq x,M_{t}^{X}\geq y\right\}=\mathbb{P}\left\{X_{t}<x,M_{t}^{X}>y\right\}.\]
The following result is a straightforward consequence of Proposition \ref{muslb32}.
Proposition. For every \(x,y\in\mathbb{R}\) which satisfy \(y\geq 0\) and \(x\leq y\), we have
\[\mathbb{P}\left\{X_{t}\leq x,M_{t}^{X}\geq y\right\}=e^{2\nu y/\sigma^{2}}\cdot N\left (\frac{x-2y-\nu t}{\sigma\cdot\sqrt{t}}\right ).\]
Hence,
\[\mathbb{P}\left\{X_{t}\leq x,M_{t}^{X}\leq y\right\}=N\left (\frac{x-\nu t}{\sigma\cdot\sqrt{t}}\right )-e^{2\nu y/\sigma^{2}}\cdot N\left (\frac{x-2y-\nu t}{\sigma\cdot\sqrt{t}}\right )\]
for every \(x,y\in\mathbb{R}\) such that \(x\leq y\) and \(y\geq 0\). \(\sharp\)
It is clear that
\[\mathbb{P}\left\{M_{t}^{X}\geq y\right\}=\mathbb{P}\left\{X_{t}\geq y,M_{t}^{X}\geq y\right\}+
\mathbb{P}\left\{X_{t}\leq y,M_{t}^{X}\geq y\right\}=\mathbb{P}\left\{X_{t}\geq y\right\}+\mathbb{P}\left\{X_{t}\leq y,M_{t}^{X}\geq y\right\}\]
for every \(y\geq 0\), and thus
\begin{equation}{\label{museqb34}}\tag{26}
\mathbb{P}\left\{M_{t}^{X}\geq y\right\}=\mathbb{P}\left\{X_{t}\geq y\right\}+e^{2\nu y/\sigma^{2}}\cdot \mathbb{P}\left\{X_{t}\geq y+2\nu t\right\}.
\end{equation}
This leads to the following result.
Proposition. The following formula holds for every \(y\geq 0\):
\[\mathbb{P}\left\{M_{t}^{X}\leq y\right\}=N\left (\frac{y-\nu t}{\sigma\cdot\sqrt{t}}
\right )-e^{2\nu y/\sigma^{2}}\cdot N\left (\frac{-y-\nu t}{\sigma\cdot\sqrt{t}}\right ). \sharp\]
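The closed-form law of \(M_{t}^{X}\) can likewise be compared against a path simulation of the drifted process (parameters, grid, and sample size below are our choices; the discretized maximum again introduces a small bias):

```python
import math
import random

random.seed(6)

sigma, nu, t, y = 1.0, 0.3, 1.0, 1.0
n_steps, trials = 300, 10000
dt = t / n_steps
below = 0
for _ in range(trials):
    x = peak = 0.0
    for _ in range(n_steps):
        x += sigma * random.gauss(0.0, math.sqrt(dt)) + nu * dt   # Euler step of X
        peak = max(peak, x)
    below += peak <= y
mc = below / trials

def N(u):                         # standard normal distribution function
    return 0.5 * (1 + math.erf(u / math.sqrt(2)))

exact = (N((y - nu * t) / (sigma * math.sqrt(t)))
         - math.exp(2 * nu * y / sigma ** 2)
         * N((-y - nu * t) / (sigma * math.sqrt(t))))
print(round(mc, 3), round(exact, 3))
```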
Let us now focus on the law of the minimal value of \(X\). Observe that for any \(y\leq 0\), we have
\[\mathbb{P}\left\{\sup_{u\in [0,t]}(\sigma\cdot W_{u}-\nu u)\geq -y\right\}=
\mathbb{P}\left\{\inf_{u\in [0,t]}(-\sigma\cdot W_{u}+\nu u)\leq y\right\}=\mathbb{P}\left\{\inf_{u\in [0,t]}X_{u}\leq y\right\},\]
where the last equality follows from the symmetry of Brownian motion. Consequently, for every \(y\leq 0\) we have \(\mathbb{P}\{m_{t}^{X}\leq y\}=\mathbb{P}\{M_{t}^{\bar{X}}\geq -y\}\), where the process \(\bar{X}\) equals \(\bar{X}_{t}=\sigma\cdot W_{t}-\nu t\). The following results are not difficult to prove.
Proposition. The joint distribution of \((X_{t},m_{t}^{X})\) satisfies
\[\mathbb{P}\left\{X_{t}\geq x,m_{t}^{X}\geq y\right\}=N\left (\frac{-x+\nu t}{\sigma\cdot\sqrt{t}}\right )-e^{2\nu y/\sigma^{2}}\cdot N\left (
\frac{2y-x+\nu t}{\sigma\cdot\sqrt{t}}\right )\]
for every \(x,y\in\mathbb{R}\) such that \(y\leq 0\) and \(y\leq x\). \(\sharp\)
Proposition. The following formula is valid for every \(y\leq 0\):
\[\mathbb{P}\left\{m_{t}^{X}\geq y\right\}=N\left (\frac{-y+\nu t}
{\sigma\cdot\sqrt{t}}\right )-e^{2\nu y/\sigma^{2}}\cdot N\left (\frac{y+\nu t}{\sigma\cdot\sqrt{t}}\right ). \sharp\]
\begin{equation}{\label{d}}\tag{D}\mbox{}\end{equation}
Doob-Meyer Decomposition
Definition. An adapted process \(A\) is called {\bf increasing} if for \(\mathbb{P}\)-a.e. \(\omega\in\Omega\), we have \(A_{0}(\omega )=0\), the function \(t\mapsto A_{t}(\omega )\) is nondecreasing and right-continuous and \(\mathbb{E}[A_{t}]<\infty\) holds for every \(t\geq 0\). An increasing process is called integrable if \(\mathbb{E}[A_{\infty}]<\infty\), where \(A_{\infty}=\lim_{t\rightarrow\infty} A_{t}\). \(\sharp\)
Definition. An increasing process \(A\) is called {\bf natural} if for every bounded, right-continuous martingale \(\{(M_{t},{\cal F}_{t})\}_{t\geq 0}\) we have
\begin{equation}{\label{kareq144}}\tag{27}
\mathbb{E}\left [\int_{(0,t]}M_{s}dA_{s}\right ]=\mathbb{E}\left [\int_{(0,t]}M_{s-}dA_{s}\right ]
\end{equation}
for every \(0<t<\infty\). \(\sharp\)
Every continuous, increasing process is natural as shown below. For \(\mathbb{P}\)-a.e. \(\omega\in\Omega\), we have
\[\int_{(0,t]}\left (M_{s}(\omega )-M_{s-}(\omega )\right )dA_{s}(\omega )=0\mbox{ for every }0<t<\infty ,\]
since every path \(\{M_{s}(\omega )\}_{s\geq 0}\) has only countably many discontinuities. It can be shown that every natural increasing process is adapted to the filtration \(\{{\cal F}_{t-}\}_{t\geq 0}\), provided that \(\{{\cal F}_{t}\}_{t\geq 0}\) satisfies the usual conditions.
Proposition. Condition \((\ref{kareq144})\) is equivalent to
\[\mathbb{E}[M_{t}\cdot A_{t}]=\mathbb{E}\left [\int_{(0,t]}M_{s-}dA_{s}\right ]. \sharp\]
Definition. Let us consider the class \({\cal T}\) \((\)resp. \({\cal T}_{a})\) of all stopping times \(T\) of the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) which satisfy \(\mathbb{P}\{T<\infty\}=1\) (resp. \(\mathbb{P}\{T\leq a\}=1\) for a given finite number \(a>0\)). The right-continuous process \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) is said to be of class D, if the family \(\{X_{T}\}_{T\in {\cal T}}\) is uniformly integrable; of class
DL, if the family \(\{X_{T}\}_{T\in {\cal T}_{a}}\) is uniformly integrable, for every \(0<a<\infty\). \(\sharp\)
\begin{equation}{\label{kart1410}}\tag{28}\mbox{}\end{equation}
Theorem \ref{kart1410} (Doob-Meyer Decomposition). Let \(\{{\cal F}_{t}\}_{t\geq 0}\) satisfy the usual conditions. If the right-continuous submartingale \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) is of class DL, then it admits the decomposition
\begin{equation}{\label{kareq146}}\tag{29}
X_{t}=M_{t}+A_{t}\mbox{ for \(t\geq 0\)}
\end{equation}
as the sum of a right-continuous martingale \(\{(M_{t},{\cal F}_{t})\}_{t\geq 0}\) and an increasing process \(\{(A_{t},{\cal F}_{t})\}_{t\geq 0}\). The latter can be taken to be natural; under this additional condition, the decomposition is unique up to indistinguishability. Further, if \(X\) is of class D, then \(M\) is a uniformly integrable martingale and \(A\) is integrable. \(\sharp\)
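As a simple illustration (using the process \(M_{t}=W_{t}^{2}-t\), shown earlier to be a martingale), the submartingale \(X_{t}=W_{t}^{2}\) built from a standard Brownian motion \(W\) admits the Doob-Meyer decomposition
\[W_{t}^{2}=\left (W_{t}^{2}-t\right )+t,\]
with martingale part \(M_{t}=W_{t}^{2}-t\) and natural (indeed continuous) increasing part \(A_{t}=t\).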
Definition. A submartingale \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) is called regular if for every \(a>0\) and every nondecreasing sequence of stopping times \(\{T_{n}\}\subset {\cal T}_{a}\) with \(T=\lim_{n\rightarrow\infty}T_{n}\), we have \(\lim_{n\rightarrow\infty}\mathbb{E}\left [X_{T_{n}}\right ]=\mathbb{E}[X_{T}]\). \(\sharp\)
\begin{equation}{\label{kart1414}}\tag{30}\mbox{}\end{equation}
Proposition \ref{kart1414}. Suppose that \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) is a right-continuous submartingale of class DL, where the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) satisfies the usual conditions, and let \(\{A_{t}\}_{t\geq 0}\) be the natural increasing process in the Doob-Meyer decomposition (\ref{kareq146}). The process \(A\) is continuous if and only if \(X\) is regular. \(\sharp\)
\begin{equation}{\label{e}}\tag{E}\mbox{}\end{equation}
Local Martingales.
We assume given a filtered probability space \((\Omega ,{\cal F},\{{\cal F}_{t}\}_{0\leq t<\infty},P)\) satisfying the usual conditions. For a process \(X\) and a stopping time \(T\) we recall that \(X^{T}\) denotes the stopped process
\[X_{t}^{T}=X_{t\wedge T}=X_{t}\cdot I_{\{t<T\}}+X_{T}\cdot I_{\{t\geq T\}}.\]
Definition. An adapted, RCLL process \(M\) is a local martingale if there exists a sequence of increasing stopping times \(T_{n}\) with \(\lim_{n\rightarrow\infty}T_{n}=\infty\) a.s. such that \(M_{t\wedge T_{n}}\cdot 1_{\{T_{n}>0\}}\) is a uniformly integrable martingale for each \(n\). Such a sequence \(\{T_{n}\}_{n\in {\bf N}}\) of stopping times is called a localizing sequence. \(\sharp\)
The reason we multiply \(M_{t\wedge T_{n}}\) by \(1_{\{T_{n}>0\}}\) is to relax the integrability condition on \(M_{0}\). This is useful, for example, in the consideration of stochastic integral equations with a
nonintegrable initial condition. Clearly any RCLL martingale is a local martingale by taking \(T_{n}\equiv n\).
\begin{equation}{\label{chud*1}}\tag{31}\mbox{}\end{equation}
Definition \ref{chud*1}. For \(p\in [1,\infty )\), \(\{M_{t}\}_{t\geq 0}\) is called a local \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\) when
- \(M_{0}\) is an \({\cal F}_{0}\)-measurable random variable;
- there is a sequence \(\{T_{n}\}_{n\in {\bf N}}\) of stopping times such that \(T_{n}\uparrow\infty\) a.s. and for each \(n\), \(\{M_{t\wedge T_{n}}-M_{0}\}_{t\geq 0}\) is an \(L^{p}\)-martingale with respect to the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). \(\sharp\)
The sequence \(\{T_{n}\}_{n\in {\bf N}}\) is called a localizing sequence for \(M\). The definition of a local martingale by Dellacherie and Meyer requires \(\{M_{t\wedge T_{n}}-M_{0}\}_{t\geq 0}\) to be uniformly integrable for each \(n\). This is not more restrictive than the second condition of Definition \ref{chud*1}, for if \(\{M_{t\wedge T_{n}}-M_{0}\}_{t\geq 0}\) is a martingale, then \(\{M_{t\wedge n\wedge T_{n}}-M_{0}\}_{t\geq 0}\) is a uniformly integrable martingale. Clearly an \(L^{p}\)-martingale for \(p\in [1,\infty )\) is a local \(L^{p}\)-martingale. Conditions implying the converse are given below.
Proposition. Let \(p\in [1,\infty )\) and let \(M\) be a local \(L^{p}\)-martingale with a localizing sequence \(\{T_{n}\}_{n\in {\bf N}}\). If for each \(t\geq 0\) the family \(\left\{|M_{t\wedge T_{n}}|^{p}\right\}_{n\in {\bf N}}\) is uniformly integrable, then \(M\) is an \(L^{p}\)-martingale. The converse is true provided that \(M\) is right-continuous. \(\sharp\)
If \(M\) is a continuous local \(L^{1}\)-martingale, then there is a natural choice of localizing sequence for \(M\) which shows that \(M\) is a local \(L^{p}\)-martingale for any \(p\in [1,\infty )\). This sequence is exhibited below.
Proposition. Suppose \(M\) is a continuous local \(L^{1}\)-martingale and let \(T_{n}=\inf\{t>0:|M_{t}-M_{0}|>n\}\) for each \(n\in {\bf N}\). Then, for each \(p\in [1,\infty )\), \(M\) is a local \(L^{p}\)-martingale and \(\{T_{n}\}_{n\in {\bf N}}\) is a localizing sequence for it. \(\sharp\)
Definition. A stopping time \(T\) reduces a process \(M\) if \(M^{T}\) is a uniformly integrable martingale. \(\sharp\)
\begin{equation}{\label{prot144}}\tag{32}\mbox{}\end{equation}
Proposition \ref{prot144}. Let \(M\) and \(N\) be local martingales and let \(S\) and \(T\) be stopping times. Then, we have the following properties.
(i) If \(T\) reduces \(M\) and \(S\leq T\) a.s., then \(S\) reduces \(M\).
(ii) The sum \(M+N\) is also a local martingale.
(iii) If \(S\) and \(T\) both reduce \(M\), then \(S\vee T\) also reduces \(M\).
(iv) A positive local martingale is a supermartingale.
(v) The processes \(M^{T}\) and \(M^{T}\cdot 1_{\{T>0\}}\) are local martingales.
(vi) Let \(X\) be a RCLL process and let \(\{T_{n}\}_{n\in {\bf N}}\) be a sequence of stopping times increasing to \(\infty\) a.s. such that \(X^{T_{n}}\cdot 1_{\{T_{n}>0\}}\) is a local martingale for each \(n\).
Then \(X\) is a local martingale. \(\sharp\)
If \(Z\) is an \({\cal F}_{0}\)-measurable random variable and \(X\) is a local martingale, then so is \(ZX\); in particular, the set of all local martingales is a vector space, as the following result states.
Corollary. Local martingales form a vector space. \(\sharp\)
We will often need to know that a reduced local martingale \(M^{T}\) is in \(L^{p}\) and not simply uniformly integrable.
Definition. Let \(X\) be a stochastic process. A property {\bf P} is said to hold locally when there exists a sequence of stopping times \(\{T_{n}\}_{n\in {\bf N}}\) increasing to \(\infty\) a.s. such that \(X^{T_{n}}\cdot 1_{\{T_{n}>0\}}\) has property {\bf P} for each \(n\). \(\sharp\)
We see from Proposition \ref{prot144} (vi) that a process which is locally a local martingale is also a local martingale. Other examples of local properties that arise frequently are locally bounded and locally square integrable. The following Propositions \ref{prot145} and \ref{prot146} are consequences of Proposition \ref{prot144} (vi).
\begin{equation}{\label{prot145}}\tag{33}\mbox{}\end{equation}
Proposition \ref{prot145}. Let \(X\) be a process which is locally a square integrable martingale. Then \(X\) is a local martingale. \(\sharp\)
\begin{equation}{\label{prot146}}\tag{34}\mbox{}\end{equation}
Proposition \ref{prot146}. Let \(M\) be adapted, RCLL and let \(\{T_{n}\}_{n\in {\bf N}}\) be a sequence of stopping times increasing to \(\infty\) a.s. If \(M^{T_{n}}\cdot 1_{\{T_{n}>0\}}\) is a martingale for each \(n\), then \(M\) is a local martingale. \(\sharp\)
It is often of interest to determine when a local martingale is actually a martingale. A simple condition involves the maximal function. Recall that \(M_{t}^{*}=\sup_{s\leq t}|M_{s}|\) and \(M^{*}=\sup_{s}|M_{s}|\).
\begin{equation}{\label{prot147}}\tag{35}\mbox{}\end{equation}
Proposition \ref{prot147}. Let \(M\) be a local martingale such that \(\mathbb{E}[M_{t}^{*}]<\infty\) for every \(t\geq 0\). Then \(M\) is a martingale. If \(\mathbb{E}[M^{*}]<\infty\), then \(M\) is a uniformly integrable martingale. \(\sharp\)
Note that in particular a bounded local martingale is a uniformly integrable martingale.
Proposition. Let \(\{M_{t}\}_{t\geq 0}\) be a local martingale such that \(|M_{t}|\leq Y\) with \(\mathbb{E}[Y]<\infty\). Then \(M\) is a uniformly integrable martingale.
Proof. Let \(\{T_{n}\}_{n\in {\bf N}}\) be a localizing sequence, then for any \(n\) and \(s<t\),
\begin{equation}{\label{kleeq711}}\tag{36}
\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]=M_{T_{n}\wedge s}.
\end{equation}
\(M_{t}\) is integrable for each \(t\) since \(\mathbb{E}[|M_{t}|]\leq \mathbb{E}[Y]<\infty\). Since \(\lim_{n\rightarrow\infty}M_{T_{n}\wedge t}=M_{t}\) a.s. and \(|M_{T_{n}\wedge t}|\leq Y\), by dominated convergence for conditional expectations we have
\(\lim_{n\rightarrow\infty}\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]=\mathbb{E}\left .\left [M_{t}\right |{\cal F}_{s}\right ]\). Since \(\lim_{n\rightarrow\infty}M_{T_{n}\wedge s}=M_{s}\), the martingale property is established by taking limits in (\ref{kleeq711}). If a martingale is dominated by an integrable random variable, then it is uniformly integrable by Proposition \ref{klet76}. \(\blacksquare\)
Corollary. Let \(\{M_{t}\}_{t\geq 0}\) be a local martingale such that for all \(t\), \(\mathbb{E}\left [\sup_{s\leq t}|M_{s}|\right ]<\infty\). Then it is a martingale, and as such it is uniformly integrable on any finite interval \([0,t]\). If in addition \(\mathbb{E}\left [\sup_{t\geq 0}|M_{t}|\right ]<\infty\) then \(\{M_{t}\}_{t\geq 0}\) is uniformly integrable on \([0,\infty )\). \(\sharp\)
In financial applications, positive local martingales arise naturally. Such processes are supermartingales, as the next result shows.
\begin{equation}{\label{klet720}}\tag{37}\mbox{}\end{equation}
Proposition \ref{klet720}. A nonnegative local martingale is a supermartingale.
Proof. Let \(\{T_{n}\}_{n\in {\bf N}}\) be a localizing sequence. Then, since \(M_{T_{n}\wedge t}\geq 0\), Fatou’s lemma gives
\begin{equation}{\label{kleeq712}}\tag{38}
\mathbb{E}\left [\liminf_{n\rightarrow\infty}M_{T_{n}\wedge t}\right ]\leq\liminf_{n\rightarrow\infty}\mathbb{E}\left [M_{T_{n}\wedge t}\right ].
\end{equation}
Since \(\lim_{n\rightarrow\infty}M_{T_{n}\wedge t}=M_{t}\), we have \(\liminf_{n\rightarrow\infty}M_{T_{n}\wedge t}=M_{t}\); moreover, \(\mathbb{E}\left [M_{T_{n}\wedge t}\right ]=\mathbb{E}\left [M_{T_{n}\wedge 0}\right ]=\mathbb{E}[M_{0}]\) by the martingale property of \(\{M_{T_{n}\wedge t}\}_{t\geq 0}\). Therefore (\ref{kleeq712}) implies \(\mathbb{E}[M_{t}]\leq \mathbb{E}[M_{0}]\), so that \(M_{t}\) is integrable. The supermartingale property is established similarly. Using Fatou’s lemma for conditional expectations and the martingale property of the stopped processes,
\[\mathbb{E}\left .\left [\liminf_{n\rightarrow\infty}M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]\leq\liminf_{n\rightarrow\infty}\mathbb{E}\left .\left [
M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]=\liminf_{n\rightarrow\infty}M_{T_{n}\wedge s}=M_{s}.\]
Since the left-hand side equals \(\mathbb{E}[M_{t}|{\cal F}_{s}]\), we obtain \(\mathbb{E}[M_{t}|{\cal F}_{s}]\leq M_{s}\) a.s. \(\blacksquare\)
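A classical example, added here as an aside (it is not in the surrounding text), shows that the supermartingale inequality can be strict:

```latex
% Aside: a nonnegative local martingale that is a strict supermartingale.
\textbf{Example.} Let \(B\) be a three-dimensional Brownian motion with
\(B_{0}=x\neq 0\), and set
\[M_{t}=\frac{1}{|B_{t}|}.\]
Since \(y\mapsto 1/|y|\) is harmonic on \({\bf R}^{3}\setminus\{0\}\),
\(M\) is a nonnegative local martingale. However,
\[\mathbb{E}[M_{t}]<\mathbb{E}[M_{0}]=\frac{1}{|x|}\quad\mbox{for }t>0,\]
and \(\mathbb{E}[M_{t}]\rightarrow 0\) as \(t\rightarrow\infty\), so \(M\) is a
supermartingale that is not a martingale.
```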
From Propositions \ref{klet720} and \ref{klet73}, we obtain the following result.
Proposition. A nonnegative local martingale \(\{M_{s}\}_{0\leq s\leq t}\) is a martingale if and only if \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]\). \(\sharp\)
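As a numerical aside (not part of the text; parameters, sample size, and tolerance are arbitrary choices): the exponential martingale \(M_{t}=\exp (\sigma W_{t}-\sigma^{2}t/2)\) is a nonnegative local martingale that is in fact a true martingale, and the criterion \(\mathbb{E}[M_{t}]=\mathbb{E}[M_{0}]=1\) can be checked by a Monte Carlo sketch:

```python
import numpy as np

# Monte Carlo check of E[M_t] = E[M_0] = 1 for the exponential
# martingale M_t = exp(sigma * W_t - sigma^2 * t / 2).
rng = np.random.default_rng(0)
sigma, t, n_paths = 1.0, 1.0, 200_000

W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)   # W_t ~ N(0, t)
M_t = np.exp(sigma * W_t - 0.5 * sigma**2 * t)

mean_M = M_t.mean()
print(mean_M)  # should be close to 1
```

A strict local martingale would fail this test, with the sample mean drifting below \(M_{0}\).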
Definition. A real-valued adapted process \(X\) is said to be of class \(D\) if the family of random variables \(X_{T}\cdot 1_{\{T<\infty\}}\), where \(T\) ranges through all stopping times, is uniformly integrable. It is of class \(DL\) if, for every \(a>0\), the family of random variables \(X_{T}\), where \(T\) ranges through all stopping times less than \(a\), is uniformly integrable. \(\sharp\)
A uniformly integrable martingale is of class \(D\): we have \(X_{T}\cdot 1_{\{T<\infty\}}=\mathbb{E}[X_{\infty}|{\cal F}_{T}]\cdot 1_{\{T<\infty\}}\), and it is known that if \(Y\) is integrable on \((\Omega ,{\cal F},P)\), the family of conditional expectations \(\mathbb{E}[Y|{\cal E}]\), where \({\cal E}\) ranges through all the sub-\(\sigma\)-fields of \({\cal F}\), is uniformly integrable.
Proposition. A local martingale \(M\) is a uniformly integrable martingale if and only if it is of class D.
Proof. Suppose that \(M\) is a local martingale of class D. Let \(\{T_{n}\}_{n\in {\bf N}}\) be a localizing sequence so that \(\{M_{T_{n}\wedge t}\}_{t\geq 0}\) is a uniformly integrable martingale; then \(\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]=M_{T_{n}\wedge s}\) for \(s<t\). Since \(T_{n}\rightarrow\infty\) a.s., \(M_{T_{n}\wedge u}\rightarrow M_{u}\) a.s. for each fixed \(u\geq 0\). Since \(T_{n}\wedge u\) is a finite stopping time for each \(n\) and \(M\) is of class D, the sequence \(\{M_{T_{n}\wedge u}\}_{n\in {\bf N}}\) is uniformly integrable; hence \(M_{T_{n}\wedge u}\rightarrow M_{u}\) a.s. and in \(L^{1}\), that is, \(\mathbb{E}\left [\left |M_{T_{n}\wedge u}-M_{u}\right |\right ]\rightarrow 0\). Using properties of conditional expectation,
\[\mathbb{E}\left [\left |\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]-\mathbb{E}\left .\left [M_{t}\right |{\cal F}_{s}\right ]\right |\right ]\leq
\mathbb{E}\left [\left |M_{T_{n}\wedge t}-M_{t}\right |\right ]\rightarrow 0\mbox{ as }n\rightarrow\infty .\]
Thus \(\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]\rightarrow \mathbb{E}\left .\left [M_{t}\right |{\cal F}_{s}\right ]\) in \(L^{1}\) as \(n\rightarrow\infty\). On the other hand, \(\mathbb{E}\left .\left [M_{T_{n}\wedge t}\right |{\cal F}_{s}\right ]=M_{T_{n}\wedge s}\rightarrow M_{s}\) a.s., and since the \(L^{1}\) limit and the a.s. limit agree a.s., \(\mathbb{E}\left .\left [M_{t}\right |{\cal F}_{s}\right ]=M_{s}\) a.s. This establishes the martingale property of \(M\); since \(M\) is of class D, it is uniformly integrable. \(\blacksquare\)
Proposition. A local martingale is a martingale if and only if it is of class DL. \(\sharp\)
\begin{equation}{\label{f}}\tag{F}\mbox{}\end{equation}
Semimartingales.
The following discussion follows Klebaner \cite{kle}.
Definition. An RCLL adapted process \(X\) is a semimartingale if it can be represented as the sum of a local martingale \(M\) and a process \(A\) of finite variation,
\begin{equation}{\label{kleeq81}}\tag{39}
X_{t}=X_{0}+M_{t}+A_{t}
\end{equation}
with \(M_{0}=A_{0}=0\). \(\sharp\)
- \(X_{t}=W_{t}^{2}\), where \(W\) is a Brownian motion, is a semimartingale: \(X_{t}=M_{t}+t\), where \(M_{t}=W_{t}^{2}-t\) is a martingale and \(A_{t}=t\) is a finite variation process.
- One way to obtain semimartingales from known semimartingales is by applying a twice continuously differentiable transformation. If \(X_{t}\) is a semimartingale and \(f\) is a \(C^{2}\)-function, then \(f(X_{t})\) is also a semimartingale. The decomposition of \(f(X_{t})\) is given by Itô’s formula, stated later. In this way we can assert that the geometric Brownian motion \(\exp\left (\sigma W_{t}+\mu t\right )\) is a semimartingale.
- A right-continuous with left-limits deterministic function \(f\) is a semimartingale if and only if it is of finite variation.
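The first example has a discrete counterpart that can be checked exactly (an illustrative sketch, not from the text): for a simple random walk \(S_{n}\), the decomposition \(S_{n}^{2}=(S_{n}^{2}-n)+n\) mirrors \(W_{t}^{2}=(W_{t}^{2}-t)+t\), and the martingale property of \(S_{n}^{2}-n\) can be verified by enumerating all \(\pm 1\) paths:

```python
from itertools import product

# Exact check that M_n = S_n^2 - n is a martingale for the simple
# random walk S_n = X_1 + ... + X_n, P(X_i = +1) = P(X_i = -1) = 1/2:
# verify E[M_{m+1} | F_m] = M_m by averaging over the next step.
N = 4
ok = True
for m in range(N):
    for path in product([-1, 1], repeat=m):     # every history up to time m
        S_m = sum(path)
        M_m = S_m**2 - m
        # average M_{m+1} over the two equally likely next steps
        cond_exp = sum((S_m + x)**2 - (m + 1) for x in (-1, 1)) / 2
        ok = ok and (cond_exp == M_m)
print(ok)
```

The check is exact because \(\frac{1}{2}\left ((S+1)^{2}+(S-1)^{2}\right )=S^{2}+1\), which cancels the compensator increment.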
The following discussion follows Karatzas and Shreve \cite{kar}. Let us consider a probability space \((\Omega ,{\cal F},P)\) with an associated filtration \(\{{\cal F}_{t}\}_{t\geq 0}\), which we always assume to satisfy the usual conditions.
Definition. A continuous semimartingale \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) is an adapted process which has the decomposition \(\mathbb{P}\)-a.s.
\begin{equation}{\label{kareq331}}\tag{40}
X_{t}=X_{0}+M_{t}+B_{t}\mbox{ for }t\geq 0,
\end{equation}
where \(\{(M_{t},{\cal F}_{t})\}_{t\geq 0}\) is a continuous local martingale and \(\{(B_{t},{\cal F}_{t})\}_{t\geq 0}\) is the difference of continuous, nondecreasing, adapted processes \(\{(A^{\pm}_{t},{\cal F}_{t})\}_{t\geq 0}\)
\begin{equation}{\label{kareq332}}\tag{41}
B_{t}=A_{t}^{+}-A_{t}^{-}\mbox{ for }t\geq 0,
\end{equation}
where \(A_{0}^{\pm}=0\) \(\mathbb{P}\)-a.s. We shall always assume that \((\ref{kareq332})\) is the minimal decomposition of \(B\); in other words, \(A_{t}^{+}\) is the positive variation of \(B\) on \([0,t]\) and \(A_{t}^{-}\) is the negative variation. The total variation of \(B\) on \([0,t]\) is then \(A_{t}^{+}+A_{t}^{-}\). \(\sharp\)
The following result discusses the question of uniqueness for the decomposition (\ref{kareq331}) of a continuous semimartingale.
Proposition. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a continuous semimartingale with decomposition \((\ref{kareq331})\). Suppose that \(X\) has another decomposition
\[X_{t}=X_{0}+\tilde{M}_{t}+\tilde{B}_{t}\mbox{ for }t\geq 0,\]
where \(\tilde{M}\) is a continuous local martingale and \(\tilde{B}\) is a continuous, adapted process which has finite total variation on each bounded interval \([0,t]\). Then \(\mathbb{P}\)-a.s. \(M_{t}=\tilde{M}_{t}\) and \(B_{t}=\tilde{B}_{t}\) for \(t\geq 0\). \(\sharp\)
The following discussion follows Protter \cite{pro}. We will assume that we are given a filtered, complete probability space \((\Omega ,{\cal F},\{{\cal F}_{t}\}_{t\geq 0},P)\) satisfying the usual conditions.
\begin{equation}{\label{prod*12}}\tag{42}\mbox{}\end{equation}
Definition \ref{prod*12}. A process \(H\) is said to be {\bf simple predictable} if \(H\) has a representation
\[H_{t}=H_{0}\cdot 1_{\{0\}}(t)+\sum_{i=1}^{n}H_{i}\cdot 1_{(T_{i},T_{i+1}]}(t)\]
where \(0=T_{1}\leq\cdots\leq T_{n+1}<\infty\) is a finite sequence of stopping times, \(H_{i}\in {\cal F}_{T_{i}}\) with \(|H_{i}|<\infty\) a.s. for \(0\leq i\leq n\). The collection of simple predictable processes is denoted by \({\bf S}\). \(\sharp\)
Note that we can take \(T_{1}=T_{0}=0\) in the above definition, so there is no “gap” between \(T_{0}\) and \(T_{1}\). We can topologize \({\bf S}\) by uniform convergence in \((t,\omega )\), and we denote \({\bf S}\) endowed with this topology by \({\bf S}_{u}\). We also write \({\bf L}^{0}\) for the space of finite-valued random variables topologized by convergence in probability.
Let \(X\) be a stochastic process. An operator \(I_{X}\) induced by \(X\) should have two fundamental properties to earn the name “integral”: it should be linear, and it should satisfy some version of the bounded convergence theorem. A particularly weak form of the bounded convergence theorem is that uniform convergence of the processes \(H_{n}\) to \(H\) implies merely convergence in probability of \(I_{X}(H_{n})\) to \(I_{X}(H)\).
Inspired by the above considerations, for a given process \(X\) we define a linear mapping \(I_{X}:{\bf S}\rightarrow {\bf L}^{0}\) as follows
\begin{equation}{\label{proeq*14}}\tag{43}
I_{X}(H)=H_{0}\cdot X_{0}+\sum_{i=1}^{n}H_{i}\cdot (X_{T_{i+1}}-X_{T_{i}}),
\end{equation}
where \(H\in {\bf S}\) has the representation
\[H_{t}=H_{0}\cdot 1_{\{0\}}(t)+\sum_{i=1}^{n}H_{i}\cdot 1_{(T_{i},T_{i+1}]}(t).\]
Since this is a path-by-path definition for the step functions \(H_{t}(\omega )\), it does not depend on the choice of the representation of \(H\) in \({\bf S}\).
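For one fixed \(\omega\), definition (\ref{proeq*14}) is just a finite sum, which can be sketched as follows (the path values and integrand below are made-up toy numbers):

```python
# Path-by-path sketch of the elementary integral:
# I_X(H) = H_0 * X_0 + sum_i H_i * (X_{T_{i+1}} - X_{T_i}),
# for one fixed omega, with X sampled at the stopping times T_1 <= ... <= T_{n+1}.
def elementary_integral(H0, H, X_at_T):
    # X_at_T = [X_{T_1}, ..., X_{T_{n+1}}] along this path;
    # H = [H_1, ..., H_n], with H_i known at time T_i.
    X0 = X_at_T[0]  # T_1 = 0, so X_{T_1} = X_0
    increments = (b - a for a, b in zip(X_at_T, X_at_T[1:]))
    return H0 * X0 + sum(h * dx for h, dx in zip(H, increments))

# Toy path: X takes values 0, 2, 1, 4 at T_1 < T_2 < T_3 < T_4,
# and H holds 3 on (T_1, T_2], -1 on (T_2, T_3], 2 on (T_3, T_4].
I = elementary_integral(H0=5.0, H=[3.0, -1.0, 2.0], X_at_T=[0.0, 2.0, 1.0, 4.0])
print(I)  # 5*0 + 3*(2-0) + (-1)*(1-2) + 2*(4-1) = 13.0
```

Reordering or merging intervals on which \(H\) is constant leaves the sum unchanged, which is the representation-independence noted above.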
\begin{equation}{\label{prod*13}}\tag{44}\mbox{}\end{equation}
Definition \ref{prod*13}. A process \(X\) is a {\bf total semimartingale} if \(X\) is RCLL, adapted, and \(I_{X}:{\bf S}_{u}\rightarrow {\bf L}^{0}\) is continuous. \(\sharp\)
Recall that for a process \(X\) and a stopping time \(T\), the notation \(X^{T}\) denotes the process \(\{X_{t\wedge T}\}_{t\geq 0}\).
\begin{equation}{\label{prod*14}}\tag{45}\mbox{}\end{equation}
Definition \ref{prod*14}. A process \(X\) is called a semimartingale when, for each \(t\in [0,\infty )\), \(X^{t}\) is a total semimartingale. \(\sharp\)
We state a sequence of results giving some of the stability properties, which are particularly simple.
\begin{equation}{\label{prot21}}\tag{46}\mbox{}\end{equation}
Proposition \ref{prot21}. The set of (total) semimartingales is a vector space. \(\sharp\)
\begin{equation}{\label{prot22}}\tag{47}\mbox{}\end{equation}
Proposition \ref{prot22}. If \(\mathbb{Q}\) is a probability measure absolutely continuous with respect to \(\mathbb{P}\), then every (total) \(\mathbb{P}\)-semimartingale \(X\) is a (total) \(\mathbb{Q}\)-semimartingale.
Proof. Convergence in \(\mathbb{P}\)-probability implies convergence in \(\mathbb{Q}\)-probability. Thus the result follows from the definition of a (total) semimartingale. \(\blacksquare\)
Theorem (Stricker’s Theorem). Let \(X\) be a semimartingale for the filtration \(\{{\cal F}_{t}\}_{t\geq 0}\). Let \(\{{\cal G}_{t}\}_{t\geq 0}\) be a subfiltration of \(\{{\cal F}_{t}\}_{t\geq 0}\) such that \(X\) is adapted to the \({\cal G}\)-filtration. Then \(X\) is a \({\cal G}\)-semimartingale.
Proof. For a filtration \({\cal H}=\{{\cal H}_{t}\}_{t\geq 0}\), let \({\bf S}({\cal H})\) denote the simple predictable processes for \({\cal H}\). In this case \({\bf S}({\cal G})\) is contained in \({\bf S}({\cal F})\), and the theorem is an immediate consequence of the definition. \(\blacksquare\)
The above theorem shows that we can always shrink a filtration and preserve the property of being a semimartingale (as long as the process \(X\) is still adapted), since we are shrinking as well the possible integrands; this, in effect, makes it “easier” for the process \(X\) to be a semimartingale. Expanding the filtration, therefore, should be, and is, a much more delicate issue.
\begin{equation}{\label{prot25}}\tag{48}\mbox{}\end{equation}
Proposition \ref{prot25}. Let \({\cal A}\) be a collection of events in \({\cal F}\) such that if \(A_{\alpha},A_{\beta}\in {\cal A}\) with \(\alpha\neq\beta\), then \(A_{\alpha}\cap A_{\beta}=\emptyset\). Let \(\{{\cal H}_{t}\}_{t\geq 0}\) be the filtration generated by \(\{{\cal F}_{t}\}_{t\geq 0}\) and \({\cal A}\). Then every \((\{{\cal F}_{t}\}_{t\geq 0},P)\)-semimartingale is also an \((\{{\cal H}_{t}\}_{t\geq 0},P)\)-semimartingale. \(\sharp\)
Corollary. Let \({\cal A}\) be a finite collection of events in \({\cal F}\), and let \(\{{\cal H}_{t}\}_{t\geq 0}\) be the filtration generated by \({\cal F}_{t}\) and \({\cal A}\). Then every \((\{{\cal F}_{t}\}_{t\geq 0},P)\)-semimartingale is also an \((\{{\cal H}_{t}\}_{t\geq 0},P)\)-semimartingale.
Proof. Since \({\cal A}\) is finite, one can always find a finite partition \({\cal B}\) of \(\Omega\) such that \({\cal H}_{t}={\cal F}_{t}\vee {\cal B}\). The result then follows by Proposition \ref{prot25}. \(\blacksquare\)
Note that if \(W=\{W_{t}\}_{t\geq 0}\) is a Brownian motion for a filtration \(\{{\cal F}_{t}\}_{t\geq 0}\), by Proposition \ref{prot25}, we are able to add an infinite number of “future” events to the filtration and \(W\) will no longer be a martingale, but it will stay a semimartingale. This has interesting implications in Finance Theory, the theory of continuous trading. See for example Duffie and Huang \cite{duf}.
The process \(X\) stopped at \(T-\) is defined by \(X_{t}^{T-}=X_{t}\cdot 1_{\{0\leq t<T\}}+X_{T-}\cdot 1_{\{t\geq T\}}\), where \(X_{0-}=0\). The Corollary of the next proposition states that being a semimartingale is a “local” property; that is, a local semimartingale is a semimartingale. We state a stronger result by stopping at \(T_{n}-\) rather than at \(T_{n}\) in the next proposition.
Proposition. Let \(X\) be a RCLL, adapted process; let \(\{T_{n}\}_{n\in {\bf N}}\) be a sequence of positive random variables increasing to \(\infty\) a.s.; and let \(\{X_{n}\}_{n\in {\bf N}}\) be a sequence of semimartingales such that, for each \(n\), \(X^{T_{n}-}=(X_{n})^{T_{n}-}\). Then \(X\) is a semimartingale. \(\sharp\)
\begin{equation}{\label{proc1}}\tag{49}\mbox{}\end{equation}
Corollary \ref{proc1}. Let \(X\) be a process. If there exists a sequence \(\{T_{n}\}_{n\in {\bf N}}\) of stopping times increasing to \(\infty\) a.s. such that \(X^{T_{n}}\) or
$X^{T_{n}}\cdot 1_{\{T_{n}>0\}}$ is a semimartingale for each \(n\), then \(X\) is also a semimartingale. \(\sharp\)
\begin{equation}{\label{prot27}}\tag{50}\mbox{}\end{equation}
Proposition \ref{prot27}. Each adapted process with RCLL paths of finite variation on compacts (of finite total variation) is a semimartingale (a total semimartingale).
Proof. It suffices to observe that
\[|I_{X}(H)|\leq\parallel H\parallel_{u}\cdot\int_{0}^{\infty}|dX_{s}|,\]
where \(\int_{0}^{\infty}|dX_{s}|\) denotes the Lebesgue-Stieltjes total variation and where \(\parallel H\parallel_{u}=\sup_{(t,\omega )}|H(t,\omega )|\). \(\blacksquare\)
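The displayed bound can be checked on a toy deterministic path (all numbers below are hypothetical):

```python
# Check |I_X(H)| <= ||H||_u * (total variation of X) on one fixed path,
# using the elementary integral I_X(H) = sum_i H_i * (X_{T_{i+1}} - X_{T_i})
# (X_0 = 0 here, so the H_0 * X_0 term vanishes).
X_at_T = [0.0, 3.0, -1.0, 2.0]          # X at T_1 < T_2 < T_3 < T_4
H = [1.0, -2.0, 0.5]                    # |H_i| <= ||H||_u = 2
increments = [b - a for a, b in zip(X_at_T, X_at_T[1:])]

I_X = sum(h * dx for h, dx in zip(H, increments))
H_sup = max(abs(h) for h in H)
total_variation = sum(abs(dx) for dx in increments)

print(abs(I_X) <= H_sup * total_variation)
```

The inequality is simply the triangle inequality applied term by term, which is why uniform convergence of integrands gives convergence of the integrals pathwise.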
\begin{equation}{\label{prot28}}\tag{51}\mbox{}\end{equation}
Proposition \ref{prot28}. Each square integrable martingale with RCLL paths is a semimartingale.
Proof. Let \(X\) be a square integrable martingale with \(X_{0}=0\), and let \(H\in {\bf S}\) have the representation above. The cross terms in the second line below vanish because martingale increments over disjoint intervals are orthogonal:
\begin{align*}
\mathbb{E}\left [(I_{X}(H))^{2}\right ] & =\mathbb{E}\left [\left (\sum_{i=1}^{n}H_{i}\cdot (X_{T_{i+1}}-X_{T_{i}})\right )^{2}\right ]\\
& =\mathbb{E}\left [\sum_{i=1}^{n}H_{i}^{2}\cdot (X_{T_{i+1}}-X_{T_{i}})^{2}\right ]\\
& \leq\parallel H\parallel_{u}^{2}\cdot \mathbb{E}\left [\sum_{i=1}^{n}(X_{T_{i+1}}-X_{T_{i}})^{2}\right ]\\
& =\parallel H\parallel_{u}^{2}\cdot \mathbb{E}\left [\sum_{i=1}^{n}(X_{T_{i+1}}^{2}-X_{T_{i}}^{2})\right ]\\
& =\parallel H\parallel_{u}^{2}\cdot \mathbb{E}[X_{T_{n+1}}^{2}]\\
& \leq\parallel H\parallel_{u}^{2}\cdot \mathbb{E}[X_{\infty}^{2}].
\end{align*}
This completes the proof. \(\blacksquare\)
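The chain of (in)equalities above can be verified exactly on a two-step \(\pm 1\) martingale by enumerating all four outcomes (the integrand below is an arbitrary predictable choice):

```python
from itertools import product

# Exact check of E[(I_X(H))^2] <= ||H||_u^2 * E[X_{T_{n+1}}^2] for the
# martingale X_k = sum of k fair +/-1 steps (X_0 = 0), sampled at times
# 0, 1, 2, with a predictable integrand: H_1 is a constant and H_2 may
# depend on the first step.
H1 = 1.5
def H2(first_step):          # predictable: known at time 1
    return -0.5 if first_step == 1 else 2.0

lhs = 0.0    # accumulates E[(I_X(H))^2]
rhs_E = 0.0  # accumulates E[X_2^2]
for x1, x2 in product([-1, 1], repeat=2):   # four equally likely paths
    I = H1 * x1 + H2(x1) * x2               # increments are x1 and x2
    lhs += I * I / 4
    rhs_E += (x1 + x2) ** 2 / 4
H_sup = max(abs(H1), abs(H2(1)), abs(H2(-1)))
print(lhs <= H_sup**2 * rhs_E)
```

The cross term \(2H_{1}H_{2}x_{1}x_{2}\) averages to zero over the enumeration, which is the orthogonality of martingale increments used in the proof.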
\begin{equation}{\label{proc21}}\tag{52}\mbox{}\end{equation}
Corollary \ref{proc21}. Each RCLL, locally square integrable local martingale is a semimartingale.
Proof. Apply Proposition \ref{prot28} together with Corollary \ref{proc1}. \(\blacksquare\)
Corollary. A local martingale with continuous paths is a semimartingale.
Proof. Stopped at \(T_{n}=\inf\{t:|M_{t}|\geq n\}\), a continuous local martingale is bounded and hence, by Proposition \ref{prot147}, a (square integrable) martingale; thus the process is a locally square integrable local martingale, and Corollary \ref{proc21} applies. \(\blacksquare\)
Corollary. The Wiener process, that is, Brownian motion, is a semimartingale.
Proof. The Wiener process is a martingale with continuous paths if \(W_{0}\) is integrable; in any case it is a continuous local martingale, so the preceding corollary applies. \(\blacksquare\)
Definition. We will say an adapted process \(X\) with RCLL paths is decomposable if it can be decomposed as \(X_{t}=X_{0}+M_{t}+A_{t}\), where \(M_{0}=A_{0}=0\), \(M\) is a locally square integrable martingale, and \(A\) is RCLL, adapted, with paths of finite variation on compacts. \(\sharp\)
Proposition. A decomposable process is a semimartingale.
Proof. Let \(X_{t}=X_{0}+M_{t}+A_{t}\) be a decomposition of \(X\). Then \(M\) is a semimartingale by Corollary \ref{proc21}, and \(A\) is a semimartingale by Proposition \ref{prot27}. Since semimartingales form a vector space by Proposition \ref{prot21}, we have the result. \(\blacksquare\)


