The topics are: (A) discrete-parameter martingales; (B) stopping times and optional stopping; (C) the Snell envelope.
Let \((\Omega ,{\cal F},P)\) be a probability space. We wish to model a situation involving randomness unfolding with time. Suppose, for simplicity, that information is never lost or forgotten: thus, as time increases we learn more. We recall that \(\sigma\)-algebras represent information or knowledge. We thus need a sequence of \(\sigma\)-algebras \(\{{\cal F}_{n}\}\) which is increasing, \({\cal F}_{n}\subseteq {\cal F}_{n+1}\) for \(n=0,1,2,\cdots\), with \({\cal F}_{n}\) representing the information or knowledge available to us at time \(n\). We shall always suppose all \(\sigma\)-algebras to be complete. Thus \({\cal F}_{0}\) represents the initial information (if there is none, \({\cal F}_{0}=\{\emptyset ,\Omega\}\), the trivial \(\sigma\)-algebra). On the other hand,
\[{\cal F}_{\infty}\equiv\lim_{n\rightarrow\infty}{\cal F}_{n}=\sigma\Bigl(\bigcup_{n\geq 0}{\cal F}_{n}\Bigr)\]
represents all we ever will know. Often \({\cal F}_{\infty}\) will be \({\cal F}\). Such a family \(\{{\cal F}_{n}\}\) is called a filtration. A probability space endowed with such a filtration, \((\Omega ,{\cal F},\{{\cal F}_{n}\},P)\) is called a filtered probability space.
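For instance, if \(\Omega =\{H,T\}^{N}\) records \(N\) tosses of a coin and \(X_{n}(\omega )\) is the outcome of the \(n\)-th toss, then \({\cal F}_{n}=\sigma (X_{1},\cdots ,X_{n})\) consists of exactly those events that can be decided by observing the first \(n\) tosses; here \({\cal F}_{0}=\{\emptyset ,\Omega\}\) and (taking \({\cal F}=2^{\Omega}\)) \({\cal F}_{N}={\cal F}\).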
\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}
Discrete-Parameter Martingales.
A stochastic process \(X=\{X_{t}:t\in I\}\) is a family of random variables defined on some common probability space. The stochastic process \(X=\{X_{n}\}_{n=0}^{\infty}\) is said to be adapted to the filtration \(\{{\cal F}_{n}\}_{n=0}^{\infty}\) if \(X_{n}\) is \({\cal F}_{n}\)-measurable for each \(n\). So if \(X\) is adapted, we will know the value of \(X_{n}\) at time \(n\). If \({\cal F}_{n}=\sigma (X_{0},X_{1},\cdots ,X_{n})\) we call \(\{{\cal F}_{n}\}\) the natural filtration of \(X\). Thus a process is always adapted to its natural filtration.
Definition. A process \(X=\{X_{n}\}\) is called a martingale relative to \((\{{\cal F}_{n}\},P)\) when
(a) \(X\) is adapted to \(\{{\cal F}_{n}\}\);
(b) \(\mathbb{E}[|X_{n}|]<\infty\) for all \(n\);
(c) \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]=X_{n-1}\) \(\mathbb{P}\)-a.s. for \(n\geq 1\).
$X$ is a supermartingale when (c) is replaced by \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]\leq X_{n-1}\) \(\mathbb{P}\)-a.s. for \(n\geq 1\), and \(X\) is a submartingale when (c) is replaced by \(\mathbb{E}[X_{n}|{\cal F}_{n-1}]\geq X_{n-1}\) \(\mathbb{P}\)-a.s. for \(n\geq 1\). \(\sharp\)
$X$ is a submartingale (resp. supermartingale) if and only if \(-X\) is a supermartingale (resp. submartingale). \(X\) is a martingale if and only if it is both a supermartingale and a submartingale. \(\{X_{n}\}\) is a martingale if and only if \(\{X_{n}-X_{0}\}\) is a martingale, so we may without loss of generality take \(X_{0}=0\) when convenient. If \(X\) is a martingale, then for \(m<n\), using the tower property of conditional expectation and the martingale property repeatedly,
\[\mathbb{E}[X_{n}|{\cal F}_{m}]=\mathbb{E}[\mathbb{E}[X_{n}|{\cal F}_{n-1}]|{\cal F}_{m}]=\mathbb{E}[X_{n-1}|{\cal F}_{m}]=\cdots =\mathbb{E}[X_{m}|{\cal F}_{m}]=X_{m},\]
and similarly for supermartingales and submartingales.
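A standard example to keep in mind: let \(\xi_{1},\xi_{2},\cdots\) be independent integrable random variables with \(\mathbb{E}[\xi_{k}]=0\), and set \(S_{0}=0\), \(S_{n}=\xi_{1}+\cdots +\xi_{n}\), \({\cal F}_{n}=\sigma (\xi_{1},\cdots ,\xi_{n})\). Then
\[\mathbb{E}[S_{n}|{\cal F}_{n-1}]=S_{n-1}+\mathbb{E}[\xi_{n}|{\cal F}_{n-1}]=S_{n-1}+\mathbb{E}[\xi_{n}]=S_{n-1},\]
so \(\{S_{n}\}\) is a martingale; if instead \(\mathbb{E}[\xi_{k}]\leq 0\) (resp. \(\geq 0\)) for all \(k\), it is a supermartingale (resp. submartingale).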
Proposition. Let \(\{X_{n}\}\) be an adapted process with each \(X_{n}\in L^{1}\). Then \(X\) has an \((\)essentially unique$)$ Doob decomposition
\[X_{n}=X_{0}+M_{n}+A_{n}\mbox{ for all }n\]
with \(\{M_{n}\}\) a martingale null at zero, \(\{A_{n}\}\) a predictable process null at zero. If \(\{X_{n}\}\) is also a submartingale, then \(\{A_{n}\}\) is increasing, i.e., \(A_{n}\leq A_{n+1}\) a.s. for all \(n\). \(\sharp\)
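The decomposition can be written down explicitly: taking
\[A_{n}=\sum_{k=1}^{n}\mathbb{E}[X_{k}-X_{k-1}|{\cal F}_{k-1}],\qquad M_{n}=X_{n}-X_{0}-A_{n},\]
\(A_{n}\) is \({\cal F}_{n-1}\)-measurable, and \(\mathbb{E}[M_{n}-M_{n-1}|{\cal F}_{n-1}]=\mathbb{E}[X_{n}-X_{n-1}|{\cal F}_{n-1}]-(A_{n}-A_{n-1})=0\), so \(\{M_{n}\}\) is a martingale null at zero; when \(X\) is a submartingale each summand in \(A_{n}\) is nonnegative, which gives the last assertion.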
Now think of a gambling game in discrete time. There is no play at time \(0\); there are plays at times \(n=1,2,\cdots ,\) and \(\Delta X_{n}=X_{n}-X_{n-1}\) represents the net winnings per unit stake at play \(n\). Thus if \(\{X_{n}\}\) is a martingale, the game is “fair on average”.
We call a process \(C=\{C_{n}\}_{n=1}^{\infty}\) predictable (or previsible) if \(C_{n}\) is \({\cal F}_{n-1}\)-measurable for all \(n\geq 1\). Think of \(C_{n}\) as the stake on play \(n\). Previsibility says that we have to decide how much to stake on play \(n\) based on the history before time \(n\) (i.e. up to and including play \(n-1\)). The winnings on game \(n\) are \(C_{n}\Delta X_{n}=C_{n}(X_{n}-X_{n-1})\). The total winnings up to time \(n\)
are
\[Y_{n}=\sum_{k=1}^{n}C_{k}\Delta X_{k}=\sum_{k=1}^{n}C_{k}(X_{k}-X_{k-1}).\]
We write \(Y=C\bullet X\), \(Y_{n}=(C\bullet X)_{n}\), and \(\Delta Y_{n}=C_{n}\Delta X_{n}\). We then call \(C\bullet X\) the martingale transform of \(X\) by \(C\).
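Since \(C_{n}\) is bounded and \({\cal F}_{n-1}\)-measurable, it can be taken out of the conditional expectation (``taking out what is known''):
\[\mathbb{E}[\Delta Y_{n}|{\cal F}_{n-1}]=\mathbb{E}[C_{n}\Delta X_{n}|{\cal F}_{n-1}]=C_{n}\mathbb{E}[\Delta X_{n}|{\cal F}_{n-1}],\]
which is zero when \(X\) is a martingale, and is \(\leq 0\) when \(X\) is a supermartingale and \(C_{n}\geq 0\). This is the content of the following proposition.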
\begin{equation}{\label{binp18}}\tag{1}\mbox{}\end{equation}
Proposition \ref{binp18}. We have the following properties.
(i) If \(C\) is a bounded nonnegative predictable process and \(X\) is a supermartingale, \(C\bullet X\) is a supermartingale null at zero.
(ii) If \(C\) is bounded and predictable and \(X\) is a martingale, then \(C\bullet X\) is a martingale null at zero. \(\sharp\)
Proposition. An adapted sequence of real integrable random variables \(\{M_{n}\}\) is a martingale if and only if for any bounded previsible sequence \(\{H_{n}\}\)
\[E\left [\sum_{k=1}^{n}H_{k}\Delta M_{k}\right ]=0\mbox{ for }n=1,2,\cdots . \sharp\]
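As a numerical check on this proposition, the following minimal Python sketch (the strategy and parameters are purely illustrative) simulates a fair coin-tossing game \(X\) together with a bounded predictable stake \(H\) (``double after a loss, reset after a win'', capped), and estimates \(\mathbb{E}[\sum_{k}H_{k}\Delta X_{k}]\):
\begin{verbatim}
# Minimal sketch (illustrative names): estimate E[sum_k H_k (X_k - X_{k-1})]
# for a fair coin-tossing game X and a bounded predictable stake H_n that
# depends only on the history X_0, ..., X_{n-1}.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 20

# Increments of the fair game: Delta X_k = +1 or -1 with probability 1/2.
dX = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))

Y = np.zeros(n_paths)      # running value of the transform (H . X)_n
stake = np.ones(n_paths)   # H_1 = 1 (F_0-measurable, here a constant)
for k in range(n_steps):
    Y += stake * dX[:, k]
    # Next stake is chosen from the history up to play k ("double after a
    # loss, reset after a win"), capped so that H stays bounded.
    stake = np.where(dX[:, k] < 0, np.minimum(2 * stake, 8.0), 1.0)

print(Y.mean())   # close to 0, consistent with E[(H . X)_n] = 0
\end{verbatim}
The sample mean is close to zero, as the proposition requires: no bounded previsible strategy can make a fair game favourable on average.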
\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}
Stopping Times and Optional Stopping.
A random variable \(T\) taking values in \(\{0,1,2,\cdots\}\) is called a stopping time (or optional time) when
\[\{T\leq n\}=\{\omega :T(\omega )\leq n\}\in {\cal F}_{n}\mbox{ for all }n=0,1,2,\cdots .\]
Equivalently,
\[\{T=n\}\in {\cal F}_{n}\mbox{ for all }n,\mbox{ or }\{T>n\}\in {\cal F}_{n}\mbox{ for all }n.\]
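For example, if \(\{X_{n}\}\) is adapted and \(B\) is a Borel set, the first hitting time \(T=\inf\{n\geq 0:X_{n}\in B\}\) (with the usual convention \(\inf\emptyset\equiv +\infty\), if stopping times are allowed the value \(+\infty\)) is a stopping time, since \(\{T\leq n\}=\bigcup_{k=0}^{n}\{X_{k}\in B\}\in {\cal F}_{n}\); by contrast, the last visit \(L=\sup\{n\leq N:X_{n}\in B\}\) is in general not a stopping time, since deciding whether \(L=n\) requires knowledge of the process after time \(n\).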
Think of \(T\) as a time at which you decide to quit a gambling game: whether or not you quit at time \(n\) depends only on the history up to and including time \(n\), not the future. Thus stopping times model gambling and other situations where there is no foreknowledge, or prescience of the future; in particular, in a financial context, where there is no insider trading.
Theorem (Doob’s Optional Stopping Theorem). Let \(T\) be a stopping time, \(\{X_{n}\}\) be a supermartingale, and assume that one of the following holds.
- \(T\) is bounded \((T(\omega )\leq K\) for some constant \(K\) and all \(\omega\in\Omega )\).
- \(\{X_{n}\}\) is bounded \((|X_{n}(\omega )|\leq K\) for some \(K\) and all \(n,\omega )\).
- \(\mathbb{E}[T]<\infty\) and the increments are bounded \((|X_{n}(\omega )-X_{n-1}(\omega )|\leq K\) for some \(K\) and all \(n,\omega )\).
Then \(X_{T}\) is integrable and \(\mathbb{E}[X_{T}]\leq \mathbb{E}[X_{0}]\). If \(\{X_{n}\}\) is a martingale, then \(\mathbb{E}[X_{T}]=\mathbb{E}[X_{0}]\). \(\sharp\)
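As a numerical illustration of the first case, the following minimal Python sketch (the parameters are illustrative) takes \(X\) to be a simple symmetric random walk started at \(0\) and \(T\) the bounded stopping time \(\min\{\inf\{n:X_{n}=3\},25\}\), and estimates \(\mathbb{E}[X_{T}]\):
\begin{verbatim}
# Minimal sketch (illustrative names): check E[X_T] = E[X_0] for a simple
# symmetric random walk X and the bounded stopping time
# T = min(first time X hits +3, 25).
import numpy as np

rng = np.random.default_rng(1)
n_paths, horizon, barrier = 200_000, 25, 3

steps = rng.choice([-1, 1], size=(n_paths, horizon))
X = np.cumsum(steps, axis=1)             # X_1, ..., X_horizon; X_0 = 0

hit = X >= barrier
first_hit = np.where(hit.any(axis=1), hit.argmax(axis=1), horizon - 1)
X_T = X[np.arange(n_paths), first_hit]   # value of the walk when stopped

print(X_T.mean())   # close to E[X_0] = 0
\end{verbatim}
The estimate is close to \(\mathbb{E}[X_{0}]=0\), as the theorem asserts; without the boundedness of \(T\) (for instance, waiting until the walk first hits \(3\), however long that takes, so that \(X_{T}=3\) a.s.) the conclusion fails.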
We write \(X_{n}^{T}\equiv X_{n\wedge T}\) for the sequence \(\{X_{n}\}\)
stopped at time \(T\).
Proposition. We have the following properties.
(i) If \(\{X_{n}\}\) is adapted and \(T\) is a stopping time, the stopped sequence \(\{X_{n\wedge T}\}\) is adapted.
(ii) If \(\{X_{n}\}\) is a martingale \((\)resp. supermartingale$)$ and \(T\) is a stopping time, \(\{X_{n\wedge T}\}\) is a martingale (resp. supermartingale). \(\sharp\)
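The second assertion is really a case of Proposition \ref{binp18}: with \(C_{k}\equiv 1_{\{k\leq T\}}\) we have \(X_{n\wedge T}-X_{0}=(C\bullet X)_{n}\), and \(C\) is bounded, nonnegative and predictable, since \(\{k\leq T\}=\{T\leq k-1\}^{c}\in {\cal F}_{k-1}\).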
\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}
The Snell Envelope.
Definition. If \(\{Z_{n}\}_{n=0}^{N}\) is a sequence of integrable random variables adapted to a filtration \(\{{\cal F}_{n}\}_{n=0}^{N}\), the sequence \(\{U_{n}\}_{n=0}^{N}\) defined by
\[\left\{\begin{array}{l}
U_{N}\equiv Z_{N},\\
U_{n}\equiv\max\{Z_{n},\mathbb{E}[U_{n+1}|{\cal F}_{n}]\}\mbox{ for }n\leq N-1
\end{array}\right .\]
is called the Snell envelope of \(\{Z_{n}\}\). \(\sharp\)
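As a concrete illustration, here is a minimal Python sketch (the tree, payoff and parameters are purely illustrative, not a pricing model) computing the Snell envelope by the backward induction above on a recombining binomial tree, where the conditional expectation \(\mathbb{E}[U_{n+1}|{\cal F}_{n}]\) reduces to an average over the two successor nodes:
\begin{verbatim}
# Minimal sketch: Snell envelope of Z_n = g(S_n) on a recombining binomial
# tree, S moving up or down with probability 1/2, F_n generated by the moves.
# U_N = Z_N and U_n = max(Z_n, E[U_{n+1} | F_n]) by backward induction.

def g(s):
    return max(95.0 - s, 0.0)   # illustrative payoff Z_n = g(S_n)

N, S0 = 3, 100.0
up, down, p = 1.1, 0.9, 0.5

# node (n, j): j up-moves out of n, so S = S0 * up**j * down**(n - j)
S = [[S0 * up ** j * down ** (n - j) for j in range(n + 1)] for n in range(N + 1)]
Z = [[g(s) for s in row] for row in S]

U = [row[:] for row in Z]                  # start from U_N = Z_N
for n in range(N - 1, -1, -1):
    for j in range(n + 1):
        cont = p * U[n + 1][j + 1] + (1 - p) * U[n + 1][j]  # E[U_{n+1}|F_n]
        U[n][j] = max(Z[n][j], cont)       # U_n = max(Z_n, E[U_{n+1}|F_n])

print(U[0][0])                             # U_0, the value of the problem
\end{verbatim}
In a financial application one would use discounted payoffs and the risk-neutral probabilities in place of \(p=1/2\); the numbers here are chosen only to show the recursion.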
Theorem. The Snell envelope \(\{U_{n}\}\) of \(\{Z_{n}\}\) is a supermartingale, and is the smallest supermartingale dominating \(\{Z_{n}\}\); i.e., \(U_{n}\geq Z_{n}\) for all \(n\). \(\sharp\)
Proposition. \(T_{0}\equiv\inf\{n\geq 0:U_{n}=Z_{n}\}\) is a stopping time, and the stopped sequence \(\{U_{n\wedge T_{0}}\}\) is a martingale. \(\sharp\)
We write \({\cal T}_{n,N}\) for the set of stopping times taking values in \(\{n,n+1,\cdots ,N\}\). We next see that the Snell envelope solves the optimal stopping problem.
Proposition. \(T_{0}\) solves the optimal stopping problem for \(\{Z_{n}\}\)
\[U_{0}=\mathbb{E}[Z_{T_{0}}|{\cal F}_{0}]=\sup\{\mathbb{E}[Z_{T}|{\cal F}_{0}]:T\in {\cal T}_{0,N}\}. \sharp\]
The same argument, starting at time \(n\) rather than time \(0\), gives
Corollary. If \(T_{n}\equiv\inf\{j\geq n:U_{j}=Z_{j}\}\), then
\[U_{n}=\mathbb{E}[Z_{T_{n}}|{\cal F}_{n}]=\sup\{\mathbb{E}[Z_{T}|{\cal F}_{n}]:T\in {\cal T}_{n,N}\}. \sharp\]
As we are attempting to maximize the payoff by stopping \(\{Z_{n}\}\) at the most advantageous time, the Corollary shows that \(T_{n}\) is the best stopping time that is realistic: it maximizes the expected payoff given only the information currently available. We thus call \(T_{0}\) (or \(T_{n}\)) the optimal stopping time for the problem.
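As a toy illustration, suppose \(N=1\), \({\cal F}_{0}\) is trivial, \(Z_{0}=1\), and \(Z_{1}=3\) or \(0\) each with probability \(1/2\). Then \(U_{1}=Z_{1}\) and \(U_{0}=\max\{Z_{0},\mathbb{E}[U_{1}]\}=\max\{1,3/2\}=3/2\), so \(U_{0}>Z_{0}\) and \(T_{0}=1\): it is optimal not to stop at time \(0\), and indeed \(\mathbb{E}[Z_{T_{0}}]=3/2=U_{0}\), whereas stopping immediately yields only \(Z_{0}=1\).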


