Quadratic Variations


The topics are

\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}

Quadratic Variations Defined by Convergence in Probability.

Definition. A process \(A\) is increasing (resp. of finite variation) if it is adapted and the paths \(t\mapsto A_{t}(\omega )\) are finite, right-continuous and increasing (resp. of finite variation) for almost every \(\omega\). \(\sharp\)

We will denote by \({\cal A}^{+}\) (resp. \({\cal A}\)) the space of increasing (resp. finite variation) processes. We see that \({\cal A}^{+}\subset {\cal A}\). Conversely, any element \(A\in {\cal A}\) can be written as \(A_{t}=A^{+}_{t}-A^{-}_{t}\), where \(A^{+}\) and \(A^{-}\) are in \({\cal A}^{+}\). Moreover, \(A^{+}\) and \(A^{-}\) can be chosen so that, for almost every \(\omega\), \(A_{t}^{+}(\omega )-A_{t}^{-}(\omega )\) is the minimal decomposition of \(A_{t}(\omega )\). The process \(\int_{0}^{t}|dA_{s}|=A_{t}^{+}+A_{t}^{-}\) is in \({\cal A}^{+}\) and for a.e. \(\omega\) the measure associated with it is the total variation of the measure associated with \(A(\omega )\); it is called the variation of \(A\).

One can clearly integrate appropriate functions with respect to the measure associated with \(A(\omega )\). More precisely, if \(X\) is progressively measurable and bounded on every interval \([0,t]\) for a.e. \(\omega\), one can define, for a.e. \(\omega\), the Stieltjes integral
\begin{equation}{\label{reveq*16}}\tag{1}
(X\bullet A)_{t}(\omega )=\int_{0}^{t}X_{s}(\omega )dA_{s}(\omega ).
\end{equation}
If \(\omega\) is in the set where \(A_{t}(\omega )\) is not of finite variation or \(X_{t}(\omega )\) is not locally integrable with respect to \(dA(\omega )\), we put \((X\bullet A)_{t}(\omega )=0\). We will have no difficulty in checking that the process \(X\bullet A\) thus defined is in \({\cal A}\). The hypothesis that \(X\) be progressively measurable is made precisely to ensure that \(X\bullet A\) is adapted.

\begin{equation}{\label{revp412}}\tag{2}\mbox{}\end{equation}

Proposition \ref{revp412}. A continuous martingale \(M\) cannot be in \({\cal A}\) unless it is constant. \(\sharp\)

Because of the above proposition, we will not be able to define integrals with respect to \(M\) by a path-by-path procedure. We will have to use a global method in which the notions we are about to introduce play a crucial role. If \(\pi=\{0=t_{0}<t_{1}<t_{2}<\cdots\}\) is a subdivision of \(\mathbb{R}_{+}\) with only a finite number of points in each interval \([0,t]\), we define, for a process \(X\),
\[Q_{t}^{\pi}(X)=\sum_{i=0}^{n-1}(X_{t_{i+1}}-X_{t_{i}})^{2}+(X_{t}-X_{t_{n}})^{2}\]
where \(n\) is such that \(t_{n}\leq t<t_{n+1}\). We say that \(X\) is of finite quadratic variation if there exists a process \(\langle X\rangle\) such that, for each \(t\), \(Q_{t}^{\pi}(X)\) converges in probability to \(\langle X\rangle_{t}\) as the mesh of \(\pi\) on \([0,t]\) goes to zero.
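As a numerical illustration of this definition (not part of the original text), one can simulate a Brownian path, for which \(\langle W\rangle_{t}=t\), and compute \(Q_{t}^{\pi}(W)\) on a fine partition; the sums cluster near \(t\). A minimal sketch in Python with NumPy, all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 200_000                     # horizon and number of partition intervals
dt = t / n

# Simulate a Brownian path on the partition 0 = t_0 < t_1 < ... < t_n = t.
increments = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate([[0.0], np.cumsum(increments)])

# Q_t^pi(W): the sum of squared increments over the partition.
Q = np.sum(np.diff(W) ** 2)
print(Q)                                # close to <W>_t = t = 1.0
```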

\begin{equation}{\label{revt413}}\tag{3}\mbox{}\end{equation}

Theorem \ref{revt413}. A continuous and bounded martingale \(M\) is of finite quadratic variation and \(\langle M\rangle\) is the unique continuous increasing adapted process vanishing at zero such that \(M^{2}-\langle M\rangle\) is a martingale. \(\sharp\)

Proposition. For every stopping time \(T\), we have \(\langle M^{T}\rangle =\langle M\rangle^{T}\).

Proof. By optional stopping, \((M^{T})^{2}-\langle M\rangle^{T}\) is a martingale, so that the result is a consequence of the uniqueness in Theorem \ref{revt413}. \(\sharp\)

Interesting as it is, Theorem \ref{revt413} is not sufficient for our purposes; it does not cover, for instance, the case of Brownian motion \(W\), which is not a bounded martingale.

Theorem. If \(M\) is a continuous local martingale, there exists a unique increasing continuous process \(\langle M\rangle\), vanishing at zero, such that \(M^{2}-\langle M\rangle\) is a continuous local martingale. Moreover, for every \(t\) and for any sequence \(\{\pi_{n}\}\) of partitions of \([0,t]\) such that \(\parallel\pi_{n}\parallel\rightarrow 0\), the random variables
\[\sup_{s\leq t}\left |Q_{s}^{\pi_{n}}(M)-\langle M\rangle_{s}\right |\]
converge to zero in probability. \(\sharp\)

The above theorem may still be further extended by polarization.

\begin{equation}{\label{revt419}}\tag{4}\mbox{}\end{equation}

Theorem \ref{revt419}. If \(M\) and \(N\) are two continuous local martingales, there exists a unique continuous process \(\langle M,N\rangle\) in \({\cal A}\), vanishing at zero and such that \(MN-\langle M,N\rangle\) is a local martingale. Moreover, for any \(t\) and any sequence \(\{\pi_{n}\}\) of partitions of \([0,t]\) such that \(\parallel\pi_{n}\parallel\rightarrow 0\),
\[P\mbox{-}\lim_{n\rightarrow\infty}\sup_{s\leq t}\left |\tilde{Q}_{s}^{\pi_{n}}-\langle M,N\rangle_{s}\right |=0,\]
where
\[\tilde{Q}_{s}^{\pi_{n}}=\sum_{t_{i}\in\pi_{n}}(M_{t_{i+1}}^{s}-M_{t_{i}}^{s})(N_{t_{i+1}}^{s}-N_{t_{i}}^{s})\]
(the stopped processes).

Proof. The uniqueness follows again from Proposition \ref{revp412} after suitable stopping. Moreover the process
\[\langle M,N\rangle =\frac{1}{4}[\langle M+N\rangle -\langle M-N\rangle]\]
is easily seen to have the desired properties. \(\blacksquare\)

Definition. The process \(\langle M,N\rangle\) is called the bracket of \(M\) and \(N\), and the process \(\langle M\rangle\) is called the increasing process associated with \(M\) or simply the increasing process of \(M\). \(\sharp\)

If \(M\) and \(N\) are independent, the product \(MN\) is a local martingale; hence \(\langle M,N\rangle =0\).

\begin{equation}{\label{revp411}}\tag{5}\mbox{}\end{equation}

Proposition \ref{revp411}.  If \(T\) is a stopping time, then we have \(\langle M^{T},N^{T}\rangle=\langle M,N^{T}\rangle =\langle M,N\rangle^{T}\).

Proof. This is an obvious consequence of the last part of Theorem \ref{revt419}. As an exercise, we may also observe that \(M^{T}N^{T}-\langle M,N\rangle^{T}\) and \(M^{T}(N-N^{T})\) are local martingales, hence by difference, so is \(M^{T}N-\langle M,N\rangle^{T}\). \(\sharp\)

The map \((M,N)\mapsto\langle M,N\rangle\) is bilinear, symmetric and \(\langle M\rangle\geq 0\); it is also non-degenerate as is shown by the following.

Proposition. \(\langle M\rangle =0\) if and only if \(M\) is constant, that is, \(M_{t}=M_{0}\) a.s. for every \(t\).

Proof. By Proposition \ref{revp411}, it is enough to consider the case of a bounded \(M\), and then by Theorem \ref{revt413}, \(\mathbb{E}[(M_{t}-M_{0})^{2}]=\mathbb{E}[\langle M\rangle_{t}]\); the result follows immediately. \(\blacksquare\)

The following inequality will be very useful in defining stochastic integrals. It shows in particular that \(d\langle M,N\rangle\) is absolutely continuous with respect to \(d\langle M\rangle\).

Definition. A real-valued process \(H\) is said to be measurable if the map \((t,\omega )\mapsto H_{t}(\omega )\) is \({\cal B}\times {\cal F}_{\infty}\)-measurable. \(\sharp\)

The class of measurable processes is obviously larger than the class of progressively measurable processes.

Proposition. For any two continuous local martingales \(M\) and \(N\) and measurable processes \(H\) and \(K\), the inequality
\[\int_{0}^{t}|H_{s}||K_{s}||d\langle M,N\rangle |_{s}\leq\left (\int_{0}^{t}H_{s}^{2}d\langle M\rangle_{s}\right )^{1/2}\left (\int_{0}^{t}K_{s}^{2}d\langle N\rangle_{s}\right )^{1/2}\]
holds a.s. for \(t\leq\infty\). \(\sharp\)

Corollary. (Kunita-Watanabe Inequality). For every \(p\geq 1\) and \(1/p+1/q=1\),
\[\mathbb{E}\left [\int_{0}^{\infty}|H_{s}||K_{s}||d\langle M,N\rangle |_{s}\right ]\leq\left |\!\left |\left (\int_{0}^{\infty}H_{s}^{2}d\langle M\rangle_{s}\right )^{1/2}\right |\!\right |_{p}\cdot\left |\!\left |\left (\int_{0}^{\infty}K_{s}^{2}d\langle N\rangle_{s}\right )^{1/2}\right |\!\right |_{q}.\]
Proof. Straightforward application of H\"{o}lder's inequality. \(\blacksquare\)
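To see the inequality at work in the simplest case, take \(M=N=W\), so that \(d\langle M\rangle =d\langle N\rangle =d\langle M,N\rangle =ds\) and the pathwise bound reduces to the Cauchy-Schwarz inequality in \(s\). A minimal numerical sketch (Python with NumPy; the integrands \(H,K\) are arbitrary illustrative arrays):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 10_000, 1e-3

# With M = N = W we have d<M> = d<N> = d<M,N> = ds, so the pathwise
# Kunita-Watanabe bound is exactly the Cauchy-Schwarz inequality in s.
H = rng.normal(size=n)                  # arbitrary illustrative integrands
K = rng.normal(size=n)

lhs = np.sum(np.abs(H) * np.abs(K)) * dt
rhs = np.sqrt(np.sum(H ** 2) * dt) * np.sqrt(np.sum(K ** 2) * dt)
print(lhs <= rhs)                       # True
```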

Definition. A continuous semimartingale is a continuous process \(X\) which can be written as \(X=M+A\) where \(M\) is a continuous local martingale and \(A\) a continuous adapted process of finite variation. \(\sharp\)

The decomposition into a local martingale and a finite variation process is unique as follows readily from Proposition \ref{revp412}; however, if a process \(X\) is a continuous semimartingale in two different filtrations \(\{{\cal F}_{t}\}_{t\geq 0}\) and \(\{{\cal G}_{t}\}_{t\geq 0}\), the decomposition may be different even if \({\cal F}_{t}\subset {\cal G}_{t}\) for each \(t\).

Proposition. A continuous semimartingale \(X=M+A\) has a finite quadratic variation and \(\langle X\rangle =\langle M\rangle\).

Proof. If \(\pi\) is a partition of \([0,t]\),
\[\left |\sum_{i}(M_{t_{i+1}}-M_{t_{i}})(A_{t_{i+1}}-A_{t_{i}})\right |\leq\left (\sup_{i}|M_{t_{i+1}}-M_{t_{i}}|\right )\cdot V_{t}(A)\]
where \(V_{t}(A)\) is the variation of \(A\) on \([0,t]\), and this converges to zero when \(\parallel\pi\parallel\) tends to zero because of the continuity of \(M\). Likewise
\[\lim_{\parallel\pi\parallel\rightarrow 0}\sum_{i}(A_{t_{i+1}}-A_{t_{i}})^{2}=0.\]
Note that from the bilinearity and symmetry, we have
\[\langle X\rangle =\langle M+A\rangle =\langle M\rangle +2\langle M,A\rangle +\langle A\rangle =\langle M\rangle ,\]
since the two limits above show that \(\langle M,A\rangle =\langle A\rangle =0\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{revr419}}\tag{6}\mbox{}\end{equation}

Remark \ref{revr419}. Since the process \(\langle X\rangle\) is the limit in probability of the sums \(Q^{\pi_{n}}(X)\), it does not change if we replace \(\{{\cal F}_{t}\}_{t\geq 0}\) by another filtration for which \(X\) is still a semimartingale and likewise if we change \(P\) for a probability measure \(Q\) such that \(Q\ll P\) and \(X\) is still a \(Q\)-semimartingale. \(\sharp\)

Definition. If \(X=M+A\) and \(Y=N+B\) are two continuous semimartingales, we define the bracket of \(X\) and \(Y\) by
\[\langle X,Y\rangle =\langle M,N\rangle =\frac{1}{4}[\langle X+Y\rangle -\langle X-Y\rangle ]. \sharp\]

Obviously, \(\langle X,Y\rangle_{t}\) is the limit in probability of \(\sum_{i}(X_{t_{i+1}}-X_{t_{i}})(Y_{t_{i+1}}-Y_{t_{i}})\), and more generally, if \(H\) is left-continuous and adapted,
\[P\mbox{-}\lim_{\parallel\pi\parallel\rightarrow 0}\sup_{s\leq t}\left |\sum_{i}
H_{t_{i}}(X_{t_{i+1}}^{s}-X_{t_{i}}^{s})(Y_{t_{i+1}}^{s}-Y_{t_{i}}^{s})-\int_{0}^{s}H_{u}d\langle X,Y\rangle_{u}\right |=0\]
(the stopped processes).

\begin{equation}{\label{revd*3}}\tag{7}\mbox{}\end{equation}

Definition \ref{revd*3}. We denote by \({\cal H}^{2}\) the space of \(L^{2}\)-bounded martingales, i.e. the space of martingales such that
\[\sup_{t}\mathbb{E}[M_{t}^{2}]<\infty .\]
We denote by \({\cal H}_{c}^{2}\) the subset of \(L^{2}\)-bounded continuous martingales, and \({\cal H}_{0}^{2}\) the subset of elements of \({\cal H}_{c}^{2}\) vanishing at zero. \(\sharp\)

A Brownian motion is not in \({\cal H}_{c}^{2}\), but it is when suitably stopped, for instance at a constant time. Bounded martingales are in \({\cal H}^{2}\). Moreover, by Doob’s inequality, \(M_{\infty}^{*}=\sup_{t}|M_{t}|\) is in \(L^{2}\) if \(M\in {\cal H}^{2}\); hence \(M\) is uniformly integrable and \(M_{t}=\mathbb{E}[M_{\infty}|{\cal F}_{t}]\) with \(M_{\infty}\in L^{2}\). This sets up a one-to-one correspondence between \({\cal H}^{2}\) and \(L^{2}(\Omega ,{\cal F}_{\infty},P)\).

Proposition. The space \({\cal H}^{2}\) is a Hilbert space for the norm
\[\parallel M\parallel_{{\cal H}^{2}}=\mathbb{E}[M_{\infty}^{2}]^{1/2}=\lim_{t\rightarrow\infty} \mathbb{E}[M_{t}^{2}]^{1/2},\]
and the set \({\cal H}_{c}^{2}\) is closed in \({\cal H}^{2}\).

Proof. The first statement is obvious; to prove the second, we consider a sequence \(\{M^{(n)}\}\) in \({\cal H}_{c}^{2}\) converging to \(M\) in \({\cal H}^{2}\). By Doob’s inequality
\[\mathbb{E}\left [\left (\sup_{t}|M_{t}^{(n)}-M_{t}|\right )^{2}\right ]\leq 4\cdot\parallel M^{(n)}-M\parallel_{{\cal H}^{2}}^{2};\]
as a result, one can extract a subsequence for which \(\sup_{t}|M_{t}^{(n_{k})}-M_{t}|\) converges to zero a.s., which proves that \(M\in {\cal H}_{c}^{2}\). \(\blacksquare\)

The mapping
\[M\mapsto\parallel M_{\infty}^{*}\parallel_{2}=\mathbb{E}\left [\left (\sup_{t}|M_{t}|\right )^{2}\right ]^{1/2}\]
is also a norm on \({\cal H}^{2}\); it is equivalent to \(\parallel\cdot\parallel_{{\cal H}^{2}}\) since obviously \(\parallel M\parallel_{{\cal H}^{2}}\leq\parallel M_{\infty}^{*}\parallel_{2}\) and by Doob’s inequality \(\parallel M_{\infty}^{*}\parallel_{2}\leq 2\parallel M\parallel_{{\cal H}^{2}}\), but it is no longer a Hilbert space norm.

Proposition. A continuous local martingale \(M\) is in \({\cal H}_{c}^{2}\) if and only if the following two conditions hold

  • \(M_{0}\in L^{2}\);
  • \(\langle M\rangle\) is integrable, i.e., \(\mathbb{E}[\langle M\rangle_{\infty}]<\infty\).

In that case, \(M^{2}-\langle M\rangle\) is uniformly integrable and for any pair \(S\leq T\) of stopping times
\[\mathbb{E}\left [M_{T}^{2}-M_{S}^{2}|{\cal F}_{S}\right ]=\mathbb{E}\left [(M_{T}-M_{S})^{2}|{\cal F}_{S}\right ]=\mathbb{E}\left [\langle M\rangle_{S}^{T}|{\cal F}_{S}\right ].\]

Corollary. If \(M\in {\cal H}_{0}^{2}\), then we have
\[\parallel M\parallel_{{\cal H}^{2}}=\parallel\langle M\rangle_{\infty}^{1/2}\parallel_{2}\equiv \mathbb{E}[\langle M\rangle_{\infty}]^{1/2}.\]

Proof. If \(M_{0}=0\), we have \(\mathbb{E}[M_{\infty}^{2}]=\mathbb{E}[\langle M\rangle_{\infty}]\) as is seen in the last proof. \(\blacksquare\)

We could have worked in exactly the same way on \([0,t]\) instead of \([0,\infty ]\).

Corollary. If \(M\) is a continuous local martingale, the following two conditions are equivalent

  • (a) \(M_{0}\in L^{2}\) and \(\mathbb{E}[\langle M\rangle_{t}]<\infty\);
  • (b) \(\{M_{s}:s\leq t\}\) is an \(L^{2}\)-bounded martingale. \(\sharp\)

It is not true that \(L^{2}\)-bounded local martingales are always martingales. Likewise, \(\mathbb{E}[\langle M\rangle_{\infty}]\) may be infinite for an \(L^{2}\)-bounded continuous local martingale.

Proposition. A continuous local martingale \(M\) converges a.s. as \(t\) goes to infinity, on the set \(\{\langle M\rangle_{\infty}<\infty\}\). \(\sharp\)

The following is taken from Klebaner \cite{kle}. The quadratic variation of \(X\) is defined by
\[\langle X\rangle_{t}=\lim_{n\rightarrow\infty}\sum_{i=0}^{k_{n}-1}\left |X_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\right |^{2},\]
where for each \(n\), \(\pi_{n}=\{t_{i}^{(n)}\}_{i=0}^{k_{n}}\) is a partition of \([0,t]\), and the limit is in probability, taken over all partitions with \(\parallel\pi_{n}\parallel\rightarrow 0\) as \(n\rightarrow\infty\).

The quadratic variation of the Ito integral \(I_{t}=\int_{0}^{t}X_{s}dW_{s}\) is given by
\begin{equation}{\label{kleeq420}}\tag{8}
\langle I\rangle_{t}=\int_{0}^{t}X_{s}^{2}ds,
\end{equation}
where \(W\) is a Brownian motion. It is easy to verify the result for simple processes. The general case can be proved by approximation by simple processes.
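A quick simulation consistent with (\ref{kleeq420}), taking \(X_{s}=W_{s}\) as the integrand (an illustrative choice, not from the text): the squared Ito increments of \(I\) accumulate to approximately \(\int_{0}^{t}W_{s}^{2}ds\).

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 200_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate([[0.0], np.cumsum(dW)])

X = W[:-1]                              # integrand X_s = W_s at left endpoints
dI = X * dW                             # Ito increments of I_t = int_0^t X_s dW_s

Q = np.sum(dI ** 2)                     # quadratic variation sum of I
target = np.sum(X ** 2) * dt            # Riemann sum for int_0^t X_s^2 ds
print(Q, target)                        # the two values nearly agree
```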

The quadratic covariation of \(X\) and \(Y\) on \([0,t]\) is defined by
\[\langle X,Y\rangle_{t}=\frac{1}{2}\left (\langle X+Y\rangle_{t}-\langle X\rangle_{t}-\langle Y\rangle_{t}\right ).\]
If \(I^{(1)}_{t}\) and \(I^{(2)}_{t}\) are Ito integrals of \(X^{(1)}_{t}\) and \(X^{(2)}_{t}\) with respect to the same Brownian motion \(W_{t}\), then clearly \(I^{(1)}_{t}+I^{(2)}_{t}\) is the Ito integral of \(X^{(1)}_{t}+X^{(2)}_{t}\) with respect to \(W_{t}\). Using (\ref{kleeq420}) and polarization, it follows that
\[\langle I^{(1)},I^{(2)}\rangle_{t}=\int_{0}^{t}X^{(1)}_{s}X^{(2)}_{s}ds.\]
It can be seen that quadratic covariation is given by the limit in probability of processes
\[\langle X,Y\rangle_{t}=\lim_{n\rightarrow\infty}\sum_{i=0}^{k_{n}-1}\left (X_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\right )\cdot\left (Y_{t_{i+1}^{(n)}}-Y_{t_{i}^{(n)}}\right )\]
when \(\parallel\pi_{n}\parallel\rightarrow 0\) as \(n\rightarrow\infty\).

Proposition. If \(X\) is continuous and is of finite variation then \(\langle X,Y\rangle =0\) for any other semimartingale \(Y\). \(\sharp\)

Ito’s Formula.

The following is taken from Klebaner \cite{kle}.

Theorem. (Ito’s Formula). Let \(X_{s}\) have a stochastic differential for \(0\leq s\leq t\)
\[dX_{s}=\mu_{s}ds+\sigma_{s}dW_{s}.\]
If \(f(x)\) is twice continuously differentiable, then the stochastic differential of the process \(Y_{s}=f(X_{s})\) exists and is given by
\begin{align*}
df(X_{s}) & =f'(X_{s})dX_{s}+\frac{1}{2}f''(X_{s})d\langle X\rangle_{s}\\
& =f'(X_{s})dX_{s}+\frac{1}{2}f''(X_{s})\cdot\sigma_{s}^{2}ds\\
& =\left (f'(X_{s})\cdot\mu_{s}+\frac{1}{2}f''(X_{s})\cdot\sigma_{s}^{2}\right )ds+f'(X_{s})\cdot\sigma_{s}dW_{s}.
\end{align*}
In integral notation
\[f(X_{t})=f(X_{0})+\int_{0}^{t}f'(X_{s})dX_{s}+\frac{1}{2}\int_{0}^{t}f''(X_{s})\cdot\sigma_{s}^{2}ds. \sharp\]

We introduce a convention \(dY_{s}dX_{s}=d\langle X,Y\rangle_{s}\), and in particular \((dY_{s})^{2}=d\langle Y\rangle_{s}\). If \(X_{s}\) and \(Y_{s}\) have stochastic differentials
\[dX_{s}=\mu_{s}^{X}ds+\sigma_{s}^{X}dW_{s}\mbox{ and }dY_{s}=\mu_{s}^{Y}ds+\sigma_{s}^{Y}dW_{s},\]
then their quadratic covariation exists and can be obtained by multiplication of \(dX\) and \(dY\), namely
\[d\langle X,Y\rangle_{s}=dX_{s}dY_{s}=\sigma_{s}^{X}\cdot\sigma_{s}^{Y}\cdot (dW_{s})^{2}=\sigma_{s}^{X}\cdot\sigma_{s}^{Y}ds.\]
We show next a representation of \(\langle X,Y\rangle_{s}\) in terms of Ito integrals. This representation gives rise to the integration by parts formula.

The quadratic covariation is a limit over partitions of \([0,t]\) with mesh going to zero,
\[\langle X,Y\rangle_{t}=\lim_{n\rightarrow\infty}\sum_{i=0}^{k_{n}-1}\left (X_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\right )\cdot\left (Y_{t_{i+1}^{(n)}}-Y_{t_{i}^{(n)}}\right ).\]
We have
\begin{align*}
& \sum_{i=0}^{k_{n}-1}\left (X_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\right )\cdot\left (Y_{t_{i+1}^{(n)}}-Y_{t_{i}^{(n)}}\right )\\
& =\sum_{i=0}^{k_{n}-1}\left (X_{t_{i+1}^{(n)}}\cdot Y_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\cdot Y_{t_{i}^{(n)}}\right )-\sum_{i=0}^{k_{n}-1}
X_{t_{i}^{(n)}}\cdot\left (Y_{t_{i+1}^{(n)}}-Y_{t_{i}^{(n)}}\right )-\sum_{i=0}^{k_{n}-1}Y_{t_{i}^{(n)}}\cdot\left (X_{t_{i+1}^{(n)}}-X_{t_{i}^{(n)}}\right )\\
& = X_{t}Y_{t}-X_{0}Y_{0}-\sum_{i=0}^{k_{n}-1}X_{t_{i}^{(n)}}\cdot\left (Y_{t_{i+1}^{(n)}}-Y_{t_{i}^{(n)}}\right )-\sum_{i=0}^{k_{n}-1}Y_{t_{i}^{(n)}}\cdot\left (X_{t_{i+1}^{(n)}}-
X_{t_{i}^{(n)}}\right ).
\end{align*}
The last two sums converge in probability to Ito integrals \(\int_{0}^{t}X_{s}dY_{s}\) and \(\int_{0}^{t}Y_{s}dX_{s}\) (ref. Corollary \ref{klec432}). Thus the following expression is obtained
\begin{equation}{\label{kleeq457}}\tag{9}
\langle X,Y\rangle_{t}=X_{t}Y_{t}-X_{0}Y_{0}-\int_{0}^{t}X_{s}dY_{s}-\int_{0}^{t}Y_{s}dX_{s}.
\end{equation}
Thus the formula for integration by parts is given by
\[X_{t}Y_{t}-X_{0}Y_{0}=\int_{0}^{t}X_{s}dY_{s}+\int_{0}^{t}Y_{s}dX_{s}+\langle X,Y\rangle_{t}.\]
In differential notations this reads
\[d(X_{t}Y_{t})=X_{t}dY_{t}+Y_{t}dX_{t}+d\langle X,Y\rangle_{t}=X_{t}dY_{t}+Y_{t}dX_{t}+\sigma_{t}^{X}\cdot\sigma_{t}^{Y}dt.\]
Note that if one of the processes is continuous and is of finite variation, then the covariation term is zero. Thus for such processes the stochastic product rule is the same as usual.

Formula (\ref{kleeq457}) provides an alternative representation for the quadratic variation
\[\langle X\rangle_{t}=X_{t}^{2}-X_{0}^{2}-2\int_{0}^{t}X_{s}dX_{s}.\]
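This representation can also be checked numerically for \(X=W\): since \(\int_{0}^{t}W_{s}dW_{s}\) is the limit of left-endpoint sums, \(W_{t}^{2}-2\int_{0}^{t}W_{s}dW_{s}\) should be close to \(\langle W\rangle_{t}=t\). A sketch (Python with NumPy; discretization parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 200_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate([[0.0], np.cumsum(dW)])

ito = np.sum(W[:-1] * dW)               # left-endpoint sums for int_0^t W_s dW_s
qv = W[-1] ** 2 - 2.0 * ito             # W_t^2 - W_0^2 - 2 int_0^t W_s dW_s
print(qv)                               # close to <W>_t = t = 1.0
```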

Let \({\bf W}_{s}=(W_{s}^{(1)},\cdots ,W_{s}^{(n)})\) be an \(n\)-dimensional Brownian motion, that is, all coordinates \(W_{s}^{(i)}\) are independent one-dimensional Brownian motions. Let us consider the \(m\)-dimensional Ito process
\[d{\bf X}_{s}=\mbox{\boldmath$\mu$}_{s}ds+\mbox{\boldmath$\sigma$}_{s}d{\bf W}_{s},\]
where \(\mbox{\boldmath$\mu$}\) is an \(m\)-dimensional vector-valued function, \(\mbox{\boldmath$\sigma$}\) is an \(m\times n\) matrix-valued function, and
\begin{equation}{\label{kleeq467}}\tag{10}
dX_{s}^{(i)}=\mu_{s}^{(i)}ds+\sum_{j=1}^{n}\sigma_{s}^{(ij)}dW_{s}^{(j)}\mbox{ for }i=1,\cdots ,m.
\end{equation}

\begin{equation}{\label{klet414}}\tag{11}\mbox{}\end{equation}

Proposition \ref{klet414}.  Let \(W^{(1)}\) and \(W^{(2)}\) be independent Brownian motions. Then \(\langle W^{(1)},W^{(2)}\rangle_{s}=0\).

Proof. Let \(\pi_{n}=\{t_{i}^{(n)}\}_{i=0}^{k_{n}}\) be a partition of \([0,t]\) and consider
\[Q_{n}=\sum_{i=0}^{k_{n}-1}\left (W^{(1)}_{t_{i+1}^{(n)}}-W^{(1)}_{t_{i}^{(n)}}\right )\cdot\left (W^{(2)}_{t_{i+1}^{(n)}}-W^{(2)}_{t_{i}^{(n)}}\right ).\]
Using independence of \(W^{(1)}\) and \(W^{(2)}\), \(\mathbb{E}[Q_{n}]=0\). Since increments of Brownian motion are independent, the variance of the sum is the sum of the variances, and we have
\begin{align*}
\mbox{Var}[Q_{n}] & =\sum_{i=0}^{k_{n}-1}\mathbb{E}\left [\left (W^{(1)}_{t_{i+1}^{(n)}}-W^{(1)}_{t_{i}^{(n)}}\right )^{2}\right ]\cdot \mathbb{E}\left [\left(W^{(2)}_{t_{i+1}^{(n)}}-W^{(2)}_{t_{i}^{(n)}}\right )^{2}\right ]\\
& =\sum_{i=0}^{k_{n}-1}\left (t_{i+1}^{(n)}-t_{i}^{(n)}\right )^{2}\leq\max_{i}\{t_{i+1}^{(n)}-t_{i}^{(n)}\}\cdot\sum_{i=0}^{k_{n}-1}\left (t_{i+1}^{(n)}-t_{i}^{(n)}\right )=\parallel\pi_{n}\parallel\cdot t.
\end{align*}
Thus \(\mbox{Var}[Q_{n}]=\mathbb{E}[Q_{n}^{2}]\rightarrow 0\) as \(\parallel\pi_{n}\parallel\rightarrow 0\). This implies that \(Q_{n}\rightarrow 0\) in probability, and the result is proved. \(\blacksquare\)
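The proof suggests a direct simulation: for two independently sampled Brownian paths, the cross-variation sums \(Q_{n}\) have mean zero and variance at most \(\parallel\pi_{n}\parallel\cdot t\), hence shrink with the mesh. A sketch (Python with NumPy; parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
t, n = 1.0, 200_000
dt = t / n

dW1 = rng.normal(0.0, np.sqrt(dt), n)   # increments of two independent
dW2 = rng.normal(0.0, np.sqrt(dt), n)   # Brownian motions

Qn = np.sum(dW1 * dW2)                  # cross-variation sum over the partition
print(Qn)                               # near 0; Var[Qn] = ||pi|| * t = dt
```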

Using Proposition \ref{klet414} and the bilinearity of covariation, it is easy to see from (\ref{kleeq467}) that
\[d\langle X^{(i)},X^{(j)}\rangle_{s}=dX^{(i)}_{s}dX^{(j)}_{s}=a_{ij}ds\mbox{ for }i,j=1,\cdots ,m,\]
where \({\bf a}\), called the diffusion matrix, is given by \({\bf a}=\mbox{\boldmath$\sigma$}\mbox{\boldmath$\sigma$}^{T}\).

Theorem. (Ito’s Formula). If \({\bf X}_{s}=(X_{s}^{(1)},\cdots ,X_{s}^{(m)})\) is a vector Ito process and \(f(x_{1},\cdots ,x_{m})\) is a \(C^{2}\)-function, then \(f(X_{s}^{(1)},\cdots ,X_{s}^{(m)})\) is also an Ito process; moreover, its stochastic differential is given by
\[df({\bf X}_{s})=\sum_{i=1}^{m}\frac{\partial f}{\partial x_{i}}({\bf X}_{s})dX_{s}^{(i)}+\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}
\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}({\bf X}_{s})d\langle X^{(i)},X^{(j)}\rangle_{s}. \sharp\]

Corollary. Let \(f(x,t)\) be twice continuously differentiable in \(x\) and continuously differentiable in \(t\), i.e., a \(C^{2,1}\)-function, and let \(X\) be an Ito process. Then
\[df(X_{t},t)=\frac{\partial f}{\partial x}(X_{t},t)dX_{t}+\frac{\partial f}{\partial t}(X_{t},t)dt+\frac{1}{2}\sigma_{t}^{2}\cdot\frac{\partial^{2}f}{\partial x^{2}}(X_{t},t)dt. \sharp\]

Example. We find the stochastic differential of \(X_{t}=\exp (W_{t}-\frac{t}{2})\), where \(W\) is a Brownian motion. Using Ito’s formula with \(f(x,t)=\exp (x-\frac{t}{2})\), \(X_{t}=f(W_{t},t)\) satisfies
\begin{align*}
dX_{t} & =df(W_{t},t)=\frac{\partial f}{\partial x}dW_{t}+\frac{\partial f}{\partial t}dt+\frac{1}{2}\frac{\partial^{2}f}{\partial x^{2}}dt\\
& =f(W_{t},t)dW_{t}-\frac{1}{2}f(W_{t},t)dt+\frac{1}{2}f(W_{t},t)dt\\
& =f(W_{t},t)dW_{t}=X_{t}dW_{t}. \sharp
\end{align*}
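The example can be cross-checked numerically: an Euler scheme for \(dX_{t}=X_{t}dW_{t}\), \(X_{0}=1\), driven by the same increments, should land close to the closed form \(\exp (W_{t}-\frac{t}{2})\). A sketch (Python with NumPy; step count illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 1.0, 100_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W_t = dW.sum()

closed = np.exp(W_t - 0.5 * t)          # X_t = exp(W_t - t/2) from Ito's formula

X = 1.0                                 # Euler scheme for dX = X dW, X_0 = 1
for dw in dW:
    X += X * dw
print(X, closed)                        # the two values nearly agree
```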

A process \(U\) is called the {\bf stochastic exponential} of \(X\), and is denoted by \({\cal E}(X)\), if \(U\) satisfies
\begin{equation}{\label{kleeq463}}\tag{12}
dU_{t}=U_{t}dX_{t}\mbox{ with }U_{0}=1\mbox{ or }U_{t}=1+\int_{0}^{t}U_{s}dX_{s}.
\end{equation}
If \(X\) is continuous and of finite variation, then the solution to (\ref{kleeq463}) is given by \(U_{t}=e^{X_{t}-X_{0}}\).

Proposition. Let \(X\) be a continuous process. If
\[U_{t}={\cal E}(X)_{t}=\exp\left (X_{t}-X_{0}-\frac{1}{2}\langle X\rangle_{t}\right ),\]
then it satisfies (\ref{kleeq463}).

Proof. We write \(U_{t}=e^{V_{t}}\) with \(V_{t}=X_{t}-X_{0}-\frac{1}{2}\langle X\rangle_{t}\). Then
\[d{\cal E}(X)_{t}=dU_{t}=d(e^{V_{t}})=e^{V_{t}}dV_{t}+\frac{1}{2}e^{V_{t}}d\langle V\rangle_{t}.\]
Since \(\langle X\rangle_{t}\) is of finite variation, and \(X_{t}\) is continuous, \(\langle X,\langle X\rangle\rangle_{t}=0\), and \(\langle V\rangle_{t}=\langle X\rangle_{t}\). Using this with the expression for \(V_{t}\), we obtain
\[d{\cal E}(X)_{t}=e^{V_{t}}dX_{t}-\frac{1}{2}e^{V_{t}}d\langle X\rangle_{t}+\frac{1}{2}e^{V_{t}}d\langle X\rangle_{t}=e^{V_{t}}dX_{t},\]
and (\ref{kleeq463}) is established. The proof of uniqueness is done by assuming that there is another process satisfying (\ref{kleeq463}), say \(Y_{t}\), and showing by integration by parts that \(d(Y_{t}/U_{t})=0\). \(\blacksquare\)

Note that if \(U_{0}\neq 1\), then
\[U_{t}=U_{0}+\int_{0}^{t}U_{s}dX_{s}\mbox{ and }U_{t}=U_{0}\cdot\exp\left (X_{t}-X_{0}-\frac{1}{2}\langle X\rangle_{t}\right ).\]

Example. An application of the stochastic exponential is provided by the relation between the stock price process and its return process. Let \(S_{t}\) denote the price of a stock. The return on the stock, \(R_{t}\), is defined by the relation \(dR_{t}=dS_{t}/S_{t}\). Thus \(dS_{t}=S_{t}dR_{t}\) and the stock price is the stochastic exponential of the return. Returns are usually easier to model from first principles. For example, in the Black-Scholes model \(R_{t}=\mu t+\sigma W_{t}\). The stock price is then given by
\[S_{t}=S_{0}\cdot\exp\left (R_{t}-R_{0}-\frac{1}{2}\langle R\rangle_{t}\right )=S_{0}\cdot\exp\left (\left (\mu-\frac{\sigma^{2}}{2}\right )t+\sigma W_{t}\right ). \sharp\]
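A numerical sketch of this example (Python with NumPy; the parameter values \(\mu =0.08\), \(\sigma =0.2\), \(S_{0}=100\) are illustrative, not from the text): an Euler scheme for \(dS_{t}=S_{t}(\mu dt+\sigma dW_{t})\) tracks the closed-form stochastic exponential.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, S0 = 0.08, 0.2, 100.0        # illustrative parameters, not from the text
t, n = 1.0, 100_000
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W_t = dW.sum()

# Closed form: S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t).
closed = S0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W_t)

S = S0                                  # Euler scheme for dS = S (mu dt + sigma dW)
for dw in dW:
    S += S * (mu * dt + sigma * dw)
print(S, closed)                        # the two values nearly agree
```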

For a semimartingale \(X\), its stochastic exponential \({\cal E}(X)_{t}=U_{t}\) is defined as the unique solution of the equation
\[U_{t}=1+\int_{0}^{t}U_{s-}dX_{s}\mbox{ and }U_{0}=1.\]
For a continuous semimartingale \(X\), the equation for its stochastic exponential is
\[dU_{t}=U_{t}dX_{t}\mbox{ and }U_{0}=1.\]

\begin{equation}{\label{klet88}}\tag{13}\mbox{}\end{equation}

Proposition \ref{klet88}. Let \(X\) be a continuous semimartingale. Then its stochastic exponential is given by
\[U_{t}={\cal E}(X)_{t}=\exp\left (X_{t}-X_{0}-\frac{1}{2}\langle X\rangle_{t}\right ). \sharp\]

\begin{equation}{\label{klet89}}\tag{14}\mbox{}\end{equation}

Proposition \ref{klet89}. Let \(X\) and \(Y\) be semimartingales on the same space. Then
\[{\cal E}(X)_{t}\cdot {\cal E}(Y)_{t}={\cal E}(X+Y+\langle X,Y\rangle )_{t}.\]
If \(X\) is continuous with \(X_{0}=0\), then
\[\frac{1}{{\cal E}(X)_{t}}={\cal E}(-X+\langle X\rangle )_{t}. \sharp\]

\begin{equation}{\label{klet811}}\tag{15}\mbox{}\end{equation}

Proposition \ref{klet811}. (Novikov’s Condition). Let \(M\) be a continuous local martingale with \(M_{0}=0\). Suppose that for each \(t\)
\[\mathbb{E}\left [\exp\left (\frac{1}{2}\langle M\rangle_{t}\right )\right ]<\infty .\]
Then \({\cal E}(M)\) is a martingale with mean one. In particular, if for each \(t\) there is a constant \(c_{t}\) such that \(\langle M\rangle_{t}<c_{t}\), then \({\cal E}(M)\) is a martingale. \(\sharp\)
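For instance, \(M_{t}=\sigma W_{t}\) has \(\langle M\rangle_{t}=\sigma^{2}t\), which is bounded for fixed \(t\), so Novikov’s condition holds and \(\mathbb{E}[{\cal E}(M)_{t}]=1\). A Monte Carlo check (Python with NumPy; \(\sigma\) and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, t, n_paths = 0.5, 1.0, 1_000_000

# M_t = sigma W_t has <M>_t = sigma^2 t, bounded for fixed t, so Novikov's
# condition holds and the stochastic exponential has expectation one.
W_t = rng.normal(0.0, np.sqrt(t), n_paths)
mean = np.mean(np.exp(sigma * W_t - 0.5 * sigma ** 2 * t))
print(mean)                             # close to 1.0
```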

\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}

Quadratic Variations Defined by Doob-Meyer Decomposition.

The following is taken from Karatzas and Shreve \cite{kar}.

Definition. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a right-continuous martingale. We say that \(X\) is {\bf square-integrable} if \(\mathbb{E}[X_{t}^{2}]<\infty\) for every \(t\geq 0\). If, in addition, \(X_{0}=0\) a.s., we write \(X\in {\cal M}_{2}\), or \(X\in {\cal M}_{2}^{c}\) if \(X\) is also continuous. \(\sharp\)

For any \(X\in {\cal M}_{2}\), we observe that \(\{(X_{t}^{2},{\cal F}_{t})\}_{t\geq 0}\) is a nonnegative submartingale, hence of class DL, and so \(X^{2}\) has a unique Doob-Meyer decomposition by Theorem \ref{kart1410}:
\[X_{t}^{2}=M_{t}+A_{t}\mbox{ for }t\geq 0\]
where \(\{(M_{t},{\cal F}_{t})\}_{t\geq 0}\) is a right-continuous martingale and \(\{(A_{t},{\cal F}_{t})\}_{t\geq 0}\) is a natural increasing process. We normalize these processes so that \(M_{0}=A_{0}=0\) \(P\)-a.s. If \(X\in {\cal M}_{2}^{c}\), then \(A\) and \(M\) are continuous from Proposition \ref{kart1414}.

Definition. For \(X\in {\cal M}_{2}\), we define the {\bf quadratic variation} of \(X\) to be the process \(\langle X\rangle_{t}\equiv A_{t}\), where \(A\) is the natural increasing process in the Doob-Meyer decomposition of \(X^{2}\). In other words, \(\langle X\rangle\) is the unique (up to indistinguishability) adapted, natural increasing process for which \(\langle X\rangle_{0}=0\) a.s. and \(X^{2}-\langle X\rangle\) is a martingale. \(\sharp\)

If we take two elements \(X,Y\) of \({\cal M}_{2}\), then both processes \((X+Y)^{2}-\langle X+Y\rangle\) and \((X-Y)^{2}-\langle X-Y\rangle\) are martingales, and therefore so is their difference \(4XY-(\langle X+Y\rangle -\langle X-Y\rangle )\).

\begin{equation}{\label{kard155}}\tag{16}\mbox{}\end{equation}

Definition \ref{kard155}. For any two martingales \(X,Y\) in \({\cal M}_{2}\), we define their cross-variation process \(\langle X,Y\rangle\) by
\[\langle X,Y\rangle_{t}\equiv\frac{1}{4}(\langle X+Y\rangle_{t}-\langle X-Y\rangle_{t})\mbox{ for }t\geq 0,\]
and observe that \(XY-\langle X,Y\rangle\) is a martingale. Two elements \(X,Y\) of \({\cal M}_{2}\) are called {\bf orthogonal} if \(\langle X,Y\rangle_{t}=0\) \(P\)-a.s. holds for every \(t\geq 0\). \(\sharp\)

The uniqueness argument in Theorem \ref{kart1410} also shows that \(\langle X,Y\rangle\) is, up to indistinguishability, the only process of the form \(A=A_{1}-A_{2}\) with \(A_{1},A_{2}\) adapted and natural increasing such that \(XY-A\) is a martingale. In particular, \(\langle X,X\rangle =\langle X\rangle\). In view of the identities
\[\mathbb{E}\left .\left [(X_{t}-X_{s})(Y_{t}-Y_{s})\right |{\cal F}_{s}\right ]=\mathbb{E}\left .\left [X_{t}Y_{t}-X_{s}Y_{s}\right |{\cal F}_{s}\right ]=
\mathbb{E}\left .\left [\langle X,Y\rangle_{t}-\langle X,Y\rangle_{s}\right |{\cal F}_{s}\right ],\]
valid \(P\)-a.s. for every \(0\leq s<t<\infty\), the orthogonality of \(X,Y\) in \({\cal M}_{2}\) is equivalent to the statement “\(XY\) is a martingale” or “the increments of \(X\) and \(Y\) over \([s,t]\) are conditionally
uncorrelated given \({\cal F}_{s}\)”.

Proposition. (Karatzas and Shreve \cite{kar}). \(\langle\cdot ,\cdot\rangle\) is a bilinear form on \({\cal M}_{2}\), i.e., for any members \(X,Y,Z\) of \({\cal M}_{2}\) and real numbers \(\alpha ,\beta\), we have

\begin{align*}\langle\alpha X+\beta Y,Z\rangle & =\alpha\langle X,Z\rangle+\beta\langle Y,Z\rangle;\\ \langle X,Y\rangle & =\langle Y,X\rangle;\\ \langle X,Y\rangle^{2} & \leq\langle X\rangle\cdot\langle Y\rangle. \sharp\end{align*}

Let \(\{X_{t}\}_{t\geq 0}\) be a process, fix \(t>0\), and let \(\pi =\{t_{0},t_{1},\cdots ,t_{m}\}\) with \(0=t_{0}<t_{1}<\cdots <t_{m}=t\) be a partition of \([0,t]\). We define the \(p\)-th variation of \(X\) over the
partition \(\pi\) to be
\[V_{t}^{(p)}(\pi )=\sum_{k=1}^{m}\left |X_{t_{k}}-X_{t_{k-1}}\right |^{p}.\]
If \(V_{t}^{(2)}(\pi )\) converges in some sense as \(\parallel\pi\parallel\rightarrow 0\), the limit is entitled to be called the quadratic variation of \(X\) on \([0,t]\).

\begin{equation}{\label{kart158}}\tag{17}\mbox{}\end{equation}

Proposition \ref{kart158}. (Karatzas and Shreve \cite{kar}). Let \(X\) be in \({\cal M}_{2}^{c}\). For partitions \(\pi\) of \([0,t]\), we have
\[\lim_{\parallel\pi\parallel\rightarrow 0}V_{t}^{(2)}(\pi )=\langle X\rangle_{t}\mbox{ in probability};\]
i.e. for every \(\epsilon >0\), \(\eta >0\) there exists \(\delta >0\) such that \(\parallel\pi\parallel <\delta\) implies
\[P\left\{|V_{t}^{(2)}(\pi )-\langle X\rangle_{t}|>\epsilon\right\}<\eta . \sharp\]

\begin{equation}{\label{karp1511}}\tag{18}\mbox{}\end{equation}

Proposition \ref{karp1511}. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) be a continuous process with the property that for each fixed \(t>0\) and for some \(p>0\)
\[\lim_{\parallel\pi\parallel\rightarrow 0}V_{t}^{(p)}(\pi )=Y_{t}\mbox{ in probability},\]
where \(Y_{t}\) is a random variable taking values in \([0,\infty )\) a.s. Then, we have
\[\lim_{\parallel\pi\parallel\rightarrow 0}V_{t}^{(q)}(\pi )=\left\{
\begin{array}{ll}
0 \mbox{ in probability} & \mbox{ for \(q>p\)}\\
\infty \mbox{ in probability} & \mbox{ for \(0<q<p\)}
\end{array}\right .\]
on the event \(\{Y_{t}>0\}\). \(\sharp\)

\begin{equation}{\label{karp1512}}\tag{19}\mbox{}\end{equation}

Proposition \ref{karp1512}. Let \(X\) be in \({\cal M}_{2}^{c}\), and \(T\) be a stopping time of \(\{{\cal F}_{t}\}_{t\geq 0}\). If \(\langle X\rangle_{T}=0\) \(P\)-a.s., then we have \(\mathbb{P}\left\{X_{T\wedge t}=0;\mbox{ for all }t\geq 0\right\}=1\). \(\sharp\)

The conclusion to be drawn from Propositions \ref{kart158}, \ref{karp1511} and \ref{karp1512} is that for continuous, square-integrable martingales, quadratic variation is the “right” variation to study. All variations of higher order are zero, and, except in trivial cases where the martingale is a.s. constant on an initial interval, all variations of lower order are infinite with positive probability.

Proposition. Let \(\{(X_{t},{\cal F}_{t})\}_{t\geq 0}\) and \(\{(Y_{t},{\cal F}_{t})\}_{t\geq 0}\) be members of \({\cal M}_{2}^{c}\). There is a unique, up to indistinguishability, \({\cal F}_{t}\)-adapted, continuous process of bounded variation \(\{(A_{t},{\cal F}_{t})\}_{t\geq 0}\) satisfying \(A_{0}=0\) \(P\)-a.s. such that \(\{(X_{t}Y_{t}-A_{t},{\cal F}_{t})\}_{t\geq 0}\) is a martingale. This process is given by the cross-variation \(\langle X,Y\rangle\) of Definition \ref{kard155}. \(\sharp\)

Proposition. (Karatzas and Shreve \cite{kar}). Let \(X,Y\) be members of \({\cal M}_{2}^{c}\) and let \(\pi\) denote partitions of \([0,t]\). Then we have
\[\lim_{\parallel\pi\parallel\rightarrow0}\sum_{k=1}^{m}(X_{t_{k}}-X_{t_{k-1}})(Y_{t_{k}}-Y_{t_{k-1}})=\langle X,Y\rangle_{t}\mbox{ in probability.}\sharp\]
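A simulation sketch of this limit under assumed parameters: for two independent Brownian motions the sum of increment products is near zero, while pairing a path with itself recovers \(\langle X\rangle_{t}=t\).

```python
import math
import random

def brownian_increments(n, t, seed):
    """Increments of a simulated Brownian path over the uniform n-interval grid."""
    rng = random.Random(seed)
    sd = math.sqrt(t / n)
    return [rng.gauss(0.0, sd) for _ in range(n)]

n, t = 100_000, 1.0
dx = brownian_increments(n, t, seed=1)
dy = brownian_increments(n, t, seed=2)
cross_indep = sum(a * b for a, b in zip(dx, dy))  # approximates <X, Y>_t
cross_self = sum(a * a for a in dx)               # approximates <X>_t
print(cross_indep)  # near 0 for independent paths
print(cross_self)   # near t = 1
```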

Next we consider the cross-variation of continuous local martingales.

\begin{equation}{\label{karp1517}}\tag{20}\mbox{}\end{equation}

Proposition \ref{karp1517}. Let \(X\) and \(Y\) be two continuous local martingales. Then there is a unique, up to indistinguishability, adapted, continuous process of bounded variation \(\langle X,Y\rangle\) satisfying \(\langle X,Y\rangle_{0}=0\) \(P\)-a.s. such that \(XY-\langle X,Y\rangle\) is a continuous local martingale. If \(X=Y\), we write \(\langle X\rangle =\langle X,X\rangle\), and this process is nondecreasing. \(\sharp\)

Definition. We call the process \(\langle X,Y\rangle\) of Proposition \ref{karp1517} the cross-variation of \(X\) and \(Y\), in accordance with Definition \ref{kard155}. We call \(\langle X\rangle\) the quadratic variation of \(X\). \(\sharp\)

Proposition. We have the following properties

(i) A local martingale of class DL is a martingale.

(ii) A nonnegative local martingale is a supermartingale.

(iii) If \(M\) is a continuous local martingale and \(T\) is a stopping time of \(\{{\cal F}_{t}\}_{t\geq 0}\), then \(\mathbb{E}[M_{T}^{2}]\leq \mathbb{E}\left [\langle M\rangle_{T}\right ]\), where \(M_{\infty}^{2}\equiv\liminf_{t\rightarrow\infty}M_{t}^{2}\). \(\sharp\)

\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}

Quadratic Variations Defined by Stochastic Integrals.

Approach I and Ito’s Formula.

This approach follows from Chung and Williams \cite{chu}.

\begin{equation}{\label{chut41}}\tag{21}\mbox{}\end{equation}

Theorem \ref{chut41}. (Chung and Williams \cite{chu}). Let \(\{\pi_{n}\}\) be a sequence of partitions of \([0,t]\) such that \(\lim_{n\rightarrow\infty}\parallel\pi_{n}\parallel =0\). Suppose \(M\) is a continuous local martingale (i.e. continuous local \(L^{1}\)-martingale) and for each \(n\) let
\[S_{t}^{(n)}=\sum_{j}(M_{t_{j+1}}-M_{t_{j}})^{2}\]
where the sum is over all \(j\) such that both \(t_{j}\) and \(t_{j+1}\) are in \(\pi_{n}\). Then

(i) if \(M\) is bounded, \(\{S_{t}^{(n)}\}\) converges in \(L^{2}\) to
\begin{equation}{\label{chueq41}}\tag{22}
\langle M\rangle_{t}\equiv M_{t}^{2}-M_{0}^{2}-2\int_{0}^{t}MdM,
\end{equation}
where \(\int_{0}^{t}MdM\) is the stochastic integral.

(ii) \(\{S_{t}^{(n)}\}\) converges in probability to \(\langle M\rangle_{t}\). \(\sharp\)

We call \(\langle M\rangle_{t}\), defined in (\ref{chueq41}), the quadratic variation of \(M\) at time \(t\). It is essential that the partitions \(\pi_{n}\) do not depend on \(\omega\). Indeed, otherwise the supremum of the quadratic sums defining \(S_{t}^{(n)}\) over all partitions depending on \(\omega\) may be \(+\infty\).

Example. In the case where \(W\) is a Brownian motion in \(\mathbb{R}\), the quadratic variation process \(\langle W\rangle\) is indistinguishable from \(t\). \(\sharp\)

A process \(\{M_{t}\}_{t\geq 0}\) will be called integrable if and only if \(M_{t}\in L^{1}\) for each \(t\).

\begin{equation}{\label{chut42}}\tag{23}\mbox{}\end{equation}

Proposition \ref{chut42}. (Chung and Williams \cite{chu}). Let \(M\) be a continuous \(L^{2}\)-martingale. Then, we have the following properties.

(i) \(\{\langle M\rangle_{t}\}_{t\geq 0}\) is a continuous integrable increasing process with \(\langle M\rangle_{0}=0\).

(ii) \(\left\{\int_{0}^{t}MdM\right\}_{t\geq 0}\) is a continuous martingale with zero mean (i.e. continuous \(L^{1}\)-martingale).

(iii) For each \(t\), the sequence \(\{S_{t}^{(n)}\}\), defined in Theorem \ref{chut41}, converges in \(L^{1}\) to \(\langle M\rangle_{t}\).

(iv) The content \(\lambda_{\langle M\rangle}\) of \(\langle M\rangle\) is given by
\begin{equation}{\label{chueq44}}\tag{24}
\lambda_{\langle M\rangle}(A)=\mathbb{E}\left [\int_{0}^{\infty}I_{A}(s)d\langle M\rangle_{s}\right ]
\end{equation}
for each \(A\in {\cal A}\), and furthermore \(\lambda_{\langle M\rangle}=\mu_{M}\) on \({\cal A}\).

(v) For \(X\in\Lambda^{2}({\cal P},M)\) and each \(t\),
\[\mathbb{E}\left [\int_{0}^{t}X_{s}^{2}d\langle M\rangle_{s}\right ]=\int_{\mathbb{R}_{+}\times\Omega}I_{[0,t]}\cdot X^{2}d\mu_{M}.\]

(vi) For \(X\in {\bf L}^{2}({\cal P},\mu_{M})\),
\[\mathbb{E}\left [\int_{0}^{\infty}X_{s}^{2}d\langle M\rangle_{s}\right ]=\int_{\mathbb{R}_{+}\times\Omega}X^{2}d\mu_{M}. \sharp\]

Definition (\ref{chueq41}) of \(\langle M\rangle\) used the stochastic integral and consequently the result that \(\mu_{M}\) can be extended from \({\cal A}\) to a measure on \({\cal P}\). When \(M\) is a continuous bounded martingale, we can prove directly that a sequence of quadratic sums \(\{S_{t}^{(n)}\}\) converges in \(L^{2}\) to a limit \(S_{t}\) such that \(\{S_{t}\}_{t\geq 0}\) is a continuous increasing process and \(\{M_{t}^{2}-M_{0}^{2}-S_{t}\}_{t\geq 0}\) is a martingale. Then the right member of (\ref{chueq44}), with \(\langle M\rangle\) replaced by \(S\), defines a measure on sets \(A\) in \({\cal P}\) which agrees with \(\mu_{M}\) on \({\cal A}\), since \(S\) has all the properties of \(\langle M\rangle\) required for the proof of Proposition \ref{chut42} (iv). Consequently, for such \(M\), \(\mu_{M}\) can be defined directly on \({\cal P}\).

Proposition. (Chung and Williams \cite{chu}). Let \(M\) be a continuous bounded martingale. For each \(t\) and \(n\) let
\[t_{j}^{(n)}=t\wedge (j\cdot 2^{-n})\mbox{ for }j\in {\bf N}\cup\{0\}\mbox{ and }S_{t}^{(n)}=\sum_{j=0}^{\infty}\left (M_{t_{j+1}^{(n)}}-M_{t_{j}^{(n)}}\right )^{2}.\]
Then, for each \(t\), the sequence \(\{S_{t}^{(n)}\}\) converges in \(L^{2}\). If \(S_{t}\) denotes the \(L^{2}\)-limit of this sequence, then \(\{M_{t}^{2}-M_{0}^{2}-S_{t}\}_{t\geq 0}\) is a martingale and there is a version of \(\{S_{t}\}_{t\geq 0}\) which is a continuous increasing process. \(\sharp\)

When \(M\) is a continuous \(L^{2}\)-martingale, we can also prove the extendibility of \(\mu_{M}\) from \({\cal A}\) to \({\cal P}\) as follows. We may assume \(M_{0}=0\) since \(M\) and \(M-M_{0}\) induce the same content \(\mu_{M}\) on \({\cal A}\). Since \(M\) is a local martingale, there is a localizing sequence \(\{T_{n}\}\) such that for each \(n\), \(M^{(n)}=\{M_{t\wedge T_{n}}\}_{t\geq 0}\) is bounded, and thus \(\mu_{M^{(n)}}\) can be defined on \({\cal P}\) as above. Then \(\int_{0}^{t} MdM=\lim_{n\rightarrow\infty}\int_{0}^{t}M^{(n)}dM^{(n)}\) defines a continuous local martingale without the use of \(\mu_{M}\). Equation (\ref{chueq41}), Theorem \ref{chut41} and Proposition \ref{chut42} then follow. In particular, the right side of (\ref{chueq44}) defines a measure on \({\cal P}\) which agrees with \(\mu_{M}\) on \({\cal A}\).

Next we shall discuss the decomposition afforded by (\ref{chueq41}) of the continuous local submartingale \(M_{t}^{2}\) as the sum of the continuous local martingale \(M_{0}^{2}+2\int_{0}^{t}MdM\) and the continuous increasing process \(\langle M\rangle_{t}\). We begin with the canonical example of Brownian motion.

Example. If \(W\) is a Brownian motion in \(\mathbb{R}\) with \(W_{0}\in L^{2}\), then by (\ref{chueq41}) we have a.s.
\[W_{t}^{2}=W_{0}^{2}+2\int_{0}^{t}WdW+t.\]
By Proposition \ref{chut42}, \(\left\{W_{0}^{2}+2\int_{0}^{t}WdW\right\}_{t\geq 0}\) is a continuous martingale. Clearly, \(\{t\}_{t\geq 0}\) is a continuous integrable increasing process with initial value zero. \(\sharp\)
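The decomposition can be checked numerically (simulation parameters below are assumptions). With left-endpoint sums for the stochastic integral, the discrete identity
\(W_{t}^{2}=2\sum_{j}W_{t_{j}}(W_{t_{j+1}}-W_{t_{j}})+\sum_{j}(W_{t_{j+1}}-W_{t_{j}})^{2}\)
holds exactly for a path started at \(0\), and the second sum approximates \(t\).

```python
import math
import random

random.seed(11)
n, t = 100_000, 1.0
sd = math.sqrt(t / n)
w = 0.0     # W_0 = 0
ito = 0.0   # left-endpoint sum approximating int_0^t W dW
qv = 0.0    # running quadratic sum, converging to t
for _ in range(n):
    step = random.gauss(0.0, sd)
    ito += w * step
    qv += step * step
    w += step
residual = w * w - (2.0 * ito + qv)
print(residual)  # zero up to rounding: the discrete identity is algebraic
print(qv)        # near t = 1
```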

For a continuous \(L^{2}\)-martingale \(M\), by (\ref{chueq41}), we have the following decomposition of the continuous submartingale \(M^{2}\)
\begin{equation}{\label{chueq413}}\tag{25}
M_{t}^{2}=M_{0}^{2}+2\int_{0}^{t}MdM+\langle M\rangle_{t}.
\end{equation}
By Proposition \ref{chut42}, \(\left\{M_{0}^{2}+2\int_{0}^{t}MdM\right\}_{t\geq 0}\) is a continuous martingale and \(\langle M\rangle\) is a continuous integrable increasing process with initial value zero.
This decomposition is in fact unique (up to indistinguishability) and is a special case of the Doob-Meyer decomposition.

Definition. A process \(V\) is said to be locally of bounded variation if and only if it is adapted and almost surely the sample function \(t\mapsto V_{t}(\omega )\) is of bounded variation on each bounded
interval in \(\mathbb{R}_{+}\). \(\sharp\)

Some authors say a process \(V\) is locally of bounded variation if it is adapted and there is a sequence of stopping times \(\{T_{n}\}\) increasing to \(\infty\) such that almost surely the sample function \(t\mapsto V_{t\wedge T_{n}}(\omega )\) is of bounded variation on \(\mathbb{R}_{+}\) for each \(n\). This is equivalent to the above definition. We only treat the case of continuous \(V\) here. A process is continuous and locally of bounded variation if and only if it is the difference of two continuous increasing processes.

\begin{equation}{\label{chul44}}\tag{26}\mbox{}\end{equation}

Proposition \ref{chul44}. (Chung and Williams \cite{chu}). Let \(V\) be a continuous \(L^{2}\)-martingale that is locally of bounded variation. Then \(P\{V_{t}=V_{0}\mbox{ for all }t\geq 0\}=1\). \(\sharp\)

\begin{equation}{\label{chuc45}}\tag{27}\mbox{}\end{equation}

Corollary \ref{chuc45}. (Chung and Williams \cite{chu}). Let \(V\) be a continuous local martingale (i.e. continuous local \(L^{1}\)-martingale) that is locally of bounded variation. Then \(P\{V_{t}=V_{0}\mbox{ for all }t\geq 0\}=1\). \(\sharp\)

The following decomposition theorem is an immediate consequence of (\ref{chueq413}), Proposition \ref{chul44} and the fact that the difference of two continuous increasing processes is continuous and locally of bounded variation.

\begin{equation}{\label{chut46}}\tag{28}\mbox{}\end{equation}

Theorem \ref{chut46}. Let \(M\) be a continuous \(L^{2}\)-martingale. Then there is a unique decomposition of \(M^{2}\) as the sum of a continuous martingale and a continuous integrable increasing process with initial value zero. This decomposition is given by
\begin{equation}{\label{chueq414}}\tag{29}
M_{t}^{2}=M_{0}^{2}+2\int_{0}^{t}MdM+\langle M\rangle_{t}
\end{equation}
for all \(t\geq 0\). \(\sharp\)

If \(M\) is a continuous local martingale, equation (\ref{chueq414}) still
holds, but then \(M_{0}^{2}+2\int_{0}^{t}MdM\) is a local martingale and
$\langle M\rangle$ need not be integrable. The uniqueness of the decomposition
follows from Corollary \ref{chuc45}. Stated formally we have the following.

\begin{equation}{\label{chut47}}\tag{30}\mbox{}\end{equation}

Theorem \ref{chut47}. Let \(M\) be a continuous local martingale. Then there is a unique decomposition of \(M^{2}\) as the sum of a continuous local martingale and a continuous increasing process with initial value zero. This decomposition is given by (\ref{chueq414}). \(\sharp\)

Proposition. Let \(M\) be a continuous local martingale and let \(Y\) be a bounded continuous adapted process. Let \(\{\pi_{n}\}\) be a sequence of partitions of \([0,t]\) such that \(\lim_{n\rightarrow\infty} \parallel\pi_{n}\parallel =0\). For each \(n\in {\bf N}\), let
\[Z_{n}=\sum_{j}Y_{t_{j}}\cdot\left (M_{t_{j+1}}-M_{t_{j}}\right )^{2}\]
where the sum is over all \(j\) such that \(t_{j}\) and \(t_{j+1}\) are both in \(\pi_{n}\). Then \(\{Z_{n}\}\) converges in probability to \(\int_{0}^{t}Y_{s}d\langle M\rangle_{s}\). \(\sharp\)

Let \(M\) and \(N\) be continuous local martingales (i.e. continuous local \(L^{1}\)-martingales). Then so too are \(M+N\) and \(M-N\). Hence, by Theorem \ref{chut47}, \((M+N)^{2}-\langle M+N\rangle\) and \((M-N)^{2}-\langle M-N\rangle\) are local martingales and consequently so is
\begin{equation}{\label{chueq57}}\tag{31}
MN-\frac{1}{4}(\langle M+N\rangle -\langle M-N\rangle )=\frac{1}{4}\left ((M+N)^{2}-\langle M+N\rangle -(M-N)^{2}+\langle M-N\rangle\right ).
\end{equation}

Definition. For continuous local martingales \(M\) and \(N\), let
\[\langle M,N\rangle =\frac{1}{4}(\langle M+N\rangle -\langle M-N\rangle ).\]
We call \(\langle M,N\rangle\) the {\bf mutual variation process} of \(M\) and \(N\). \(\sharp\)
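At the level of approximating sums, this definition is pure algebra: for any two increment sequences (the data below are made up), \(\frac{1}{4}\left [\sum (\Delta m+\Delta n)^{2}-\sum (\Delta m-\Delta n)^{2}\right ]\) equals \(\sum \Delta m\,\Delta n\) exactly.

```python
# Made-up increment sequences for two discretized paths.
dm = [0.3, -0.1, 0.25, -0.4, 0.05]
dn = [-0.2, 0.15, 0.1, 0.3, -0.35]

v2_sum = sum((a + b) ** 2 for a, b in zip(dm, dn))   # quadratic sum of M + N
v2_diff = sum((a - b) ** 2 for a, b in zip(dm, dn))  # quadratic sum of M - N
lhs = 0.25 * (v2_sum - v2_diff)
cross = sum(a * b for a, b in zip(dm, dn))           # sum of increment products
print(lhs, cross)  # the two values agree
```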

Note that when \(M=N\), \(\langle M,M\rangle =\langle M\rangle\). Thus the mutual variation process is an extension of the quadratic variation process. Actually, in the general theory of right-continuous local \(L^{2}\)-martingales there are two processes which may be so named. One of these, denoted by \(\langle M,N\rangle\), is defined as in the above. The other, denoted by \([M,N]\), is defined to be the unique predictable process that is locally of bounded variation, starts from zero, and is such that \(MN-[M,N]\) is a local martingale. When \(M\) and \(N\) are continuous, \(\langle M,N\rangle\) is continuous, hence predictable, and so \(\langle M,N\rangle =[M,N]\). Thus, in the situation we consider here, the two processes coincide and we will refer to this process as the mutual variation process and denote it by \(\langle M,N\rangle\). The process \(\langle M,N\rangle\) is the difference of two continuous increasing processes and hence is a continuous process which is locally of bounded variation. Therefore by (\ref{chueq57}) we have a decomposition of \(MN\) which is formally described by the following result.

\begin{equation}{\label{chut52}}\tag{32}\mbox{}\end{equation}

Proposition \ref{chut52}. Let \(M\) and \(N\) be continuous local martingales. Then there is a unique decomposition of \(MN\) as the sum of a continuous local martingale and a continuous process which is locally of bounded variation and has initial value zero. This decomposition is given for each \(t\) by
\[(MN)_{t}=(MN)_{0}+\frac{1}{2}\int_{0}^{t}(M+N)_{s}d(M+N)_{s}-\frac{1}{2}\int_{0}^{t}(M-N)_{s}d(M-N)_{s}+\langle M,N\rangle_{t}. \sharp\]

If \(M\) and \(N\) are actually \(L^{2}\)-martingales, then by Theorem \ref{chut46} the right side of (\ref{chueq57}) is a martingale, and \(\langle M+N\rangle\) and \(\langle M-N\rangle\) are integrable. In this case we have the following refinement of Proposition \ref{chut52}.

Proposition. Let \(M\) and \(N\) be continuous \(L^{2}\)-martingales. Then there is a unique decomposition of \(MN\) as the sum of a continuous martingale and a continuous integrable process which is locally of bounded variation and has initial value zero. Moreover, for \(0\leq s<t\) we have
\[\mathbb{E}\left .\left [(M_{t}-M_{s})\cdot (N_{t}-N_{s})\right |{\cal F}_{s}\right ]=\mathbb{E}\left .\left [M_{t}N_{t}-M_{s}N_{s}\right |{\cal F}_{s}\right ]
=\mathbb{E}\left .\left [\langle M,N\rangle_{t}-\langle M,N\rangle_{s}\right |{\cal F}_{s}\right ]. \sharp\]

Corollary. Let \(M\) and \(N\) be continuous local martingales and \(T\) be a stopping time. For each \(t\), let \(M_{t}^{T}=M_{t\wedge T}\) and \(N_{t}^{T}=N_{t\wedge T}\). Then, a.s. for each \(t\) we have
\[\langle M^{T},N^{T}\rangle_{t}=\langle M,N\rangle_{t\wedge T}. \sharp\]

In other words, this corollary states that the mutual variation process for two continuous local martingales stopped by the same stopping time is indistinguishable from their mutual variation process stopped by that time.

Proposition. Let \(M\) and \(N\) be continuous local martingales. For each \(t\) let \(\{\pi_{n}\}\) be a sequence of partitions of \([0,t]\) such that \(\lim_{n\rightarrow\infty}\parallel\pi_{n}\parallel =0\). Then, as \(n\rightarrow\infty\),
\begin{equation}{\label{chueq510}}\tag{33}
\sum_{j}(M_{t_{j+1}}-M_{t_{j}})(N_{t_{j+1}}-N_{t_{j}})\rightarrow\langle M,N\rangle_{t}\mbox{ in probability}
\end{equation}
where the sum is over all \(j\) such that \(t_{j}\) and \(t_{j+1}\) are in \(\pi_{n}\). If \(M\) and \(N\) are in fact \(L^{2}\)-martingales, then the convergence in \((\ref{chueq510})\) also holds in \(L^{1}\). \(\sharp\)

If \(M\) is a continuous local martingale and \(X\in\Lambda ({\cal P},M)\), we use \(X\bullet M\) to denote the process defined by
\[(X\bullet M)_{t}=\int_{0}^{t}X_{s}dM_{s}.\]

Proposition. Let \(M\) and \(N\) be continuous local martingales, \(X\in\Lambda ({\cal P},M)\) and \(Y\in\Lambda ({\cal P},N)\). Then a.s. we have for all \(t\)
\begin{equation}{\label{chueq514}}\tag{34}
\langle X\bullet M,Y\bullet N\rangle_{t}=\int_{0}^{t}X_{s}Y_{s}d\langle M,N\rangle_{s}. \sharp
\end{equation}

Corollary. Let \(M\) be a continuous local martingale and suppose \(X\in\Lambda ({\cal P},M)\). Then a.s. we have for all \(t\)
\[\langle X\bullet M\rangle_{t}=\int_{0}^{t}X_{s}^{2}d\langle M\rangle_{s}. \sharp\]

\begin{equation}{\label{chut61}}\tag{35}\mbox{}\end{equation}

Theorem \ref{chut61}. (Characterization of Brownian Motion). A process \(\{M_{t}\}_{t\geq 0}\) is a Brownian motion in \(\mathbb{R}\) if and only if there is a standard filtration \(\{{\cal F}_{t}\}\) such that \(\{M_{t}\}_{t\geq 0}\) is a continuous local martingale with respect to \(\{{\cal F}_{t}\}_{t\geq 0}\) with quadratic variation \(\langle M\rangle\) satisfying \(\langle M\rangle_{t}=t\) a.s. for all \(t\). \(\sharp\)

We are now ready to discuss Ito’s formula.

Theorem. (Ito’s Formula). Let \(M\) be a continuous local martingale (i.e. continuous local \(L^{1}\)-martingale) and \(V\) be a continuous process which is locally of bounded variation. Let \(f\) be a continuous real-valued function defined on \(\mathbb{R}^{2}\) such that the partial derivatives \(\partial f/\partial x\), \(\partial f/\partial y\), and \(\partial^{2}f/\partial x^{2}\) exist and are continuous for all \((x,y)\) in \(\mathbb{R}^{2}\). Then a.s., we have for each \(t\)
\begin{equation}{\label{chueq52}}\tag{36}
f(M_{t},V_{t})-f(M_{0},V_{0})=\int_{0}^{t}\frac{\partial f}{\partial x}(M_{s},V_{s})dM_{s}+\int_{0}^{t}\frac{\partial f}{\partial y}(M_{s},V_{s})dV_{s}+\frac{1}{2}\int_{0}^{t}\frac{\partial^{2}f}{\partial x^{2}}(M_{s},V_{s})d\langle M\rangle_{s}. \sharp
\end{equation}

A suggestive way to write (\ref{chueq52}) is by using differentials
\begin{equation}{\label{chueq53}}\tag{37}
df(M_{t},V_{t})=\frac{\partial f}{\partial x}(M_{t},V_{t})dM_{t}+\frac{\partial f}{\partial y}(M_{t},V_{t})dV_{t}+\frac{1}{2}\frac{\partial^{2}f}{\partial x^{2}}(M_{t},V_{t})d\langle M\rangle_{t}.
\end{equation}
Of course the rigorous interpretation of (\ref{chueq53}) is the integrated form (\ref{chueq52}).

Example. Let \(W\) denote a Brownian motion in \(\mathbb{R}\) and let \(f(x)=x^{2}\). With \(M=W\) and \(f(x,y)=f(x)\), (\ref{chueq53}) becomes \(dW_{t}^{2}= 2W_{t}dW_{t}+dt\). Formally this suggests \((dW_{t})^{2}=dt\). For general \(M\) the appropriate formalism is \((dM_{t})^{2}=d\langle M\rangle_{t}\). Heuristically this explains the presence of the additional term in the Ito formula.  \(\sharp\)

Now we consider the multi-dimensional Ito formula.

Theorem. (Ito’s Formula) Let \(m,n\in {\bf N}\). Let \(M^{(i)}\) be a continuous local martingale for \(1\leq i\leq m\), and \(V^{(k)}\) be a continuous process which is locally of bounded variation for \(1\leq k\leq n\). Suppose that \(D\) is a domain in \(\mathbb{R}^{m+n}\) such that a.s. \({\bf Z}_{t}=(M_{t}^{(1)},\cdots , M_{t}^{(m)},V_{t}^{(1)},\cdots ,V_{t}^{(n)})\) takes values in \(D\) for all \(t\). Let \(f({\bf x},{\bf y})\) be a continuous real-valued function of \(({\bf x},{\bf y})\in D\) such that \(\partial f/\partial x_{i}\), \(\partial^{2}f/\partial x_{i}\partial x_{j}\) for \(1\leq i,j\leq m\) and \(\partial f/\partial y_{k}\) for \(1\leq k\leq n\), exist and are continuous in \(D\). Then a.s. we have for all \(t\)
\[f({\bf Z}_{t})-f({\bf Z}_{0})=\sum_{i=1}^{m}\int_{0}^{t}\frac{\partial f}{\partial x_{i}}({\bf Z}_{s})dM_{s}^{(i)}+\sum_{k=1}^{n}\int_{0}^{t}\frac{\partial f}{\partial y_{k}}({\bf Z}_{s})dV_{s}^{(k)}+\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\int_{0}^{t}\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}({\bf Z}_{s})d\langle M^{(i)},M^{(j)}\rangle_{s}. \sharp\]

It is worthwhile stating the Ito formula for Brownian motion as a corollary.

\begin{equation}{\label{chuc511}}\tag{38}\mbox{}\end{equation}

Corollary \ref{chuc511}. Let \({\bf W}=(W^{(1)},\cdots ,W^{(n)})\) be an \(n\)-dimensional Brownian motion. Suppose that \(D\) is a domain in \(\mathbb{R}^{n}\) such that \(P\{{\bf W}_{t}\in D \mbox{ for all }t\}=1\). Let \(f\) be a continuous real-valued function on \(D\) such that \(\partial f/\partial x_{i}\) and \(\partial^{2}f/\partial x_{i}\partial x_{j}\) exist and are continuous in \(D\) for \(1\leq i,j\leq n\). Then a.s. we have for all \(t\)
\begin{equation}{\label{chueq523}}\tag{39}
f({\bf W}_{t})-f({\bf W}_{0})=\sum_{i=1}^{n}\int_{0}^{t}\frac{\partial f}{\partial x_{i}}({\bf W}_{s})dW_{s}^{(i)}+\frac{1}{2}\int_{0}^{t}\Delta f({\bf W}_{s})ds
\end{equation}
where \(\Delta f\) denotes the Laplacian of \(f\), defined by \(\Delta f=\sum_{i=1}^{n}\partial^{2}f/\partial x_{i}^{2}\). \(\sharp\)

Example. If Corollary \ref{chuc511} is applied to a Brownian motion in \(\mathbb{R}^{3}\) starting at \({\bf x}_{0}\neq {\bf 0}\) with \(D=\mathbb{R}^{3}\setminus \{{\bf 0}\}\) and \(f({\bf x})=1/\parallel {\bf x}\parallel\), then since \(P^{{\bf x}_{0}}\{{\bf W}_{t}={\bf 0}\mbox{ for some }t\geq 0\}=0\), \(\partial f/\partial x_{i}=-x_{i}/\parallel {\bf x}\parallel^{3}\) and \(\Delta f=0\) in \(D\), (\ref{chueq523}) becomes
\[\frac{1}{\parallel {\bf W}_{t}\parallel}-\frac{1}{\parallel {\bf W}_{0}\parallel}=-\sum_{i=1}^{3}\int_{0}^{t}\frac{W_{s}^{(i)}}{\parallel {\bf W}_{s}\parallel^{3}}dW_{s}^{(i)}.\]
Each of the stochastic integrals in the sum on the right is a local martingale and hence \(\{1/\parallel {\bf W}_{t}\parallel\}_{t\geq 0}\) is a local martingale. \(\sharp\)

Example. (Bessel Process). Let \({\bf W}\) be a Brownian motion in \(\mathbb{R}^{n}\) for \(n\geq 2\) such that \({\bf W}_{0}={\bf x}_{0}\in\mathbb{R}^{n}\setminus \{{\bf 0}\}\) \(P\)-a.s. Then \(P\{{\bf W}_{t}={\bf 0}\mbox{ for some } t\geq 0\}=0\). By applying Corollary \ref{chuc511} with \(D=\mathbb{R}^{n} \setminus\{{\bf 0}\}\) and \(f({\bf x})=\parallel {\bf x}\parallel\), so that \(\partial f/\partial x_{i}=x_{i}/\parallel {\bf x}\parallel\) and \(\Delta f=(n-1)/\parallel {\bf x}\parallel\), we obtain a.s. for all \(t\geq 0\)
\[\parallel {\bf W}_{t}\parallel -\parallel {\bf x}_{0}\parallel =\int_{0}^{t}\frac{{\bf W}_{s}}{\parallel {\bf W}_{s}\parallel}d{\bf W}_{s}+\int_{0}^{t}\left (\frac{n-1}{2}\right )\cdot\frac{1}{\parallel {\bf W}_{s}\parallel}ds,\]
where
\[\int_{0}^{t}\frac{{\bf W}_{s}}{\parallel {\bf W}_{s}\parallel}d{\bf W}_{s}=\sum_{i=1}^{n}\int_{0}^{t}\frac{W_{s}^{(i)}}{\parallel {\bf W}_{s}\parallel}dW_{s}^{(i)}\]
defines a continuous local martingale \(\{M_{t}\}_{t\geq 0}\). By (\ref{chueq514}) and the fact that \(\langle W^{(i)},W^{(j)}\rangle_{t}=\delta_{ij}\cdot t\), we have
\[\langle M\rangle_{t}=\sum_{i=1}^{n}\int_{0}^{t}\frac{(W_{s}^{(i)})^{2}}{\parallel {\bf W}_{s}\parallel^{2}}ds=t.\]
It follows from the characterization of Brownian motion given in Theorem \ref{chut61} that \(M\) is a one-dimensional Brownian motion. Thus \(R_{t}\equiv\parallel {\bf W}_{t}\parallel\) is a (weak) solution of the stochastic differential equation
\[dR_{t}=\left (\frac{n-1}{2}\right )\cdot\frac{1}{R_{t}}dt+dW_{t},\]
where \(W=M\) is a one-dimensional Brownian motion. \(\sharp\)
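A discretized sketch of this decomposition for \(n=3\) (starting point, step count, and seed are assumptions): along a sampled path of a Brownian motion in \(\mathbb{R}^{3}\), \(\parallel {\bf W}_{t}\parallel -\parallel {\bf W}_{0}\parallel\) should be close to the stochastic-integral term plus the drift term.

```python
import math
import random

random.seed(5)
steps, t, dim = 100_000, 1.0, 3
dt = t / steps
sd = math.sqrt(dt)
w = [2.0, 0.0, 0.0]  # x0 = (2, 0, 0), away from the origin
r0 = 2.0
mart = 0.0   # left-endpoint sum for int (W_s / ||W_s||) . dW_s
drift = 0.0  # Riemann sum for ((dim - 1)/2) * int ds / ||W_s||
for _ in range(steps):
    r = math.sqrt(sum(c * c for c in w))
    dw = [random.gauss(0.0, sd) for _ in range(dim)]
    mart += sum(wc * dwc for wc, dwc in zip(w, dw)) / r
    drift += (dim - 1) / (2.0 * r) * dt
    w = [wc + dwc for wc, dwc in zip(w, dw)]
rt = math.sqrt(sum(c * c for c in w))
error = rt - r0 - mart - drift
print(error)  # small discretization error
```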

Employing the Ito formula, we can prove the following result.

Proposition. Let \(M\) and \(A\) be continuous adapted processes such that \(A\) is increasing and \(A_{0}=0\). For each \(\alpha\in \mathbb{R}\), let \(Z^{(\alpha )}\) be the process defined by
\[Z_{t}^{(\alpha )}=\exp\left (\alpha\cdot M_{t}-\frac{\alpha^{2}}{2}\cdot A_{t}\right ).\]
Then the following two assertions are equivalent

(a) \(M\) is a local martingale and \(\langle M\rangle =A\).

(b) For each \(\alpha\in \mathbb{R}\), \(Z^{(\alpha )}\) is a local martingale.

Moreover, if \(M\) is an \(L^{2}\)-martingale with \(\langle M\rangle =A\) and \(\alpha\) is such that \(Z_{0}^{(\alpha )}\in L^{2}\) and
\[\mathbb{E}\left [\int_{0}^{t}(Z_{s}^{(\alpha )})^{2}dA_{s}\right ]<\infty\mbox{ for each }t,\]
then \(Z^{(\alpha )}\) is an \(L^{2}\)-martingale. On the other hand, if the following two conditions are satisfied

(i) the random variable \(A_{t}\) is bounded for each \(t\),

(ii) there is \(\alpha_{0}>0\) such that \(\mathbb{E}[\exp (\alpha_{0}\cdot |M_{t}|)]<\infty\) for each \(t\) and \(Z^{(\alpha )}\) is a martingale for \(|\alpha |\leq\frac{1}{2}\cdot\alpha_{0}\),

then \(M\) is an \(L^{2}\)-martingale with \(\langle M\rangle =A\). \(\sharp\)
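A Monte Carlo sketch of assertion (b) in the simplest case \(M=W\) (Brownian motion) and \(A_{t}=t\); the sample size, \(\alpha\), and seed are assumptions. Since \(W_{t}\sim N(0,t)\), the sample average of \(Z_{t}^{(\alpha )}=\exp (\alpha W_{t}-\frac{\alpha^{2}}{2}t)\) should be near \(1\), consistent with the martingale property \(\mathbb{E}[Z_{t}^{(\alpha )}]=\mathbb{E}[Z_{0}^{(\alpha )}]=1\).

```python
import math
import random

random.seed(42)
alpha, t, trials = 0.5, 1.0, 200_000
total = 0.0
for _ in range(trials):
    wt = random.gauss(0.0, math.sqrt(t))  # one sample of W_t
    total += math.exp(alpha * wt - 0.5 * alpha * alpha * t)
mean = total / trials
print(mean)  # near 1
```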

Approach II and Ito’s Formula.

This approach follows Protter \cite{pro}. Let \(M\) and \(N\) be semimartingales. The quadratic variation process of \(M\), denoted \(\langle M\rangle\), is defined by (ref. equation (\ref{reveq*99}))
\[\langle M\rangle =M^{2}-2\int M_{-}dM.\]
Recall that \(M_{0-}=0\). The {\bf quadratic covariation} of \(M\) and \(N\), also called the {\bf bracket process} of \(M\) and \(N\), is defined by
\[\langle M,N\rangle =MN-\int M_{-}dN-\int N_{-}dM.\]
It is clear that the operation \((M,N)\mapsto\langle M,N\rangle\) is bilinear and symmetric. We therefore have a polarization identity
\[\langle M,N\rangle =\frac{1}{2}(\langle M+N\rangle -\langle M\rangle-\langle N\rangle ).\]
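For completeness, this identity can be checked directly from the two defining formulas above, using bilinearity of the stochastic integral:

```latex
\begin{align*}
\langle M+N\rangle &= (M+N)^{2}-2\int (M+N)_{-}\,d(M+N)\\
&= M^{2}+N^{2}+2MN-2\int M_{-}dM-2\int N_{-}dN-2\int M_{-}dN-2\int N_{-}dM\\
&= \langle M\rangle +\langle N\rangle +2\left (MN-\int M_{-}dN-\int N_{-}dM\right )\\
&= \langle M\rangle +\langle N\rangle +2\langle M,N\rangle .
\end{align*}
```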

\begin{equation}{\label{prot222}}\tag{40}\mbox{}\end{equation}

Proposition \ref{prot222}. (Protter \cite{pro}). The quadratic variation process of \(M\) is an RCLL, increasing, adapted process. Moreover it satisfies

(i) \(\langle M\rangle _{0}=M_{0}^{2}\) and \(\Delta\langle M\rangle =(\Delta M)^{2}\)

(ii) If \(\{\sigma_{n}\}\) is a sequence of random partitions tending to the identity \((\)ref. Definition \ref{prod*7}$)$, then
\[M_{0}^{2}+\sum_{i}(M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}})^{2}\rightarrow\langle M\rangle\]
with convergence in ucp, where \(\sigma_{n}\) is the sequence \(0=T_{0}^{(n)}\leq T_{1}^{(n)}\leq\cdots\leq T_{n_{k}}^{(n)}\) and where \(T_{i}^{(n)}\) are stopping times.

(iii) If \(T\) is a stopping time, then
\[\langle M^{T},M\rangle =\langle M,M^{T}\rangle =\langle M^{T}\rangle=\langle M\rangle ^{T}. \sharp\]

An immediate consequence of Proposition \ref{prot222} is the observation that if \(W\) is a Brownian motion, then \(\langle W\rangle _{t}=t\). Another consequence of Proposition \ref{prot222} is that if \(M\) is a semimartingale with continuous paths of finite variation, then \(\langle M\rangle\) is the constant process equal to \(M_{0}^{2}\). To see this one need only observe that
\[\sum_{i} (M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}})^{2}\leq\sup_{i}\left |M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}}\right |\cdot\sum_{i}\left |M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}}\right |\leq\sup_{i}\left |M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}}\right |\cdot V,\]
where \(V\) is the total variation. Therefore the sums tend to zero as \(\parallel\sigma_{n}\parallel\rightarrow 0\). Proposition \ref{prot222} also has several consequences.

\begin{equation}{\label{proc2221}}\tag{41}\mbox{}\end{equation}

Corollary \ref{proc2221}. (Protter \cite{pro}). The bracket process \(\langle M,N\rangle\) of two semimartingales has paths of finite variation on compacts, and it is also a semimartingale.

Proof. By the polarization identity \(\langle M,N\rangle\) is the difference of two increasing processes, hence its paths are of finite variation. Moreover, the paths are clearly RCLL, and the process is adapted. Hence by Proposition \ref{prot27} it is a semimartingale. \(\blacksquare\)

Corollary. (Integration by Parts). Let \(M\) and \(N\) be two semimartingales. Then \(MN\) is a semimartingale and
\[MN=\int M_{-}dN+\int N_{-}dM+\langle M,N\rangle .\]

Proof. The formula follows trivially from the definition of \(\langle M,N\rangle\). That \(MN\) is a semimartingale follows from the formula, Proposition \ref{prot219}, and Corollary \ref{proc2221} above. \(\blacksquare\)

A proposition analogous to Proposition \ref{prot222} holds for \(\langle M,N\rangle\) as well as \(\langle M\rangle\).

\begin{equation}{\label{prot223}}\tag{42}\mbox{}\end{equation}

Proposition \ref{prot223}. Let \(M\) and \(N\) be two semimartingales. Then the bracket process \(\langle M,N\rangle\) satisfies

(i) \(\langle M,N\rangle _{0}=M_{0}\cdot N_{0}\) and \(\Delta \langle M,N\rangle =\Delta M\cdot\Delta N\)

(ii) If \(\{\sigma_{n}\}\) is a sequence of random partitions tending to the identity, then
\[\langle M,N\rangle =M_{0}\cdot N_{0}+\lim_{n\rightarrow\infty}\sum_{i}(M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}})\cdot (N^{T_{i+1}^{(n)}}-N^{T_{i}^{(n)}}),\]
where convergence is in ucp, and where \(\sigma_{n}\) is the sequence \(0=T_{0}^{(n)}\leq T_{1}^{(n)}\leq\cdots\leq T_{n_{k}}^{(n)}\) with \(T_{i}^{(n)}\) stopping times.

(iii) If \(T\) is a stopping time, then
\[\langle M^{T},N\rangle =\langle M,N^{T}\rangle =\langle M^{T},N^{T}\rangle=\langle M,N\rangle ^{T}. \sharp\]

\begin{equation}{\label{prot225}}\tag{43}\mbox{}\end{equation}

Theorem \ref{prot225}. (Kunita-Watanabe Inequality). Let \(M\) and \(N\) be two semimartingales, and let \(X\) and \(Y\) be two measurable processes. Then one has a.s.
\[\int_{0}^{\infty}|X_{s}||Y_{s}||d\langle M,N\rangle _{s}|\leq\left (\int_{0}^{\infty}X_{s}^{2}d\langle M\rangle _{s}\right )^{1/2}\left (\int_{0}^{\infty}Y_{s}^{2}d\langle N\rangle _{s}\right )^{1/2}. \sharp\]
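A discrete sketch of the Kunita-Watanabe inequality (all data below are made up): replacing the bracket measures by increment products, as in the approximating sums, the statement reduces to the Cauchy-Schwarz inequality.

```python
dm = [0.2, -0.3, 0.1, 0.4, -0.25]   # increments of M
dn = [-0.15, 0.2, 0.35, -0.1, 0.3]  # increments of N
x = [1.0, -2.0, 0.5, 1.5, -1.0]     # values of the integrand X
y = [0.7, 1.2, -0.4, 2.0, 0.9]      # values of the integrand Y

# Left side: sum |X||Y| |dm*dn|; right side: product of the two L^2 norms.
lhs = sum(abs(a * b * p * q) for a, b, p, q in zip(x, y, dm, dn))
rhs = (sum(a * a * p * p for a, p in zip(x, dm)) ** 0.5
       * sum(b * b * q * q for b, q in zip(y, dn)) ** 0.5)
print(lhs <= rhs)  # True
```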

Corollary. Let \(M\) and \(N\) be two semimartingales, and let \(X\) and \(Y\) be two measurable processes. Then
\[\mathbb{E}\left [\int_{0}^{\infty}|X_{s}||Y_{s}||d\langle M,N\rangle _{s}|\right ]\leq\left |\!\left |\left (\int_{0}^{\infty}X_{s}^{2}d\langle M\rangle _{s}\right )^{1/2}\right |\!\right |_{L^{p}}\cdot
\left |\!\left |\left (\int_{0}^{\infty}Y_{s}^{2}d\langle N\rangle _{s}\right )^{1/2}\right |\!\right |_{L^{q}}\]
if \(1/p+1/q=1\).

Proof. Apply H\"older's inequality to the Kunita-Watanabe inequality. \(\blacksquare\)

Since Theorem \ref{prot225} and its Corollary are path-by-path Lebesgue-Stieltjes results, we do not have to assume that the processes \(X\) and \(Y\) be adapted. Since the process \(\langle M\rangle\) is nondecreasing with right continuous paths, and since \(\Delta \langle M\rangle _{t}=(\Delta M_{t})^{2}\) for all \(t\geq 0\) with the convention that \(M_{0-}=0\), we can decompose \(\langle M\rangle\) path by path into its continuous part and its pure jump part.

Definition. For a semimartingale \(M\), the process \(\langle M\rangle ^{c}\) denotes the path by path continuous part of \(\langle M\rangle\). \(\sharp\)

We can then write
\[\langle M\rangle _{t}=\langle M\rangle _{t}^{c}+M_{0}^{2}+\sum_{0<s\leq t}(\Delta M_{s})^{2}\]
and
\[\langle M\rangle _{t}=\langle M\rangle _{t}^{c}+\sum_{0\leq s\leq t}(\Delta M_{s})^{2}.\]
Observe that \(\langle M\rangle _{0}^{c}=0\).

Definition. A semimartingale \(M\) will be called quadratic pure jump if \(\langle M\rangle ^{c}=0\). \(\sharp\)

Proposition. If \(M\) is an adapted, RCLL process whose paths are of finite variation on compacts, then \(M\) is a quadratic pure jump semimartingale. \(\sharp\)
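A quick numerical sketch of this proposition: for a standard Poisson process every jump has size \(1\), so the continuous part of the bracket vanishes and \(\langle N\rangle _{t}\) is simply the number of jumps by time \(t\) (the rate, horizon and seed below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam = 1.0, 5.0

# Jump times of a Poisson process on [0, T]; every jump has size 1,
# so <N>_t = <N>^c_t + sum of squared jumps = 0 + #{jumps <= t}.
num_jumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0.0, T, num_jumps))

def bracket(t):
    # Sum of squared jump sizes up to time t (all sizes equal 1).
    return np.sum((jump_times <= t) * 1.0**2)

assert bracket(T) == num_jumps
```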

Proposition. Let \(M\) be a quadratic pure jump semimartingale. Then for any semimartingale \(N\) we have
\[\langle M,N\rangle _{t}=M_{0}\cdot N_{0}+\sum_{0<s\leq t}\Delta M_{s}\cdot\Delta N_{s}.\]

Proof. The Kunita-Watanabe inequality tells us \(d\langle M,N\rangle _{s}\) is a.s. absolutely continuous with respect to \(d\langle M\rangle _{s}\) (path by path). Thus \(\langle M\rangle ^{c}=0\) implies \(\langle M,N\rangle ^{c}=0\), and hence \(\langle M,N\rangle \) is the sum of its jumps, and the result follows by Proposition \ref{prot223}. \(\blacksquare\)

Proposition. Let \(M\) and \(N\) be two semimartingales, and let \(X,Y\in {\bf L}\). Then
\[\langle X\bullet M,Y\bullet N\rangle _{t}=\int_{0}^{t} X_{s}Y_{s}d\langle M,N\rangle _{s}\]
and, in particular,
\[\langle X\bullet M\rangle _{t}=\int_{0}^{t}X_{s}^{2}d\langle M\rangle _{s}.\]

Proof. First assume (without loss of generality) that \(M_{0}=N_{0}=0\). It suffices to establish the following result
\begin{equation}{\label{proeq2*}}\tag{44}
\langle X\bullet M,N\rangle _{t}=\int_{0}^{t}X_{s}d\langle M,N\rangle _{s},
\end{equation}
and then apply it again, by the symmetry of the form \(\langle \cdot ,\cdot \rangle\). First suppose \(X\) is the indicator of a stochastic interval. That is, \(X=I_{[0,T]}\), where \(T\) is a stopping time. Establishing (\ref{proeq2*}) is equivalent in this case to showing \(\langle M^{T},N\rangle =\langle M,N\rangle^{T}\), a result that is an obvious consequence of Proposition \ref{prot223}, which approximates \(\langle M,N\rangle \) by sums. Next suppose \(X=U\cdot I_{(S,T]}\), where \(S\) and \(T\) are stopping times, \(S\leq T\) a.s., and \(U\in {\cal F}_{S}\). Then
\[\int X_{s}dM_{s}=U\cdot (M^{T}-M^{S}),\]
and by Proposition \ref{prot223}
\[\langle X\bullet M,N\rangle =U\cdot (\langle M^{T},N\rangle -\langle M^{S},N\rangle )=U\cdot (\langle M,N\rangle ^{T}-\langle M,N\rangle ^{S})=\int X_{s}d\langle M,N\rangle _{s}.\]
The result now follows for \(X\in {\bf S}\) by linearity. Finally, suppose \(X\in {\bf L}\) and let \(\{X^{(n)}\}\) be a sequence in \({\bf S}\) converging in ucp to \(X\). Let \(Z^{(n)}=X^{(n)}\bullet M\) and \(Z=X\bullet M\). We know \(Z^{(n)}\) and \(Z\) are all semimartingales. We have
\[\int X_{s}^{(n)}d\langle M,N\rangle _{s}=\langle Z^{(n)},N\rangle ,\]
since \(X^{(n)}\in {\bf S}\), and using integration by parts
\[\langle Z^{(n)},N\rangle =NZ^{(n)}-\int N_{-}dZ^{(n)}-\int Z_{-}^{(n)}dN=NZ^{(n)}-\int N_{-}X^{(n)}dM-\int Z_{-}^{(n)}dN.\]
By the definition of the stochastic integral, we know \(Z^{(n)}\rightarrow Z\) in ucp, and since \(X^{(n)}\rightarrow X\) in ucp, letting \(n\rightarrow\infty\) we have
\begin{align*}
\lim_{n\rightarrow\infty} \langle Z^{(n)},N\rangle & =NZ-\int N_{-}XdM-\int Z_{-}dN\\
& =NZ-\int N_{-}dZ-\int Z_{-}dN=\langle Z,N\rangle ,
\end{align*}
again by integration by parts. Since
\[\lim_{n\rightarrow\infty}\int X_{s}^{(n)}d\langle M,N\rangle _{s}=\int X_{s}d\langle M,N\rangle _{s},\]
we have
\[\langle Z,N\rangle =\langle X\bullet M,N\rangle =\int X_{s}d\langle M,N\rangle ,\]
and the proof is complete. \(\blacksquare\)

Proposition. Let \(X\) be an RCLL, adapted process, and let \(M\) and \(N\) be two semimartingales. Let \(\{\sigma_{n}\}\) be a sequence of random partitions tending to the identity. Then
\[\sum X_{T_{i}^{(n)}}\cdot (M^{T_{i+1}^{(n)}}-M^{T_{i}^{(n)}})\cdot (N^{T_{i+1}^{(n)}}-N^{T_{i}^{(n)}})\]
converges in ucp to \(\int X_{s-}d\langle M,N\rangle _{s}\) (with \(X_{0-}=0\)). Here \(\sigma_{n}\) is the random partition \(0\leq T_{0}^{(n)}\leq T_{1}^{(n)}\leq\cdots\leq T_{k_{n}}^{(n)}\). \(\sharp\)
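As a sanity check of this approximation, here is a Monte Carlo sketch with \(X=M=N=W\) a Brownian path, so that the limit is \(\int_{0}^{t}W_{s}\,ds\) since \(\langle W\rangle _{s}=s\) (the grid and tolerance are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 100_000, 1.0
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))  # W[i] = W_{t_i}

# Left-endpoint sums X_{T_i} (M^{T_{i+1}} - M^{T_i}) (N^{T_{i+1}} - N^{T_i})
# with X = M = N = W; the limit is \int_0^T W_{s-} d<W>_s = \int_0^T W_s ds.
approx = np.sum(W[:-1] * dW**2)
target = np.sum(W[:-1] * dt)  # Riemann sum for \int_0^T W_s ds
assert abs(approx - target) < 0.05
```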

We will state and prove a change of variables formula valid for all semimartingales.

\begin{equation}{\label{prot232}}\tag{45}\mbox{}\end{equation}

Theorem \ref{prot232}. (Ito's Formula). Let \(X\) be a semimartingale and let \(f\) be a \(C^{2}\) real-valued function. Then \(f(X)\) is again a semimartingale, and the following formula holds
\[f(X_{t})-f(X_{0})=\int_{0+}^{t}f'(X_{s-})dX_{s}+\frac{1}{2}\int_{0+}^{t}f''(X_{s-})d\langle X\rangle _{s}^{c}+\sum_{0<s\leq t}\left (f(X_{s})-f(X_{s-})-f'(X_{s-})\cdot\Delta X_{s}\right ). \sharp\]

Corollary. (Ito's Formula). Let \(X\) be a continuous semimartingale and let \(f\) be a \(C^{2}\) real-valued function. Then \(f(X)\) is again a semimartingale and the following formula holds
\[f(X_{t})-f(X_{0})=\int_{0+}^{t}f'(X_{s})dX_{s}+\frac{1}{2}\int_{0+}^{t}f''(X_{s})d\langle X\rangle _{s}. \sharp\]
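For \(f(x)=x^{2}\) and \(X=W\) a Brownian path, the corollary reads \(W_{t}^{2}=2\int_{0}^{t}W_{s}dW_{s}+\langle W\rangle _{t}\), and its left-endpoint discretization is an exact telescoping identity on any grid, which makes it easy to check numerically (the grid below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
dW = rng.normal(0.0, 0.01, n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Telescoping: W_T^2 = sum (W_{i+1}^2 - W_i^2)
#            = sum 2 W_i dW_i + sum (dW_i)^2,
# i.e. the left-endpoint Ito sum plus the discrete quadratic variation.
ito_sum = np.sum(2.0 * W[:-1] * dW)   # discretizes 2 \int W dW
qv = np.sum(dW**2)                    # discretizes <W>_T
assert abs(W[-1]**2 - (ito_sum + qv)) < 1e-10
```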

Ito's formula has a multidimensional analog.

\begin{equation}{\label{prot233}}\tag{46}\mbox{}\end{equation}

Theorem \ref{prot233}. (Ito's Formula). Let \({\bf X}=(X^{(1)},\cdots ,X^{(n)})\) be an \(n\)-dimensional semimartingale (that is, each component \(X^{(i)}\) is a semimartingale), and let \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) have continuous second order partial derivatives. Then \(f({\bf X})\) is a semimartingale and the following formula holds
\begin{align*}
f({\bf X}_{t})-f({\bf X}_{0}) & =\sum_{i=1}^{n}\int_{0+}^{t}\frac{\partial f}{\partial x_{i}}({\bf X}_{s-})dX_{s}^{(i)}+\frac{1}{2}\sum_{1\leq i,j\leq n}\int_{0+}^{t}\frac{\partial^{2}f}
{\partial x_{i}\partial x_{j}}({\bf X}_{s-})d\langle X^{(i)},X^{(j)}\rangle _{s}^{c}\\
& +\sum_{0<s\leq t}\left (f({\bf X}_{s})-f({\bf X}_{s-})-\sum_{i=1}^{n}\frac{\partial f}{\partial x_{i}}({\bf X}_{s-})\cdot\Delta X_{s}^{(i)}\right ). \sharp
\end{align*}

The stochastic integral calculus, as revealed by Theorems \ref{prot232} and \ref{prot233}, is different from the classical Lebesgue-Stieltjes calculus. By restricting the class of integrands to semimartingales made left continuous (instead of \({\bf L}\)), one can define a stochastic integral that obeys the traditional rules of the Lebesgue-Stieltjes calculus.

Definition. Let \(X\) and \(Y\) be semimartingales. Define the Fisk-Stratonovich integral of \(Y\) with respect to \(X\) by
\[\int_{0}^{t}Y_{s-}\circ dX_{s}\equiv\int_{0}^{t}Y_{s-}dX_{s}+\frac{1}{2}\langle Y,X\rangle _{t}^{c}. \sharp\]
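For \(X=Y=W\) a Brownian path, the definition can be checked on a grid: the left-endpoint Ito sum plus half the discrete bracket coincides with the midpoint (Stratonovich) sum, and both telescope to \(W_{t}^{2}/2\), the answer classical calculus gives (the grid and seed below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
dW = rng.normal(0.0, 0.01, n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Ito (left-endpoint) sum corrected by half the discrete bracket...
strat = np.sum(W[:-1] * dW) + 0.5 * np.sum(dW**2)
# ...agrees with the midpoint (Stratonovich) sum, and both telescope
# to W_T^2 / 2, as in the classical calculus.
midpoint = np.sum(0.5 * (W[:-1] + W[1:]) * dW)
assert abs(strat - midpoint) < 1e-10
assert abs(strat - 0.5 * W[-1]**2) < 1e-10
```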

\begin{equation}{\label{prot234}}\tag{47}\mbox{}\end{equation}

Theorem \ref{prot234}. Let \(X\) be a semimartingale and let \(f\) be \(C^{3}\). Then
\[f(X_{t})-f(X_{0})=\int_{0+}^{t}f'(X_{s-})\circ dX_{s}+\sum_{0<s\leq t}\left (f(X_{s})-f(X_{s-})-f'(X_{s-})\cdot\Delta X_{s}\right ). \sharp\]

Note that if \(X\) is a semimartingale with continuous paths, then Theorem \ref{prot234} reduces to the classical Riemann-Stieltjes formula
\[f(X_{t})-f(X_{0})=\int_{0}^{t}f'(X_{s})\circ dX_{s};\]
this is the main attraction of the Fisk-Stratonovich integral.

Corollary. (Integration by Parts). Let \(X\) and \(Y\) be semimartingales with at least one of \(X\) and \(Y\) continuous. Then
\[X_{t}Y_{t}-X_{0}Y_{0}=\int_{0+}^{t}X_{s-}\circ dY_{s}+\int_{0+}^{t}Y_{s-}\circ dX_{s}.\]

Proof. The standard integration by parts formula is
\[X_{t}Y_{t}=\int_{0}^{t}X_{s-}dY_{s}+\int_{0}^{t}Y_{s-}dX_{s}+\langle X,Y\rangle _{t}.\]
However \(\langle X,Y\rangle _{t}=\langle X,Y\rangle _{t}^{c}+X_{0}Y_{0}\) if one of \(X\) and \(Y\) is continuous. Thus adding \(\frac{1}{2}\langle X,Y\rangle _{t}^{c}\) to each integral on the right side yields the result. \(\blacksquare\)

As an application of the change of variables formula, we investigate a simple, yet important and nontrivial, stochastic differential equation. We treat it in integral form.

\begin{equation}{\label{prot236}}\tag{48}\mbox{}\end{equation}

Proposition \ref{prot236}. Let \(X\) be a semimartingale with \(X_{0}=0\). Then there exists a (unique) semimartingale \(Z\) that satisfies the equation
\[Z_{t}=1+\int_{0}^{t}Z_{s-}dX_{s};\]
$Z$ is given by
\[Z_{t}=\exp\left (X_{t}-\frac{1}{2}\langle X\rangle _{t}\right )\cdot\prod_{0<s\leq t}(1+\Delta X_{s})\cdot\exp\left (-\Delta X_{s}+\frac{1}{2}(\Delta X_{s})^{2}\right )\]
where the infinite product converges. \(\sharp\)

Definition. For a semimartingale \(X\) with \(X_{0}=0\), the stochastic exponential of \(X\), written \({\cal E}(X)\), is the (unique) semimartingale \(Z\) that is a solution of
\[Z_{t}=1+\int_{0}^{t}Z_{s-}dX_{s}. \sharp\]

The stochastic exponential is also known as the Dol\'{e}ans-Dade exponential. Proposition \ref{prot236} gives a general formula for \({\cal E}(X)\). This formula simplifies considerably when \(X\) is continuous. Indeed, let \(X\) be a continuous semimartingale with \(X_{0}=0\). Then
\[({\cal E}(X))_{t}=\exp\left (X_{t}-\frac{1}{2}\langle X\rangle _{t}\right ).\]
An important special case is when the semimartingale \(X\) is a multiple \(\lambda\) of a standard Brownian motion \(W=\{W_{t}\}_{t\geq 0}\). Since \(\lambda W\) has no jumps, we have
\[({\cal E}(\lambda W))_{t}=\exp\left (\lambda W_{t}-\frac{\lambda^{2}}{2}\cdot \langle W\rangle _{t}\right )=\exp\left (\lambda W_{t}-\frac{\lambda^{2}}{2}\cdot t\right ).\]
Moreover, since
\[({\cal E}(\lambda W))_{t}=1+\lambda\cdot\int_{0}^{t}({\cal E}(\lambda W))_{s-}dW_{s}\]
we see that \(({\cal E}(\lambda W))_{t}\) is a continuous martingale. The process \({\cal E}(\lambda W)\) is sometimes referred to as geometric Brownian motion.
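The martingale property implies \(\mathbb{E}[({\cal E}(\lambda W))_{t}]=1\) for every \(t\), which is easy to check by Monte Carlo since only the terminal value \(W_{t}\sim N(0,t)\) is needed (the parameters, sample size and tolerance below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t, paths = 1.0, 1.0, 200_000

# E(lam W)_t = exp(lam W_t - lam^2 t / 2); only the terminal value
# matters for the mean, so sample W_t ~ N(0, t) directly.
W_t = rng.normal(0.0, np.sqrt(t), paths)
Z_t = np.exp(lam * W_t - 0.5 * lam**2 * t)
assert abs(Z_t.mean() - 1.0) < 0.02  # E[Z_t] = Z_0 = 1
```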

\begin{equation}{\label{prot237}}\tag{49}\mbox{}\end{equation}

Proposition \ref{prot237}. Let \(X\) and \(Y\) be two semimartingales with \(X_{0}=Y_{0}=0\). Then \({\cal E}(X)\cdot {\cal E}(Y)={\cal E}(X+Y+\langle X,Y\rangle )\).

Proof. Let \(U_{t}=({\cal E}(X))_{t}\) and \(V_{t}=({\cal E}(Y))_{t}\). Then the integration by parts formula gives that
\[U_{t}V_{t}-1=\int_{0+}^{t}U_{s-}dV_{s}+\int_{0+}^{t}V_{s-}dU_{s}+\langle U,V\rangle _{t}.\]
Using that \(U\) and \(V\) are stochastic exponentials, this is equivalent to
\[U_{t}V_{t}-1=\int_{0+}^{t}U_{s-}V_{s-}dY_{s}+\int_{0+}^{t}U_{s-}V_{s-}dX_{s}+\int_{0+}^{t}U_{s-}V_{s-}d\langle X,Y\rangle _{s};\]
letting \(W_{t}=U_{t}V_{t}\), we deduce
\[W_{t}=1+\int_{0}^{t}W_{s-}d(X+Y+\langle X,Y\rangle )_{s},\]
and so \(W={\cal E}(X+Y+\langle X,Y\rangle )\), which was to be shown. \(\blacksquare\)
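For \(X=\lambda W\) and \(Y=\mu W\) (an illustrative continuous special case) the identity can be verified in closed form: \(\langle \lambda W,\mu W\rangle _{t}=\lambda\mu t\), and both sides reduce to \(\exp ((\lambda +\mu )W_{t}-\frac{1}{2}(\lambda ^{2}+\mu ^{2})t)\). A sketch (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, mu, t = 0.7, -1.3, 2.0
W_t = rng.normal(0.0, np.sqrt(t))

# E(lam W)_t * E(mu W)_t, each via exp(X_t - <X>_t / 2).
lhs = np.exp(lam * W_t - 0.5 * lam**2 * t) * np.exp(mu * W_t - 0.5 * mu**2 * t)
# E(X + Y + <X,Y>)_t: the exponent is (lam+mu)W_t + lam*mu*t minus half
# the bracket (lam+mu)^2 t (the finite-variation drift adds no bracket).
rhs = np.exp((lam + mu) * W_t + lam * mu * t - 0.5 * (lam + mu)**2 * t)
assert abs(lhs - rhs) < 1e-12
```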

Corollary. Let \(X\) be a continuous semimartingale with \(X_{0}=0\). Then \({\cal E}(X)^{-1}={\cal E}(-X+\langle X\rangle )\).

Proof. By Proposition \ref{prot237}, we have
\[{\cal E}(X)\cdot {\cal E}(-X+\langle X\rangle )={\cal E}(X+(-X+\langle X\rangle )+\langle X,-X\rangle ),\]
since \(\langle X,\langle X\rangle \rangle =0\). The exponent on the right equals \(\langle X\rangle -\langle X\rangle =0\), and \({\cal E}(0)=1\), which completes the proof. \(\blacksquare\)

\begin{equation}{\label{prot238}}\tag{50}\mbox{}\end{equation}

Proposition \ref{prot238}. A stochastic process \(X=\{X_{t}\}_{t\geq 0}\) is a standard Brownian motion if and only if it is a continuous local martingale with \(\langle X\rangle _{t}=t\). \(\sharp\)

Observe that if \(M\) and \(N\) are two continuous martingales such that \(MN\) is a martingale, then \(\langle M,N\rangle =0\). Therefore if \({\bf W}_{t}=(W_{t}^{(1)},\cdots ,W_{t}^{(n)})\) is an \(n\)-dimensional standard Brownian motion, \(W_{t}^{(i)}\cdot W_{t}^{(j)}\) is a martingale for \(i\neq j\), and we have that
\[\langle W^{(i)},W^{(j)}\rangle _{t}=\left\{\begin{array}{ll}
t & \mbox{if \(i=j\)}\\
0 & \mbox{if \(i\neq j\)}.
\end{array}\right .\]
Proposition \ref{prot238} then has a multidimensional version.

Proposition. Let \({\bf X}=(X^{(1)},\cdots ,X^{(n)})\) be continuous local martingales such that
\[\langle X^{(i)},X^{(j)}\rangle _{t}=\left\{\begin{array}{ll}
t & \mbox{if \(i=j\)}\\
0 & \mbox{if \(i\neq j\).}
\end{array}\right .\]
Then \({\bf X}\) is a standard \(n\)-dimensional Brownian motion. \(\sharp\)
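These bracket identities are easy to observe on a grid: for two independent coordinates, the sum of squared increments of each approximates \(t\), while the sum of cross products of increments approximates \(0\) (the grid, seed and tolerances below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 100_000, 1.0
dt = T / n

# Increments of two independent coordinate Brownian motions.
dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rng.normal(0.0, np.sqrt(dt), n)

# Discrete brackets: <W^i, W^j>_T ~ sum of increment products.
assert abs(np.sum(dW1**2) - T) < 0.05  # <W^1>_T = T
assert abs(np.sum(dW1 * dW2)) < 0.05   # <W^1, W^2>_T = 0
```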

Hsien-Chung Wu