Dual Space


This page is divided into the following sections.

\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}

Dual Spaces.

Let \(V\) be a vector space over the scalar field \({\cal F}\). A linear mapping \(f:V\rightarrow {\cal F}\) from \(V\) into \({\cal F}\) is also called a linear functional.

Example. Let \({\cal F}\) be a scalar field, and let \(\lambda_{1},\cdots ,\lambda_{n}\) be scalars in \({\cal F}\). We define a function \(f\) on \({\cal F}^{n}\) by
\[f(x_{1},\cdots ,x_{n})=\lambda_{1}x_{1}+\cdots +\lambda_{n}x_{n}.\]
Then \(f\) is a linear functional. \(\sharp\)

Example. Let \({\cal F}\) be a scalar field, and let \({\cal M}^{n\times n}\) denote the space of all \(n\times n\) matrices with entries in \({\cal F}\). Given any \(A=[a_{ij}]\in{\cal M}^{n\times n}\), the trace of \(A\) is defined by
\[\mbox{tr}(A)=a_{11}+a_{22}+\cdots +a_{nn}.\]
The trace function is a linear functional on \({\cal M}^{n\times n}\), since
\begin{align*} \mbox{tr}(cA+B) & =\sum_{i=1}^{n}\left (ca_{ii}+b_{ii}\right )\\ & =c\sum_{i=1}^{n}a_{ii}+\sum_{i=1}^{n}b_{ii}\\ & =c\cdot\mbox{tr}(A)+\mbox{tr}(B).\end{align*}
\(\sharp\)
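The linearity of the trace can also be checked numerically; the following is a small sanity check (a sketch using NumPy, which is not part of the original notes), verifying \(\mbox{tr}(cA+B)=c\cdot\mbox{tr}(A)+\mbox{tr}(B)\) for random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
c = 2.5

# Linearity of the trace functional: tr(cA + B) = c*tr(A) + tr(B).
lhs = np.trace(c * A + B)
rhs = c * np.trace(A) + np.trace(B)
assert np.isclose(lhs, rhs)
```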

Example. Let \([a,b]\) be a closed interval in \(\mathbb{R}\), and let \(C([a,b])\) denote the space of all continuous functions on \([a,b]\). Then \(L:C([a,b])\rightarrow\mathbb{R}\) defined by
\[L(g)=\int_{a}^{b}g(t)dt\]
is a linear functional on \(C([a,b])\). \(\sharp\)

If \(V\) is a vector space over the scalar field \({\cal F}\), the collection of all linear functionals on \(V\) is the vector space \({\cal L}(V,{\cal F})\) over \({\cal F}\). In this case, we simply write \(V^{*}={\cal L}(V,{\cal F})\), and call \(V^{*}\) the dual space of \(V\).

\begin{equation}{\label{lap84}}\mbox{}\tag{1}\end{equation}

Lemma \ref{lap84}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(W\) be an \(m\)-dimensional vector space over the same scalar field \({\cal F}\). Then, the space \({\cal L}(V,W)\) is finite-dimensional with dimension \(mn\).

Proof. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be ordered bases for \(V\) and \(W\), respectively. For integers \(p\in\{1,2,\cdots ,m\}\) and \(q\in\{1,2,\cdots ,n\}\), we define a linear mapping \(T_{pq}\) by
\[T_{pq}(v_{i})=\left\{\begin{array}{ll}
0 & \mbox{if \(i\neq q\)}\\
w_{p} & \mbox{if \(i=q\)}.
\end{array}\right .\]
According to Lemma \ref{lat10}, there is a unique linear mapping from \(V\) into \(W\) satisfying these conditions. We claim that
\begin{equation}{\label{laeq75}}\tag{2}
\{T_{pq}:p=1,\cdots ,m\mbox{ and }q=1,\cdots ,n\}
\end{equation}
form a basis for \({\cal L}(V,W)\). Let \(T\) be any linear mapping from \(V\) into \(W\). For each \(j=1,\cdots ,n\), let \(\lambda_{1j},\cdots ,\lambda_{mj}\) be the coordinates of the vector \(T(v_{j})\) in the ordered basis \(\mathfrak{B}_{W}\), i.e.,
\[T(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}.\]
Let
\[\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Then, we have
\begin{equation}{\label{laeq76}}\tag{3}
\widehat{T}(v_{j})=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}=T(v_{j}),
\end{equation}
which shows
\[T=\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Therefore, the space \({\cal L}(V,W)\) is spanned by the set in (\ref{laeq75}). It remains to show linear independence. Suppose that \(\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}\) is the zero mapping. Then \(\widehat{T}(v_{j})=\theta_{W}\) for all \(j=1,\cdots ,n\), which also says
\[\sum_{p=1}^{m}\lambda_{pj}w_{p}=\theta_{W}\]
by (\ref{laeq76}). The independence of the set \(\{w_{1},\cdots ,w_{m}\}\) says \(\lambda_{pj}=0\) for all \(p\) and \(j\). This completes the proof. \(\blacksquare\)
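In the standard bases of \({\cal F}^{n}\) and \({\cal F}^{m}\), the mappings \(T_{pq}\) correspond to the matrix units \(E_{pq}\) (the \(m\times n\) matrix with a \(1\) in entry \((p,q)\) and \(0\) elsewhere), and the expansion \(T=\sum_{p,q}\lambda_{pq}T_{pq}\) is just the entrywise decomposition of the matrix of \(T\). A small numerical sketch (using NumPy; the helper name is illustrative, not part of the original notes):

```python
import numpy as np

m, n = 3, 4
rng = np.random.default_rng(1)
M = rng.standard_normal((m, n))  # matrix of an arbitrary linear map T

def matrix_unit(p, q):
    """E_pq: the m-by-n matrix with a 1 in entry (p, q) and zeros elsewhere."""
    out = np.zeros((m, n))
    out[p, q] = 1.0
    return out

# Reassemble M from its coordinates lambda_pq = M[p, q] in the basis {E_pq}:
# the mn matrix units span the space of m-by-n matrices, so dim = mn.
M_hat = sum(M[p, q] * matrix_unit(p, q) for p in range(m) for q in range(n))
assert np.allclose(M, M_hat)
```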

\begin{equation}{\label{lat10}}\mbox{}\tag{4}\end{equation}

Lemma \ref{lat10}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(\{v_{1},v_{2},\cdots ,v_{n}\}\) be a basis of \(V\). Let \(W\) be another vector space over the same scalar field \({\cal F}\), and let \(w_{1},w_{2},\cdots ,w_{n}\) be any elements in \(W\). Then, there exists a unique linear mapping \(T:V\rightarrow W\) such that \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\).

Proof. Given any element \(v\in V\), there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
Therefore, we can define a mapping \(T:V\rightarrow W\) by
\[T(v)=T\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )
=\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\in W.\]
We are going to show that \(T\) is linear.

Let \(u\) be another element of \(V\). There exist \(\beta_{1},\cdots ,\beta_{n}\in {\cal F}\) satisfying
\[u=\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}.\]
Therefore, we have
\[v+u=(\alpha_{1}+\beta_{1})v_{1}+(\alpha_{2}+\beta_{2})v_{2}+\cdots +(\alpha_{n}+\beta_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(v+u) & =(\alpha_{1}+\beta_{1})w_{1}+(\alpha_{2}+\beta_{2})w_{2}+\cdots +(\alpha_{n}+\beta_{n})w_{n}\\
& =\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )+\left (\beta_{1}w_{1}+\beta_{2}w_{2}+\cdots +\beta_{n}w_{n}\right )\\
& =T(v)+T(u).
\end{align*}
Given any \(\alpha\in {\cal F}\), we have
\[\alpha v=(\alpha\alpha_{1})v_{1}+(\alpha\alpha_{2})v_{2}+\cdots +(\alpha\alpha_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(\alpha v) & =(\alpha\alpha_{1})w_{1}+(\alpha\alpha_{2})w_{2}+\cdots +(\alpha\alpha_{n})w_{n}\\
& =\alpha\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )=\alpha T(v).
\end{align*}

It is easy to see that this mapping satisfies \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Next, we want to claim that this mapping is unique. Suppose there exists another linear mapping \(\widehat{T}\) satisfying \(\widehat{T}(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Then, we have
\begin{align*}
\widehat{T}(v) & =\widehat{T}\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )\\
& =\alpha_{1}\widehat{T}(v_{1})+\alpha_{2}\widehat{T}(v_{2})+\cdots +\alpha_{n}\widehat{T}(v_{n})\\
& =\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\\
& =T(v).
\end{align*}
This shows that \(\widehat{T}=T\), and the proof is complete. \(\blacksquare\)

The above Lemmas \ref{lap84} and \ref{lat10} can also be found on the page Linear Mappings.

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). Then \(\dim (V)^{*}=\dim (V)\).

Proof. The result follows immediately from Lemma \ref{lap84}. \(\blacksquare\)

Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) be a basis for \(V\). According to Lemma \ref{lat10}, for each \(i=1,\cdots ,n\), there exists a unique linear functional \(f_{i}\) on \(V\) satisfying \(f_{i}(v_{j})=\delta_{ij}\) for \(j=1,\cdots ,n\). We claim that the \(n\) distinct linear functionals \(f_{1},\cdots ,f_{n}\) on \(V\) are linearly independent. Suppose that
\begin{equation}{\label{laeq85}}\tag{5}
f=\sum_{i=1}^{n}\lambda_{i}f_{i}
\end{equation}
for \(\lambda_{i}\in {\cal F}\), \(i=1,\cdots ,n\). Then, we have
\begin{equation}{\label{laeq86}}\tag{6}
f(v_{j})=\sum_{i=1}^{n}\lambda_{i}f_{i}(v_{j})=\sum_{i=1}^{n}\lambda_{i}\delta_{ij}=\lambda_{j}.
\end{equation}
In particular, if \(f\) is the zero functional, then \(\lambda_{j}=f(v_{j})=0\) for all \(j\). This shows that \(f_{1},\cdots ,f_{n}\) are linearly independent. Since \(\dim (V)^{*}=n\), the set \(\{f_{1},\cdots ,f_{n}\}\) must be a basis for \(V^{*}\). The basis \(\mathfrak{B}^{*}=\{f_{1},\cdots ,f_{n}\}\) is called the dual basis of \(\mathfrak{B}\).

\begin{equation}{\label{lap95}}\mbox{}\tag{7}\end{equation}

Proposition \ref{lap95}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) be a basis for \(V\). Then, there is a unique dual basis \(\mathfrak{B}^{*}=\{f_{1},\cdots ,f_{n}\}\) for \(V^{*}\) satisfying \(f_{i}(v_{j})=\delta_{ij}\) for \(i,j=1,\cdots ,n\).

(i) For each linear functional \(f\) on \(V\), we have
\begin{equation}{\label{laeq87}}\tag{8}
f=\sum_{i=1}^{n}f(v_{i})f_{i}.
\end{equation}

(ii) For each \(v\in V\), we have
\begin{equation}{\label{laeq88}}\tag{9}
v=\sum_{i=1}^{n}f_{i}(v)v_{i}.
\end{equation}

Proof. We have shown that there is a unique dual basis for \(V^{*}\). The expression (\ref{laeq87}) can be obtained from (\ref{laeq85}) and (\ref{laeq86}). Now, for \(v\in V\), there exist \(\lambda_{1},\cdots ,\lambda_{n}\) in \({\cal F}\) satisfying \(v=\sum_{i=1}^{n}\lambda_{i}v_{i}\). Then, we have
\begin{equation}{\label{laeq89}}\tag{10}
f_{j}(v)=\sum_{i=1}^{n}\lambda_{i}f_{j}(v_{i})=\sum_{i=1}^{n}\lambda_{i}\delta_{ij}=\lambda_{j},
\end{equation}
which shows the expression (\ref{laeq88}). This completes the proof. \(\blacksquare\)
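Both expansions in Proposition \ref{lap95} can be verified numerically. If the basis vectors \(v_{1},\cdots ,v_{n}\) are the columns of an invertible matrix, then the dual functionals \(f_{1},\cdots ,f_{n}\) are represented by the rows of its inverse, since the row-column products reproduce \(f_{i}(v_{j})=\delta_{ij}\). A sketch (using NumPy; not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
V = rng.standard_normal((n, n))   # columns: a random basis of R^3
                                  # (invertible with probability 1)
F = np.linalg.inv(V)              # row i represents the dual functional f_i

# The defining property of the dual basis: f_i(v_j) = delta_ij.
assert np.allclose(F @ V, np.eye(n))

# (ii) v = sum_i f_i(v) v_i for an arbitrary vector v.
v = rng.standard_normal(n)
coords = F @ v                    # coords[i] = f_i(v)
assert np.allclose(V @ coords, v)

# (i) f = sum_i f(v_i) f_i for an arbitrary functional f, as a row vector g.
g = rng.standard_normal(n)
assert np.allclose((g @ V) @ F, g)
```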

Example. Let \(\mathfrak{B}=\{(2,1),(3,1)\}\) be an ordered basis for \(\mathbb{R}^{2}\). Suppose that the dual basis is given by \(\mathfrak{B}^{*}=\{f_{1},f_{2}\}\). We want to obtain formulas for \(f_{1}\) and \(f_{2}\). Since
\begin{align*} f_{1}(x,y) & =f_{1}(x{\bf e}_{1}+y{\bf e}_{2})\\ & =xf_{1}({\bf e}_{1})+yf_{1}({\bf e}_{2}),\end{align*}
we need to find \(f_{1}({\bf e}_{1})\) and \(f_{1}({\bf e}_{2})\). Let \(v_{1}=(2,1)\) and \(v_{2}=(3,1)\). From Proposition \ref{lap95}, we have \(f_{i }(v_{j})=\delta_{ij}\) for \(i,j=1,2\). Therefore, we obtain
\begin{align*}
1 & =\delta_{11}=f_{1}(v_{1})=f_{1}(2,1)\\ & =f_{1}(2{\bf e}_{1}+{\bf e}_{2})\\ & =2f_{1}({\bf e}_{1})+f_{1}({\bf e}_{2})\end{align*}

and

\begin{align*} 0 & =\delta_{12}=f_{1}(v_{2})=f_{1}(3,1)\\ & =f_{1}(3{\bf e}_{1}+{\bf e}_{2})\\ & =3f_{1}({\bf e}_{1})+f_{1}({\bf e}_{2}).
\end{align*}
Now, we can solve to obtain \(f_{1}({\bf e}_{1})=-1\) and \(f_{1}({\bf e}_{2})=3\). This says \(f_{1}(x,y)=-x+3y\). We can similarly obtain \(f_{2}(x,y)=x-2y\). \(\sharp\)
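The computed dual basis can be double-checked directly against the defining property \(f_{i}(v_{j})=\delta_{ij}\); a short check in Python (not part of the original notes):

```python
import numpy as np

v1, v2 = np.array([2.0, 1.0]), np.array([3.0, 1.0])   # the basis (2,1), (3,1)
f1 = lambda x, y: -x + 3 * y                          # dual functionals found above
f2 = lambda x, y: x - 2 * y

# Dual-basis property f_i(v_j) = delta_ij.
assert f1(*v1) == 1 and f1(*v2) == 0
assert f2(*v1) == 0 and f2(*v2) == 1
```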

Example. Let \(V\) be the space of all polynomial functions from \(\mathbb{R}\) into \(\mathbb{R}\) which have degree less than or equal to \(2\). Let \(c_{1},c_{2},c_{3}\) be three distinct real numbers, and define the linear functionals on \(V\) by \(L_{i}(p)=p(c_{i})\) for \(i=1,2,3\). We first claim that these three linear functionals are linearly independent. Suppose that
\[0=L=\lambda_{1}L_{1}+\lambda_{2}L_{2}+\lambda_{3}L_{3}\]
for some real numbers \(\lambda_{1},\lambda_{2},\lambda_{3}\). Since \(L(p)=0\) for any \(p\in V\), we take \(p(x)=1\), \(p(x)=x\) and \(p(x)=x^{2}\). Then, we have \(L_{i}(1)=1\), \(L_{i}(x)=c_{i}\) and \(L_{i}(x^{2})=c_{i}^{2}\) for \(i=1,2,3\), which says
\begin{align*}
0 & =\lambda_{1}+\lambda_{2}+\lambda_{3}\\
0 & =\lambda_{1}c_{1}+\lambda_{2}c_{2}+\lambda_{3}c_{3}\\
0 & =\lambda_{1}c_{1}^{2}+\lambda_{2}c_{2}^{2}+\lambda_{3}c_{3}^{2}.
\end{align*}
Since \(c_{1},c_{2},c_{3}\) are distinct real numbers, the matrix
\[\left [\begin{array}{ccc}
1 & 1 & 1\\ c_{1} & c_{2} & c_{3}\\ c_{1}^{2} & c_{2}^{2} & c_{3}^{2}
\end{array}\right ]\]
is invertible. Therefore, we must have \(\lambda_{1}=\lambda_{2}=\lambda_{3}=0\). Since \(\dim (V)=3\), we also have \(\dim (V)^{*}=3\). This says that the set \(\{L_{1},L_{2},L_{3}\}\) forms a basis for \(V^{*}\). We shall find a basis \(\{p_{1},p_{2},p_{3}\}\) for \(V\) such that \(\{L_{1},L_{2},L_{3}\}\) is the dual basis of \(\{p_{1},p_{2},p_{3}\}\). In this case, we must have \(L_{i}(p_{j})=\delta_{ij}\), i.e., \(p_{j}(c_{i})=\delta_{ij}\). These polynomial functions can be seen to be
\begin{align*}
p_{1}(x) & =\frac{(x-c_{2})(x-c_{3})}{(c_{1}-c_{2})(c_{1}-c_{3})}\\
p_{2}(x) & =\frac{(x-c_{1})(x-c_{3})}{(c_{2}-c_{1})(c_{2}-c_{3})}\\
p_{3}(x) & =\frac{(x-c_{1})(x-c_{2})}{(c_{3}-c_{1})(c_{3}-c_{2})}.
\end{align*}
According to (\ref{laeq89}), for any \(p\in V\), we have
\[p=p(c_{1})\cdot p_{1}+p(c_{2})\cdot p_{2}+p(c_{3})\cdot p_{3}.\]
\(\sharp\)
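The three polynomials above are the Lagrange interpolation basis for the nodes \(c_{1},c_{2},c_{3}\), and the final identity says that every polynomial of degree at most \(2\) is recovered from its values at the three nodes. A numerical sketch with arbitrarily chosen nodes (using NumPy; not part of the original notes):

```python
import numpy as np

c = [1.0, 2.0, 4.0]   # three distinct nodes, chosen arbitrarily for this check

def p_basis(j, x):
    """p_j(x) = product over i != j of (x - c_i) / (c_j - c_i)."""
    out = 1.0
    for i in range(3):
        if i != j:
            out *= (x - c[i]) / (c[j] - c[i])
    return out

# Dual-basis property: p_j(c_i) = delta_ij.
for i in range(3):
    for j in range(3):
        assert np.isclose(p_basis(j, c[i]), 1.0 if i == j else 0.0)

# p = p(c_1) p_1 + p(c_2) p_2 + p(c_3) p_3 for a degree-2 polynomial.
p = lambda x: 3 * x**2 - x + 5
for x in np.linspace(-2.0, 2.0, 9):
    assert np.isclose(sum(p(cj) * p_basis(j, x) for j, cj in enumerate(c)), p(x))
```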

\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}

Double Dual Spaces.

In the sequel, we shall consider the dual space of \(V^{*}\), which is denoted by \(V^{**}\) and is called the double dual space of \(V\). The main purpose is to prove that \(V\) and \(V^{**}\) are isomorphic.

Let \(v\in V\). Then \(v\) induces a linear functional \(L_{v}:V^{*}\rightarrow {\cal F}\) defined by \(L_{v}(f)=f(v)\) for \(f\in V^{*}\). The linearity of \(L_{v}\) can be seen below:
\begin{align*} L_{v}(\lambda f+g) & =(\lambda f+g)(v)\\ & =(\lambda f)(v)+g(v)\\ & =\lambda f(v)+g(v)\\ & =\lambda L_{v}(f)+L_{v}(g).\end{align*}

\begin{equation}{\label{lar93}}\mbox{}\tag{11}\end{equation}

Remark \ref{lar93}. Suppose that \(V\) is a finite-dimensional vector space over the scalar field \({\cal F}\). Given any \(\widehat{v}\in V\) with \(\widehat{v}\neq\theta\), we can choose an ordered basis \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) for \(V\) satisfying \(v_{1}=\widehat{v}\). For any \(v\in V\), there exist \(\lambda_{i}\in {\cal F}\) for \(i=1,\cdots ,n\) satisfying
\begin{align*} v & =\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}\\ & =\lambda_{1}\widehat{v}+\cdots +\lambda_{n}v_{n}.\end{align*}
Therefore, we can define a linear functional \(f\) on \(V\) by \(f(v)=\lambda_{1}\); that is, \(f\) assigns to each vector in \(V\) its first coordinate in the ordered basis \(\mathfrak{B}\). Then, we have \(f(\widehat{v})=1\). This says that, given any \(\widehat{v}\neq\theta\), there exists a linear functional \(f\) on \(V\) satisfying \(f(\widehat{v})\neq 0\). In other words, if \(v\neq\theta\), then \(L_{v}\neq 0\). If \(v=\theta\), then it is clear to see \(L_{v}=0\). Therefore, we can conclude that \(L_{v}=0\) if and only if \(v=\theta\). \(\sharp\)

\begin{equation}{\label{laeq94}}\mbox{}\tag{12}\end{equation}

Lemma \ref{laeq94}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) satisfying \(n=\dim (V)=\dim (W)\), and let \(T\) be a linear mapping from \(V\) into \(W\). Then, the following statements are equivalent:

(a) \(T\) is invertible;

(b) \(T\) is nonsingular;

(c) \(T\) is onto;

(d) If \(\{v_{1},\cdots ,v_{n}\}\) is a basis for \(V\), then \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\).

Proof. If \(T\) is invertible, then \(T\) is non-singular. Therefore, (a) implies (b). To prove that (b) implies (c), let \(\{v_{1},\cdots ,v_{n}\}\) be a basis for \(V\). Proposition \ref{lap77} says that \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a linearly independent set of vectors in \(W\). Since \(\dim (W)=n\), it follows that \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\). Given any \(w\in W\), there exist \(\lambda_{1},\cdots ,\lambda_{n}\) in \({\cal F}\) satisfying
\[w=\lambda_{1}T(v_{1})+\cdots +\lambda_{n}T(v_{n})=T\left (\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}\right ),\]
which says that \(w\) is in the image of \(T\). To prove that (c) implies (d), we assume that \(T\) is onto. If \(\{v_{1},\cdots ,v_{n}\}\) is a basis for \(V\), then \(\{T(v_{1}),\cdots ,T(v_{n})\}\) spans the image of \(T\). Since \(T(V)=W\), the set \(\{T(v_{1}),\cdots ,T(v_{n})\}\) also spans \(W\). Since \(\dim (W)=n\), it follows that \(T(v_{1}),\cdots ,T(v_{n})\) are linearly independent, i.e., \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\). To prove that (d) implies (a), since \(\{T(v_{1}),\cdots ,T(v_{n})\}\) spans \(W\), we have \(T(V)=W\). For \(v\in\mbox{Ker}(T)\), there exist \(\lambda_{1},\cdots ,\lambda_{n}\) in \({\cal F}\) satisfying
\[v=\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}\mbox{ and }T(v)=\theta_{W},\]
which shows
\[\lambda_{1}T(v_{1})+\cdots +\lambda_{n}T(v_{n})=\theta_{W}.\]
The independence of \(T(v_{1}),\cdots ,T(v_{n})\) says \(\lambda_{i}=0\) for all \(i=1,\cdots ,n\). Therefore, we obtain \(v=\theta_{V}\). This shows that \(T\) is non-singular, i.e., one-to-one. Therefore, \(T\) is invertible. This completes the proof. \(\blacksquare\)

The above Lemma \ref{laeq94} can refer to the page Linear Mappings.

\begin{equation}{\label{lat97}}\mbox{}\tag{13}\end{equation}

Theorem \ref{lat97}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). For each \(v\in V\), we define \(L_{v}(f)=f(v)\) for \(f\in V^{*}\). Then, the mapping \(v\mapsto L_{v}\) is an isomorphism from \(V\) onto \(V^{**}\).

Proof. Given any \(u,v\in V\) and \(\lambda\in {\cal F}\), we have
\begin{align*} L_{\lambda v+u}(f) & =f(\lambda v+u)\\ & =\lambda f(v)+f(u)\\ & =\lambda L_{v}(f)+L_{u}(f),\end{align*}
which says \(L_{\lambda v+u}=\lambda L_{v}+L_{u}\), so the mapping \(v\mapsto L_{v}\) is linear. Since \(L_{v}=0\) if and only if \(v=\theta\) by Remark \ref{lar93}, the mapping \(v\mapsto L_{v}\) is also non-singular. Since
\[\dim (V)^{**}=\dim (V)^{*}=\dim (V),\]
Lemma \ref{laeq94} says that the mapping \(v\mapsto L_{v}\) is invertible. This completes the proof. \(\blacksquare\)

Theorem \ref{lat97} says that each \(v\) in \(V\) can be identified with \(L_{v}\) in \(V^{**}\). In other words, the space \(V^{**}\) can be identified with \(V\). In this case, we can also say that the dual of \(V^{*}\) is \(V\). Similarly, the dual of \(V^{**}\) is \(V^{*}\).

\begin{equation}{\label{lac96}}\mbox{}\tag{14}\end{equation}

Remark \ref{lac96}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). If \(L\) is a linear functional on \(V^{*}\), it is not difficult to show that there is a unique \(v\in V\) satisfying \(L(f)=f(v)\) for all \(f\in V^{*}\). \(\sharp\)

Corollary. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). Each basis for \(V^{*}\) is the dual of some basis for \(V\).

Proof. Let \(\mathfrak{B}^{*}=\{f_{1},\cdots ,f_{n}\}\) be a basis for \(V^{*}\). From Proposition \ref{lap95}, there is a basis \(\{L_{1},\cdots ,L_{n}\}\) for \(V^{**}\) satisfying \(L_{i}(f_{j})=\delta_{ij}\). By Remark \ref{lac96}, for each \(i\), there exists \(v_{i}\in V\) satisfying \(L_{i}(f)=f(v_{i})\) for any \(f\in V^{*}\), i.e., \(L_{i}=L_{v_{i}}\). Since the mapping \(v\mapsto L_{v}\) is an isomorphism by Theorem \ref{lat97} and \(L_{v_{1}},\cdots ,L_{v_{n}}\) form a basis for \(V^{**}\), it follows that \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) is a basis for \(V\). Since \(f_{j}(v_{i})=L_{i}(f_{j})=\delta_{ij}\), we also see that \(\mathfrak{B}^{*}\) is the dual of \(\mathfrak{B}\), and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}

Annihilator.

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). The annihilator of \(S\) is defined by
\[S^{\circ}=\left\{f\in V^{*}:f(v)=0\mbox{ for all }v\in S\right\}.\]

We have the following observations.

  • We can show that \(S^{\circ}\) is a subspace of \(V^{*}\), although \(S\) may not be a subspace of \(V\).
  • If \(S=\{\theta\}\) is the zero subspace of \(V\), then \(S^{\circ}=V^{*}\).
  • If \(S=V\), then \(S^{\circ}\) is the zero subspace of \(V^{*}\).

\begin{equation}{\label{lat91}}\mbox{}\tag{15}\end{equation}

Theorem \ref{lat91}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(W\) be a subspace of \(V\). Then, we have
\[\dim (W)+\dim (W)^{\circ}=\dim (V).\]

Proof. Let \(\dim (W)=r\), and let \(\{v_{1},\cdots ,v_{r}\}\) be an ordered basis for \(W\). Let \(\dim (V)=n\). We can choose suitable vectors \(v_{r+1},\cdots ,v_{n}\) in \(V\) such that \(\{v_{1},\cdots ,v_{n}\}\) is an ordered basis for \(V\). Let \(\{f_{1},\cdots ,f_{n}\}\) be the dual basis of \(\{v_{1},\cdots ,v_{n}\}\). We claim that \(\{f_{r+1},\cdots ,f_{n}\}\) is a basis for \(W^{\circ}\). By the definition of the dual basis, we have \(f_{i}(v_{j})=\delta_{ij}\). Since \(\delta_{ij}=0\) for \(i\geq r+1\) and \(j\leq r\), we have \(f_{i}(v)=0\) for \(i\geq r+1\) whenever \(v\) is a linear combination of \(v_{1},\cdots ,v_{r}\). This says that \(f_{i}\in W^{\circ}\) for \(i\geq r+1\). Since \(\{f_{1},\cdots ,f_{n}\}\) is a basis for \(V^{*}\), it also says that \(f_{r+1},\cdots ,f_{n}\) are linearly independent. Therefore, it remains to show that they span \(W^{\circ}\). For any \(f\in V^{*}\), by (\ref{laeq87}), we have
\begin{equation}{\label{laeq90}}\tag{16}
f=\sum_{i=1}^{n}f(v_{i})f_{i}.
\end{equation}
For \(f\in W^{\circ}\), we have \(f(v_{i})=0\) for \(i\leq r\). Therefore, from (\ref{laeq90}), we obtain
\[f=\sum_{i=r+1}^{n}f(v_{i})f_{i}.\]
This says that the set \(\{f_{r+1},\cdots ,f_{n}\}\) spans \(W^{\circ}\). Therefore, we obtain \(\dim (W)^{\circ}=n-r\). This completes the proof. \(\blacksquare\)
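Identifying \(V=\mathbb{R}^{n}\) and representing a functional by the vector \(a\) with \(f(v)=a\cdot v\), the annihilator \(W^{\circ}\) of a subspace \(W\) spanned by the rows of a matrix \(B\) is exactly the null space of \(B\), so the dimension formula can be checked numerically. A sketch (using NumPy's SVD; not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
B = rng.standard_normal((2, n))          # rows span a subspace W of R^5
dim_W = np.linalg.matrix_rank(B)         # = 2 for a generic random B

# A functional a lies in W° exactly when B @ a = 0, so W° is the null
# space of B; the trailing right-singular vectors give a basis for it.
_, _, Vt = np.linalg.svd(B)
ann_basis = Vt[dim_W:]                   # rows: a basis of W°
assert np.allclose(B @ ann_basis.T, 0)   # each functional annihilates W
assert dim_W + len(ann_basis) == n       # dim W + dim W° = dim V
```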

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). Let \(W_{1}\) and \(W_{2}\) be two subspaces of \(V\). Then \(W_{1}=W_{2}\) if and only if \(W_{1}^{\circ}=W_{2}^{\circ}\).

Proof. If \(W_{1}=W_{2}\), then it is clear to see \(W_{1}^{\circ}=W_{2}^{\circ}\). For the converse, we assume \(W_{1}\neq W_{2}\). Then, without loss of generality, there is \(w\in W_{2}\) with \(w\not\in W_{1}\). According to the proof of Proposition \ref{lap92}, there is a linear functional \(f\) satisfying \(f(v)=0\) for all \(v\in W_{1}\) and \(f(w)\neq 0\). This says that \(f\in W_{1}^{\circ}\) and \(f\not\in W_{2}^{\circ}\), which contradicts \(W_{1}^{\circ}=W_{2}^{\circ}\). This completes the proof. \(\blacksquare\)

If \(S^{*}\) is a subset of \(V^{*}\), then the annihilator \((S^{*})^{\circ}\) is a subset of \(V^{**}\). Since we identify \(V^{**}\) with \(V\), the annihilator \((S^{*})^{\circ}\) can be regarded as a subspace of \(V\) that is given by
\[(S^{*})^{\circ}=\left\{v\in V:f(v)=0\mbox{ for all }f\in S^{*}\right\}.\]

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). Then \(S^{\circ\circ}\) is the subspace spanned by \(S\).

Proof. Let \(W\) be the subspace spanned by \(S\). We want to prove \(S^{\circ\circ}=W\). It is clear to see \(W^{\circ}=S^{\circ}\). Therefore, we need to prove \(W^{\circ\circ}=W\). Using Theorem \ref{lat91}, we have
\begin{align*} \dim (W)+\dim (W)^{\circ} & =\dim (V)\\ \dim (W)^{\circ}+\dim (W)^{\circ\circ} & =\dim (V)^{*}.\end{align*}
Since \(\dim (V)=\dim (V)^{*}\), we immediately have \(\dim (W)=\dim (W)^{\circ\circ}\). Since \(W\) is a subspace of \(W^{\circ\circ}\), we must have \(W^{\circ\circ}=W\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{d}}\tag{D}\mbox{}\end{equation}

Hyperspaces.

Lemma \ref{lat71}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. Suppose that \(V\) is finite-dimensional. Then, we have
\begin{equation}{\label{laeq11}}\tag{17}
\dim (V)=\dim\mbox{Ker}(T)+\dim\mbox{Im}(T).
\end{equation}

Proof. We write \(\dim\mbox{Ker}(T)=q\) and \(\dim\mbox{Im}(T)=s\). If \(\mbox{Im}(T)=\{\theta_{W}\}\), i.e., \(s=0\), then \(\mbox{Ker}(T)=V\). This says that the equality (\ref{laeq11}) holds true. Now, we assume \(s>0\). Since \(\mbox{Im}(T)\) is a subspace of \(W\), it is itself a vector space. Therefore, we take \(\{w_{1},w_{2},\cdots ,w_{s}\}\) as a basis of \(\mbox{Im}(T)\), and choose \(v_{i}\in V\) satisfying \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,s\). Suppose that \(\mbox{Ker}(T)\neq\{\theta_{V}\}\), i.e., \(q>0\). Since \(\mbox{Ker}(T)\) is a subspace of \(V\), we take \(\{u_{1},u_{2},\cdots ,u_{q}\}\) as a basis of \(\mbox{Ker}(T)\). We are going to show that \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) is a basis of \(V\). Given any element \(x\in V\), since \(T(x)\in\mbox{Im}(T)\), there exist \(\alpha_{1},\alpha_{2},\cdots ,\alpha_{s}\) satisfying
\[T(x)=\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{s}w_{s}.\]
By the linearity of \(T\), we also have
\[\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{s}w_{s}=T(\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}),\]
which says
\[T(x)=T(\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}).\]
It follows
\[T(x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s})=\theta_{W},\]
which also says
\[x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s}\in\mbox{Ker}(T).\]
Therefore, there exist \(\beta_{1},\beta_{2},\cdots ,\beta_{q}\in {\cal F}\) satisfying
\[x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s}=\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{q}u_{q}.\]
It follows
\[x=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}+\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{q}u_{q}.\]
This shows that the elements \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) generate \(V\). Next, we claim that these elements are also linearly independent. Suppose that
\begin{equation}{\label{laeq12}}\tag{18}
\gamma_{1}v_{1}+\gamma_{2}v_{2}+\cdots +\gamma_{s}v_{s}
+\gamma_{s+1}u_{1}+\gamma_{s+2}u_{2}+\cdots +\gamma_{s+q}u_{q}=\theta_{V},
\end{equation}
where \(\gamma_{1},\gamma_{2},\cdots ,\gamma_{s+q}\in {\cal F}\). By applying \(T\) to this equality, since \(T(u_{j})=\theta_{W}\) for \(j=1,\cdots ,q\), we have
\[\gamma_{1}w_{1}+\gamma_{2}w_{2}+\cdots +\gamma_{s}w_{s}=
\gamma_{1}T(v_{1})+\gamma_{2}T(v_{2})+\cdots +\gamma_{s}T(v_{s})=\theta_{W},\]
which says \(\gamma_{i}=0\) for \(i=1,\cdots ,s\) by the linear independence of \(\{w_{1},w_{2},\cdots ,w_{s}\}\). From (\ref{laeq12}), we also obtain
\[\gamma_{s+1}u_{1}+\gamma_{s+2}u_{2}+\cdots +\gamma_{s+q}u_{q}=\theta_{V},\]
which implies \(\gamma_{j}=0\) for \(j=s+1,\cdots ,s+q\) by the linear independence of \(\{u_{1},u_{2},\cdots ,u_{q}\}\). Therefore, we obtain \(\gamma_{j}=0\) for \(j=1,2,\cdots ,s+q\). This shows that \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) is indeed a basis of \(V\). Therefore, the equality (\ref{laeq11}) holds true. Finally, we assume that \(\mbox{Ker}(T)=\{\theta_{V}\}\), i.e., \(q=0\). Then, the above arguments are still valid without considering \(\{u_{1},u_{2},\cdots ,u_{q}\}\) to prove \(\dim (V)=\dim\mbox{Im}(T)\). This completes the proof. \(\blacksquare\)

The above Lemma \ref{lat71} can also be found on the page Linear Mappings.
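The dimension formula of Lemma \ref{lat71} can be checked numerically for a matrix map \(T:\mathbb{R}^{5}\rightarrow\mathbb{R}^{3}\): the rank gives \(\dim\mbox{Im}(T)\), and the trailing right-singular vectors span \(\mbox{Ker}(T)\). A sketch (using NumPy; not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))          # matrix of T: R^5 -> R^3

rank = np.linalg.matrix_rank(A)          # dim Im(T)
_, _, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:]                 # rows span Ker(T)
assert np.allclose(A @ kernel_basis.T, 0)
assert len(kernel_basis) + rank == 5     # dim Ker(T) + dim Im(T) = dim V
```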

Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\). Any subspace of \(V\) with dimension \(n-1\) is called a hyperspace. If \(f\) is a non-zero linear functional on \(V\), then \(\mbox{Im}(f)={\cal F}\), i.e., \(\dim \mbox{Im}(f)=1\). Since we have \[\dim (V)=\dim\mbox{Ker}(f)+\dim\mbox{Im}(f),\] it follows that \(\dim\mbox{Ker}(f)=n-1\), which says that the null space of \(f\) is a hyperspace in \(V\).

\begin{equation}{\label{lap92}}\mbox{}\tag{19}\end{equation}

Proposition \ref{lap92}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(W\) be an \(r\)-dimensional subspace of \(V\). Then \(W\) is the intersection of \(n-r\) hyperspaces in \(V\).

Proof. From the proof of Theorem \ref{lat91}, we see that \(W\) is the space of vectors \(v\) satisfying \(f_{i}(v)=0\) for \(i\geq r+1\). In other words, \(W\) is the intersection of null spaces of \(f_{i}\) for \(i\geq r+1\). Since the null space of each non-zero linear functional is a hyperspace in \(V\), this completes the proof. \(\blacksquare\)

Now, if \(V\) is not a finite-dimensional vector space, then we cannot define hyperspaces according to dimension. In this case, a hyperspace will be defined based on the concept of a maximal proper subspace.

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(M\) be a proper subspace of \(V\). We say that \(M\) is a maximal proper subspace of \(V\) when, for any subspace \(W\) of \(V\) satisfying \(M\subseteq W\), either \(W=M\) or \(W=V\). A hyperspace in \(V\) is a maximal proper subspace of \(V\). \(\sharp\)

Theorem. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(f\) be a non-zero linear functional on \(V\). Then the null space \(\mbox{Ker}(f)\) is a hyperspace in \(V\). Conversely, every hyperspace in \(V\) is the null space of a (not unique) non-zero linear functional on \(V\).

Proof. Let \(f\) be a non-zero linear functional on \(V\). Then \(\mbox{Ker}(f)\) is a proper subspace of \(V\). Let \(W\) be a subspace containing \(\mbox{Ker}(f)\) with \(W\neq\mbox{Ker}(f)\), and pick \(v_{0}\in W\) with \(v_{0}\not\in\mbox{Ker}(f)\). Then \(W\) contains the subspace spanned by \(\{v_{0}\}\) and \(\mbox{Ker}(f)\). If we can show that this spanned subspace is all of \(V\), then \(W=V\), so \(\mbox{Ker}(f)\) is a maximal proper subspace of \(V\), i.e., a hyperspace in \(V\). It remains to show that every vector in \(V\) lies in the subspace spanned by \(\{v_{0}\}\) and \(\mbox{Ker}(f)\). Every vector in this subspace has the form \(u+\lambda v_{0}\) for some \(u\in\mbox{Ker}(f)\) and \(\lambda\in {\cal F}\). Now, given any \(v\in V\), we define \(\lambda =f(v)/f(v_{0})\), which is well-defined since \(f(v_{0})\neq 0\). Now, we have
\begin{align*} f(v-\lambda v_{0}) & =f(v)-\lambda f(v_{0})\\ & =f(v)-\frac{f(v)}{f(v_{0})}\cdot f(v_{0})=0,\end{align*}
which says \(v-\lambda v_{0}\in\mbox{Ker}(f)\). Therefore, we have \(v=u+\lambda v_{0}\) for some \(u\in\mbox{Ker}(f)\), which says that \(v\) is indeed in the subspace spanned by \(\{v_{0}\}\) and \(\mbox{Ker}(f)\).

For the converse, let \(M\) be a hyperspace in \(V\), and let \(v_{0}\) be some fixed vector which is not in \(M\). Since \(M\) is a maximal proper subspace of \(V\), the subspace spanned by \(\{v_{0}\}\) and \(M\) must be the entire vector space \(V\). This also says that every vector in \(V\) has the form \(u+\lambda v_{0}\) for some \(u\in M\) and \(\lambda\in {\cal F}\). Therefore, for any \(v\in V\), we can define a mapping \(f:V\rightarrow {\cal F}\) by \(f(v)=\lambda\), where \(v=u+\lambda v_{0}\) for some \(u\in M\). We need to show that \(f\) is well-defined. Suppose that there also exist \(\widehat{u}\in M\) and \(\widehat{\lambda}\in {\cal F}\) satisfying \(v=\widehat{u}+\widehat{\lambda}v_{0}\). We claim \(u=\widehat{u}\) and \(\lambda =\widehat{\lambda}\). Since
\begin{align*} v & =u+\lambda v_{0}\\ & =\widehat{u}+\widehat{\lambda}v_{0},\end{align*}
we have
\[(\widehat{\lambda}-\lambda )v_{0}=u-\widehat{u}\in M.\]
If \(\widehat{\lambda}-\lambda\neq 0\), then \(v_{0}\in M\). This contradiction says \(\lambda =\widehat{\lambda}\), which also implies \(u=\widehat{u}\). In other words, given any \(v\in V\), there exist unique \(u\in M\) and \(\lambda\in {\cal F}\) such that \(v=u+\lambda v_{0}\). Therefore, the functional \(f\) defined by \(f(v)=\lambda\) is well-defined. It is easy to check that \(f\) is linear. We also have \(\mbox{Ker}(f)=M\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{e}}\tag{E}\mbox{}\end{equation}

Adjoint of Linear Mappings.

Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. We are going to induce a linear mapping from \(W^{*}\) into \(V^{*}\) as follows. Let \(g:W\rightarrow {\cal F}\) be a linear functional on \(W\). We define a mapping \(f:V\rightarrow {\cal F}\) by
\begin{equation}{\label{laeq98}}\tag{20}
f(v)=g(T(v))\mbox{ for }v\in V.
\end{equation}
Since \(g\) and \(T\) are linear, it follows that \(f\) is a linear functional on \(V\). This also says that we can induce a mapping \(T^{*}:W^{*}\rightarrow V^{*}\) defined by \(T^{*}(g)=f\), where \(f\) is defined by (\ref{laeq98}). Now, given any \(g_{1},g_{2}\in W^{*}\) and \(\lambda\in {\cal F}\), we have
\begin{align*}\left (T^{*}\left (\lambda g_{1}+g_{2}\right )\right )(v)
& =\left (\lambda g_{1}+g_{2}\right )(T(v))
\\ & =\lambda g_{1}(T(v))+g_{2}(T(v))\\ & =\lambda (T^{*}(g_{1}))(v)+(T^{*}(g_{2}))(v),\end{align*}
which says
\[T^{*}\left (\lambda g_{1}+g_{2}\right )=\lambda T^{*}(g_{1})+T^{*}(g_{2}).\]
Therefore, we conclude that \(T^{*}\) is a linear mapping from \(W^{*}\) into \(V^{*}\). According to (\ref{laeq98}), we have
\begin{equation}{\label{laeq99}}\tag{21}
(T^{*}(g))(v)=g(T(v))\mbox{ for }v\in V.
\end{equation}
In this case, we shall call \(T^{*}\) the adjoint of \(T\).

\begin{equation}{\label{lat71}}\mbox{}\tag{22}\end{equation}

Proposition. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. Then \(\mbox{Ker}(T^{*})\) is the annihilator of \(\mbox{Im}(T)\), i.e.,
\[\mbox{Ker}(T^{*})=\left (\mbox{Im}(T)\right )^{\circ}.\]
If we further assume that \(V\) and \(W\) are finite-dimensional, then
\begin{equation}{\label{laeq100}}\tag{23}
\mbox{rank}(T^{*})=\mbox{rank}(T)
\end{equation}
and \(\mbox{Im}(T^{*})\) is the annihilator of \(\mbox{Ker}(T)\), i.e.,
\[\mbox{Im}(T^{*})=\left (\mbox{Ker}(T)\right )^{\circ}.\]

Proof. If \(g\in\mbox{Ker}(T^{*})\), then \((T^{*}(g))(v)=0\) for all \(v\in V\). According to (\ref{laeq99}), this means \(g(T(v))=0\) for all \(v\in V\), i.e., \(g\) vanishes on \(\mbox{Im}(T)\). Since each step is reversible, \(\mbox{Ker}(T^{*})\) is indeed the annihilator of \(\mbox{Im}(T)\). Now, we assume that \(V\) and \(W\) are finite-dimensional with \(\dim (V)=n\) and \(\dim (W)=m\). Let \(r=\mbox{rank}(T)\), i.e., \(r=\dim\mbox{Im}(T)\). Using Theorem \ref{lat91}, the annihilator of \(\mbox{Im}(T)\) has dimension \(m-r\). Since \(\mbox{Ker}(T^{*})\) is the annihilator of \(\mbox{Im}(T)\) by the first statement, the dimension of \(\mbox{Ker}(T^{*})\) must be \(m-r\). Since \(\dim (W^{*})=\dim (W)=m\), Lemma \ref{lat71} says \(\dim\mbox{Im}(T^{*})=m-(m-r)=r\), i.e., \(\mbox{rank}(T^{*})=r\), which shows the equality (\ref{laeq100}). Suppose \(f\in\mbox{Im}(T^{*})\), i.e., \(f=T^{*}(g)\) for some \(g\in W^{*}\). Then, given any \(v\in\mbox{Ker}(T)\), we have
\[f(v)=(T^{*}(g))(v)=g(T(v))=g(\theta_{W})=0,\]
which says \(\mbox{Im}(T^{*})\subseteq\left (\mbox{Ker}(T)\right )^{\circ}\). Using Theorem \ref{lat91} and Lemma \ref{lat71}, we obtain
\begin{align*} \dim\left (\mbox{Ker}(T)\right )^{\circ} & =n-\dim\mbox{Ker}(T)\\ & =
\dim\mbox{Im}(T)\\ & =\mbox{rank}(T)=\mbox{rank}(T^{*})\\ & =\dim\mbox{Im}(T^{*}),\end{align*}
which says that we must have the equality \(\mbox{Im}(T^{*})=\left (\mbox{Ker}(T)\right )^{\circ}\). This completes the proof. \(\blacksquare\)
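Both conclusions of the proposition can be illustrated with matrices, since in the standard bases \(T^{*}\) is represented by the transpose (as the next theorem shows). The following sketch uses an assumed random matrix \(A\) of a map \(T:\mathbb{R}^{3}\rightarrow\mathbb{R}^{4}\).

```python
import numpy as np

# Assumed random example: A is the matrix of T: R^3 -> R^4,
# so A^T represents T* in the standard dual bases.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # almost surely of rank 3

# rank(T*) = rank(T), equation (23)
print(np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A))

# Ker(T*) = (Im T)^o: a null vector g of A^T annihilates every T(v).
U, s, Vt = np.linalg.svd(A.T)     # A^T is 3x4; Vt is 4x4
g = Vt[-1]                        # spans the 1-dimensional null space of A^T
print(np.allclose(A.T @ g, 0.0))  # g lies in Ker(T*)
print(np.allclose(g @ A, 0.0))    # g vanishes on Im(T): g(T(v)) = (g @ A) @ v
```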

Theorem. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\). Let \(\mathfrak{B}_{V}\) be an ordered basis for \(V\) with dual basis \(\mathfrak{B}_{V}^{*}\) for \(V^{*}\), and let \(\mathfrak{B}_{W}\) be an ordered basis for \(W\) with dual basis \(\mathfrak{B}_{W}^{*}\) for \(W^{*}\). Let \(T:V\rightarrow W\) be a linear mapping, and let \(A\) be the matrix of \(T\) relative to \(\mathfrak{B}_{V}\) and \(\mathfrak{B}_{W}\). Let \(B\) be the matrix of \(T^{*}\) relative to \(\mathfrak{B}_{W}^{*}\) and \(\mathfrak{B}_{V}^{*}\). Then \(B=A^{\top}\).

Proof. We first have \([T(v)]_{\mathfrak{B}_{W}}=A[v]_{\mathfrak{B}_{V}}\) and \([T^{*}(g)]_{\mathfrak{B}_{V}^{*}}=B[g]_{\mathfrak{B}_{W}^{*}}\). Let \(\dim (V)=n\) and \(\dim (W)=m\). Then \(\dim (V^{*})=n\) and \(\dim (W^{*})=m\). Let
\begin{align*}
\mathfrak{B}_{V}=\left\{v_{1},\cdots ,v_{n}\right\}, & \mathfrak{B}_{W}=\left\{w_{1},\cdots ,w_{m}\right\}\\
\mathfrak{B}_{V}^{*}=\left\{f_{1},\cdots ,f_{n}\right\}, & \mathfrak{B}_{W}^{*}=\left\{g_{1},\cdots ,g_{m}\right\}.
\end{align*}
By definition, we have
\[T(v_{j})=\sum_{i=1}^{m}a_{ij}w_{i}\mbox{ for }j=1,\cdots ,n\]
and
\begin{equation}{\label{laeq102}}\tag{24}
T^{*}(g_{j})=\sum_{i=1}^{n}b_{ij}f_{i}\mbox{ for }j=1,\cdots ,m.
\end{equation}
By the definition of adjoint, we also have
\begin{align}
(T^{*}(g_{j}))(v_{i}) & =g_{j}(T(v_{i}))\nonumber\\ & =g_{j}\left (\sum_{k=1}^{m}a_{ki}w_{k}\right )
\nonumber \\ & =\sum_{k=1}^{m}a_{ki}g_{j}(w_{k})\nonumber \\ & =\sum_{k=1}^{m}a_{ki}\delta_{jk}=a_{ji}.\label{laeq104}\tag{25}
\end{align}
For any \(f\in V^{*}\), by (\ref{laeq87}), we have
\begin{equation}{\label{laeq101}}\tag{26}
f=\sum_{i=1}^{n}f(v_{i})f_{i}.
\end{equation}
By taking \(f\) as \(T^{*}(g_{j})\) in (\ref{laeq101}) and using (\ref{laeq104}), we have
\begin{equation}{\label{laeq103}}\tag{27}
T^{*}(g_{j})=\sum_{i=1}^{n}(T^{*}(g_{j}))(v_{i})f_{i}=\sum_{i=1}^{n}a_{ji}f_{i}.
\end{equation}
Comparing (\ref{laeq102}) and (\ref{laeq103}), we obtain \(b_{ij}=a_{ji}\). This completes the proof. \(\blacksquare\)
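The computation in the proof can be replayed numerically. The sketch below (an assumed random example) takes the standard bases of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), builds the matrix \(B\) of \(T^{*}\) entry by entry from \(b_{kj}=(T^{*}(g_{j}))(v_{k})=g_{j}(T(v_{k}))\), and confirms \(B=A^{\top}\).

```python
import numpy as np

# Assumed random example: A is the matrix of T: R^n -> R^m in the
# standard bases; the standard dual bases satisfy g_j(w) = w[j].
rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))

B = np.empty((n, m))
for j in range(m):                        # g_j picks out coordinate j in R^m
    for k in range(n):                    # v_k is the k-th standard basis vector of R^n
        B[k, j] = (A @ np.eye(n)[:, k])[j]  # (T*(g_j))(v_k) = g_j(T(v_k)) = a_{jk}

print(np.allclose(B, A.T))  # True
```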

Hsien-Chung Wu