\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}
Linear Mappings.
We say that \(T:S\rightarrow S'\) is a mapping from \(S\) into \(S'\) if, for each element \(x\in S\), there is a corresponding element \(y\in S'\) into which \(x\) is mapped. In this case, we write \(y=T(x)\).
Example. Let \(S\) be a set, let \({\cal F}\) be a scalar field, and let \(V\) be the set of all mappings from \(S\) into \({\cal F}\) with the addition and scalar multiplication defined by
\[(T_{1}+T_{2})(x)=T_{1}(x)+T_{2}(x)\]
and
\[(\alpha T)(x)=\alpha T(x)\mbox{ for any }\alpha\in {\cal F}.\]
Then \(V\) is a vector space over the scalar field \({\cal F}\). \(\sharp\)
We say that \(T\) is injective or one-to-one when \(x\neq y\) implies \(T(x)\neq T(y)\). Equivalently, the mapping \(T\) is injective if and only if \(T(x)=T(y)\) implies \(x=y\). We say that the mapping \(T\) is surjective or onto when, given any \(y\in S'\), there exists \(x\in S\) satisfying \(T(x)=y\). We say that \(T\) is bijective when \(T\) is both injective and surjective, i.e., \(T\) is one-to-one and onto.
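Example. The mapping \(T:\mathbb{R}\rightarrow\mathbb{R}\) defined by \(T(x)=2x\) is bijective, since \(T(x)=T(y)\) implies \(x=y\) and every \(y\in\mathbb{R}\) is attained by \(x=y/2\). On the other hand, the mapping \(T(x)=x^{2}\) is neither injective, since \(T(-1)=T(1)=1\), nor surjective, since no \(x\in\mathbb{R}\) satisfies \(T(x)=-1\). \(\sharp\)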
Let \(S\) be any set. The identity mapping on \(S\) is denoted by \(I_{S}:S\rightarrow S\) and is defined by \(I_{S}(x)=x\). Let \(F:S\rightarrow S'\) be a mapping from \(S\) into \(S'\). We say that \(F\) has an inverse when there exists a mapping \(G:S'\rightarrow S\) from \(S'\) into \(S\) satisfying
\[G(F(x))=I_{S}(x)\mbox{ for all }x\in S\]
and
\[F(G(y))=I_{S'}(y)\mbox{ for all }y\in S'.\]
We can show that the inverse \(G\) of \(F\) is unique. Therefore, we also write \(G\) as \(F^{-1}\). Then, we have the following interesting properties.
- If \(F\) is invertible, then \(F^{-1}\) is also invertible in the sense of \((F^{-1})^{-1}=F\);
- If \(F_{1}\) and \(F_{2}\) are invertible such that the composition \(F_{1}\circ F_{2}\) is well-defined, then \(F_{1}\circ F_{2}\) is invertible with \((F_{1}\circ F_{2})^{-1}=F_{2}^{-1}\circ F_{1}^{-1}\).
Proposition. A mapping \(T\) has an inverse mapping if and only if \(T\) is both injective and surjective.
Definition. Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). The mapping \(T:V\rightarrow W\) is said to be linear when the following conditions are satisfied.
- For any \(x,y\in V\), we have \[T(x+y)=T(x)+T(y).\]
- For any \(\alpha\in {\cal F}\) and \(x\in V\), we have \[T(\alpha x)=\alpha T(x).\]
Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). We have some interesting observations.
- The mapping \(T:V\rightarrow W\) is linear if and only if
\[T(\alpha x+y)=\alpha T(x)+T(y)\]
for any \(x,y\in V\) and \(\alpha\in {\cal F}\). This also says that \(T(x-y)=T(x)-T(y)\) for all \(x,y\in V\).
- The mapping \(T:V\rightarrow W\) is linear if and only if, for any \(x_{1},\cdots ,x_{n}\in V\) and \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\),
\[T\left (\sum_{i=1}^{n}\alpha_{i}x_{i}\right )=\sum_{i=1}^{n}\alpha_{i}T(x_{i}).\]
- If the mapping \(T:V\rightarrow W\) is linear, then
\[T(\theta )=T(\theta +\theta )=T(\theta )+T(\theta ),\]
which says that \(T(\theta_{V})=\theta_{W}\), where \(\theta_{V}\) and \(\theta_{W}\) are the zero elements of \(V\) and \(W\), respectively.
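Example. The mapping \(T:\mathbb{R}\rightarrow\mathbb{R}\) defined by \(T(x)=x+1\) is not linear, since \(T(0)=1\neq 0\), violating the condition \(T(\theta_{V})=\theta_{W}\). \(\sharp\)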
Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\). The linear mapping \(T:V\rightarrow V\) from \(V\) into itself is called a linear operator on \(V\).
Example. Let \(V\) and \(W\) be the vector spaces of all \(m\times n\) matrices and \(n\times m\) matrices over the scalar field \({\cal F}\), respectively. The mapping \(T:V\rightarrow W\) defined by \(T(A)=A^{\top}\), where \(A^{\top}\) is the transpose of \(A\), is a linear mapping. \(\sharp\)
Example. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\) with a given basis \(\{x_{1},\cdots ,x_{n}\}\). Given any \(x\in V\), there exist \(\alpha_{i}\in {\cal F}\) for \(i=1,\cdots ,n\) satisfying
\[x=\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}.\]
We define a mapping \(T:V\rightarrow {\cal F}^{n}\) by
\[T(x)=\left (\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\right ).\]
We are going to claim that this mapping \(T\) is linear. Given any \(y\in V\), there also exist \(\beta_{i}\in {\cal F}\) for \(i=1,\cdots ,n\) satisfying
\[y=\beta_{1}x_{1}+\beta_{2}x_{2}+\cdots +\beta_{n}x_{n}.\]
Then, we have
\[x+y=(\alpha_{1}+\beta_{1})x_{1}+(\alpha_{2}+\beta_{2})x_{2}+\cdots +(\alpha_{n}+\beta_{n})x_{n}\]
and
\[\gamma x=(\gamma\alpha_{1})x_{1}+(\gamma\alpha_{2})x_{2}+\cdots +(\gamma\alpha_{n})x_{n},\]
which give
\begin{align*} T(x+y) & =\left (\alpha_{1}+\beta_{1},\alpha_{2}+\beta_{2},\cdots ,\alpha_{n}+\beta_{n}\right )\\ & =\left (\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\right )+\left (\beta_{1},\beta_{2},\cdots ,\beta_{n}\right )\\ & =T(x)+T(y)\end{align*}
and
\[T(\gamma x)=\left (\gamma\alpha_{1},\gamma\alpha_{2},\cdots ,\gamma\alpha_{n}\right )=\gamma\left (\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\right )=\gamma T(x).\]
This proves that \(T\) is a linear mapping. \(\sharp\)
Example. Given any \(n,r\in\mathbb{N}\) with \(r<n\), the projection mapping \(T:{\cal F}^{n}\rightarrow {\cal F}^{r}\) defined by
\[T\left (\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\right )=\left (\alpha_{i_{1}},\alpha_{i_{2}},\cdots ,\alpha_{i_{r}}\right ),\]
where \(i_{j}\in\{1,2,\cdots ,n\}\) for \(j=1,\cdots ,r\), is a linear mapping. \(\sharp\)
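For instance, taking \(n=3\), \(r=2\), \(i_{1}=3\), and \(i_{2}=1\) gives the linear mapping \(T:{\cal F}^{3}\rightarrow {\cal F}^{2}\) defined by
\[T\left (\alpha_{1},\alpha_{2},\alpha_{3}\right )=\left (\alpha_{3},\alpha_{1}\right ).\]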
Example. Let \(A\) be an \(m\times n\) matrix over the field \({\cal F}\). The mapping \(T:{\cal F}^{n}\rightarrow {\cal F}^{m}\) defined by \(T(\alpha)=A\alpha\), where \(\alpha=(\alpha_{1},\alpha_{2},\cdots ,\alpha_{n})^{\top}\) is regarded as a column vector, is a linear mapping. \(\sharp\)
Example. Let \(P\) be a fixed \(m\times m\) matrix over the scalar field \({\cal F}\), and let \(Q\) be a fixed \(n\times n\) matrix over \({\cal F}\). We define a mapping \(T:{\cal F}^{m\times n}\rightarrow {\cal F}^{m\times n}\) by \(T(A)=PAQ\). Then \(T\) is a linear mapping, since
\begin{align*} T(cA+B) & =P(cA+B)Q\\ & =(cPA+PB)Q\\ & =cPAQ+PBQ\\ & =cT(A)+T(B).\end{align*} \(\sharp\)
Example. Let \(V\) and \(W\) be the vector spaces of polynomials of degree at most \(n\) and at most \(n-1\), respectively. The mapping \(T:V\rightarrow W\) defined by \(T(f)=f'\), where \(f'\) denotes the derivative of \(f\), is a linear mapping. \(\sharp\)
Example. Let \(V\) be the vector space of all continuous real-valued functions on \(\mathbb{R}\), and let \(a,b\in\mathbb{R}\) be fixed with \(a<b\). The mapping \(T:V\rightarrow\mathbb{R}\) defined by
\[T(f)=\int_{a}^{b}f(t)dt\]
is a linear mapping. \(\sharp\)
Let \(T:V\rightarrow V’\) be a linear mapping. Given any \(x,y,z\in V\), we have
\begin{align*} T(x+y+z) & =T((x+y)+z)\\ & =T(x+y)+T(z)\\ & =T(x)+T(y)+T(z).\end{align*}
By induction, given any \(x_{1},\cdots ,x_{n}\in V\), we have
\[T(x_{1}+x_{2}+\cdots +x_{n})=T(x_{1})+T(x_{2})+\cdots +T(x_{n}).\]
In general, if \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\), then we obtain
\begin{align*}
T\left (\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\right )
& =T(\alpha_{1}x_{1})+T(\alpha_{2}x_{2})+\cdots +T(\alpha_{n}x_{n})\\
& =\alpha_{1}T(x_{1})+\alpha_{2}T(x_{2})+\cdots +\alpha_{n}T(x_{n}).
\end{align*}
\begin{equation}{\label{lat10}}\tag{1}\mbox{}\end{equation}
Theorem \ref{lat10}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(\{v_{1},v_{2},\cdots ,v_{n}\}\) be a basis of \(V\). Let \(W\) be another vector space over the same scalar field \({\cal F}\), and let \(w_{1},w_{2},\cdots ,w_{n}\) be any elements in \(W\). Then, there exists a unique linear mapping \(T:V\rightarrow W\) such that \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\).
Proof. Given any element \(v\in V\), there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
Therefore, we can define a mapping \(T:V\rightarrow W\) by
\[T(v)=T\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )
=\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\in W.\]
We are going to show that \(T\) is linear.
- Let \(u\) be another element of \(V\). There exist \(\beta_{1},\cdots ,\beta_{n}\in {\cal F}\) satisfying
\[u=\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}.\]
Therefore, we have
\[v+u=(\alpha_{1}+\beta_{1})v_{1}+(\alpha_{2}+\beta_{2})v_{2}+\cdots +(\alpha_{n}+\beta_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(v+u) & =(\alpha_{1}+\beta_{1})w_{1}+(\alpha_{2}+\beta_{2})w_{2}+\cdots +(\alpha_{n}+\beta_{n})w_{n}\\
& =\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )+\left (\beta_{1}w_{1}+\beta_{2}w_{2}+\cdots +\beta_{n}w_{n}\right )\\
& =T(v)+T(u).
\end{align*}
- Given any \(\alpha\in {\cal F}\), we have
\[\alpha v=(\alpha\alpha_{1})v_{1}+(\alpha\alpha_{2})v_{2}+\cdots +(\alpha\alpha_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(\alpha v) & =(\alpha\alpha_{1})w_{1}+(\alpha\alpha_{2})w_{2}+\cdots +(\alpha\alpha_{n})w_{n}\\
& =\alpha\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )=\alpha T(v).
\end{align*}
It is easy to see that this mapping satisfies \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Next, we want to claim that this mapping is unique. Suppose there exists another linear mapping \(\widehat{T}\) satisfying \(\widehat{T}(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Then, we have
\begin{align*}
\widehat{T}(v) & =\widehat{T}\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )\\
& =\alpha_{1}\widehat{T}(v_{1})+\alpha_{2}\widehat{T}(v_{2})+\cdots +\alpha_{n}\widehat{T}(v_{n})\\
& =\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\\
& =T(v).
\end{align*}
This shows that \(\widehat{T}=T\), and the proof is complete. \(\blacksquare\)
Corollary. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), with basis \(\{v_{1},v_{2},\cdots ,v_{n}\}\). Let \(W\) be another vector space over the same scalar field \({\cal F}\). If \(T,U:V\rightarrow W\) are linear mappings satisfying \(T(v_{i})=U(v_{i})\) for \(i=1,\cdots ,n\), then \(T=U\).
In Theorem \ref{lat10}, although the linear mapping \(T\) is only prescribed on the elements of the basis, it maps every element of \(V\) into \(W\). Indeed, given any \(v\in V\), there exist \(\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
The linearity of \(T\) shows
\begin{align*}
T(v) & =T\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )\\
& =\alpha_{1}T(v_{1})+\alpha_{2}T(v_{2})+\cdots +\alpha_{n}T(v_{n})\\
& =\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\in W.
\end{align*}
Example. The vectors \(v_{1}=(1,2)\) and \(v_{2}=(3,4)\) form a basis for \(\mathbb{R}^{2}\), since they are linearly independent. According to Theorem \ref{lat10}, there is a unique linear mapping \(T\) from \(\mathbb{R}^{2}\) into \(\mathbb{R}^{3}\) satisfying
\[T(v_{1})=(3,2,1)\mbox{ and }T(v_{2})=(6,5,4).\]
We are going to find \(T(1,0)\). Since
\[(1,0)=-2(1,2)+(3,4)=-2v_{1}+v_{2},\]
we have
\begin{align*} T(1,0) & =T(-2v_{1}+v_{2})\\ & =-2T(v_{1})+T(v_{2})\\ & =-2(3,2,1)+(6,5,4)=(0,1,2).\end{align*}
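Similarly, since \((0,1)=\frac{3}{2}(1,2)-\frac{1}{2}(3,4)=\frac{3}{2}v_{1}-\frac{1}{2}v_{2}\), we have
\begin{align*} T(0,1) & =T\left (\tfrac{3}{2}v_{1}-\tfrac{1}{2}v_{2}\right )\\ & =\tfrac{3}{2}T(v_{1})-\tfrac{1}{2}T(v_{2})\\ & =\tfrac{3}{2}(3,2,1)-\tfrac{1}{2}(6,5,4)=\left (\tfrac{3}{2},\tfrac{1}{2},-\tfrac{1}{2}\right ).\end{align*} \(\sharp\)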
\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}
Kernel and Image.
Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). We write \(\theta_{V}\) and \(\theta_{W}\) to denote the zero elements of \(V\) and \(W\), respectively.
Definition. Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). The kernel of a linear mapping \(T:V\rightarrow W\) is denoted and defined by
\[\mbox{Ker}(T)=\left\{x\in V:T(x)=\theta_{W}\right\}.\]
Proposition. Let \(T:V\rightarrow W\) be a linear mapping. Then \(\mbox{Ker}(T)\) is a subspace of \(V\).
Proof. It is obvious that \(T(\theta_{V})=\theta_{W}\). This says \(\theta_{V}\in\mbox{Ker}(T)\). Given any \(x,y\in\mbox{Ker}(T)\), the linearity of \(T\) says
\begin{align*} T(x+y) & =T(x)+T(y)\\ & =\theta_{W}+\theta_{W}\\ & =\theta_{W},\end{align*}
which says \(x+y\in\mbox{Ker}(T)\). Finally, given any \(\alpha\in {\cal F}\), we have
\begin{align*} T(\alpha x) & =\alpha T(x)\\ & =\alpha\theta_{W}\\ & =\theta_{W},\end{align*}
which also says \(\alpha x\in\mbox{Ker}(T)\). This completes the proof. \(\blacksquare\)
Since \(\mbox{Ker}(T)\) is a subspace of \(V\), it is also called the null space of \(T\).
\begin{equation}{\label{lap13}}\mbox{}\tag{2}\end{equation}
Proposition \ref{lap13}. Let \(T:V\rightarrow W\) be a linear mapping. Then \(\mbox{Ker}(T)=\{\theta_{V}\}\) if and only if \(T\) is injective.
Proof. Suppose that \(\mbox{Ker}(T)=\{\theta_{V}\}\). Given any \(x,y\in V\) with \(T(x)=T(y)\), we have
\[T(x-y)=T(x)-T(y)=\theta_{W},\]
which says \(x-y\in\mbox{Ker}(T)=\{\theta_{V}\}\), i.e., \(x-y=\theta_{V}\). Therefore, we obtain \(x=y\), which says that \(T\) is injective. For the converse, suppose that \(T\) is injective. If \(x\in\mbox{Ker}(T)\), then \(T(x)=\theta_{W}=T(\theta_{V})\), and the injectivity of \(T\) implies \(x=\theta_{V}\), which says that \(\mbox{Ker}(T)=\{\theta_{V}\}\). The proof is complete. \(\blacksquare\)
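Example. Let \(T:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be the linear mapping defined by \(T(\alpha_{1},\alpha_{2})=\alpha_{1}-\alpha_{2}\). Then
\[\mbox{Ker}(T)=\left\{(t,t):t\in\mathbb{R}\right\}\neq\{\theta_{V}\},\]
so Proposition \ref{lap13} says that \(T\) is not injective; indeed, \(T(1,1)=T(2,2)=0\). \(\sharp\)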
Proposition. Let \(T:V\rightarrow W\) be a linear mapping with \(\mbox{Ker}(T)=\{\theta_{V}\}\). If \(x_{1},\cdots ,x_{n}\) are linearly independent vectors of \(V\), then \(T(x_{1}),\cdots ,T(x_{n})\) are also linearly independent vectors of \(W\).
Proof. Suppose that there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[\alpha_{1}T(x_{1})+\alpha_{2}T(x_{2})+\cdots +\alpha_{n}T(x_{n})=\theta_{W}.\]
We are going to claim \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). The linearity of \(T\) says
\[T(\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n})=\theta_{W},\]
which implies
\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\in\mbox{Ker}(T)=\{\theta_{V}\}.\]
Therefore, we obtain
\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}=\theta_{V},\]
which implies \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\) by the linear independence of \(x_{1},\cdots ,x_{n}\). This completes the proof. \(\blacksquare\)
Let \(T:V\rightarrow W\) be a mapping, where \(V\) and \(W\) are two vector spaces over the same scalar field \({\cal F}\). The image of \(T\) is denoted and defined by
\[\mbox{Im}(T)=\left\{T(x):x\in V\right\}.\]
If \(T\) is onto, then \(\mbox{Im}(T)=W\).
Proposition. Let \(T:V\rightarrow W\) be a linear mapping. Then \(\mbox{ Im}(T)\) is a subspace of \(W\).
Proof. Since \(T(\theta_{V})=\theta_{W}\), it says \(\theta_{W}\in\mbox{Im}(T)\). Given any \(w_{1},w_{2}\in\mbox{Im}(T)\), there exist \(v_{1},v_{2}\in V\) satisfying \(T(v_{1})=w_{1}\) and \(T(v_{2})=w_{2}\). The linearity of \(T\) says that the element \(v_{1}+v_{2}\in V\) satisfies
\[T(v_{1}+v_{2})=T(v_{1})+T(v_{2})=w_{1}+w_{2},\]
which says \(w_{1}+w_{2}\in\mbox{Im}(T)\). Finally, given any \(\alpha\in {\cal F}\), the element \(\alpha v_{1}\in V\) satisfies
\[T(\alpha v_{1})=\alpha T(v_{1})=\alpha w_{1},\]
which says \(\alpha w_{1}\in\mbox{Im}(T)\). This completes the proof. \(\blacksquare\)
Proposition. Let \(T:V\rightarrow W\) be a linear mapping. If \(\mathfrak{B}\) is a basis for \(V\), then
\[\mbox{Im}(T)=\mbox{span}\left (\left\{T(v):v\in\mathfrak{B}\right\}\right ).\]
Proof. Since \(T(v)\in\mbox{Im}(T)\) for each \(v\in\mathfrak{B}\), it says that \(\mbox{span}(\{T(v):v\in\mathfrak{B}\})\subseteq\mbox{Im}(T)\). Now, suppose that \(w\in\mbox{Im}(T)\). Then, there exists \(v\in V\) satisfying \(T(v)=w\). Since \(V=\mbox{span}(\mathfrak{B})\), Proposition \ref{lap58} says that there exist \(v_{i}\in\mathfrak{B}\) and \(\alpha_{i}\in {\cal F}\) for \(i=1,\cdots ,n\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
The linearity of \(T\) shows
\begin{align*} w & =T(v)\\ & =\alpha_{1}T(v_{1})+\alpha_{2}T(v_{2})+\cdots +\alpha_{n}T(v_{n})\\ & \in\mbox{span}\left (\left\{T(v):v\in\mathfrak{B}\right\}\right ),\end{align*}
i.e., \(\mbox{Im}(T)\subseteq\mbox{span}(\{T(v):v\in\mathfrak{B}\})\). This completes the proof. \(\blacksquare\)
\begin{equation}{\label{lac160}}\mbox{}\tag{3}\end{equation}
Corollary \ref{lac160}. Let \(T:V\rightarrow W\) be a linear mapping. If \(V\) is finite-dimensional with basis \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\), then
\[\mbox{Im}(T)=\mbox{span}(T(\mathfrak{B}))=\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right ).\]
Example. Let \(V\) be the vector space consisting of all polynomials with coefficients in \(\mathbb{R}\) of degree at most \(2\), and let \(W\) be the vector space consisting of all \(2\times 2\) matrices over \(\mathbb{R}\). Let \(T:V\rightarrow W\) be a linear mapping defined by
\[T(f(x))=\left [\begin{array}{cc}
f(1)-f(2) & 0\\ 0 & f(0)
\end{array}\right ].\]
Since \(\{1,x,x^{2}\}\) is a basis for \(V\), Corollary \ref{lac160} says
\begin{align*}
\mbox{Im}(T) & =\mbox{span}\left (\left\{T(1),T(x),T(x^{2})\right\}\right )\\
& =\mbox{span}\left (\left\{
\left [\begin{array}{cc} 0 & 0\\ 0 & 1\end{array}\right ],\left [\begin{array}{cc} -1 & 0\\ 0 & 0\end{array}\right ],
\left [\begin{array}{cc} -3 & 0\\ 0 & 0\end{array}\right ]\right\}\right )\\
& =\mbox{span}\left (\left\{\left [\begin{array}{cc} 0 & 0\\ 0 & 1\end{array}\right ],
\left [\begin{array}{cc} -1 & 0\\ 0 & 0\end{array}\right ]\right\}\right ),
\end{align*}
since \(T(x^{2})=3T(x)\). This says that a basis for \(\mbox{Im}(T)\) is
\[\left\{\left [\begin{array}{cc} 0 & 0\\ 0 & 1\end{array}\right ],
\left [\begin{array}{cc} -1 & 0\\ 0 & 0\end{array}\right ]\right\}.\]
Therefore, we also have \(\dim (\mbox{Im}(T))=2\). \(\sharp\)
\begin{equation}{\label{lat71}}\mbox{}\tag{4}\end{equation}
Theorem \ref{lat71}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. Suppose that \(V\) is finite-dimensional. Then, we have
\begin{equation}{\label{laeq11}}\tag{5}
\dim (V)=\dim\mbox{Ker}(T)+\dim\mbox{Im}(T).
\end{equation}
Proof. We write \(\dim\mbox{Ker}(T)=q\) and \(\dim\mbox{Im}(T)=s\). If \(\mbox{Im}(T)=\{\theta_{W}\}\), i.e., \(s=0\), then \(\mbox{Ker}(T)=V\). This says that the equality (\ref{laeq11}) holds true. Now, we assume \(s>0\). Since \(\mbox{Im}(T)\) is a subspace of \(W\), it is itself a vector space. Therefore, we take \(\{w_{1},w_{2},\cdots ,w_{s}\}\) as a basis of \(\mbox{Im}(T)\), and we choose \(v_{i}\in V\) satisfying \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,s\). Suppose that \(\mbox{Ker}(T)\neq\{\theta_{V}\}\), i.e., \(q>0\). Since \(\mbox{Ker}(T)\) is a subspace of \(V\), we take \(\{u_{1},u_{2},\cdots ,u_{q}\}\) as a basis of \(\mbox{Ker}(T)\). We are going to show that \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) is a basis of \(V\). Given any element \(x\in V\), since \(T(x)\in\mbox{Im}(T)\), there exist \(\alpha_{1},\alpha_{2},\cdots ,\alpha_{s}\in {\cal F}\) satisfying
\[T(x)=\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{s}w_{s}.\]
By the linearity of \(T\), we also have
\[\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{s}w_{s}=T(\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}),\]
which says
\[T(x)=T(\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}).\]
It follows
\[T(x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s})=\theta_{W},\]
which also says
\[x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s}\in\mbox{Ker}(T).\]
Therefore, there exist \(\beta_{1},\beta_{2},\cdots ,\beta_{q}\in {\cal F}\) satisfying
\[x-\alpha_{1}v_{1}-\alpha_{2}v_{2}-\cdots -\alpha_{s}v_{s}=\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{q}u_{q}.\]
It follows
\[x=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{s}v_{s}+\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{q}u_{q}.\]
This shows that the elements \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) generate \(V\). Next, we want to claim that those elements are also linearly independent. Suppose that
\begin{equation}{\label{laeq12}}\tag{6}
\gamma_{1}v_{1}+\gamma_{2}v_{2}+\cdots +\gamma_{s}v_{s}
+\gamma_{s+1}u_{1}+\gamma_{s+2}u_{2}+\cdots +\gamma_{s+q}u_{q}=\theta_{V},
\end{equation}
where \(\gamma_{1},\gamma_{2},\cdots ,\gamma_{s+q}\in {\cal F}\). By applying \(T\) to this equality, since \(T(u_{j})=\theta_{W}\) for \(j=1,\cdots ,q\), we have
\[\gamma_{1}w_{1}+\gamma_{2}w_{2}+\cdots +\gamma_{s}w_{s}=
\gamma_{1}T(v_{1})+\gamma_{2}T(v_{2})+\cdots +\gamma_{s}T(v_{s})=\theta_{W},\]
which says \(\gamma_{i}=0\) for \(i=1,\cdots ,s\) by the linear independence of \(\{w_{1},w_{2},\cdots ,w_{s}\}\). From (\ref{laeq12}), we also obtain
\[\gamma_{s+1}u_{1}+\gamma_{s+2}u_{2}+\cdots +\gamma_{s+q}u_{q}=\theta_{V},\]
which implies \(\gamma_{j}=0\) for \(j=s+1,\cdots ,s+q\) by the linear independence of \(\{u_{1},u_{2},\cdots ,u_{q}\}\). Therefore, we obtain \(\gamma_{j}=0\) for \(j=1,2,\cdots ,s+q\). This shows that \(\{u_{1},u_{2},\cdots ,u_{q},v_{1},v_{2},\cdots ,v_{s}\}\) is indeed a basis of \(V\). Therefore, the equality (\ref{laeq11}) holds true. Finally, we assume that \(\mbox{Ker}(T)=\{\theta_{V}\}\), i.e., \(q=0\). Then, the above arguments remain valid without the elements \(\{u_{1},u_{2},\cdots ,u_{q}\}\), proving \(\dim (V)=\dim\mbox{Im}(T)\). This completes the proof. \(\blacksquare\)
Definition. Let \(V\) and \(W\) be the vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. If \(\mbox{Im}(T)\) and \(\mbox{Ker}(T)\) are finite-dimensional, then the rank of \(T\) is defined as
\[\mbox{rank}(T)=\dim\left (\mbox{Im}(T)\right ),\]
and the nullity of \(T\) is defined as
\[\mbox{nullity}(T)=\dim\left (\mbox{Ker}(T)\right ).\]
If \(V\) is finite-dimensional, then Theorem \ref{lat71} says
\begin{equation}{\label{laeq72}}\tag{7}
\dim (V)=\mbox{rank}(T)+\mbox{nullity}(T).
\end{equation}
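Example. Let \(T:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}\) be the linear mapping defined by \(T(\alpha_{1},\alpha_{2},\alpha_{3})=(\alpha_{1}+\alpha_{2},\alpha_{2}+\alpha_{3})\). Then \(\mbox{Ker}(T)=\mbox{span}(\{(1,-1,1)\})\), so \(\mbox{nullity}(T)=1\), while \(\mbox{Im}(T)=\mathbb{R}^{2}\), so \(\mbox{rank}(T)=2\). Therefore
\[\dim (\mathbb{R}^{3})=3=2+1=\mbox{rank}(T)+\mbox{nullity}(T),\]
as (\ref{laeq72}) asserts. \(\sharp\)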
\begin{equation}{\label{lac161}}\mbox{}\tag{8}\end{equation}
Lemma \ref{lac161}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and let \(W\) be a subspace of \(V\) with \(\dim (W)=n\). Then \(V=W\).
Proof. A basis for \(W\) consists of \(n\) linearly independent vectors in \(V\) and hence is also a basis for \(V\). This completes the proof. \(\blacksquare\)
Proposition. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with \(\dim (V)=\dim (W)\), and let \(T:V\rightarrow W\) be a linear mapping. Then, the following statements are equivalent:
- \(T\) is injective;
- \(T\) is surjective;
- \(\dim (V)=\mbox{rank}(T)\).
Proof. Proposition \ref{lap13} says that \(T\) is injective if and only if \(\mbox{Ker}(T)=\{\theta_{V}\}\). By (\ref{laeq72}), we see that \(T\) is injective if and only if \(\dim (V)=\mbox{rank}(T)\). On the other hand, since \(\dim (V)=\dim (W)\), we have that \(\dim (V)=\mbox{rank}(T)\) if and only if \(\dim (W)=\mbox{rank}(T)=\dim (\mbox{Im}(T))\). From Lemma \ref{lac161}, we also see that \(\dim (W)=\dim (\mbox{Im}(T))\) if and only if \(W=\mbox{Im}(T)\). This completes the proof. \(\blacksquare\)
Example. Let \({\cal F}\) be a scalar field, and let \(T:{\cal F}^{2}\rightarrow {\cal F}^{2}\) be a linear mapping defined by
\[T\left (\alpha_{1},\alpha_{2}\right )=\left (\alpha_{1}+\alpha_{2},\alpha_{1}\right ).\]
We can show that \(\mbox{Ker}(T)=\{(0,0)\}\), i.e., \(\mbox{nullity}(T)=0\). Therefore, \(T\) is injective. From (\ref{laeq72}), we also obtain \(\mbox{rank}(T)=\dim ({\cal F}^{2})=2\), which says that \(T\) is surjective. \(\sharp\)
Example. Let \(V\) and \(W\) be the vector spaces consisting of all polynomials with coefficients in \(\mathbb{R}\) of degree at most \(2\) and at most \(3\), respectively. Then \(\dim (V)=3\) and \(\dim (W)=4\). Let \(T:V\rightarrow W\) be a linear mapping defined by
\[T(f(x))=2f'(x)+\int_{0}^{x}3f(t)dt.\]
Then, we have
\begin{align*} \mbox{Im}(T) & =\mbox{span}\left (\left\{T(1),T(x),T(x^{2})\right\}\right )\\ & =\mbox{span}\left (\left\{3x,2+\frac{3}{2}x^{2},4x+x^{3}\right\}\right ).\end{align*}
Since \(\{3x,2+\frac{3}{2}x^{2},4x+x^{3}\}\) is linearly independent, it follows
\[\dim (\mbox{Im}(T))=3<4=\dim (W),\]
which says that \(T\) is not surjective. However, from (\ref{laeq72}), we obtain \(\mbox{nullity}(T)=0\), which says that \(\mbox{Ker}(T)=\{\theta_{V}\}\). Therefore, \(T\) is injective. \(\sharp\)
\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}
Operations of Linear Mappings.
Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). We denote by \({\cal L}(V,W)\) the set of all linear mappings from \(V\) into \(W\). In order to make \({\cal L}(V,W)\) into a vector space, we need to define the addition and scalar multiplication in \({\cal L}(V,W)\).
- Given any two \(T_{1},T_{2}\in {\cal L}(V,W)\), we define the addition \(T_{1}+T_{2}:V\rightarrow W\) by
\[(T_{1}+T_{2})(x)=T_{1}(x)+T_{2}(x).\]
We want to claim that \(T_{1}+T_{2}\) is linear. Now, we have
\begin{align*}
(T_{1}+T_{2})(x+y) & =T_{1}(x+y)+T_{2}(x+y)\\
& =T_{1}(x)+T_{1}(y)+T_{2}(x)+T_{2}(y)\mbox{ (by the linearity of \(T_{1}\) and \(T_{2}\))}\\
& =(T_{1}+T_{2})(x)+(T_{1}+T_{2})(y)
\end{align*}
and
\begin{align*}
(T_{1}+T_{2})(\alpha x) & =T_{1}(\alpha x)+T_{2}(\alpha x)\\
& =\alpha T_{1}(x)+\alpha T_{2}(x)\\
& =\alpha (T_{1}(x)+T_{2}(x))\\
& =\alpha (T_{1}+T_{2})(x).
\end{align*}
This shows \(T_{1}+T_{2}\in {\cal L}(V,W)\). - Given any \(\gamma\in {\cal F}\) and \(T\in {\cal L}(V,W)\), we are going to show that \(\gamma T\) is linear. Now, we have
\[(\gamma T)(x+y)=\gamma T(x+y)=\gamma (T(x)+T(y))=\gamma T(x)+\gamma T(y)=(\gamma T)(x)+(\gamma T)(y)\]
and
\[(\gamma T)(\alpha x)=\gamma T(\alpha x)=\gamma\alpha T(x)=\alpha (\gamma T)(x).\]
This shows \(\gamma T\in {\cal L}(V,W)\).
Let \(\theta^{\prime}\) be the zero element of \(W\). The zero mapping \(\Theta :V\rightarrow W\) defined by \(\Theta (x)=\theta^{\prime}\) for all \(x\in V\) is the zero element of \({\cal L}(V,W)\), i.e.,
\[T+\Theta =\Theta +T=T\]
for any \(T\in {\cal L}(V,W)\). Given any \(T\in {\cal L}(V,W)\), we have
\[((-1)T+T)(x)=(-1)T(x)+T(x)=-T(x)+T(x)=\theta^{\prime},\]
which says the inverse element of \(T\) is \((-1)T\), i.e., \(-T=(-1)T\). We can check that \({\cal L}(V,W)\) is indeed a vector space over the same scalar field \({\cal F}\).
\begin{equation}{\label{lap84}}\mbox{}\tag{9}\end{equation}
Proposition \ref{lap84}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(W\) be an \(m\)-dimensional vector space over the same scalar field \({\cal F}\). Then, the space \({\cal L}(V,W)\) is finite-dimensional with dimension \(mn\).
Proof. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be the ordered bases for \(V\) and \(W\), respectively. For integers \(p\in\{1,2,\cdots ,m\}\) and \(q\in\{1,2,\cdots ,n\}\), we define a linear mapping \(T_{pq}\) by
\[T_{pq}(v_{i})=\left\{\begin{array}{ll}
\theta_{W} & \mbox{if \(i\neq q\)}\\
w_{p} & \mbox{if \(i=q\)}.
\end{array}\right .\]
According to Theorem \ref{lat10}, there is a unique linear mapping from \(V\) into \(W\) satisfying these conditions. We shall claim that
\begin{equation}{\label{laeq75}}\tag{10}
\{T_{pq}:p=1,\cdots ,m\mbox{ and }q=1,\cdots ,n\}
\end{equation}
form a basis for \({\cal L}(V,W)\). Let \(T\) be any linear mapping from \(V\) into \(W\). For each \(j=1,\cdots ,n\), let \(\lambda_{1j},\cdots ,\lambda_{mj}\) be the coordinates of the vector \(T(v_{j})\) in the ordered basis \(\mathfrak{B}_{W}\), i.e.,
\[T(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}.\]
Let
\[\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Then, we have
\begin{equation}{\label{laeq76}}\tag{11}
\widehat{T}(v_{j})=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}=T(v_{j}),
\end{equation}
which shows
\[T=\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Therefore, the space \({\cal L}(V,W)\) is spanned by the set in (\ref{laeq75}). It remains to show linear independence. Suppose that \(\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}\) is the zero mapping. Then \(\widehat{T}(v_{j})=\theta_{W}\) for all \(j=1,\cdots ,n\), which also says
\[\sum_{p=1}^{m}\lambda_{pj}w_{p}=\theta_{W}\]
by (\ref{laeq76}). The independence of the set \(\{w_{1},\cdots ,w_{m}\}\) says \(\lambda_{pj}=0\) for all \(p\) and \(j\). This completes the proof. \(\blacksquare\)
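For instance, when \(V=W={\cal F}^{2}\) with the standard bases, the four mappings \(T_{11},T_{12},T_{21},T_{22}\) correspond to the four matrix units
\[\left [\begin{array}{cc} 1 & 0\\ 0 & 0\end{array}\right ],\left [\begin{array}{cc} 0 & 1\\ 0 & 0\end{array}\right ],\left [\begin{array}{cc} 0 & 0\\ 1 & 0\end{array}\right ],\left [\begin{array}{cc} 0 & 0\\ 0 & 1\end{array}\right ],\]
and \(\dim ({\cal L}({\cal F}^{2},{\cal F}^{2}))=2\cdot 2=4\).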
Proposition. Let \(U,V,W\) be vector spaces over the same scalar field \({\cal F}\). Let \(T_{1}:U\rightarrow V\) and \(T_{2}:V\rightarrow W\) be linear mappings. Then, the composition \(T_{2}\circ T_{1}:U\rightarrow W\) is also a linear mapping.
Proof. Given any \(\alpha\in {\cal F}\) and \(x,y\in U\), we have
\begin{align*}
(T_{2}\circ T_{1})(x+y) & =T_{2}(T_{1}(x+y))=T_{2}(T_{1}(x)+T_{1}(y))\\
& =T_{2}(T_{1}(x))+T_{2}(T_{1}(y))=(T_{2}\circ T_{1})(x)+(T_{2}\circ T_{1})(y)
\end{align*}
and
\begin{align*} (T_{2}\circ T_{1})(\alpha x) & =T_{2}(T_{1}(\alpha x))\\ & =T_{2}(\alpha T_{1}(x))\\ & =\alpha T_{2}(T_{1}(x))\\ & =\alpha (T_{2}\circ T_{1})(x).\end{align*}
This completes the proof. \(\blacksquare\)
Let \(U,V,W\) be vector spaces over the same scalar field \({\cal F}\). We have the following observations.
- Given any \(\alpha\in {\cal F}\), let \(T_{1}:U\rightarrow V\) and \(T_{2}:V\rightarrow W\) be two linear mappings. Then, we have
\[(\alpha T_{2})\circ T_{1}=\alpha (T_{2}\circ T_{1}).\]
- Let \(T_{1}:U\rightarrow V\) be a linear mapping, and let \(T_{2},T_{3}:V\rightarrow W\) be two linear mappings. Then, we have
\[(T_{2}+T_{3})\circ T_{1}=T_{2}\circ T_{1}+T_{3}\circ T_{1}.\]
- Let \(T_{1},T_{2}:U\rightarrow V\) be two linear mappings, and let \(T_{3}:V\rightarrow W\) be a linear mapping. Then, we have
\[T_{3}\circ (T_{1}+T_{2})=T_{3}\circ T_{1}+T_{3}\circ T_{2}.\]
\begin{equation}{\label{lap184}}\mbox{}\tag{12}\end{equation}
Proposition \ref{lap184}. Let \(U,V,W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\). Let \(T_{1}:U\rightarrow V\) and \(T_{2}:V\rightarrow W\) be linear mappings. Then, we have
\[\mbox{rank}\left (T_{2}\circ T_{1}\right )\leq\mbox{rank}\left (T_{2}\right ).\]
Proof. We have
\[\mbox{Im}\left (T_{2}\circ T_{1}\right )=T_{2}(T_{1}(U))=T_{2}(\mbox{Im}(T_{1}))\subseteq T_{2}(V)=\mbox{Im}(T_{2}),\]
which says
\[\mbox{rank}\left (T_{2}\circ T_{1}\right )=\dim\left (\mbox{Im}\left (T_{2}\circ T_{1}\right )\right )
\leq\dim\left (\mbox{Im}(T_{2})\right )=\mbox{rank}\left (T_{2}\right ).\]
This completes the proof. \(\blacksquare\)
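Example. Let \(T_{1}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) be defined by \(T_{1}(\alpha_{1},\alpha_{2})=(\alpha_{1},0)\), and let \(T_{2}\) be the identity operator on \(\mathbb{R}^{2}\). Then
\[\mbox{rank}\left (T_{2}\circ T_{1}\right )=\mbox{rank}\left (T_{1}\right )=1\leq 2=\mbox{rank}\left (T_{2}\right ),\]
which shows that the inequality in Proposition \ref{lap184} can be strict. \(\sharp\)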
\begin{equation}{\label{lap14}}\mbox{}\tag{13}\end{equation}
Proposition \ref{lap14}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping with an inverse mapping \(T^{-1}:W\rightarrow V\). Then \(T^{-1}\) is a linear mapping.
Proof. Given any \(w_{1},w_{2}\in W\), let \(T^{-1}(w_{1})=v_{1}\) and \(T^{-1}(w_{2})=v_{2}\). This also says \(T(v_{1})=w_{1}\) and \(T(v_{2})=w_{2}\). By the linearity of \(T\), we have
\[T(v_{1}+v_{2})=T(v_{1})+T(v_{2})=w_{1}+w_{2},\]
which says
\[T^{-1}(w_{1}+w_{2})=v_{1}+v_{2}=T^{-1}(w_{1})+T^{-1}(w_{2}).\]
Given any \(\alpha\in {\cal F}\) and \(w\in W\), let \(T^{-1}(w)=v\). This says \(T(v)=w\). Therefore, we have
\[T(\alpha v)=\alpha T(v)=\alpha w,\]
which also says
\[T^{-1}(\alpha w)=\alpha v=\alpha T^{-1}(w).\]
This completes the proof. \(\blacksquare\)
\begin{equation}{\label{lac34}}\mbox{}\tag{14}\end{equation}
Corollary \ref{lac34}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping such that it is surjective and \(\mbox{Ker}(T)=\{\theta_{V}\}\). Then \(T\) has an inverse linear mapping.
Proof. By Proposition \ref{lap13}, we see that \(T\) is injective and surjective. This says that \(T\) has an inverse. By Proposition \ref{lap14}, this inverse mapping is linear, and the proof is complete. \(\blacksquare\)
\begin{equation}{\label{lap168}}\mbox{}\tag{15}\end{equation}
Proposition \ref{lap168}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be an invertible linear mapping. Then \(V\) is finite-dimensional if and only if \(W\) is finite-dimensional. In this case, we have \(\dim (V)=\dim (W)\).
Proof. Suppose that \(V\) is finite-dimensional with a basis \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\). By Corollary \ref{lac160}, we have
\[W=\mbox{Im}(T)=\mbox{span}(T(\mathfrak{B}_{V}))=\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right ),\]
which says that \(W\) is finite-dimensional. For the converse, considering \(T^{-1}\), we can similarly show that if \(W\) is finite-dimensional, then \(V\) is finite-dimensional. Finally, since \(T\) is bijective, we have \(\mbox{nullity}(T)=0\) and
\[\mbox{rank}(T)=\dim (\mbox{Im}(T))=\dim (W).\]
Using (\ref{laeq72}), it follows \(\dim (V)=\dim (W)\). This completes the proof. \(\blacksquare\)
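The dimension count at the end of the proof can be illustrated concretely. Below is a small numerical sketch: the matrix \(A\) is a hypothetical invertible operator on \(\mathbb{R}^{3}\), for which \(\mbox{nullity}=0\) and \(\mbox{rank}=\dim(W)=3\), matching \(\dim(V)=\mbox{rank}+\mbox{nullity}\).

```python
import numpy as np

# Hypothetical check of the dimension count: for an invertible linear map T
# on R^3 (here, the matrix A), nullity(T) = 0 and rank(T) = dim(W) = 3,
# so dim(V) = rank + nullity = dim(W).
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])   # det(A) = 7, so A is invertible
n = A.shape[0]
rank = np.linalg.matrix_rank(A)
nullity = n - rank
print(rank, nullity)
```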
Definition. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\). The linear mapping \(T:V\rightarrow W\) from \(V\) into \(W\) is called non-singular when \(T(v)=\theta_{W}\) implies \(v=\theta_{V}\). \(\sharp\)
Example. Let \({\cal F}\) be a scalar field, and let \(T\) be a linear operator on \({\cal F}^{2}\) defined by
\[T(v_{1},v_{2})=\left (v_{1}+v_{2},v_{1}\right ).\]
If \(T(v_{1},v_{2})=(0,0)\), then we have \(v_{1}=v_{2}=0\). This says that \(T\) is non-singular. We want to claim that \(T\) is onto. Given any \((u_{1},u_{2})\in {\cal F}^{2}\), we need to find \((v_{1},v_{2})\in {\cal F}^{2}\) satisfying \(T(v_{1},v_{2})=(u_{1},u_{2})\); that is, we need to solve
\[\left\{\begin{array}{l}
v_{1}+v_{2}=u_{1}\\
v_{1}=u_{2}.
\end{array}\right .\]
Therefore, we can obtain
\[v_{1}=u_{2}\mbox{ and }v_{2}=u_{1}-u_{2},\]
which can also give the inverse of \(T\) given by
\[T^{-1}(u_{1},u_{2})=(v_{1},v_{2})=(u_{2},u_{1}-u_{2}).\]
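The formula just obtained can be verified directly. The sketch below encodes the example's \(T\) and the computed \(T^{-1}\) as plain functions (over floats, purely as an illustration) and confirms that they undo each other.

```python
# The maps from the example: T(v1, v2) = (v1 + v2, v1) on F^2 and the
# computed inverse T_inv(u1, u2) = (u2, u1 - u2).
def T(v1, v2):
    return (v1 + v2, v1)

def T_inv(u1, u2):
    return (u2, u1 - u2)

# Check T_inv(T(v)) = v and T(T_inv(v)) = v on a few sample points.
for v in [(0, 0), (1, 2), (-3, 5)]:
    assert T_inv(*T(*v)) == v
    assert T(*T_inv(*v)) == v
print("inverse verified")
```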
We have the following observations.
- The linear mapping \(T\) is non-singular if and only if the null space \(\mbox{Ker}(T)=\{\theta_{V}\}\).
- The linear mapping \(T\) is non-singular if and only if \(T\) is one-to-one.
Another important property for the non-singular linear mapping is to preserve the linear independence, which will be formally described below.
\begin{equation}{\label{lap77}}\mbox{}\tag{16}\end{equation}
Proposition \ref{lap77}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\). The linear mapping \(T:V\rightarrow W\) is non-singular if and only if, given any linearly independent subset \(S\) of \(V\), the image of \(S\) under \(T\) is linearly independent. In other words, the linear mapping \(T:V\rightarrow W\) is non-singular if and only if \(T\) carries each linearly independent subset of \(V\) onto a linearly independent subset of \(W\).
Proof. Suppose that \(T\) is non-singular. Given any elements \(v_{1},\cdots ,v_{p}\) in \(S\), we shall prove that \(T(v_{1}),\cdots ,T(v_{p})\) are linearly independent. For any \(\lambda_{1},\cdots ,\lambda_{p}\) in \({\cal F}\), we consider
\[\theta =\lambda_{1}T(v_{1})+\cdots +\lambda_{p}T(v_{p})=T\left (\lambda_{1}v_{1}+\cdots +\lambda_{p}v_{p}\right ).\]
Since \(T\) is non-singular, we must have
\[\lambda_{1}v_{1}+\cdots +\lambda_{p}v_{p}=\theta ,\]
which implies \(\lambda_{i}=0\) for \(i=1,\cdots ,p\) by the independence of \(S\).
For the converse, given \(v\neq\theta\), we take \(S=\{v\}\). Then \(\{T(v)\}\) must be independent by the hypothesis, which also says \(T(v)\neq\theta\). Therefore, we obtain \(\mbox{Ker}(T)=\{\theta_{V}\}\), and the proof is complete. \(\blacksquare\)
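Both directions of the proposition can be seen numerically. In the hypothetical sketch below, the columns of \(S\) are two independent vectors in \(\mathbb{R}^{3}\); a non-singular map (full column rank) keeps their images independent, while a singular map whose kernel meets the difference of the two vectors collapses them.

```python
import numpy as np

# Two linearly independent vectors in R^3, stored as the columns of S;
# the second is the first plus (1, 1, 1).
S = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [0.0, 1.0]])
# A non-singular map R^3 -> R^4 (full column rank, so Ker = {0}).
T_inj = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
# A singular map: (1, 1, 1) lies in its kernel (third column = -(col1 + col2)).
T_sing = np.array([[1.0, 0.0, -1.0],
                   [0.0, 1.0, -1.0],
                   [1.0, 1.0, -2.0],
                   [0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(T_inj @ S))   # images remain independent
print(np.linalg.matrix_rank(T_sing @ S))  # images become dependent
```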
\begin{equation}{\label{laeq94}}\mbox{}\tag{17}\end{equation}
Theorem \ref{laeq94}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) satisfying \(n=\dim (V)=\dim (W)\), and let \(T\) be a linear mapping from \(V\) into \(W\). Then, the following statements are equivalent:
(a) \(T\) is invertible;
(b) \(T\) is nonsingular;
(c) \(T\) is onto;
(d) If \(\{v_{1},\cdots ,v_{n}\}\) is a basis for \(V\), then \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\).
Proof. If \(T\) is invertible, then \(T\) is non-singular. Therefore, (a) implies (b). To prove that (b) implies (c), let \(\{v_{1},\cdots ,v_{n}\}\) be a basis for \(V\). Proposition \ref{lap77} says that \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a linearly independent set of vectors in \(W\). Since \(\dim (W)=n\), it follows that \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\). Given any \(w\in W\), there exist \(\lambda_{1},\cdots ,\lambda_{n}\) in \({\cal F}\) satisfying
\[w=\lambda_{1}T(v_{1})+\cdots +\lambda_{n}T(v_{n})=T\left (\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}\right ),\]
which says that \(w\) is in the image of \(T\). To prove that (c) implies (d), we assume that \(T\) is onto. If \(\{v_{1},\cdots ,v_{n}\}\) is a basis for \(V\), then \(\{T(v_{1}),\cdots ,T(v_{n})\}\) spans the image of \(T\). Since \(T(V)=W\), the set \(\{T(v_{1}),\cdots ,T(v_{n})\}\) also spans \(W\). Since \(\dim (W)=n\), it follows that \(T(v_{1}),\cdots ,T(v_{n})\) are linearly independent, i.e., \(\{T(v_{1}),\cdots ,T(v_{n})\}\) is a basis for \(W\). To prove that (d) implies (a), since \(\{T(v_{1}),\cdots ,T(v_{n})\}\) spans \(W\), we have \(T(V)=W\). For \(v\in\mbox{Ker}(T)\), there exist \(\lambda_{1},\cdots ,\lambda_{n}\) in \({\cal F}\) satisfying
\[v=\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}\mbox{ and }T(v)=\theta_{W},\]
which shows
\[\lambda_{1}T(v_{1})+\cdots +\lambda_{n}T(v_{n})=\theta_{W}.\]
The independence of \(T(v_{1}),\cdots ,T(v_{n})\) says \(\lambda_{i}=0\) for all \(i=1,\cdots ,n\). Therefore, we obtain \(v=\theta_{V}\). This shows that \(T\) is non-singular, i.e., one-to-one. Therefore, \(T\) is invertible. This completes the proof. \(\blacksquare\)
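For a linear operator on \({\cal F}^{n}\) represented by a matrix, all four statements of the theorem reduce to a single rank condition. The following is a hedged numerical sketch with a hypothetical matrix \(A\) on \(\mathbb{R}^{3}\).

```python
import numpy as np

# Hypothetical operator on R^3; det(A) = 2, so A is invertible.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]
rank = np.linalg.matrix_rank(A)

onto = (rank == n)                   # (c): the image is all of R^3
non_singular = (n - rank == 0)       # (b): nullity 0, i.e. Ker(A) = {0}
# (d): the images A e_1, ..., A e_n of the standard basis span R^3 and
# are independent, i.e. they form a basis.
basis_image = np.linalg.matrix_rank(A @ np.eye(n)) == n

print(onto and non_singular and basis_image)
```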
Definition. Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\). We say that \(V\) and \(W\) are isomorphic when there exists a bijective linear mapping \(T:V\rightarrow W\). In this case, such a bijective linear mapping is also called an isomorphism. \(\sharp\)
Proposition \ref{lap14} says that if \(T:V\rightarrow W\) is an isomorphism, then \(T^{-1}:W\rightarrow V\) is also an isomorphism, since \(T^{-1}\) is linear. Moreover, the identity mapping \(I_{V}\) is an isomorphism from \(V\) onto itself, and it is easy to verify that if \(U\) and \(V\) are isomorphic, and \(V\) and \(W\) are isomorphic, then \(U\) and \(W\) are isomorphic, by composing the two isomorphisms. This shows that isomorphism is an equivalence relation on the class of vector spaces.
\begin{equation}{\label{lat170}}\mbox{}\tag{18}\end{equation}
Theorem \ref{lat170}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\). Then \(V\) and \(W\) are isomorphic if and only if \(\dim (V)=\dim (W)\).
Proof. Suppose that \(V\) and \(W\) are isomorphic. Then, there exists an invertible linear mapping \(T:V\rightarrow W\). By Proposition \ref{lap168}, we have \(\dim (V)=\dim (W)\). For the converse, suppose that \(\dim (V)=\dim (W)=n\). Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{n}\}\) be the bases for \(V\) and \(W\), respectively. By Theorem \ref{lat10}, there exists a unique linear mapping \(T:V\rightarrow W\) such that \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Using Corollary \ref{lac160}, we have
\[\mbox{Im}(T)=\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right )
=\mbox{span}\left (\left\{w_{1},\cdots ,w_{n}\right\}\right )=W,\]
which says that \(T\) is surjective. Finally, using Theorem \ref{laeq94}, we conclude that \(T\) is invertible. This completes the proof. \(\blacksquare\)
Let \(V\) be a vector space over the scalar field \({\cal F}\). Since \(\dim ({\cal F}^{n})=n\), Theorem \ref{lat170} shows that \(V\) is isomorphic to \({\cal F}^{n}\) if and only if \(\dim (V)=n\).
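This closing remark can be made concrete with a coordinate map. In the hypothetical sketch below, \(V\) is the space of real polynomials of degree less than \(3\) with basis \(\{1,x,x^{2}\}\), and the map sending \(a_{0}+a_{1}x+a_{2}x^{2}\) to \((a_{0},a_{1},a_{2})\) is an isomorphism onto \(\mathbb{R}^{3}\); polynomials are represented as coefficient dictionaries purely for illustration.

```python
import numpy as np

def coords(poly):
    """Coordinate map: poly is a dict {power: coefficient} of degree < 3;
    return its coordinate vector (a0, a1, a2) with respect to {1, x, x^2}."""
    return np.array([poly.get(0, 0.0), poly.get(1, 0.0), poly.get(2, 0.0)])

p = {0: 1.0, 2: 3.0}     # 1 + 3x^2
q = {1: 2.0, 2: -1.0}    # 2x - x^2
p_plus_q = {k: p.get(k, 0.0) + q.get(k, 0.0) for k in range(3)}

# Linearity of the coordinate map: coords(p + q) = coords(p) + coords(q).
linear = np.allclose(coords(p_plus_q), coords(p) + coords(q))
print(linear)
```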


