Matrices and Linear Mappings

The association between matrices and linear mappings will be studied. Let \(A\) be an \(m\times n\) matrix over \({\cal F}\). The mapping \(T_{A}:{\cal F}^{n}\rightarrow {\cal F}^{m}\) associated with \(A\) is defined by
\begin{equation}{\label{fsseq172}}\tag{1}
T_{A}(X)=AX
\end{equation}
for every column vector \(X\) in \({\cal F}^{n}\).

  • Given any \(X\) and \(Y\) in \({\cal F}^{n}\), we have
    \begin{align*} T_{A}(X+Y) & =A(X+Y)\\ & =AX+AY\\ & =T_{A}(X)+T_{A}(Y).\end{align*}
  • Given any \(\alpha\in {\cal F}\) and column vector \(X\) in \({\cal F}^{n}\), we have
    \begin{align*} T_{A}(\alpha X) & =A(\alpha X)\\ & =\alpha AX\\ & =\alpha T_{A}(X).\end{align*}

Therefore, we conclude that \(T_{A}\) is a linear mapping.
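These two computations can be checked numerically. The following snippet is an illustrative sketch using NumPy; the matrix and vectors are randomly chosen samples, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4)).astype(float)  # an m x n matrix with m=3, n=4

def T_A(X):
    # the mapping associated with A: X in F^n maps to AX in F^m
    return A @ X

X = rng.integers(-5, 5, size=4).astype(float)
Y = rng.integers(-5, 5, size=4).astype(float)
alpha = 2.5

# additivity and homogeneity of T_A on these samples
assert np.allclose(T_A(X + Y), T_A(X) + T_A(Y))
assert np.allclose(T_A(alpha * X), alpha * T_A(X))
```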

Let \(T:{\cal F}^{n}\rightarrow {\cal F}\) be a linear mapping. We shall prove that there exists a \(1\times n\) row vector \(A\) over \({\cal F}\) satisfying \(T=T_{A}\). In other words, there exists a row vector \(A\) satisfying \(T(X)=AX=T_{A}(X)\) for all \(X\). Let \(E_{1},E_{2},\cdots ,E_{n}\) be the unit column vectors of \({\cal F}^{n}\). Then, any element \(X=(x_{1},x_{2},\cdots ,x_{n})\in {\cal F}^{n}\) can be written as
\[X=x_{1}E_{1}+x_{2}E_{2}+\cdots +x_{n}E_{n}.\]
Let \(a_{i}=T(E_{i})\) for \(i=1,\cdots ,n\). We take the row vector \(A\) that consists of the elements \(a_{i}\) for \(i=1,\cdots ,n\). By the linearity of \(T\), we have
\begin{align*} T(X) & =x_{1}T(E_{1})+x_{2}T(E_{2})+\cdots +x_{n}T(E_{n})\\ & =x_{1}a_{1}+x_{2}a_{2}+\cdots +x_{n}a_{n}\\ & =AX=T_{A}(X).\end{align*}

Next, we shall generalize this idea to the linear mapping \(T:{\cal F}^{n}\rightarrow {\cal F}^{m}\). Let \(E_{1},\cdots ,E_{n}\) be the unit vectors of \({\cal F}^{n}\), and let \(F_{1},\cdots ,F_{m}\) be the unit vectors of \({\cal F}^{m}\). Given any \(X=(x_{1},\cdots ,x_{n})\in {\cal F}^{n}\), we can write
\[X=x_{1}E_{1}+x_{2}E_{2}+\cdots +x_{n}E_{n}.\]
By the linearity of \(T\), we have
\[T(X)=x_{1}T(E_{1})+x_{2}T(E_{2})+\cdots +x_{n}T(E_{n}).\]
Since \(T(E_{i})\in {\cal F}^{m}\) for \(i=1,\cdots ,n\), there exist \(a_{ij}\in {\cal F}\) satisfying
\[T(E_{i})=a_{1i}F_{1}+a_{2i}F_{2}+\cdots +a_{mi}F_{m}.\]
Now, we take the \(m\times n\) matrix \(A\) that consists of elements \(a_{ij}\) given by
\[A=\left [\begin{array}{ccccc}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n}\\
\vdots & \vdots & \vdots & \cdots & \vdots\\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{array}\right ].\]
Then, we have
\begin{align*}
T(X) & =x_{1}\left (a_{11}F_{1}+a_{21}F_{2}+\cdots +a_{m1}F_{m}\right )+\cdots +x_{n}\left (a_{1n}F_{1}+a_{2n}F_{2}+\cdots +a_{mn}F_{m}\right )\\
& =\left (a_{11}x_{1}+\cdots +a_{1n}x_{n}\right )F_{1}+\cdots +\left (a_{m1}x_{1}+\cdots +a_{mn}x_{n}\right )F_{m}\\
& =\left [\begin{array}{c}
a_{11}x_{1}+\cdots +a_{1n}x_{n}\\
\vdots\\
a_{m1}x_{1}+\cdots +a_{mn}x_{n}
\end{array}\right ]
=\left [\begin{array}{ccccc}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
\vdots & \vdots & \vdots & \cdots & \vdots\\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{array}\right ]
\left [\begin{array}{c}
x_{1}\\
\vdots\\
x_{n}
\end{array}\right ]\\
& =AX=T_{A}(X).
\end{align*}
This shows \(T=T_{A}\). The next proposition says that this matrix \(A\) is uniquely determined.
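The construction above, in which the columns of \(A\) are the images \(T(E_{i})\) of the unit vectors, can be illustrated numerically. In the following NumPy sketch, the mapping \(T\) is a hypothetical example chosen for illustration:

```python
import numpy as np

# a "black-box" linear mapping T: R^3 -> R^2 (hypothetical example)
def T(X):
    x1, x2, x3 = X
    return np.array([2 * x1 - x3, x1 + 4 * x2])

n = 3
# the i-th column of A is T(E_i), where E_i is the i-th unit column vector
A = np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

X = np.array([1.0, -2.0, 5.0])
assert np.allclose(T(X), A @ X)  # T coincides with T_A
```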

Proposition. Let \(A\) and \(B\) be two \(m\times n\) matrices. If \(T_{A}=T_{B}\), then \(A=B\).

Proof. Let \(A_{i\cdot}\) and \(B_{i\cdot}\) denote the \(i\)th rows of \(A\) and \(B\), respectively. By definition, we have \(A_{i\cdot}X=B_{i\cdot}X\), which says \((A_{i\cdot}-B_{i\cdot})X=0\) for all \(i\) and all \(X\). Taking \(X=E_{j}\) for \(j=1,\cdots ,n\) shows that the \(j\)th entry of \(A_{i\cdot}-B_{i\cdot}\) vanishes. Therefore, we must have \(A_{i\cdot}-B_{i\cdot}={\bf 0}\), i.e., \(A_{i\cdot}=B_{i\cdot}\) for all \(i\). This proves \(A=B\). \(\blacksquare\)

We have the following observations.

  • Let \(A\) and \(B\) be two \(m\times n\) matrices. Then \(T_{A+B}=T_{A}+T_{B}\).
  • For any \(\alpha\in {\cal F}\) and \(m\times n\) matrix \(A\), we have \(T_{\alpha A}=\alpha T_{A}\).
  • Suppose that \(A\) is an \(n\times n\) matrix over \({\cal F}\). If \(\mbox{Ker}(T_{A})=\{\theta\}\), then \(A\) is invertible.
  • Let \(T:{\cal F}^{n}\rightarrow {\cal F}^{m}\) and \(L:{\cal F}^{m}\rightarrow {\cal F}^{s}\) be two linear mappings. Let \(A\) be an \(m\times n\) matrix and let \(B\) be an \(s\times m\) matrix satisfying \(T(X)=AX\) and \(L(X)=BX\). Then, we have \((L\circ T)(X)=(BA)X\).
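The last observation, that composition of linear mappings corresponds to matrix multiplication, can be checked numerically. The following is an illustrative NumPy sketch with randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(2, 4)).astype(float)  # T = T_A : F^4 -> F^2
B = rng.integers(-3, 4, size=(3, 2)).astype(float)  # L = T_B : F^2 -> F^3
X = rng.integers(-3, 4, size=4).astype(float)

# (L o T)(X) = B(AX) = (BA)X
assert np.allclose(B @ (A @ X), (B @ A) @ X)
```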

\begin{equation}{\label{lap173}}\mbox{}\tag{2}\end{equation}

Remark \ref{lap173}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with \(\dim (V)=n\) and \(\dim (W)=m\), and let \(T:V\rightarrow W\) be a linear mapping. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be the ordered bases of \(V\) and \(W\), respectively. Let \(A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\). Then, we have \(\mbox{rank}(T)=\mbox{rank}(T_{A})\) and \(\mbox{nullity}(T)=\mbox{nullity}(T_{A})\). The proofs of these equalities are left as exercises. \(\sharp\)

Let \(A\) be an \(m\times n\) matrix over \({\cal F}\). The mapping \(T_{A}:{\cal F}^{n}\rightarrow {\cal F}^{m}\) associated with \(A\) is defined in (\ref{fsseq172}). The rank of \(A\) is defined to be the rank of \(T_{A}\). In other words, we have
\[\mbox{rank}(A)=\mbox{rank}(T_{A})=\dim (\mbox{Im}(T_{A})).\]

Proposition. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with \(\dim (V)=n\) and \(\dim (W)=m\), and let \(T:V\rightarrow W\) be a linear mapping. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be the ordered bases of \(V\) and \(W\), respectively. Let \(A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\). Then, we have \(\mbox{rank}(T)=\mbox{rank}(A)\).

Proof. The result follows from Remark \ref{lap173} immediately. \(\blacksquare\)

\begin{equation}{\label{lap175}}\mbox{}\tag{3}\end{equation}

Proposition \ref{lap175}. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\). Let \(P\) and \(Q\) be invertible \(m\times m\) and \(n\times n\) matrices, respectively. Then, we have
\begin{align*} \mbox{rank}(AQ) & =\mbox{rank}(A)\\ & =\mbox{rank}(PA)\\ & =\mbox{rank}(PAQ).\end{align*}

Proof. Since \(T_{Q}\) is surjective, we can show
\begin{align*} \mbox{Im}\left (T_{AQ}\right ) & =\mbox{Im}\left (T_{A}\circ T_{Q}\right )\\ & =T_{A}\left (T_{Q}\left ({\cal F}^{n}\right )\right )\\ & =T_{A}({\cal F}^{n})\\ & =\mbox{Im}(T_{A}),\end{align*}
which says
\begin{align*} \mbox{rank}(AQ) & =\dim\left (\mbox{Im}\left (T_{AQ}\right )\right )\\ & =\dim\left (\mbox{Im}\left (T_{A}\right )\right )\\ & =\mbox{rank}(A).\end{align*}
On the other hand, since \(\mbox{Im}(T_{PA})=T_{P}(\mbox{Im}(T_{A}))\) and \(T_{P}\) is injective, we have \(\mbox{rank}(PA)=\dim\left (T_{P}\left (\mbox{Im}(T_{A})\right )\right )=\dim\left (\mbox{Im}(T_{A})\right )=\mbox{rank}(A)\). Moreover, using the above results, we have
\[\mbox{rank}(PAQ)=\mbox{rank}(PA)=\mbox{rank}(A).\]
This completes the proof. \(\blacksquare\)
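Proposition \ref{lap175} can be illustrated numerically. In the sketch below (NumPy, with matrices chosen at random purely for illustration), a random square matrix is invertible with probability one, and the determinant is checked to be safe:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-2, 3, size=(4, 5)).astype(float)

# random invertible P (4x4) and Q (5x5)
P = rng.standard_normal((4, 4))
Q = rng.standard_normal((5, 5))
assert abs(np.linalg.det(P)) > 1e-8 and abs(np.linalg.det(Q)) > 1e-8

r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(A @ Q) == r      # rank(AQ)  = rank(A)
assert np.linalg.matrix_rank(P @ A) == r      # rank(PA)  = rank(A)
assert np.linalg.matrix_rank(P @ A @ Q) == r  # rank(PAQ) = rank(A)
```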

\begin{equation}{\label{lap176}}\mbox{}\tag{4}\end{equation}

Lemma \ref{lap176}. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\). Suppose that \(B\) is obtained from \(A\) by performing one elementary row (resp. column) operation. Then, there exists an \(m\times m\) (resp. \(n\times n\)) elementary matrix \(E\) satisfying \(B=EA\) (resp. \(B=AE\)). As a matter of fact, the elementary matrix \(E\) is obtained from \(I_{m}\) (resp. \(I_{n}\)) by performing the same elementary row (resp. column) operation as that which was performed on \(A\) to obtain \(B\). Conversely, if \(E\) is an \(m\times m\) (resp. \(n\times n\)) elementary matrix, then \(EA\) (resp. \(AE\)) is the matrix obtained from \(A\) by performing the same elementary row (resp. column) operation as that which produces \(E\) from \(I_{m}\) (resp. \(I_{n}\)).

Proof. We shall give a proof for an operation of type (2). The other two types of operations are left as exercises. Given the identity matrix \(I_{m}\), suppose that we perform the operation “replacement of row \(r\) by row \(r\) plus \(c\) times row \(s\)”, where \(c\) is any non-zero scalar and \(r\neq s\). Then, the \(ik\)th entry of the resulting matrix \(E\) is

\[e_{ik}=\left\{\begin{array}{ll}
\delta_{ik}, & \mbox{if \(i\neq r\)}\\
\delta_{rk}+c\cdot\delta_{sk}, & \mbox{if \(i=r\)}.
\end{array}\right .\]

Therefore, the \(ij\)th entry of \(EA\) is given by

\[(EA)_{ij}=\sum_{k=1}^{m}e_{ik}a_{kj}=\left\{\begin{array}{ll}
a_{ij}, & \mbox{if \(i\neq r\)}\\
a_{rj}+c\cdot a_{sj}, & \mbox{if \(i=r\)}.
\end{array}\right .\]

This shows that \(EA\) is indeed the matrix obtained from \(A\) by performing the type (2) elementary row operation. \(\blacksquare\)
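For a concrete check of the type (2) case, here is an illustrative NumPy sketch; the sizes and the scalar \(c\) are arbitrary choices:

```python
import numpy as np

m, n, r, s, c = 3, 4, 0, 2, 5.0
A = np.arange(12, dtype=float).reshape(m, n)

# E: replace row r of I_m by (row r) + c * (row s)
E = np.eye(m)
E[r, :] += c * np.eye(m)[s, :]

# the same row operation applied directly to A
B = A.copy()
B[r, :] += c * A[s, :]

# left-multiplication by E performs the row operation
assert np.allclose(E @ A, B)
```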

\begin{equation}{\label{lap178}}\mbox{}\tag{5}\end{equation}

Lemma \ref{lap178}. Elementary matrices are invertible. Moreover, the inverse of an elementary matrix is also an elementary matrix of the same type.

Proof. Let \(E\) be an \(n\times n\) elementary matrix. By definition, \(E\) is obtained from \(I_{n}\) by performing one elementary row operation. This operation can be reversed, and the reversing operation is an elementary row operation of the same type, which transforms \(E\) back to \(I_{n}\). By Lemma \ref{lap176}, there exists an elementary matrix \(\widehat{E}\) of the same type satisfying \(I_{n}=\widehat{E}E\), which says that \(E^{-1}=\widehat{E}\) by part (ii) of Proposition \ref{lap177}. This completes the proof. \(\blacksquare\)

For more details on Lemmas \ref{lap176} and \ref{lap178}, refer to the page Arithmetic Operations of Matrices.

Proposition. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\). If \(B\) is an \(m\times n\) matrix obtained from \(A\) by applying a finite number of elementary row and column operations, then \(\mbox{rank}(B)=\mbox{rank}(A)\).

Proof. If \(B\) is obtained from \(A\) by applying an elementary row operation, then, using Lemmas \ref{lap176} and \ref{lap178}, there exists an invertible matrix \(E\) satisfying \(B=EA\). From Proposition \ref{lap175}, we have \(\mbox{rank}(B)=\mbox{rank}(A)\). If \(B\) is obtained from \(A\) by applying an elementary column operation, then we can similarly obtain \(\mbox{rank}(B)=\mbox{rank}(A)\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lat179}}\mbox{}\tag{6}\end{equation}

Theorem \ref{lat179}. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\) with rank \(r\). Then \(r\leq\min\{m,n\}\). Moreover, by means of a finite number of elementary row and column operations, \(A\) can be transformed into the following form
\[\left [\begin{array}{cc}
I_{r} & O_{1}\\
O_{2} & O_{3}
\end{array}\right ],\]
where \(O_{1},O_{2},O_{3}\) are zero matrices and \(I_{r}\) is an \(r\times r\) identity matrix.

\begin{equation}{\label{lap182}}\mbox{}\tag{7}\end{equation}

Proposition \ref{lap182}. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\) with rank \(r\). Then, there exist an invertible \(m\times m\) matrix \(B\) and an invertible \(n\times n\) matrix \(C\) satisfying
\[D\equiv\left [\begin{array}{cc}
I_{r} & O_{1}\\
O_{2} & O_{3}
\end{array}\right ]=BAC,\]
where \(O_{1},O_{2},O_{3}\) are zero matrices and \(I_{r}\) is an \(r\times r\) identity matrix.

Proof. Using Theorem \ref{lat179}, the matrix \(A\) can be transformed into the matrix \(D\) by performing a finite number of elementary row and column operations. From Lemmas \ref{lap176} and \ref{lap178}, there exist invertible \(m\times m\) elementary matrices \(E_{1},\cdots ,E_{p}\) and invertible \(n\times n\) elementary matrices \(G_{1},\cdots ,G_{q}\) satisfying
\[D=E_{p}E_{p-1}\cdots E_{1}AG_{1}G_{2}\cdots G_{q}.\]
Let \(B=E_{p}E_{p-1}\cdots E_{1}\) and \(C=G_{1}G_{2}\cdots G_{q}\). Then \(B\) and \(C\) are also invertible by part (iii) of Proposition \ref{lap180}. This completes the proof. \(\blacksquare\)
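The constructive proofs of Theorem \ref{lat179} and Proposition \ref{lap182} can be turned into an algorithm. The following NumPy sketch (an illustration, not the text's own procedure) accumulates the row operations in \(B\) and the column operations in \(C\) while reducing \(A\) to the block form \(D\):

```python
import numpy as np

def rank_normal_form(A, tol=1e-10):
    # Accumulate row operations in B and column operations in C so that
    # B @ A @ C equals the block matrix [[I_r, 0], [0, 0]].
    M = A.astype(float).copy()
    m, n = M.shape
    B, C = np.eye(m), np.eye(n)
    r = 0
    while r < min(m, n):
        sub = np.abs(M[r:, r:])
        if sub.max() <= tol:          # no pivot left: the rest is zero
            break
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + r, j + r
        M[[r, i], :], B[[r, i], :] = M[[i, r], :].copy(), B[[i, r], :].copy()  # row swap
        M[:, [r, j]], C[:, [r, j]] = M[:, [j, r]].copy(), C[:, [j, r]].copy()  # column swap
        piv = M[r, r]
        M[r, :] /= piv                # scale the pivot to 1
        B[r, :] /= piv
        for k in range(m):            # clear the pivot column by row operations
            if k != r and abs(M[k, r]) > tol:
                f = M[k, r]
                M[k, :] -= f * M[r, :]
                B[k, :] -= f * B[r, :]
        for k in range(r + 1, n):     # clear the pivot row by column operations
            if abs(M[r, k]) > tol:
                f = M[r, k]
                M[:, k] -= f * M[:, r]
                C[:, k] -= f * C[:, r]
        r += 1
    return B, C, r

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
B, C, r = rank_normal_form(A)
D = B @ A @ C
assert r == np.linalg.matrix_rank(A) == 2
assert np.allclose(D[:r, :r], np.eye(r))
assert np.allclose(D[r:, :], 0.0) and np.allclose(D[:, r:], 0.0)
```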

\begin{equation}{\label{lat71}}\mbox{}\tag{8}\end{equation}

Lemma \ref{lat71}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be a linear mapping. Suppose that \(V\) is finite-dimensional. Then, we have
\begin{equation}{\label{laeq72}}\tag{9}\dim (V)=\dim\mbox{Ker}(T)+\dim\mbox{Im}(T).\end{equation}

For more details on Lemma \ref{lat71}, refer to the page Linear Mappings.

Let \(A\) be an \(m\times n\) matrix with entries in the scalar field \({\cal F}\). The subspace spanned by the columns of \(A\) is called the column space of \(A\), and the subspace spanned by the rows of \(A\) is called the row space of \(A\). The row rank of \(A\) is defined to be the dimension of the row space of \(A\), and the column rank of \(A\) is defined to be the dimension of the column space of \(A\). We shall apply Lemma \ref{lat71} to show that the row rank of \(A\) is equal to the column rank of \(A\).

\begin{equation}{\label{lap209}}\mbox{}\tag{10}\end{equation}

Proposition \ref{lap209}. Let \(A\) be an \(m\times n\) matrix with entries in the scalar field \({\cal F}\). Then, the row rank of \(A\) is equal to the column rank of \(A\).

Proof. Let \(T:{\cal F}^{n\times 1}\rightarrow {\cal F}^{m\times 1}\) be a linear mapping defined by \(T(X)=AX\). If \(A_{\cdot 1},\cdots ,A_{\cdot n}\) are the columns of \(A\), then
\[T(X)=AX=x_{1}A_{\cdot 1}+\cdots +x_{n}A_{\cdot n},\]
which says that \(\mbox{Im}(T)\) is spanned by the columns of \(A\), i.e., \(\mbox{Im}(T)\) is the column space of \(A\). This also says that \(\mbox{rank}(T)\) is equal to the column rank of \(A\). Let \(S\) be the solution space of the linear system \(AX=0\). Then \(S=\mbox{Ker} (T)\). According to (\ref{laeq72}), we have
\begin{equation}{\label{laeq73}}\tag{11}
\dim S+\mbox{column rank of \(A\)}=n.
\end{equation}
It is well-known that if \(r\) is the dimension of the row space of \(A\), then the solution space \(S\) of \(AX={\bf 0}\) has a basis consisting of \(n-r\) vectors. Therefore, we have
\begin{equation}{\label{laeq74}}\tag{12}
\dim S=n-\mbox{row rank of \(A\)}.
\end{equation}
From (\ref{laeq73}) and (\ref{laeq74}), we obtain
\[\mbox{column rank of \(A\)}=\mbox{row rank of \(A\)}.\]

This completes the proof. \(\blacksquare\)
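Proposition \ref{lap209} can be illustrated numerically: the rank of \(A\) (its column rank) and the rank of \(A^{\top}\) (the row rank of \(A\), computed as the column rank of the transpose) agree. The matrix below is a hypothetical example built as a product of a \(4\times 2\) and a \(2\times 6\) factor, so its rank is \(2\):

```python
import numpy as np

rng = np.random.default_rng(3)
# a 4x6 matrix of rank 2, built from two full-rank factors
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))

col_rank = np.linalg.matrix_rank(A)    # column rank of A
row_rank = np.linalg.matrix_rank(A.T)  # row rank of A = column rank of A^T
assert col_rank == row_rank == 2
```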

\begin{equation}{\label{lac160}}\mbox{}\tag{13}\end{equation}

Lemma \ref{lac160}. Let \(T:V\rightarrow W\) be a linear mapping. If \(V\) is finite-dimensional with basis \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\), then
\begin{align*} \mbox{Im}(T) & =\mbox{span}(T(\mathfrak{B}))\\ & =\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right ).\end{align*}

For more details on Lemma \ref{lac160}, refer to the page Linear Mappings.

\begin{equation}{\label{lap181}}\mbox{}\tag{14}\end{equation}

Proposition \ref{lap181}. The rank of a matrix is equal to the maximum number of its linearly independent column vectors. Equivalently, the rank of a matrix is equal to its column rank.

Proof. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\). Then, we have
\begin{align*} \mbox{rank}(A) & =\mbox{rank}(T_{A})\\ & =\dim\left (\mbox{Im}\left (T_{A}\right )\right ).\end{align*}
Let \(\mathfrak{B}_{{\cal F}^{n}}=\{{\bf e}_{1},\cdots ,{\bf e}_{n}\}\) be the standard ordered basis for \({\cal F}^{n}\), i.e., \({\cal F}^{n}=\mbox{span}(\mathfrak{B}_{{\cal F}^{n}})\). By Lemma \ref{lac160}, we have
\begin{align*} \mbox{Im}(T_{A}) & =\mbox{span}(T_{A}(\mathfrak{B}_{{\cal F}^{n}}))\\ & =\mbox{span}
\left (\left\{T_{A}({\bf e}_{1}),\cdots ,T_{A}({\bf e}_{n})\right\}\right ).\end{align*}
Since \(T_{A}({\bf e}_{j})=A{\bf e}_{j}=A_{\cdot j}\) is the \(j\)th column of \(A\), we have
\[\mbox{Im}(T_{A})=\mbox{span}\left (\left\{A_{\cdot 1},\cdots ,A_{\cdot n}\right\}\right ),\]
which says
\begin{align*} \mbox{rank}(A) & =\dim\left (\mbox{Im}\left (T_{A}\right )\right )\\ & =\dim\left (\mbox{span}
\left (\left\{A_{\cdot 1},\cdots ,A_{\cdot n}\right\}\right )\right ).\end{align*}
This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lap183}}\mbox{}\tag{15}\end{equation}

Proposition \ref{lap183}. Let \(A\) be an \(m\times n\) matrix. Then, we have \(\mbox{rank}(A^{\top})=\mbox{rank}(A)\).

Proof. From Proposition \ref{lap182}, there exist invertible \(m\times m\) matrix \(B\) and invertible \(n\times n\) matrix \(C\) satisfying \(D=BAC\), which also says \(D^{\top}=C^{\top}A^{\top}B^{\top}\). Since \(B^{\top}\) and \(C^{\top}\) are also invertible, using Proposition \ref{lap175}, we obtain
\begin{align*} \mbox{rank}\left (A^{\top}\right ) & =\mbox{rank}\left (C^{\top}A^{\top}B^{\top}\right )\\ & =\mbox{rank}\left (D^{\top}\right ).\end{align*}
Suppose that \(\mbox{rank}(A)=r\). From Proposition \ref{lap182} again, we have
\[D^{\top}\equiv\left [\begin{array}{cc}
I_{r} & O_{4}\\
O_{5} & O_{6}
\end{array}\right ],\]
where \(O_{4},O_{5},O_{6}\) are zero matrices. By Proposition \ref{lap181}, we have \(\mbox{rank}(D^{\top})=r\). Finally, we obtain
\begin{align*} \mbox{rank}\left (A^{\top}\right ) & =\mbox{rank}\left (D^{\top}\right )\\ & =r=\mbox{rank}\left (A\right ).\end{align*}
This completes the proof. \(\blacksquare\)

Proposition. We have the following properties.

(i) The rank of a matrix is equal to the maximum number of its linearly independent row vectors. Equivalently, the rank of a matrix is equal to its row rank.

(ii) The row vectors and column vectors of any matrix generate subspaces that have the same dimension such that this dimension is equal to the rank of the matrix.

Proof. The results follow from Propositions \ref{lap181} and \ref{lap183} immediately. \(\blacksquare\)

Proposition. Every invertible matrix is a product of elementary matrices.

Proof. Let \(A\) be an \(n\times n\) invertible matrix. Then \(\mbox{rank}(A)=n\). From Theorem \ref{lat179} and Proposition \ref{lap182}, the matrix \(D\) can be taken as the identity matrix \(I_{n}\) satisfying \(I_{n}=BAC\), where \(B\) and \(C\) are invertible matrices. In the proof of Proposition \ref{lap182}, we also have \(B=E_{p}E_{p-1}\cdots E_{1}\) and \(C=G_{1}G_{2}\cdots G_{q}\), where \(E_{1},\cdots ,E_{p}\) and \(G_{1},\cdots ,G_{q}\) are \(n\times n\) invertible elementary matrices.
From \(I_{n}=BAC\), we can obtain
\[A=B^{-1}I_{n}C^{-1}=B^{-1}C^{-1}=E_{1}^{-1}\cdots E_{p}^{-1}G_{q}^{-1}\cdots G_{1}^{-1}.\]
Since the inverses of elementary matrices are also elementary matrices, we complete the proof. \(\blacksquare\)

\begin{equation}{\label{lap185}}\mbox{}\tag{16}\end{equation}

Proposition \ref{lap185}. Let \(A\) and \(B\) be two matrices such that \(AB\) is well-defined. Then, we have  \(\mbox{rank}(AB)\leq\mbox{rank}(A)\) and \(\mbox{rank}(AB)\leq\mbox{rank}(B)\).

Proof. We have
\begin{align*} \mbox{rank}(AB) & =\mbox{rank}(T_{AB})\\ & =\mbox{rank}(T_{A}\circ T_{B})\\ & \leq\mbox{rank}(T_{A})\\ & =\mbox{rank}(A).\end{align*}
Using Proposition \ref{lap183}, we also have
\begin{align*}\mbox{rank}(AB) & =\mbox{rank}((AB)^{\top})\\ & =\mbox{rank}(B^{\top}A^{\top})\\ & \leq\mbox{rank}(B^{\top})\\ & =\mbox{rank}(B).\end{align*}
This completes the proof. \(\blacksquare\)
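A quick numerical illustration of Proposition \ref{lap185} (a NumPy sketch with randomly chosen matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.integers(-2, 3, size=(4, 3)).astype(float)
B = rng.integers(-2, 3, size=(3, 5)).astype(float)

rAB = np.linalg.matrix_rank(A @ B)
assert rAB <= np.linalg.matrix_rank(A)  # rank(AB) <= rank(A)
assert rAB <= np.linalg.matrix_rank(B)  # rank(AB) <= rank(B)
```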

Proposition. Let \(AX={\bf 0}\) be a homogeneous system of \(m\) linear equations in \(n\) unknowns over the scalar field \({\cal F}\). Let \(S\) be the set of all solutions of \(AX={\bf 0}\). Then, we have \(S=\mbox{null}(T_{A})\) with
\begin{equation}{\label{laeq204}}\tag{17}
\dim (S)=n-\mbox{rank}(T_{A})=n-\mbox{rank}(A).
\end{equation}

Proof. It is obvious that \(S=\mbox{null}(T_{A})\). Formula (\ref{laeq204}) follows immediately from the equality

\[\dim ({\cal F}^{n})=\mbox{rank}(T_{A})+\mbox{nullity}(T_{A}),\]

which is (\ref{laeq72}) applied to \(T=T_{A}\). \(\blacksquare\)

\begin{equation}{\label{lap205}}\mbox{}\tag{18}\end{equation}

Proposition \ref{lap205}. If \(A\) is an \(m\times n\) matrix and \(m<n\), then the homogeneous system of linear equations \(AX={\bf 0}\) has a non-trivial solution.

Proof. Since \(m<n\), we have \(\mbox{rank}(A)=\mbox{rank}(T_{A})\leq m\). From (\ref{laeq204}), we also have
\[\dim (S)=n-\mbox{rank}(T_{A})=n-m>0,\]
which says \(\mbox{null}(T_{A})=S\neq\{{\bf 0}\}\). In other words, there exists a non-zero vector in \(S\). This completes the proof. \(\blacksquare\)
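Proposition \ref{lap205} can be illustrated numerically: for a random \(3\times 5\) matrix, a non-trivial solution of \(AX={\bf 0}\) can be read off from the singular value decomposition (an illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 5                       # m < n
A = rng.integers(-3, 4, size=(m, n)).astype(float)

# the last rows of V^T span Ker(T_A); since rank(A) <= m < n,
# at least n - m of them lie in the null space
_, s, Vt = np.linalg.svd(A)
X = Vt[-1]                        # a unit vector in the null space
assert np.allclose(A @ X, 0)
assert np.linalg.norm(X) > 0.5    # non-trivial solution
```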

\begin{equation}{\label{lat210}}\mbox{}\tag{19}\end{equation}

Theorem \ref{lat210}. Let \(AX=B\) be a system of linear equations. Then the system has a solution if and only if \(\mbox{rank}(A)=\mbox{rank}(A|B)\).

Proof. Let \(\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\}\) be the set of all columns of \(A\). From the proof of Proposition \ref{lap209}, we see that \(\mbox{Im}(T_{A})\) is spanned by the columns of \(A\), i.e.,
\[\mbox{Im}(T_{A})=\mbox{span}\left (\left\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\right\}\right ),\]
which says that \(AX=B\) has a solution if and only if \(B\in\mbox{span}(\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\})\). Since \(B\in\mbox{span}(\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\})\) if and only if
\[\mbox{span}\left (\left\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\right\}\right )=\mbox{span}\left (\left\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n},B\right\}\right ),\]
we obtain
\[\dim\left (\mbox{span}\left (\left\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n}\right\}\right )\right )=\dim\left (\mbox{span}\left (\left\{A_{\cdot 1},A_{\cdot 2},\cdots ,A_{\cdot n},B\right\}\right )\right ),\]
which is equivalent to saying \(\mbox{rank}(A)=\mbox{rank}(A|B)\) by Proposition \ref{lap209}. This completes the proof. \(\blacksquare\)

Example. We consider the following system of linear equations
\[\begin{array}{ccccccc}
x_{1} & + & x_{2} & + & x_{3} & = & 3\\
x_{1} & - & x_{2} & + & x_{3} & = & 3\\
x_{1} &&& + & x_{3} & = & 2.
\end{array}\]
Since \(\mbox{rank}(A)=2\) and \(\mbox{rank}(A|B)=3\), Theorem \ref{lat210} says that this system has no solutions. \(\sharp\)
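The rank computation of this example can be verified numerically (an illustrative NumPy check):

```python
import numpy as np

A = np.array([[1.,  1., 1.],
              [1., -1., 1.],
              [1.,  0., 1.]])
B = np.array([3., 3., 2.])

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(np.column_stack([A, B])) == 3
# rank(A) != rank(A|B), so AX = B has no solution
```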

The general solution of the linear system \(AX=B\) can be obtained by Gaussian elimination. The following theorem describes the properties of the general solution.

\begin{equation}{\label{lat206}}\mbox{}\tag{20}\end{equation}

Lemma \ref{lat206}. Let \(A\) be an \(m\times n\) matrix, and let \(S\) and \(S_{0}\) be the sets of all solutions of the nonhomogeneous linear system \(AX=B\) and the homogeneous linear system \(AX={\bf 0}\), respectively. Then, for any solution \(s\) of \(AX=B\), we have

\[S=S_{0}+\{s\}=\left\{s+s_{0}:s_{0}\in S_{0}\right\}.\]

Proof. If \(t\in S\), then \(At=B\). Therefore, we have

\[A(t-s)=At-As=B-B={\bf 0},\]

which says that \(t-s\in S_{0}\), i.e., \(t\in S_{0}+\{s\}\). Therefore, we obtain the inclusion \(S\subseteq S_{0}+\{s\}\). On the other hand, suppose that \(t\in S_{0}+\{s\}\). Then \(t=s+u\) for some \(u\in S_{0}\). Since

\[At=A(s+u)=As+Au=B+{\bf 0}=B,\]

we see that \(t\in S\). Therefore, we obtain the inclusion \(S_{0}+\{s\}\subseteq S\). This completes the proof. \(\blacksquare\)

For more details on Lemma \ref{lat206}, refer to the page Arithmetic Operations of Matrices.

Theorem. Let \(AX=B\) be a system of \(m\) linear equations in \(n\) unknowns, and let \((A'|B')\) be the row-reduced echelon matrix of \((A|B)\). Suppose that \(r\) is the number of non-zero rows in \((A'|B')\) and \(\mbox{rank}(A')=\mbox{rank}(A'|B')\). Then \(\mbox{rank}(A')=r\). Moreover, if the general solution has the following form
\[{\bf x}={\bf x}_{0}+t_{1}{\bf u}_{1}+t_{2}{\bf u}_{2}+\cdots +t_{n-r}{\bf u}_{n-r},\]
then \(\{{\bf u}_{1},\cdots ,{\bf u}_{n-r}\}\) is a basis for the solution set of the corresponding homogeneous system \(AX={\bf 0}\), and \({\bf x}_{0}\) is a solution of the non-homogeneous system \(AX=B\).

Proof. The \(r\) non-zero rows of the row-reduced echelon matrix \((A'|B')\) are linearly independent. Therefore, we have \(\mbox{rank}(A'|B')=r\), which also says \(\mbox{rank}(A')=r\). Let \(S\) be the solution set for \(AX=B\), and let \(S_{0}\) be the solution set for \(AX={\bf 0}\). Taking \(t_{1}=t_{2}=\cdots =t_{n-r}=0\), we have \({\bf x}={\bf x}_{0}\in S\). From Lemma \ref{lat206}, we have \(S=\{{\bf x}_{0}\}+S_{0}\). Therefore, we obtain
\[S_{0}=\{-{\bf x}_{0}\}+S=\mbox{span}\left (\left\{{\bf u}_{1},\cdots ,{\bf u}_{n-r}\right\}\right ).\]
Since \(\mbox{rank}(A')=r\), we have \(\dim (S_{0})=n-r\), which shows that \(\{{\bf u}_{1},\cdots ,{\bf u}_{n-r}\}\) is a basis for \(S_{0}\).
This completes the proof. \(\blacksquare\)
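The structure \(S=\{{\bf x}_{0}\}+S_{0}\) of the general solution can be illustrated numerically. In the NumPy sketch below (the matrix is a hypothetical example), a particular solution is obtained by least squares and a basis of \(S_{0}\) from the singular value decomposition:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 0., 1., 2.]])
B = np.array([3., 4.])

# a particular solution x0 of AX = B (least squares gives an exact
# solution here because the system is consistent)
x0, *_ = np.linalg.lstsq(A, B, rcond=None)
assert np.allclose(A @ x0, B)

# a basis of the solution space of AX = 0 from the SVD
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]               # n - r = 4 - 2 = 2 basis vectors

# every vector x0 + t1*u1 + t2*u2 solves AX = B
for t1, t2 in [(0.0, 0.0), (1.5, -2.0), (-3.0, 0.5)]:
    x = x0 + t1 * null_basis[0] + t2 * null_basis[1]
    assert np.allclose(A @ x, B)
```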

Matrices Associated with Linear Mappings.

Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\), and let \(W\) be an \(m\)-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\). Given any \(v\in V\), there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
In this case, we call \(X=(\alpha_{1},\alpha_{2},\cdots ,\alpha_{n})\in {\cal F}^{n}\) the coordinate vector of \(v\) with respect to the ordered basis \(\mathfrak{B}_{V}\). For each \(v\in V\), we also denote by \([v]_{\mathfrak{B}_{V}}\) the coordinate vector of \(v\) with respect to the ordered basis \(\mathfrak{B}_{V}\). Let \(T:V\rightarrow W\) be a linear mapping. Since \(T(v_{i})\in W\) for \(i=1,\cdots ,n\), there exist \(a_{ij}\in {\cal F}\) satisfying
\[T(v_{i})=a_{1i}w_{1}+a_{2i}w_{2}+\cdots +a_{mi}w_{m}.\]
We take the \(m\times n\) matrix \(A\) that consists of elements \(a_{ij}\) given by
\[A=\left [\begin{array}{ccccc}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n}\\
\vdots & \vdots & \vdots & \cdots & \vdots\\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{array}\right ].\]
Given \(v\in V\), let \(X\) be the coordinate vector of \(v\) with respect to the ordered basis \(\mathfrak{B}_{V}\). Then, we have
\begin{align*}
T(v) & =T(\alpha_{1}v_{1}+\cdots +\alpha_{n}v_{n})\\
& =\alpha_{1}T(v_{1})+\cdots +\alpha_{n}T(v_{n})\\
& =\alpha_{1}\left (a_{11}w_{1}+a_{21}w_{2}+\cdots +a_{m1}w_{m}\right )+\cdots+\alpha_{n}\left (a_{1n}w_{1}+a_{2n}w_{2}+\cdots +a_{mn}w_{m}\right )\\
& =\left (a_{11}\alpha_{1}+\cdots +a_{1n}\alpha_{n}\right )w_{1}+\cdots +\left (a_{m1}\alpha_{1}+\cdots +a_{mn}\alpha_{n}\right )w_{m}\\
& =(A_{1\cdot}X)w_{1}+\cdots +(A_{m\cdot}X)w_{m}.
\end{align*}
Since the matrix \(A\) depends on the effect of the linear mapping \(T\) and the ordered bases \(\mathfrak{B}_{V}\) and \(\mathfrak{B}_{W}\), we also write \(A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\). The above results can be formally summarized below.

\begin{equation}{\label{lap78}}\mbox{}\tag{21}\end{equation}

Proposition \ref{lap78}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\), and let \(W\) be an \(m\)-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\). For each linear mapping \(T:V\rightarrow W\), there is an \(m\times n\) matrix \(A\) with entries in \({\cal F}\) satisfying
\[[T(v)]_{\mathfrak{B}_{W}}=A[v]_{\mathfrak{B}_{V}}\]
for each \(v\in V\), where \(A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\). Furthermore, \(T\mapsto A\) is a one-to-one correspondence between the set of all linear mappings from \(V\) into \(W\) and the set of all \(m\times n\) matrices over the scalar field \({\cal F}\). \(\sharp\)

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}\), and let \(T\) be a linear operator on \(V\). In this case, we simply write \([T]_{\mathfrak{B}}\) to denote \([T]_{\mathfrak{B}}^{\mathfrak{B}}\). Therefore, from Proposition \ref{lap78}, we have the following formula
\[[T(v)]_{\mathfrak{B}}=[T]_{\mathfrak{B}}[v]_{\mathfrak{B}}\]
for each \(v\in V\).

Example. Let \(V\) and \(W\) be two vector spaces over the same scalar field \({\cal F}\) with \(\dim (V)=2\) and \(\dim (W)=3\). Let \(\mathfrak{B}_{V}=\{v_{1},v_{2}\}\) and \(\mathfrak{B}_{W}=\{w_{1},w_{2},w_{3}\}\). Let \(T:V\rightarrow W\) be a linear mapping satisfying
\[T(v_{1})=3w_{1}-w_{2}+6w_{3}\]

and

\[T(v_{2})=2w_{1}+7w_{2}-5w_{3}.\]
Then, we obtain
\[[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}=\left [\begin{array}{rr}
3 & 2\\ -1 & 7\\ 6 & -5
\end{array}\right ].\]
Therefore, for any \(v\in V\) with coordinate vector \(X=(\alpha_{1},\alpha_{2})\), we have
\begin{align*}
T(v) & =(A_{1\cdot}X)w_{1}+(A_{2\cdot}X)w_{2}+(A_{3\cdot}X)w_{3}\\
& =(3\alpha_{1}+2\alpha_{2})w_{1}+(-\alpha_{1}+7\alpha_{2})w_{2}+(6\alpha_{1}-5\alpha_{2})w_{3}.
\end{align*}

\begin{equation}{\label{laex165}}\mbox{}\tag{22}\end{equation}

Example \ref{laex165}. Let \(V\) and \(W\) be the vector spaces consisting of all polynomials with coefficients in \(\mathbb{R}\) and degrees that are less than or equal to \(2\) and \(3\), respectively. Let \(T:W\rightarrow V\) be a linear mapping defined by \(T(f(x))=f'(x)\). Let \(\mathfrak{B}_{V}=\{1,x,x^{2}\}\) and \(\mathfrak{B}_{W}=\{1,x,x^{2},x^{3}\}\) be the ordered bases for \(V\) and \(W\), respectively. Then, we have
\begin{align*}
T(1) & =0=0\cdot 1+0\cdot x+0\cdot x^{2}\\
T(x) & = 1=1\cdot 1+0\cdot x+0\cdot x^{2}\\
T(x^{2}) & =2x=0\cdot 1+2\cdot x+0\cdot x^{2}\\
T(x^{3}) & =3x^{2}=0\cdot 1+0\cdot x+3\cdot x^{2}
\end{align*}
which says
\[[T]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}
=\left [\begin{array}{cccc}
0 & 1 & 0 & 0\\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 3
\end{array}\right ].\]
Let \(v=f(x)=2-4x+x^{2}+8x^{3}\). Then \(T(v)=T(f(x))=-4+2x+24x^{2}\). Therefore, we obtain
\[[v]_{\mathfrak{B}_{W}}=\left [\begin{array}{r}2\\ -4\\ 1\\ 8\end{array}\right ]\]

and

\[[T(v)]_{\mathfrak{B}_{V}}=\left [\begin{array}{r}-4\\ 2\\ 24\end{array}\right ].\]
According to Proposition \ref{lap78}, we have
\[[T(v)]_{\mathfrak{B}_{V}}=[T]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}[v]_{\mathfrak{B}_{W}},\]
which can be verified by
\begin{align*} [T(v)]_{\mathfrak{B}_{V}} & =\left [\begin{array}{r}
-4\\ 2\\ 24\end{array}\right ]\\ & =\left [\begin{array}{cccc}
0 & 1 & 0 & 0\\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 3
\end{array}\right ]\left [\begin{array}{r}
2\\ -4\\ 1\\ 8\end{array}\right ]
\\ & =[T]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}[v]_{\mathfrak{B}_{W}}.\end{align*}

\begin{equation}{\label{laex166}}\mbox{}\tag{23}\end{equation}

Example \ref{laex166}. Let \(V\) and \(W\) be the vector spaces consisting of all polynomials with coefficients in \(\mathbb{R}\) and degrees that are less than or equal to \(2\) and \(3\), respectively. Let \(T:V\rightarrow W\) be a linear mapping defined by
\[T(f(x))=\int_{0}^{x}f(t)dt.\]
Let \(\mathfrak{B}_{V}=\{1,x,x^{2}\}\) and \(\mathfrak{B}_{W}=\{1,x,x^{2},x^{3}\}\) be the ordered bases for \(V\) and \(W\), respectively. Then, we have
\begin{align*}
T(1) & =\int_{0}^{x}dt=x=0\cdot 1+1\cdot x+0\cdot x^{2}+0\cdot x^{3}\\
T(x) & =\int_{0}^{x}tdt=\frac{x^{2}}{2}=0\cdot 1+0\cdot x+\frac{1}{2}\cdot x^{2}+0\cdot x^{3}\\
T(x^{2}) & =\int_{0}^{x}t^{2}dt=\frac{x^{3}}{3}=0\cdot 1+0\cdot x+0\cdot x^{2}+\frac{1}{3}\cdot x^{3}
\end{align*}
which says
\[[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}
=\left [\begin{array}{ccc}
0 & 0 & 0\\
1 & 0 & 0\\
0 & \frac{1}{2} & 0\\
0 & 0 & \frac{1}{3}
\end{array}\right ].\]
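This coordinate matrix can be checked numerically (an illustrative NumPy sketch; the polynomial \(f\) is a hypothetical example):

```python
import numpy as np

# coordinate matrix of T(f)(x) = integral from 0 to x of f(t) dt, with
# respect to the bases {1, x, x^2} of V and {1, x, x^2, x^3} of W
A = np.array([[0.,  0.,   0.],
              [1.,  0.,   0.],
              [0., 1/2,   0.],
              [0.,  0., 1/3]])

# f(x) = 2 - 4x + 3x^2, so T(f)(x) = 2x - 2x^2 + x^3
v = np.array([2., -4., 3.])
assert np.allclose(A @ v, np.array([0., 2., -2., 1.]))
```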

\begin{equation}{\label{lap15}}\mbox{}\tag{24}\end{equation}

Proposition \ref{lap15}. Let \(V\) and \(W\) be two finite-dimensional vector spaces over the same scalar field \({\cal F}\). Let \(\mathfrak{B}_{V}\) and \(\mathfrak{B}_{W}\) be the ordered bases of \(V\) and \(W\), respectively. Let \(T_{1}\) and \(T_{2}\) be two linear mappings from \(V\) into \(W\). Then, we have
\begin{equation}{\label{laeq163}}\tag{25}
[T_{1}+T_{2}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}=[T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}+[T_{2}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}.
\end{equation}
If \(\alpha\in {\cal F}\) and \(T:V\rightarrow W\) is a linear mapping, then
\begin{equation}{\label{laeq164}}\tag{26}
[\alpha T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}=\alpha [T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}.
\end{equation}

Proof. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be the ordered bases for \(V\) and \(W\), respectively. There exist unique scalars \(\alpha_{ij}\) and \(\beta_{ij}\) for \(i=1,\cdots ,m\) and \(j=1,\cdots ,n\) satisfying
\[T_{1}(v_{j})=\sum_{i=1}^{m}\alpha_{ij}w_{i}\] and
\[T_{2}(v_{j})=\sum_{i=1}^{m}\beta_{ij}w_{i}\]
for \(j=1,\cdots ,n\). Therefore, we obtain
\[(T_{1}+T_{2})(v_{j})=T_{1}(v_{j})+T_{2}(v_{j})=\sum_{i=1}^{m}\left (\alpha_{ij}+\beta_{ij}\right )w_{i},\]
which proves equality (\ref{laeq163}). We can similarly prove equality (\ref{laeq164}), and the proof is complete. \(\blacksquare\)

We recall that \({\cal L}(V,W)\) denotes the space of all linear mappings from \(V\) into \(W\). Now, we denote by \({\cal M}_{m\times n}({\cal F})\) the space of all \(m\times n\) matrices over the scalar field \({\cal F}\).

\begin{equation}{\label{lap16}}\mbox{}\tag{27}\end{equation}

Proposition \ref{lap16}. Let \(V,W,U\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with corresponding ordered bases \(\mathfrak{B}_{V}\), \(\mathfrak{B}_{W}\) and \(\mathfrak{B}_{U}\). Let \(T_{1}:V\rightarrow W\) and \(T_{2}:W\rightarrow U\) be two linear mappings. Then, we have
\[[T_{2}\circ T_{1}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{V}}=[T_{2}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{W}} [T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}.\]

Proof. Let \(A=[T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\) and \(B=[T_{2}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{W}}\). Given any element \(v\in V\), let \(X\) denote its coordinate vector with respect to the basis \(\mathfrak{B}_{V}\). Then the coordinate vector of \(T_{1}(v)\) with respect to the basis \(\mathfrak{B}_{W}\) is \(AX\). By definition, the coordinate vector of \(T_{2}(T_{1}(v))\) with respect to \(\mathfrak{B}_{U}\) is \(B(AX)\). On the other hand, the coordinate vector of \((T_{2}\circ T_{1})(v)\) with respect to \(\mathfrak{B}_{U}\) is \((BA)X\). Since \(B(AX)=(BA)X\) and \(T_{2}(T_{1}(v))=(T_{2}\circ T_{1})(v)\), this completes the proof. \(\blacksquare\)

\begin{equation}{\label{lac18}}\mbox{}\tag{28}\end{equation}

Corollary \ref{lac18}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with two ordered bases \(\mathfrak{B}\) and \({\mathfrak{B}’}\). Let \(id:V\rightarrow V\) be the identity mapping. Then, the matrix \([id]_{\mathfrak{B}}^{\mathfrak{B}’}\) is invertible with the inverse matrix \([id]_{\mathfrak{B}’}^{\mathfrak{B}}\). In other words, we have
\[[id]_{\mathfrak{B}}^{\mathfrak{B}’}[id]_{\mathfrak{B}’}^{\mathfrak{B}}
=I=[id]_{\mathfrak{B}’}^{\mathfrak{B}}[id]_{\mathfrak{B}}^{\mathfrak{B}’}.\]

Proof. The result follows from Proposition \ref{lap16} by taking \(V=W=U\) and \(T_{1}=T_{2}=id\). \(\blacksquare\)

\begin{equation}{\label{lac17}}\mbox{}\tag{29}\end{equation}

Corollary \ref{lac17}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with two ordered bases \(\mathfrak{B}\) and \({\mathfrak{B}’}\). Let \(T\) be a linear operator on \(V\). Then, we have
\[[T]_{\mathfrak{B}’}=[id]_{\mathfrak{B}’}^{\mathfrak{B}}[T]_{\mathfrak{B}}[id]_{\mathfrak{B}}^{\mathfrak{B}’}.\]

Proof. The result follows immediately from Proposition \ref{lap16} and Corollary \ref{lac18}. \(\blacksquare\)

\begin{equation}{\label{lac167}}\mbox{}\tag{30}\end{equation}

Corollary \ref{lac167}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with an ordered basis \(\mathfrak{B}\), and let \(T\) be an invertible linear operator on \(V\). Then, we have
\[[T^{-1}]_{\mathfrak{B}}=[T]_{\mathfrak{B}}^{-1}.\]

Proof. The result follows immediately from Proposition \ref{lap16}, since \(T\circ T^{-1}=id=T^{-1}\circ T\). \(\blacksquare\)

Example. Let \(V\) be the vector space consisting of all polynomials with coefficients in \(\mathbb{R}\) and degrees that are less than or equal to \(2\). Let \(T:V\rightarrow V\) be a linear mapping defined by
\[T(f(x))=f(x)+f'(x)+f''(x).\]
Let \(\mathfrak{B}=\{1,x,x^{2}\}\) be the standard ordered basis for \(V\). Then, we have
\[[T]_{\mathfrak{B}}=\left [\begin{array}{rrr}
1 & 1 & 2\\
0 & 1 & 2\\
0 & 0 & 1
\end{array}\right ].\]
We can compute the inverse matrix
\[[T]_{\mathfrak{B}}^{-1}=\left [\begin{array}{rrr}
1 & -1 & 0\\
0 & 1 & -2\\
0 & 0 & 1
\end{array}\right ].\]
By Corollary \ref{lac167}, we have \([T^{-1}]_{\mathfrak{B}}=[T]_{\mathfrak{B}}^{-1}\). Let \(f(x)=a_{0}+a_{1}x+a_{2}x^{2}\). According to Proposition \ref{lap78}, we have
\begin{align*} [T^{-1}(f(x))]_{\mathfrak{B}} & =[T^{-1}]_{\mathfrak{B}}[f(x)]_{\mathfrak{B}}
\\ & =\left [\begin{array}{rrr}
1 & -1 & 0\\
0 & 1 & -2\\
0 & 0 & 1
\end{array}\right ]\left [\begin{array}{c}
a_{0}\\ a_{1}\\ a_{2}
\end{array}\right ]
\\ & =\left [\begin{array}{c}
a_{0}-a_{1}\\ a_{1}-2a_{2}\\ a_{2}
\end{array}\right ].\end{align*}
Therefore, the inverse linear mapping is given by
\[T^{-1}\left (a_{0}+a_{1}x+a_{2}x^{2}\right )=a_{0}-a_{1}+(a_{1}-2a_{2})x+a_{2}x^{2}.\]
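The computation above can be replayed numerically. The following Python sketch (illustrative only, not part of the original text) verifies that \([T]_{\mathfrak{B}}\) and the displayed inverse multiply to the identity, and encodes the resulting formula for \(T^{-1}\).

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# [T]_B and its inverse from the example above (basis {1, x, x^2}).
T  = [[1, 1, 2], [0, 1, 2], [0, 0, 1]]
Ti = [[1, -1, 0], [0, 1, -2], [0, 0, 1]]

identity = mat_mul(T, Ti)  # should be the 3x3 identity matrix

def T_inv(a0, a1, a2):
    """Coefficients of T^{-1}(a0 + a1 x + a2 x^2), read off from [T]_B^{-1}."""
    return (a0 - a1, a1 - 2 * a2, a2)
```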

Proposition. Let \(V,W,U\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\), and let \(T_{1}:V\rightarrow W\) and \(T_{2}:W\rightarrow U\) be linear mappings. Then, we have
\[\mbox{rank}\left (T_{2}\circ T_{1}\right )\leq\mbox{rank}\left (T_{2}\right )\]

and
\[\mbox{rank}\left (T_{2}\circ T_{1}\right )\leq\mbox{rank}\left (T_{1}\right ).\]

Proof. Proposition \ref{lap184} says that \(\mbox{rank}(T_{2}\circ T_{1})\leq\mbox{rank}(T_{2})\). Therefore, it remains to prove the other inequality. Let \(\mathfrak{B}_{V},\mathfrak{B}_{W},\mathfrak{B}_{U}\) be the ordered bases for \(V,W,U\), respectively. Let \(A=[T_{2}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{W}}\) and \(B=[T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\). By Proposition \ref{lap16}, we have
\[[T_{2}\circ T_{1}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{V}}=[T_{2}]_{\mathfrak{B}_{U}}^{\mathfrak{B}_{W}}
[T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}=AB.\]
By Remark \ref{lap173} and Proposition \ref{lap185}, we obtain
\begin{align*}\mbox{rank}\left (T_{2}\circ T_{1}\right ) & =\mbox{rank}(AB)\\ & \leq\mbox{rank}(B)\\ & =\mbox{rank}\left (T_{1}\right ).\end{align*}
This completes the proof. \(\blacksquare\)

Example. Continued from Examples \ref{laex165} and \ref{laex166}, let \(T_{1}:V\rightarrow W\) and \(T_{2}:W\rightarrow V\) be two linear mappings defined by
\[T_{1}(f(x))=\int_{0}^{x}f(t)dt\] and \[T_{2}(f(x))=f'(x).\]
Then \(T_{2}\circ T_{1}\) is the identity mapping on \(V\). Let \(\mathfrak{B}_{V}=\{1,x,x^{2}\}\) and \(\mathfrak{B}_{W}=\{1,x,x^{2},x^{3}\}\) be the standard ordered bases for \(V\) and \(W\), respectively. Then, by Proposition \ref{lap16}, we have
\begin{align*} [T_{2}\circ T_{1}]_{\mathfrak{B}_{V}} & =[T_{2}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}[T_{1}]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\\ & =
\left [\begin{array}{cccc}
0 & 1 & 0 & 0\\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 3
\end{array}\right ]\left [\begin{array}{ccc}
0 & 0 & 0\\
1 & 0 & 0\\
0 & \frac{1}{2} & 0\\
0 & 0 & \frac{1}{3}
\end{array}\right ]\\ & =\left [\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}\right ]\\ & =[id]_{\mathfrak{B}_{V}}.\end{align*}
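The matrix product above can be checked with a short Python sketch (illustrative only; the two matrices are exactly those computed in the example: differentiation followed by integration).

```python
from fractions import Fraction as F

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# [T2] (differentiation, 3x4) and [T1] (integration, 4x3) in the
# standard bases {1, x, x^2} and {1, x, x^2, x^3}.
T2 = [[0, 1, 0, 0],
      [0, 0, 2, 0],
      [0, 0, 0, 3]]
T1 = [[F(0), F(0), F(0)],
      [F(1), F(0), F(0)],
      [F(0), F(1, 2), F(0)],
      [F(0), F(0), F(1, 3)]]

product = mat_mul(T2, T1)  # should be the 3x3 identity matrix
```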

\begin{equation}{\label{lap168}}\mbox{}\tag{31}\end{equation}

Lemma \ref{lap168}. Let \(V\) and \(W\) be vector spaces over the same scalar field \({\cal F}\), and let \(T:V\rightarrow W\) be an invertible linear mapping. Then \(V\) is finite-dimensional if and only if \(W\) is finite-dimensional. In this case, we have \(\dim (V)=\dim (W)\).

Proof. Suppose that \(V\) is finite-dimensional with a basis \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\). By Corollary \ref{lac160}, we have
\[W=\mbox{Im}(T)=\mbox{span}(T(\mathfrak{B}_{V}))=\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right ),\]
which says that \(W\) is finite-dimensional. For the converse, considering \(T^{-1}\), we can similarly show that if \(W\) is finite-dimensional, then \(V\) is finite-dimensional. Finally, since \(T\) is bijective, we have \(\mbox{nullity}(T)=0\) and
\[\mbox{rank}(T)=\dim (\mbox{Im}(T))=\dim (W).\]
Using (\ref{laeq72}), it follows that \(\dim (V)=\dim (W)\). This completes the proof. \(\blacksquare\)

For the above Lemma \ref{lap168}, we can also refer to the page Linear Mappings. The more general result of Corollary \ref{lac167} is given below.

Theorem. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with ordered bases \(\mathfrak{B}_{V}\) and \(\mathfrak{B}_{W}\), respectively. Let \(T:V\rightarrow W\) be a linear mapping from \(V\) into \(W\). Then \(T\) is invertible if and only if \([T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\) is invertible. Moreover, we have
\begin{equation}{\label{laeq169}}\tag{32}
[T^{-1}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}=\left ([T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\right )^{-1}.
\end{equation}

Proof. Suppose that \(T\) is invertible. Using Lemma \ref{lap168}, we have \(\dim (V)=\dim (W)\). Let \(n=\dim (V)\). Then \([T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\) is an \(n\times n\) matrix. Let \(id_{V}\) and \(id_{W}\) be the identity mappings on \(V\) and \(W\), respectively. Using Proposition \ref{lap16}, we have
\[I_{n}=[id_{V}]_{\mathfrak{B}_{V}}=[T^{-1}\circ T]_{\mathfrak{B}_{V}}=[T^{-1}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}
[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\]
and
\[I_{n}=[id_{W}]_{\mathfrak{B}_{W}}=[T\circ T^{-1}]_{\mathfrak{B}_{W}}=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}
[T^{-1}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}},\]
which proves equality (\ref{laeq169}).

Conversely, suppose that \(A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\) is invertible. By definition, there is an \(n\times n\) matrix \(B\) satisfying \(AB=BA=I_{n}\). Let \(\mathfrak{B}_{V}=\{v_{1},v_{2},\cdots ,v_{n}\}\) be a basis for \(V\), and let \(\mathfrak{B}_{W}=\{w_{1},w_{2},\cdots ,w_{n}\}\) be a basis for \(W\). Let
\[\widehat{v}_{j}=\sum_{i=1}^{n}b_{ij}v_{i}\]
for \(j=1,\cdots ,n\). Lemma \ref{lat10} says that there exists a unique linear mapping \(\widehat{T}:W\rightarrow V\) satisfying
\begin{align*} \widehat{T}(w_{j}) & =\widehat{v}_{j}\\ & =\sum_{i=1}^{n}b_{ij}v_{i}\end{align*}
for \(j=1,\cdots ,n\). It follows \(B=[\widehat{T}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}\). Now, using Proposition \ref{lap16} again, we also have
\begin{align*} [\widehat{T}\circ T]_{\mathfrak{B}_{V}} & =[\widehat{T}]_{\mathfrak{B}_{V}}^{\mathfrak{B}_{W}}
[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\\ & =BA=I_{n}=[id_{V}]_{\mathfrak{B}_{V}},\end{align*}
which says \(\widehat{T}\circ T=id_{V}\). We can similarly show  \(T\circ\widehat{T}=id_{W}\). Therefore, we obtain \(\widehat{T}=T^{-1}\), and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lat170}}\mbox{}\tag{33}\end{equation}

Lemma \ref{lat170}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\). Then \(V\) and \(W\) are isomorphic if and only if \(\dim (V)=\dim (W)\).

Proof. Suppose that \(V\) and \(W\) are isomorphic. Then, there exists an invertible linear mapping \(T:V\rightarrow W\). By Lemma \ref{lap168}, we have \(\dim (V)=\dim (W)\). For the converse, suppose that \(\dim (V)=\dim (W)=n\). Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{n}\}\) be the bases for \(V\) and \(W\), respectively. By Lemma \ref{lat10}, there exists a unique linear mapping \(T:V\rightarrow W\) such that \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Using Corollary \ref{lac160}, we have
\[\mbox{Im}(T)=\mbox{span}\left (\left\{T(v_{1}),\cdots ,T(v_{n})\right\}\right )
=\mbox{span}\left (\left\{w_{1},\cdots ,w_{n}\right\}\right )=W,\]
which says that \(T\) is surjective. Finally, using Theorem \ref{laeq94}, we conclude that \(T\) is invertible. This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lap84}}\mbox{}\tag{34}\end{equation}

Lemma \ref{lap84}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(W\) be an \(m\)-dimensional vector space over the same scalar field \({\cal F}\). Then, the space \({\cal L}(V,W)\) is finite-dimensional with dimension \(mn\).

Proof. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\) be the ordered bases for \(V\) and \(W\), respectively. For integers \(p\in\{1,2,\cdots ,m\}\) and \(q\in\{1,2,\cdots ,n\}\), we define a linear mapping \(T_{pq}\) by
\[T_{pq}(v_{i})=\left\{\begin{array}{ll}
\theta_{W} & \mbox{if \(i\neq q\)}\\
w_{p} & \mbox{if \(i=q\)}.
\end{array}\right .\]
According to Lemma \ref{lat10}, there is a unique linear mapping from \(V\) into \(W\) satisfying these conditions. We claim that the set
\begin{equation}{\label{laeq75}}\tag{35}
\{T_{pq}:p=1,\cdots ,m\mbox{ and }q=1,\cdots ,n\}
\end{equation}
forms a basis for \({\cal L}(V,W)\). Let \(T\) be any linear mapping from \(V\) into \(W\). For each \(j=1,\cdots ,n\), let \(\lambda_{1j},\cdots ,\lambda_{mj}\) be the coordinates of the vector \(T(v_{j})\) in the ordered basis \(\mathfrak{B}_{W}\), i.e.,
\[T(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}.\]
Let
\[\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Then, we have
\begin{equation}{\label{laeq76}}\tag{36}
\widehat{T}(v_{j})=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}(v_{j})=\sum_{p=1}^{m}\lambda_{pj}w_{p}=T(v_{j}),
\end{equation}
which shows
\[T=\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}.\]
Therefore, the space \({\cal L}(V,W)\) is spanned by the set in (\ref{laeq75}). It remains to show the independence. Suppose that \(\widehat{T}=\sum_{p=1}^{m}\sum_{q=1}^{n}\lambda_{pq}T_{pq}\) is the zero mapping. Then \(\widehat{T}(v_{j})=\theta_{W}\) for all \(j=1,\cdots ,n\), which also says
\[\sum_{p=1}^{m}\lambda_{pj}w_{p}=\theta_{W}\]
by (\ref{laeq76}). The independence of the set \(\{w_{1},\cdots ,w_{m}\}\) says \(\lambda_{pj}=0\) for all \(p\) and \(j\). This completes the proof. \(\blacksquare\)
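In coordinates, each \(T_{pq}\) corresponds to the \(m\times n\) matrix with a single \(1\) in row \(p\) and column \(q\). The following Python sketch (illustrative only; the sizes \(m=2\), \(n=3\) are an arbitrary choice) mirrors the argument above: there are \(mn\) such matrices, and any coefficient array \(\lambda\) is recovered as the combination \(\sum_{p,q}\lambda_{pq}T_{pq}\).

```python
def basis_matrix(p, q, m, n):
    """E_pq: the m x n matrix of T_pq, with a single 1 in row p, column q."""
    return [[1 if (i, j) == (p, q) else 0 for j in range(n)] for i in range(m)]

m, n = 2, 3
basis = [basis_matrix(p, q, m, n) for p in range(m) for q in range(n)]

# Any m x n array lam = (lambda_pq) equals the combination
# sum over p, q of lambda_pq * E_pq, entry by entry.
lam = [[4, 5, 6], [7, 8, 9]]
combo = [[sum(lam[p][q] * basis[p * n + q][i][j]
              for p in range(m) for q in range(n))
          for j in range(n)] for i in range(m)]
```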

\begin{equation}{\label{lat10}}\mbox{}\tag{37}\end{equation}

Lemma \ref{lat10}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(\{v_{1},v_{2},\cdots ,v_{n}\}\) be a basis of \(V\). Let \(W\) be another vector space over the same scalar field \({\cal F}\), and let \(w_{1},w_{2},\cdots ,w_{n}\) be any elements in \(W\). Then, there exists a unique linear mapping \(T:V\rightarrow W\) such that \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\).

Proof. Given any element \(v\in V\), there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
Therefore, we can define a mapping \(T:V\rightarrow W\) by
\[T(v)=T\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )
=\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\in W.\]
We are going to show that \(T\) is linear.

Let \(u\) be another element of \(V\). There exist \(\beta_{1},\cdots ,\beta_{n}\in {\cal F}\) satisfying
\[u=\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}.\]
Therefore, we have
\[v+u=(\alpha_{1}+\beta_{1})v_{1}+(\alpha_{2}+\beta_{2})v_{2}+\cdots +(\alpha_{n}+\beta_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(v+u) & =(\alpha_{1}+\beta_{1})w_{1}+(\alpha_{2}+\beta_{2})w_{2}+\cdots +(\alpha_{n}+\beta_{n})w_{n}\\
& =\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )+\left (\beta_{1}w_{1}+\beta_{2}w_{2}+\cdots +\beta_{n}w_{n}\right )\\
& =T(v)+T(u).
\end{align*}
Given any \(\alpha\in {\cal F}\), we have
\[\alpha v=(\alpha\alpha_{1})v_{1}+(\alpha\alpha_{2})v_{2}+\cdots +(\alpha\alpha_{n})v_{n}.\]
By the definition of \(T\), we obtain
\begin{align*}
T(\alpha v) & =(\alpha\alpha_{1})w_{1}+(\alpha\alpha_{2})w_{2}+\cdots +(\alpha\alpha_{n})w_{n}\\
& =\alpha\left (\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\right )=\alpha T(v).
\end{align*}

It is easy to see that this mapping satisfies \(T(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Next, we want to claim that this mapping is unique. Suppose there exists another linear mapping \(\widehat{T}\) satisfying \(\widehat{T}(v_{i})=w_{i}\) for \(i=1,\cdots ,n\). Then, we have
\begin{align*}
\widehat{T}(v) & =\widehat{T}\left (\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\right )\\
& =\alpha_{1}\widehat{T}(v_{1})+\alpha_{2}\widehat{T}(v_{2})+\cdots +\alpha_{n}\widehat{T}(v_{n})\\
& =\alpha_{1}w_{1}+\alpha_{2}w_{2}+\cdots +\alpha_{n}w_{n}\\
& =T(v).
\end{align*}
This shows that \(\widehat{T}=T\), and the proof is complete. \(\blacksquare\)

For the above Lemmas \ref{lat170}, \ref{lap84} and \ref{lat10}, we can also refer to the page Linear Mappings.

\begin{equation}{\label{lat171}}\mbox{}\tag{38}\end{equation}

Theorem \ref{lat171}. Let \(V\) and \(W\) be finite-dimensional vector spaces over the same scalar field \({\cal F}\) with \(\dim (V)=n\) and \(\dim (W)=m\). Let \(\mathfrak{B}_{V}\) and \(\mathfrak{B}_{W}\) be ordered bases for \(V\) and \(W\), respectively. Let \({\cal M}_{m\times n}({\cal F})\) be the family of all \(m\times n\) matrices over \({\cal F}\). Then, the mapping \(\Phi :{\cal L}(V,W)\rightarrow {\cal M}_{m\times n}({\cal F})\) defined by \(\Phi (T)=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}\) for \(T\in {\cal L}(V,W)\) is an isomorphism. In other words, the spaces \({\cal L}(V,W)\) and \({\cal M}_{m\times n}({\cal F})\) are isomorphic.

Proof. Using Lemma \ref{lap84}, the space \({\cal L}(V,W)\) is finite-dimensional with dimension \(mn\). It is also obvious that the dimension of the vector space \({\cal M}_{m\times n}({\cal F})\) is \(mn\). Using Lemma \ref{lat170}, we complete the proof. \(\blacksquare\)

Proof. (Alternate proof of Theorem \ref{lat171}) According to Proposition \ref{lap15}, the mapping \(\Phi\) is linear. Therefore, it remains to show that \(\Phi\) is bijective. Let \(\mathfrak{B}_{V}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}_{W}=\{w_{1},\cdots ,w_{m}\}\). Given any \(A\in {\cal M}_{m\times n}({\cal F})\), let
\[\widehat{w}_{j}=\sum_{i=1}^{m}a_{ij}w_{i}\]
for \(j=1,\cdots ,n\). Lemma \ref{lat10} says that there exists a unique linear mapping \(T:V\rightarrow W\) satisfying
\[T(v_{j})=\widehat{w}_{j}=\sum_{i=1}^{m}a_{ij}w_{i}\]
for \(j=1,\cdots ,n\). This says
\[A=[T]_{\mathfrak{B}_{W}}^{\mathfrak{B}_{V}}=\Phi (T),\]
which shows that \(\Phi\) is surjective. Moreover, if \(\Phi (T_{1})=\Phi (T_{2})\), then \(T_{1}(v_{j})=T_{2}(v_{j})\) for \(j=1,\cdots ,n\), so the uniqueness in Lemma \ref{lat10} says \(T_{1}=T_{2}\). Therefore, \(\Phi\) is indeed bijective. This completes the proof. \(\blacksquare\)

In the sequel, we would like to study the case when the ordered basis is changed. For the sake of simplicity, we shall consider the linear operator on a finite-dimensional vector space. More specifically, let \(V\) be a finite-dimensional vector space with two ordered bases \(\mathfrak{B}\) and \({\mathfrak{B}’}\), and let \(T\) be a linear operator on \(V\). We would like to study the relation between \([T]_{\mathfrak{B}}\) and \([T]_{\mathfrak{B}’}\).

\begin{equation}{\label{lat70}}\mbox{}\tag{39}\end{equation}

Lemma \ref{lat70}. Suppose that \(V\) is a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and that \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}’=\{v’_{1},\cdots ,v’_{n}\}\) are two ordered bases for \(V\). Then, there exists a unique \(n\times n\) invertible matrix \(P\) satisfying
\[[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}\mbox{ and }[v]_{\mathfrak{B}’}=P^{-1}[v]_{\mathfrak{B}}\]
for any \(v\in V\), where the columns of \(P\) are given by \(P_{\cdot j}=[v’_{j}]_{\mathfrak{B}}\) for \(j=1,\cdots ,n\).

Proof. For each \(v’_{j}\), there exist unique \(p_{ij}\) for \(i,j=1,\cdots ,n\) satisfying
\begin{equation}{\label{laeq67}}\tag{40}
v’_{j}=\sum_{i=1}^{n}p_{ij}v_{i}.
\end{equation}
Let \((\alpha^{\prime}_{1},\cdots ,\alpha^{\prime}_{n})\) be the coordinate vector of a given vector \(v\in V\) in the ordered basis \(\mathfrak{B}’\). Then, using (\ref{laeq67}), we have
\begin{align*}
v & =\alpha^{\prime}_{1}v’_{1}+\alpha^{\prime}_{2}v’_{2}+\cdots +\alpha^{\prime}_{n}v’_{n}=\sum_{j=1}^{n}\alpha^{\prime}_{j}v’_{j}\\
& =\sum_{j=1}^{n}\alpha^{\prime}_{j}\left (\sum_{i=1}^{n}p_{ij}v_{i}\right )\\
& =\sum_{j=1}^{n}\sum_{i=1}^{n}p_{ij}\alpha^{\prime}_{j}v_{i}\\
& = \sum_{i=1}^{n}\left (\sum_{j=1}^{n}p_{ij}\alpha^{\prime}_{j}\right )v_{i}.
\end{align*}
Let \((\alpha_{1},\cdots ,\alpha_{n})\) be the coordinate vector of \(v\) in the ordered basis \(\mathfrak{B}\). The uniqueness says
\begin{equation}{\label{laeq68}}\tag{41}
\alpha_{i}=\sum_{j=1}^{n}p_{ij}\alpha^{\prime}_{j}
\end{equation}
for \(i=1,\cdots ,n\). Let \(P\) be the \(n\times n\) matrix with entries \(p_{ij}\). Then, we have
\[[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}.\]
If \(P[v]_{\mathfrak{B}’}=[v]_{\mathfrak{B}}={\bf 0}\), then \(v=\theta_{V}\), which implies \([v]_{\mathfrak{B}’}={\bf 0}\); that is, the system \(PX={\bf 0}\) has only the trivial solution. Proposition \ref{lap69} says that \(P\) is invertible. Therefore, we obtain
\[[v]_{\mathfrak{B}’}=P^{-1}[v]_{\mathfrak{B}}.\]
This completes the proof. \(\blacksquare\)

For the above Lemma \ref{lat70}, we can also refer to the page Vector Spaces.

\begin{equation}{\label{lat83}}\mbox{}\tag{42}\end{equation}

Theorem \ref{lat83}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), let \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) and \({\mathfrak{B}’}=\{v’_{1},\cdots ,v’_{n}\}\) be two ordered bases for \(V\), and let \(T\) be a linear operator on \(V\). If \(P\) is the \(n\times n\) matrix with columns \(P_{\cdot j} =[v’_{j}]_{\mathfrak{B}}\), then we have
\[[T]_{\mathfrak{B}’}=P^{-1}[T]_{\mathfrak{B}}P.\]
Alternatively, if \(U\) is an invertible operator on \(V\) defined by \(U(v_{j})=v’_{j}\) for \(j=1,\cdots ,n\), then we have
\[[T]_{\mathfrak{B}’}=[U]_{\mathfrak{B}}^{-1}[T]_{\mathfrak{B}}[U]_{\mathfrak{B}}.\]

Proof. Using Lemma \ref{lat70}, there exists a unique \(n\times n\) invertible matrix \(P\) satisfying
\begin{equation}{\label{laeq79}}\tag{43}
[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}
\end{equation}
for any \(v\in V\), where the \(j\)th column \(P_{\cdot j}=[v’_{j}]_{\mathfrak{B}}\). By definition, we also have
\begin{equation}{\label{laeq80}}\tag{44}
[T(v)]_{\mathfrak{B}}=[T]_{\mathfrak{B}}[v]_{\mathfrak{B}}\mbox{ and }
[T(v)]_{\mathfrak{B}’}=[T]_{\mathfrak{B}’}[v]_{\mathfrak{B}’}.
\end{equation}
Substituting \(v\) by \(T(v)\) in (\ref{laeq79}), we obtain
\begin{equation}{\label{laeq81}}\tag{45}
[T(v)]_{\mathfrak{B}}=P[T(v)]_{\mathfrak{B}’}.
\end{equation}
Combining (\ref{laeq79}), (\ref{laeq80}) and (\ref{laeq81}), we obtain
\[[T]_{\mathfrak{B}}P[v]_{\mathfrak{B}’}=P[T(v)]_{\mathfrak{B}’},\]
which also says
\begin{equation}{\label{laeq82}}\tag{46}
P^{-1}[T]_{\mathfrak{B}}P[v]_{\mathfrak{B}’}=[T(v)]_{\mathfrak{B}’}=[T]_{\mathfrak{B}’}[v]_{\mathfrak{B}’}.
\end{equation}
Since (\ref{laeq82}) holds for any \(v\in V\), we must have
\[P^{-1}[T]_{\mathfrak{B}}P=[T]_{\mathfrak{B}’}.\]
On the other hand, there is a unique linear operator \(U\) on \(V\) which carries \(\mathfrak{B}\) onto \(\mathfrak{B}’\) defined by \(U(v_{j})=v’_{j}\) for \(j=1,\cdots ,n\). This operator is invertible, since it carries a basis for \(V\) onto a basis for \(V\). We claim \(P=[U]_{\mathfrak{B}}\). Since \(P\) is defined by
\[v’_{j}=\sum_{i=1}^{n}p_{ij}v_{i}\]
and \(U(v_{j})=v’_{j}\), we have
\[U(v_{j})=\sum_{i=1}^{n}p_{ij}v_{i},\]
which says \(P=[U]_{\mathfrak{B}}\) by definition. This completes the proof. \(\blacksquare\)

Example. Let \(T\) be a linear operator on \(\mathbb{R}^{2}\) defined by \(T(x_{1},x_{2})=(x_{1},0)\). For the standard ordered basis \(\mathfrak{B}=\{{\bf e}_{1}=(1,0),{\bf e}_{2}=(0,1)\}\), we can obtain
\[[T]_{\mathfrak{B}}=\left [\begin{array}{cc}
1 & 0\\ 0 & 0\end{array}\right ].\]
Suppose that
\[\mathfrak{B}’=\left\{v’_{1}=(1,1),v’_{2}=(2,1)\right\}\]
is another ordered basis for \(\mathbb{R}^{2}\). Then, we have
\[v’_{1}={\bf e}_{1}+{\bf e}_{2}\] and \[v’_{2}=2{\bf e}_{1}+{\bf e}_{2},\]
which says
\[P=\left [\begin{array}{cc}
1 & 2\\ 1 & 1\end{array}\right ].\]
Therefore, we can obtain
\[P^{-1}=\left [\begin{array}{cc}
-1 & 2\\ 1 & -1\end{array}\right ].\]
Theorem \ref{lat83} says
\begin{align*} [T]_{\mathfrak{B}’} & =P^{-1}[T]_{\mathfrak{B}}P\\ & =\left [\begin{array}{cc} -1 & -2\\ 1 & 2\end{array}\right ].\end{align*}
This also says
\[T(v’_{1})=(1,0)=-v’_{1}+v’_{2}\] and \[T(v’_{2})=(2,0)=-2v’_{1}+2v’_{2}.\]
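The change-of-basis computation in this example can be replayed numerically. The following Python sketch (illustrative only, not part of the original text) checks that \(P^{-1}\) really inverts \(P\) and that \(P^{-1}[T]_{\mathfrak{B}}P\) equals the matrix found above.

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# [T]_B, P and P^{-1} from the example above.
T_B   = [[1, 0], [0, 0]]
P     = [[1, 2], [1, 1]]
P_inv = [[-1, 2], [1, -1]]

T_Bp = mat_mul(mat_mul(P_inv, T_B), P)  # [T]_{B'}
```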

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). A basis \(\mathfrak{B}\) of \(V\) is said to diagonalize \(T\) if and only if \([T]_{\mathfrak{B}}\) is a diagonal matrix. If there exists a basis which diagonalizes \(T\), then we say that \(T\) is diagonalizable. Let \(A\) be an \(n\times n\) matrix in \({\cal F}\). We say that \(A\) can be diagonalized if and only if the linear mapping \(T:{\cal F}^{n}\rightarrow {\cal F}^{n}\) represented by \(A\) can be diagonalized.

Corollary. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with basis \(\mathfrak{B}\), and let \(T\) be a linear operator on \(V\). Then \(T\) can be diagonalized if and only if there exists an invertible matrix \(N\) in \({\cal F}\) such that \(N^{-1}[T]_{\mathfrak{B}}N\) is a diagonal matrix.

Proof. The result follows immediately from Corollaries \ref{lac18} and \ref{lac17}. \(\blacksquare\)

\begin{equation}{\label{lat29}}\mbox{}\tag{47}\end{equation}

Lemma \ref{lat29}. We have the following properties.

(i) Let \(v,w\in\mathbb{R}^{2}\). The area of the parallelogram spanned by \(v\) and \(w\) is \(|\det (v,w)|\).

(ii) Let \(u,v,w\in\mathbb{R}^{3}\). The volume of the box spanned by \(u\), \(v\) and \(w\) is \(|\det (u,v,w)|\). \(\sharp\)

Part (i) of Lemma \ref{lat29} can be interpreted in terms of linear mappings. Let \(E_{1}\) and \(E_{2}\) be the standard unit vectors. Given \(v,w\in\mathbb{R}^{2}\), there exists a unique linear mapping \(T:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) satisfying \(T(E_{1})=v\) and \(T(E_{2})=w\). If \(v=aE_{1}+cE_{2}\) and \(w=bE_{1}+dE_{2}\), then the matrix associated with \(T\) is given by
\[A_{T}=\left [\begin{array}{cc}
a & b\\ c & d
\end{array}\right ].\]
Let \(C\) be the unit square spanned by \(E_{1}\) and \(E_{2}\), and let \(P\) be the parallelogram spanned by \(v\) and \(w\). Then \(P=T(C)\). Indeed, for any \(c\in C\), there exist \(t_{1},t_{2}\in [0,1]\) satisfying \(c=t_{1}E_{1}+t_{2}E_{2}\). Therefore, we have
\begin{align*} T(c) & =T(t_{1}E_{1}+t_{2}E_{2})\\ & =t_{1}T(E_{1})+t_{2}T(E_{2})\\ & =t_{1}v+t_{2}w,\end{align*}
which shows \(T(C)=P\). Now, we define the determinant of \(T\) as the determinant of its associated matrix \(A_{T}\), i.e., \(\det (T)=\det (A_{T})\).

Theorem. Let \(P\) be a parallelogram spanned by two vectors in \(\mathbb{R}^{2}\), and let \(T:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) be a linear mapping. Then, we have
\[\mbox{Area of \(T(P)\)}=|\det (T)|\cdot (\mbox{Area of \(P\)}).\]
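As a numerical illustration of this theorem (a Python sketch, not part of the original text; the mapping \(T\) and the spanning vectors \(v,w\) are arbitrary choices), we compare the area of \(T(P)\) with \(|\det (T)|\) times the area of \(P\), using part (i) of Lemma \ref{lat29} to compute each area.

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def apply(A, v):
    """Apply the linear mapping with matrix A to the vector v."""
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def parallelogram_area(v, w):
    """Area of the parallelogram spanned by v and w: |det(v, w)|."""
    return abs(v[0] * w[1] - v[1] * w[0])

# A hypothetical linear mapping T and a parallelogram P spanned by v, w.
T = [[2, 1], [0, 3]]   # det(T) = 6
v, w = (1, 0), (1, 2)  # area of P = 2

area_P  = parallelogram_area(v, w)
area_TP = parallelogram_area(apply(T, v), apply(T, w))
```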

 

Hsien-Chung Wu