The Rational Normal Form

Here, we shall consider linear operators whose characteristic polynomials cannot necessarily be factored completely into linear factors over the scalar field.

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Given any \(v\in V\), the subspace
\[W=\mbox{span}\left (\left\{v,T(v),T^{2}(v),\cdots\right\}\right )\]
is called the \(T\)-cyclic subspace of \(V\) generated by \(v\).

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(W\) be a subspace of \(V\). We say that \(W\) is invariant under \(T\) or \(T\)-invariant when \(T(W)\subseteq W\).

The above two definitions also appear on the page Eigenvalues and Eigenvectors.

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Given any vector \(v\) in \(V\), there is a smallest \(T\)-invariant subspace of \(V\) containing \(v\), which can be defined as the intersection of all \(T\)-invariant subspaces of \(V\) containing \(v\). Let \(W\) be a \(T\)-invariant subspace containing \(v\). Then, we have \(T(v)\in W\), which also says \(T^{2}(v)=T(T(v))\in W\). Therefore, the \(T\)-invariant subspace \(W\) must contain \(T^{2}(v),T^{3}(v)\), etc. In other words, \(W\) must contain \((f(T))(v)\) for every polynomial \(f\) over \({\cal F}\). If we take \(\widehat{W}\) as the set of all vectors of the form \((f(T))(v)\) for every polynomial \(f\) over \({\cal F}\), then it is obvious that \(\widehat{W}\) is the smallest \(T\)-invariant subspace of \(V\) containing \(v\).

According to Definition \ref{lad227}, the \(T\)-cyclic subspace generated by \(v\) is denoted by \(\langle v\rangle_{T}\) which consists of all vectors of the form \((f(T))(v)\) for every polynomial \(f\) over \({\cal F}\). In other words, we have
\[\langle v\rangle_{T}=\mbox{span}\left (\left\{v,T(v),T^{2}(v),\cdots\right\}\right ).\]
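As a quick numerical illustration (a sketch, not part of the development itself), \(\dim\langle v\rangle_{T}\) can be computed as the rank of the Krylov matrix \([v\,|\,T(v)\,|\,T^{2}(v)\,|\,\cdots ]\); the matrix below is borrowed from the worked example near the end of this page.

```python
import numpy as np

# Matrix representing T on R^3 (the matrix from the worked example later on this page).
T = np.array([[5., -6., -6.],
              [-1., 4., 2.],
              [3., -6., -4.]])
v = np.array([1., 0., 0.])

# The columns v, T v, T^2 v, ... span the T-cyclic subspace <v>_T, so its
# dimension is the rank of the Krylov matrix (powers beyond n-1 add nothing).
n = T.shape[0]
K = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(n)])
dim_cyclic = np.linalg.matrix_rank(K)
```

Here \(\dim\langle v\rangle_{T}=2\): \(v\) and \(T(v)\) are linearly independent, while \(T^{2}(v)\) already depends on them.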

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Given any vector \(v\in V\),
if \(\langle v\rangle_{T}=V\), then \(v\) is called a cyclic vector for \(T\). \(\sharp\)

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(M(v,T)\) be the set consisting of all polynomials \(f\) satisfying \((f(T))(v)=\theta\). The \(T\)-annihilator of \(v\) is the unique monic polynomial \(p_{v}\) in \(M(v,T)\) such that each polynomial \(f\in M(v,T)\) can be expressed as \(f=p_{v}\cdot g\) for some polynomial \(g\) over \({\cal F}\), where \(g\) is not necessarily in \(M(v,T)\). \(\sharp\)

We can see that \(\deg p_{v}>0\) unless \(v\) is the zero vector.

\begin{equation}{\label{lap130}}\mbox{}\tag{1}\end{equation}
Proposition \ref{lap130}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Given any \(v\neq\theta\) in \(V\), let \(p_{v}\) be the \(T\)-annihilator of \(v\). Then, we have the following properties.

(i) We have \(\deg p_{v}=\dim\langle v\rangle_{T}\).

(ii) If \(\deg p_{v}=r\), then \(\{v,T(v),\cdots ,T^{r-1}(v)\}\) forms a basis for \(\langle v\rangle_{T}\).

(iii) If the linear operator \(U=T|_{\langle v\rangle_{T}}\) is the restriction of \(T\) on \(\langle v\rangle_{T}\), then the minimal polynomial for \(U\) is \(p_{v}\). \(\sharp\)
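Parts (i) and (ii) can be checked computationally: grow the sequence \(v,T(v),T^{2}(v),\cdots\) until a linear dependence appears, then solve for the coefficients of the annihilator. A SymPy sketch with a made-up \(2\times 2\) matrix:

```python
import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([[0, -2], [1, 3]])   # a made-up example (companion matrix of x^2 - 3x + 2)
v = sp.Matrix([1, 0])

# Collect v, T v, T^2 v, ... while they stay linearly independent.
basis = [v]
w = T * v
while sp.Matrix.hstack(*basis, w).rank() > len(basis):
    basis.append(w)
    w = T * w

# Now w = T^r v lies in span(basis); solve K c = w for the coefficients.
r = len(basis)                     # r = deg p_v = dim <v>_T, as in part (i)
K = sp.Matrix.hstack(*basis)
c = (K.T * K).solve(K.T * w)       # exact normal equations (K has full column rank)

# T-annihilator: p_v(x) = x^r - c_{r-1} x^{r-1} - ... - c_0
p_v = sp.expand(x**r - sum(c[i] * x**i for i in range(r)))
```

For this example the loop stops at \(r=2\) and recovers \(p_{v}(x)=x^{2}-3x+2\), matching \(\deg p_{v}=\dim\langle v\rangle_{T}\).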

Theorem. (Cayley-Hamilton) Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). If \(P_{T}\) is the characteristic polynomial for \(T\), then \(P_{T}(T)\) is a zero mapping, i.e., \((P_{T}(T))(v)=\theta\) for all \(v\in V\).

The above Cayley-Hamilton Theorem is also discussed on the page Eigenvalues and Eigenvectors.
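The Cayley-Hamilton Theorem is easy to verify numerically for a concrete matrix: evaluate the characteristic polynomial at the matrix itself (here by Horner's scheme) and check that the result is the zero matrix. A SymPy sketch with a sample matrix:

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[5, -6, -6], [-1, 4, 2], [3, -6, -4]])   # sample matrix

# Monic characteristic polynomial coefficients [1, c_{n-1}, ..., c_0].
coeffs = A.charpoly(x).all_coeffs()

# Horner evaluation of the polynomial at the matrix A.
P_of_A = sp.zeros(3, 3)
for c in coeffs:
    P_of_A = P_of_A * A + c * sp.eye(3)
```

The loop computes \(A^{3}+c_{2}A^{2}+c_{1}A+c_{0}I\), which the theorem asserts is the zero matrix.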

In Proposition \ref{lap130}, if \(v\) happens to be a cyclic vector for \(T\), then the minimal polynomial for \(T\) is \(p_{v}\), whose degree equals \(\dim (V)\). Therefore, the Cayley-Hamilton Theorem says that the minimal polynomial for \(T\) is the characteristic polynomial for \(T\). We shall prove later that, given any operator \(T\), there exists \(v\in V\) such that the minimal polynomial for \(T\) is the \(T\)-annihilator of \(v\). It follows that \(T\) has a cyclic vector if and only if the minimal and characteristic polynomials for \(T\) are identical.

Suppose that \(\dim (V)=n\) and that \(T\) is a linear operator on \(V\) which has a cyclic vector \(v\). Then, Proposition \ref{lap130} says that \(\mathfrak{B}=\{v,T(v),\cdots ,T^{n-1}(v)\}\) is a basis for \(V\), and the annihilator \(p_{v}\) of \(v\) is the minimal and characteristic polynomial for \(T\). Since \(\deg p_{v}=n\) and \(p_{v}\) is a monic polynomial, we may write it as
\[p_{v}(x)=c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{n-1}x^{n-1}+x^{n}\]
for some \(c_{i}\in {\cal F}\), \(i=0,1,\cdots ,n-1\). Let \(v_{i}=T^{i-1}(v)\) for \(i=1,\cdots ,n\). Then we have \(T(v_{i})=v_{i+1}\) for \(i=1,\cdots ,n-1\). Since \((p_{v}(T))(v)=\theta\), we also have
\begin{align*}
\theta & =c_{0}v+c_{1}T(v)+c_{2}T^{2}(v)+\cdots +c_{n-1}T^{n-1}(v)+T^{n}(v)\\
& =c_{0}v_{1}+c_{1}v_{2}+c_{2}v_{3}+\cdots +c_{n-1}v_{n}+T^{n}(v),
\end{align*}
which implies
\[T^{n}(v)=-c_{0}v_{1}-c_{1}v_{2}-c_{2}v_{3}-\cdots -c_{n-1}v_{n}.\]
Therefore, we obtain
\begin{equation}{\label{laeq131}}\tag{2}
A=[T]_{\mathfrak{B}}=\left [\begin{array}{cccccc}
0 & 0 & 0 & \cdots & 0 & -c_{0}\\
1 & 0 & 0 & \cdots & 0 & -c_{1}\\
0 & 1 & 0 & \cdots & 0 & -c_{2}\\
\vdots & \vdots & \vdots & & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & -c_{n-1}
\end{array}\right ].
\end{equation}
The matrix given in (\ref{laeq131}) is called the companion matrix of the annihilator \(p_{v}\) of \(v\). We can show that the characteristic polynomial of \(A\) is given by
\begin{equation}{\label{laeq247}}\tag{3}
\det (A-\lambda I_{n})=(-1)^{n}\cdot\left (c_{0}+c_{1}\lambda +\cdots +c_{n-1}\lambda^{n-1}+\lambda^{n}\right ).
\end{equation}
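A small SymPy sketch that builds the companion matrix (2) from the coefficients \(c_{0},\cdots ,c_{n-1}\) and confirms that the monic polynomial \(\det (\lambda I_{n}-A)\) is exactly \(c_{0}+c_{1}\lambda +\cdots +\lambda^{n}\) (the example polynomial is made up):

```python
import sympy as sp

x = sp.symbols('x')

def companion(coeffs):
    """Companion matrix of the monic polynomial c0 + c1 x + ... + x^n,
    in the form shown in (2): ones on the subdiagonal, -c_i in the last column."""
    n = len(coeffs)
    A = sp.zeros(n, n)
    for i in range(n - 1):
        A[i + 1, i] = 1
    for i in range(n):
        A[i, n - 1] = -coeffs[i]
    return A

# Example: p(x) = 2 - 3x + x^2, i.e., (x-1)(x-2); coefficients [c0, c1].
A = companion([2, -3])
char = A.charpoly(x).as_expr()   # SymPy returns the monic det(x I - A)
```

For this example `char` comes out as \(x^{2}-3x+2\), as (3) predicts up to the sign convention.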

\begin{equation}{\label{lap132}}\tag{4}\mbox{}\end{equation}
Proposition \ref{lap132}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Then \(T\) has a cyclic vector if and only if there exists an ordered basis for \(V\) such that \(T\) is represented by the companion matrix of the minimal polynomial for \(T\).

Proof. We have just shown that if \(T\) has a cyclic vector, then there exists an ordered basis for \(V\) such that \(T\) is represented by the companion matrix of the minimal polynomial for \(T\). Conversely, if there exists an ordered basis \(\{v_{1},v_{2},\cdots ,v_{n}\}\) for \(V\) such that \(T\) is represented by the companion matrix of the minimal polynomial for \(T\), then it is not hard to show that \(v_{1}\) is a cyclic vector for \(T\). This completes the proof. \(\blacksquare\)

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Given any \(v\in V\), if the linear operator \(U=T|_{\langle v\rangle_{T}}\) is the restriction of \(T\) on \(\langle v\rangle_{T}\), then \(v\) is a cyclic vector for \(U\). Proposition \ref{lap132} says that there exists an ordered basis for \(\langle v\rangle_{T}\) such that \(U\) is represented by the companion matrix of the minimal polynomial for \(U\). If \(W\) is any subspace of a finite-dimensional vector space \(V\) over the scalar field \({\cal F}\), then there exists a subspace \(W'\) satisfying \(V=W\oplus W'\). Such a subspace \(W'\) is called complementary to \(W\), and in general there are many complementary subspaces to \(W\). We are interested in whether a \(T\)-invariant subspace \(W\) has a complementary subspace \(W'\) that is also \(T\)-invariant. Now, suppose that \(V=W\oplus W'\), where \(W\) and \(W'\) are both \(T\)-invariant. Given any \(v\in V\), there exist \(w\in W\) and \(w'\in W'\) such that \(v=w+w'\). If \(f\) is any polynomial over \({\cal F}\), then
\[(f(T))(v)=(f(T))(w)+(f(T))(w').\]
Since \(W\) and \(W'\) are \(T\)-invariant, we have \((f(T))(w)\in W\) and \((f(T))(w')\in W'\), which says that \((f(T))(v)\in W\) if and only if \((f(T))(w')=\theta\). Therefore, if \((f(T))(v)\in W\), then \((f(T))(v)=(f(T))(w)\).

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). A subspace \(W\) of \(V\) is said to be \(T\)-admissible when the following conditions are satisfied.

  • \(W\) is \(T\)-invariant.
  • Given any \(v\in V\) and any polynomial \(f\) over \({\cal F}\), if \((f(T))(v)\in W\), then there exists \(w\in W\) satisfying \((f(T))(v)=(f(T))(w)\). \(\sharp\)

We have just shown that if \(W\) is \(T\)-invariant and has a complementary \(T\)-invariant subspace, then \(W\) is \(T\)-admissible.

\begin{equation}{\label{lat133}}\tag{5}\mbox{}\end{equation}
Theorem \ref{lat133}. (Cyclic Decomposition Theorem). Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(W_{0}\) be a proper \(T\)-admissible subspace of \(V\). Then, there exist non-zero vectors \(v_{1},\cdots ,v_{r}\) in \(V\) with respective \(T\)-annihilators \(p_{v_{1}},\cdots ,p_{v_{r}}\) such that

(a) \(V=W_{0}\oplus\langle v_{1}\rangle_{T}\oplus\langle v_{2}\rangle_{T}\oplus\cdots\oplus\langle v_{r}\rangle_{T}\);

(b) \(p_{v_{k}}\) divides \(p_{v_{k-1}}\) for \(k=2,\cdots ,r\).

Furthermore, the integer \(r\) and the annihilators \(p_{v_{1}},\cdots ,p_{v_{r}}\) are uniquely determined by (a) and (b), and the fact that no \(v_{k}\) is the zero vector. \(\sharp\)

Corollary. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Then every \(T\)-admissible subspace has a complementary subspace which is also \(T\)-invariant.

Proof. Let \(W_{0}\) be a \(T\)-admissible subspace of \(V\). If \(W_{0}=V\), the complementary subspace is \(\{\theta\}\). If \(W_{0}\) is a proper subspace of \(V\), then applying Theorem \ref{lat133}, we take
\[W’_{0}=\langle v_{1}\rangle_{T}\oplus\langle v_{2}\rangle_{T}\oplus\cdots\oplus\langle v_{r}\rangle_{T}.\]
Then \(W’_{0}\) is \(T\)-invariant, and \(V=W_{0}\oplus W’_{0}\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lac138}}\tag{6}\mbox{}\end{equation}
Corollary \ref{lac138}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Then, we have the following properties.

(i) There exists a vector \(v\in V\) such that the \(T\)-annihilator of \(v\) is the minimal polynomial for \(T\).

(ii) \(T\) has a cyclic vector if and only if the characteristic and minimal polynomials for \(T\) are identical.

Proof. To prove part (i), apply Theorem \ref{lat133} with \(W_{0}=\{\theta\}\). Since each \(p_{v_{k}}\) divides \(p_{v_{1}}\), the polynomial \(p_{v_{1}}\) annihilates every summand \(\langle v_{k}\rangle_{T}\), so \(p_{v_{1}}(T)\) is a zero mapping on \(V\), and the minimal polynomial \(p\) for \(T\) divides \(p_{v_{1}}\). On the other hand, \((p(T))(v_{1})=\theta\) says that \(p\in M(v_{1},T)\), so \(p_{v_{1}}\) divides \(p\). Since both polynomials are monic, the minimal polynomial for \(T\) is the \(T\)-annihilator of \(v_{1}\). To prove part (ii), according to the previous discussion, we have shown that if \(T\) has a cyclic vector, then the characteristic and minimal polynomials for \(T\) are identical. For the converse, we choose a vector \(v\) as in part (i). Then, the degree of the minimal polynomial for \(T\) is \(\dim (V)\), so that \(\dim\langle v\rangle_{T}=\deg p_{v}=\dim (V)\), which says \(V=\langle v\rangle_{T}\). This completes the proof. \(\blacksquare\)

Theorem.  (Generalized Cayley-Hamilton Theorem). Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(p\) and \(f\) be the minimal and characteristic polynomials for \(T\), respectively. Then, we have the following properties.

(i) \(p\) divides \(f\);

(ii) \(p\) and \(f\) have the same prime factors, except for multiplicities.

(iii) If \(p=f_{1}^{t_{1}}f_{2}^{t_{2}}\cdots f_{r}^{t_{r}}\) is the prime factorization of minimal polynomial \(p\), then the characteristic polynomial \(f\) can be expressed as \(f=f_{1}^{d_{1}}f_{2}^{d_{2}}\cdots f_{r}^{d_{r}}\), where
\[d_{i}=\frac{\dim\mbox{Ker}(f_{i}(T)^{t_{i}})}{\deg f_{i}}.\]
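The formula in part (iii) can be tested on a made-up block-diagonal operator; SymPy's `nullspace` gives \(\dim\mbox{Ker}\), and `charpoly` returns the monic \(\det (xI-A)\):

```python
import sympy as sp

x = sp.symbols('x')

# Made-up operator on R^3: block diagonal with the companion matrix of x^2 + 1
# and the 1x1 block [2]; its minimal polynomial is p = (x^2 + 1)(x - 2).
A = sp.Matrix([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 2]])

# Prime factors of p: f1 = x^2 + 1 (t1 = 1) and f2 = x - 2 (t2 = 1).
f1_of_A = A**2 + sp.eye(3)
f2_of_A = A - 2*sp.eye(3)

# d_i = dim Ker(f_i(T)^{t_i}) / deg f_i, as in part (iii).
d1 = len(f1_of_A.nullspace()) // 2
d2 = len(f2_of_A.nullspace()) // 1

# The characteristic polynomial should then be f1^{d1} * f2^{d2} (up to the sign (-1)^n).
char = A.charpoly(x).as_expr()
```

Here \(d_{1}=d_{2}=1\), and indeed the monic characteristic polynomial is \((x^{2}+1)(x-2)\).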

Corollary. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a nilpotent linear operator on \(V\). Then, the characteristic polynomial for \(T\) is \(x^{n}\). \(\sharp\)

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). According to Theorem \ref{lat133}, let \(d_{i}=\dim\langle v_{i}\rangle_{T}\), i.e., \(d_{i}=\deg p_{v_{i}}\), and let \(\mathfrak{B}_{v_{i}}=\{v_{i},T(v_{i}),\cdots ,T^{d_{i}-1}(v_{i})\}\) be the cyclic ordered basis for \(\langle v_{i}\rangle_{T}\). Let \(T_{i}=T|_{\langle v_{i}\rangle_{T}}\) be the restriction of \(T\) on \(\langle v_{i}\rangle_{T}\). Then \(A_{i}=[T_{i}]_{\mathfrak{B}_{v_{i}}}\) is the companion matrix of the polynomial \(p_{v_{i}}\) such that \(p_{v_{k}}\) divides \(p_{v_{k-1}}\) for \(k=2,\cdots ,r\). Now, let \(\mathfrak{B}=\bigcup_{i=1}^{r}\mathfrak{B}_{v_{i}}\) be the ordered basis for \(V\) arranged in the order \(\mathfrak{B}_{v_{1}},\cdots ,\mathfrak{B}_{v_{r}}\). Then, we have
\begin{equation}{\label{laeq140}}\tag{7}
A=[T]_{\mathfrak{B}}=\left [\begin{array}{cccc}
A_{1} & {\bf 0} & \cdots & {\bf 0}\\
{\bf 0} & A_{2} & \cdots & {\bf 0}\\
\vdots & \vdots && \vdots\\
{\bf 0} & {\bf 0} & \cdots & A_{r}
\end{array}\right ].
\end{equation}
This matrix \(A\) is said to be in rational normal form, and \(\mathfrak{B}\) is also called a rational normal basis for \(V\). From (\ref{laeq247}), we also see that the characteristic polynomial of \(T\) has the following form
\[P_{T}(\lambda )=(-1)^{n}f_{1}^{d_{1}}(\lambda )f_{2}^{d_{2}}(\lambda )\cdots f_{r}^{d_{r}}(\lambda ),\]
where the \(f_{i}\)'s, \(i=1,\cdots ,r\), are distinct irreducible monic polynomials. We define the subset \(V_{f_{i}}\) of \(V\) by
\[V_{f_{i}}=\left\{v\in V:\left (f_{i}^{p}(T)\right )(v)=\theta\mbox{ for some positive integer }p\right\}.\]
We can show that \(V_{f_{i}}\) is a non-zero \(T\)-invariant subspace of \(V\). We also see that if \(f_{i}(\lambda )=\lambda -\lambda_{i}\) is of degree \(1\), then \(V_{f_{i}}\) is the generalized eigenspace of \(T\) corresponding to the eigenvalue \(\lambda_{i}\).

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(v\in V\) with \(v\neq\theta\). Suppose that the \(T\)-annihilator of \(v\) is of the form \(f^{d}(t)\) for some irreducible monic polynomial \(f(t)\), where \(d\) is a positive integer. Then \(f(t)\) divides the minimal polynomial of \(T\), and \(v\in V_{f}\).

\begin{equation}{\label{lap248}}\tag{8}\mbox{}\end{equation}
Proposition \ref{lap248}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(\mathfrak{B}\) be an ordered basis for \(V\). Then \(\mathfrak{B}\) is a rational normal basis for \(V\) if and only if \(\mathfrak{B}\) is the disjoint union of cyclic ordered bases \(\mathfrak{B}_{i}=\{v_{i},T(v_{i}),\cdots ,T^{d_{i}-1}(v_{i})\}\), where each \(v_{i}\) lies in \(V_{f_{i}}\) for some irreducible monic divisor \(f_{i}\) of the characteristic polynomial of \(T\). \(\sharp\)

Example. Let \(T\) be a linear operator on \(\mathbb{R}^{8}\), and let
\[\mathfrak{B}=\left\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6},v_{7},v_{8}\right\}\]
be a rational normal basis for \(T\) such that
\[A=[T]_{\mathfrak{B}}=\left [\begin{array}{cccccccc}
0 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & -2 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{array}\right ]\]
is a rational normal form. The submatrices \(A_{1}\), \(A_{2}\) and \(A_{3}\) are the companion matrices of the corresponding polynomials \(f_{1}\), \(f_{2}^{2}\) and \(f_{2}\) given by
\[f_{1}(t)=t^{2}-t+3\mbox{ and }f_{2}(t)=t^{2}+1.\]
By Proposition \ref{lap248}, \(\mathfrak{B}\) is the disjoint union of the cyclic ordered bases given by
\begin{align*} \mathfrak{B} & =\mathfrak{B}_{v_{1}}\cup\mathfrak{B}_{v_{3}}\cup\mathfrak{B}_{v_{7}}
\\ & =\left\{v_{1},v_{2}\right\}\cup\left\{v_{3},v_{4},v_{5},v_{6}\right\}\cup\left\{v_{7},v_{8}\right\}.\end{align*}
The characteristic polynomial \(P_{T}\) of \(T\) is the product of the characteristic polynomials of the companion matrices given by
\begin{align*} P_{T}(\lambda ) & =f_{1}(\lambda )\cdot f_{2}^{2}(\lambda )\cdot f_{2}(\lambda )\\ & =
f_{1}(\lambda )\cdot f_{2}^{3}(\lambda )\\ & =(\lambda^{2}-\lambda +3)\cdot
(\lambda^{2}+1)^{3}.\end{align*}
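One can rebuild this rational normal form directly from the polynomials \(f_{1}\), \(f_{2}^{2}\) and \(f_{2}\) and confirm the characteristic polynomial (here \(n=8\), so \((-1)^{n}=1\) and the monic `charpoly` agrees with \(P_{T}\)); a SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')

def companion(coeffs):
    """Companion matrix of c0 + c1 t + ... + t^n, as in (2)."""
    n = len(coeffs)
    A = sp.zeros(n, n)
    for i in range(n - 1):
        A[i + 1, i] = 1
    for i in range(n):
        A[i, n - 1] = -coeffs[i]
    return A

A1 = companion([3, -1])          # f1(t) = t^2 - t + 3
A2 = companion([1, 0, 2, 0])     # f2(t)^2 = (t^2 + 1)^2 = t^4 + 2 t^2 + 1
A3 = companion([1, 0])           # f2(t) = t^2 + 1
A = sp.diag(A1, A2, A3)          # the 8x8 rational normal form

P = A.charpoly(t).as_expr()      # monic characteristic polynomial
```

The product \(f_{1}\cdot f_{2}^{2}\cdot f_{2}=f_{1}\cdot f_{2}^{3}\) is exactly what `charpoly` returns.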

\begin{equation}{\label{lap249}}\tag{9}\mbox{}\end{equation}
Proposition \ref{lap249}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Suppose that the minimal polynomial of \(T\) has the following form
\[p(t)=f_{1}^{d_{1}}(t)\cdot f_{2}^{d_{2}}(t)\cdots f_{r}^{d_{r}}(t),\]
where the \(d_{i}\) are positive integers and the \(f_{i}\)’s are the distinct irreducible monic factors of \(p(t)\). Then, we have the following properties.

(i) For each \(i=1,\cdots ,r\), \(V_{f_{i}}\) is a non-zero \(T\)-invariant subspace of \(V\).

(ii) If \(v\) is a non-zero vector in some \(V_{f_{i}}\), then the \(T\)-annihilator of \(v\) has the form \(f_{i}^{p}\) for some positive integer \(p\).

(iii) For \(i\neq j\), we have \(V_{f_{i}}\cap V_{f_{j}}=\{\theta\}\).

(iv) For \(i\neq j\), \(V_{f_{i}}\) is invariant under \(f_{j}(T)\), and the restriction of \(f_{j}(T)\) to \(V_{f_{i}}\) is bijective.

(v) For each \(i=1,\cdots ,r\), we have
\[V_{f_{i}}=\mbox{Ker}\left (f_{i}^{d_{i}}(T)\right ).\]

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Suppose that the minimal polynomial of \(T\) has the following form
\[p(t)=f_{1}^{d_{1}}(t)\cdot f_{2}^{d_{2}}(t)\cdots f_{r}^{d_{r}}(t),\]
where the \(d_{i}\) are positive integers and the \(f_{i}\)’s are the distinct irreducible monic factors of \(p(t)\). Then, we have the following properties.

(i) Given \(v_{i}\in V_{f_{i}}\) for \(i=1,\cdots ,r\), if
\begin{equation}{\label{laeq250}}\tag{10}
v_{1}+v_{2}+\cdots +v_{r}=\theta ,
\end{equation}
then \(v_{i}=\theta\) for all \(i\).

(ii) Let \(S_{i}\) be a linearly independent subset of \(V_{f_{i}}\) for \(i=1,\cdots ,r\). Then \(S_{i}\cap S_{j}=\emptyset\) for \(i\neq j\) and
\begin{equation}{\label{laeq251}}\tag{11}
S_{1}\cup S_{2}\cup\cdots\cup S_{r}
\end{equation}
is linearly independent.

Proof. If \(r=1\), then the desired results are obvious. Therefore, we assume \(r>1\). To prove part (i), for \(i=1,\cdots ,r\), let \(p_{i}\) be the polynomial obtained from \(p\) by omitting the factor \(f_{i}^{d_{i}}\). According to part (iv) of Proposition \ref{lap249}, since the restriction of \(f_{j}(T)\) to \(V_{f_{i}}\) is one-to-one for \(i\neq j\), it follows that \(p_{i}(T)\) is one-to-one on \(V_{f_{i}}\). Using part (v) of Proposition \ref{lap249}, we also have \((p_{i}(T))(v_{j})=\theta\) for \(i\neq j\). Therefore, applying \(p_{i}(T)\) to (\ref{laeq250}), we obtain \((p_{i}(T))(v_{i})=\theta\). Since \(p_{i}(T)\) is one-to-one on \(V_{f_{i}}\), we must have \(v_{i}=\theta\).

To prove part (ii), using part (iii) of Proposition \ref{lap249}, we have \(S_{i}\cap S_{j}\subseteq\{\theta\}\); since a linearly independent set cannot contain \(\theta\), it follows that \(S_{i}\cap S_{j}=\emptyset\) for \(i\neq j\). The linear independence of the union in (\ref{laeq251}) can be obtained by arguments similar to those in the proof of part (ii) of Proposition \ref{lal219}. This completes the proof. \(\blacksquare\)

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(\phi\) be an irreducible monic factor of the minimal polynomial of \(T\). Let \(v_{1},\cdots ,v_{s}\) be distinct vectors in \(V_{\phi}\) such that the set
\[S_{1}=\mathfrak{B}_{v_{1}}\cup\mathfrak{B}_{v_{2}}\cup\cdots\cup\mathfrak{B}_{v_{s}}\]
is linearly independent. Let \(w_{i}\in V\) satisfy \((\phi (T))(w_{i})=v_{i}\) for \(i=1,\cdots ,s\). Then, the set
\[S_{2}=\mathfrak{B}_{w_{1}}\cup\mathfrak{B}_{w_{2}}\cup\cdots\cup\mathfrak{B}_{w_{s}}\]
is also linearly independent.

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(\phi\) be an irreducible monic factor of the minimal polynomial of \(T\). Let \(W\) be a \(T\)-invariant subspace of \(V_{\phi}\), and let \(\mathfrak{B}\) be a basis for \(W\). Then, we have the following properties.

(i) Suppose that \(v\in\mbox{Ker}(\phi (T))\) and \(v\not\in W\). Then \(\mathfrak{B}\cup\mathfrak{B}_{v}\) is linearly independent.

(ii) There exist \(w_{1},w_{2},\cdots ,w_{s}\in\mbox{Ker}(\phi (T))\) such that \(\mathfrak{B}\) can be extended to the linearly independent set
\[\mathfrak{B}'=\mathfrak{B}\cup\mathfrak{B}_{w_{1}}\cup\mathfrak{B}_{w_{2}}\cup\cdots\cup\mathfrak{B}_{w_{s}}\]
whose span contains \(\mbox{Ker}(\phi (T))\).

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). If the minimal polynomial of \(T\) is of the form \(p(t)=\phi^{m}(t)\), then there exists a rational normal basis for \(T\).

Theorem. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). Every linear operator \(T\) on \(V\) has a rational normal basis.

\begin{equation}{\label{lat255}}\tag{12}\mbox{}\end{equation}
Theorem \ref{lat255}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Suppose that the characteristic polynomial of \(T\) has the following form
\[P_{T}(\lambda )=(-1)^{n}f_{1}^{d_{1}}(\lambda )\cdot f_{2}^{d_{2}}(\lambda )\cdots f_{r}^{d_{r}}(\lambda ),\]
where the \(d_{i}\) are positive integers and the \(f_{i}\)’s are the distinct irreducible monic polynomials. Then, we have the following properties.

(i) \(f_{1},f_{2},\cdots ,f_{r}\) are the irreducible monic factors of the minimal polynomial.

(ii) For each \(i=1,\cdots ,r\), we have
\[\dim (V_{f_{i}})=d_{i}\cdot\deg f_{i}.\]

(iii) If \(\mathfrak{B}\) is a rational normal basis for \(T\), then \(\mathfrak{B}_{i}=\mathfrak{B}\cap V_{f_{i}}\) is a basis for \(V_{f_{i}}\) for each \(i=1,\cdots ,r\).

(iv) If \(\widehat{\mathfrak{B}}_{i}\) is a basis for \(V_{f_{i}}\) for each \(i=1,\cdots ,r\), then
\[\widehat{\mathfrak{B}}=\widehat{\mathfrak{B}}_{1}\cup\widehat{\mathfrak{B}}_{2}\cup\cdots\cup\widehat{\mathfrak{B}}_{r}\]
is a basis for \(V\). In particular, if each \(\widehat{\mathfrak{B}}_{i}\) is a disjoint union of cyclic ordered bases, then \(\widehat{\mathfrak{B}}\) is a rational
normal basis for \(T\). \(\sharp\)

Theorem. (Primary Decomposition Theorem). Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Suppose that the characteristic polynomial of \(T\) has the following form
\[P_{T}(\lambda )=(-1)^{n}f_{1}^{d_{1}}(\lambda )\cdot f_{2}^{d_{2}}(\lambda )\cdots f_{r}^{d_{r}}(\lambda ),\]
where the \(d_{i}\) are positive integers and the \(f_{i}\)’s are the distinct irreducible monic polynomials. Then, we have the following properties.

(i) We have
\[V=V_{f_{1}}\oplus V_{f_{2}}\oplus\cdots\oplus V_{f_{r}}.\]

(ii) For each \(i=1,\cdots ,r\), if \(T_{i}\) is the restriction of \(T\) to \(V_{f_{i}}\) and \(R_{i}\) is the rational normal form of \(T_{i}\), then the direct sum
\[R_{1}\oplus R_{2}\oplus\cdots\oplus R_{r}\]
is a rational normal form of \(T\).

Proof. By applying Theorem \ref{lat255}, we can obtain the desired results. \(\blacksquare\)

Now, we take \(T\) to be a linear operator on \({\cal F}^{n}\). Let \(B\) be the associated matrix of \(T\) with respect to the standard ordered basis. The previous discussion says that there is another ordered basis for \({\cal F}^{n}\) such that the associated matrix \(A\) of \(T\) is in rational normal form. This says that \(B\) is similar to \(A\). In general, we have the following result.

\begin{equation}{\label{lap134}}\tag{13}\mbox{}\end{equation}
Proposition \ref{lap134}. Let \({\cal F}\) be a scalar field and let \(B\) be an \(n\times n\) matrix over \({\cal F}\). Then \(B\) is similar to one and only one matrix \(A\) over \({\cal F}\) which is in rational normal form. \(\sharp\)

The polynomials \(p_{v_{1}},\cdots ,p_{v_{r}}\) appearing in the rational normal form \(A\) are called the invariant factors for the matrix \(B\). We shall describe later an algorithm for calculating the invariant factors of a given matrix \(B\). The fact that it is possible to compute these polynomials by means of a finite number of rational operations on the entries of \(B\) is what gives the rational form its name.
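The promised algorithm is not given here, but one standard method (a sketch, not necessarily the algorithm the text has in mind) obtains the invariant factors from the Smith normal form of \(xI-B\): with \(d_{k}\) the monic gcd of all \(k\times k\) minors, the quotients \(d_{k}/d_{k-1}\) are the diagonal entries, and the non-constant ones are the invariant factors, the largest being the minimal polynomial. A SymPy sketch applied to the \(3\times 3\) matrix of the example below:

```python
import sympy as sp
from functools import reduce
from itertools import combinations

x = sp.symbols('x')

# The 3x3 matrix B from the example that follows.
B = sp.Matrix([[5, -6, -6], [-1, 4, 2], [3, -6, -4]])
M = x * sp.eye(3) - B

def gcd_of_minors(M, k):
    """Monic gcd d_k of all k x k minors of the polynomial matrix M."""
    n = M.rows
    minors = [M.extract(list(rows), list(cols)).det()
              for rows in combinations(range(n), k)
              for cols in combinations(range(n), k)]
    return sp.monic(reduce(sp.gcd, minors), x)

# Smith-normal-form diagonal entries a_k = d_k / d_{k-1}.
d1, d2, d3 = (gcd_of_minors(M, k) for k in (1, 2, 3))
a2 = sp.expand(sp.cancel(d2 / d1))
a3 = sp.expand(sp.cancel(d3 / d2))
```

For this \(B\) the invariant factors come out as \(a_{3}=(x-1)(x-2)\) and \(a_{2}=x-2\), matching \(p_{1}\) and \(p_{2}\) in the example.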

Example. Let \(V\) be a \(2\)-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). We have only two cases.

  • If the minimal polynomial for \(T\) has degree \(2\), then it is equal to the characteristic polynomial for \(T\). This says that \(T\) has a cyclic vector. Therefore, there exists an ordered basis \(\mathfrak{B}\) for \(V\) such that \([T]_{\mathfrak{B}}\) is the companion matrix of the minimal polynomial \(p(x)=c_{0}+c_{1}x+x^{2}\), which is given by
    \begin{equation}{\label{laeq135}}\tag{14}
    [T]_{\mathfrak{B}}=\left [\begin{array}{cc}
    0 & -c_{0}\\ 1 & -c_{1}
    \end{array}\right ].
    \end{equation}
  • If the minimal polynomial for \(T\) has degree \(1\), then \(T\) is a scalar multiple of the identity operator, i.e., \(T=cI\) for some \(c\in {\cal F}\). Given any two linearly independent vectors \(v_{1},v_{2}\in V\), we have
    \[V=\langle v_{1}\rangle_{T}\oplus\langle v_{2}\rangle_{T}\]
    and
    \[p_{1}(x)=p_{2}(x)=x-c.\]
    Therefore, there exists an ordered basis \(\mathfrak{B}\) for \(V\) satisfying
    \begin{equation}{\label{laeq136}}\tag{15}
    [T]_{\mathfrak{B}}=\left [\begin{array}{cc}
    c & 0\\ 0 & c
    \end{array}\right ].
    \end{equation}

According to Proposition \ref{lap134}, every \(2\times 2\) matrix over \({\cal F}\) is similar to exactly one of the matrices as shown in (\ref{laeq135}) and (\ref{laeq136}). \(\sharp\)

Example. Let \(T\) be a linear operator on \(\mathbb{R}^{3}\) which is represented in the standard ordered basis by the matrix
\[B=\left [\begin{array}{rrr}
5 & -6 & -6\\ -1 & 4 & 2\\ 3 & -6 & -4
\end{array}\right ].\]
From Example \ref{laex137}, we see that the characteristic polynomial for \(T\) is \(f(x)=(x-1)(x-2)^{2}\). Since \((B-I)(B-2I)\) is a zero matrix, Proposition \ref{lap112} says that the minimal polynomial for \(T\) is \(p(x)=(x-1)(x-2)\). In the cyclic decomposition, from part (i) of Corollary \ref{lac138}, we see that the first vector \(v_{1}\) has \(p\) as its \(T\)-annihilator, i.e., \(p_{1}=p\), so that \(\dim\langle v_{1}\rangle_{T}=\deg p=2\). Therefore, there can be only one further vector \(v_{2}\), with \(\dim\langle v_{2}\rangle_{T}=1\), which also says that \(v_{2}\) must be an eigenvector of \(T\) by part (ii) of Example \ref{laex139}. The \(T\)-annihilator \(p_{2}\) of \(v_{2}\) must be \(p_{2}(x)=x-2\), since \(p_{1}(x)\cdot p_{2}(x)=f(x)\). Since
\[p_{1}(x)=(x-1)(x-2)=x^{2}-3x+2,\]
according to (\ref{laeq140}), we have
\[A=\left [\begin{array}{cc}
A_{1} & {\bf 0}\\
{\bf 0} & A_{2}
\end{array}\right ]
=\left [\begin{array}{ccc}
0 & -2 & 0\\ 1 & 3 & 0\\ 0 & 0 & 2
\end{array}\right ].\]
In other words, there exists an ordered basis \(\mathfrak{B}\) satisfying \([T]_{\mathfrak{B}}=A\). This also says that \(B\) is similar to \(A\). Next, we shall find this ordered basis \(\mathfrak{B}\). We first need to find some suitable vectors \(v_{1}\) and \(v_{2}\). Let us try \({\bf e}_{1}=(1,0,0)\). Then, we have
\[T({\bf e}_{1})=B{\bf e}_{1}=\left [\begin{array}{rrr}
5 & -6 & -6\\ -1 & 4 & 2\\ 3 & -6 & -4
\end{array}\right ]\left [\begin{array}{c}
1\\ 0\\ 0
\end{array}\right ]=(5,-1,3),\]
which says that \({\bf e}_{1}\) is not an eigenvector of \(T\), i.e., \(\dim\langle {\bf e}_{1}\rangle_{T}\geq 2\). Since the \(T\)-annihilator of a vector divides the minimal polynomial \(p\), which has degree \(2\), we also have \(\dim\langle {\bf e}_{1}\rangle_{T}\leq 2\). Hence \(\dim\langle {\bf e}_{1}\rangle_{T}=2\), and we can take \(v_{1}={\bf e}_{1}\). We also see that \(\{{\bf e}_{1},T({\bf e}_{1})\}\) is a basis for \(\langle {\bf e}_{1}\rangle_{T}\). Therefore, the subspace \(\langle {\bf e}_{1}\rangle_{T}\) consists of all vectors
\begin{align*} a{\bf e}_{1}+bT({\bf e}_{1}) & =a(1,0,0)+b(5,-1,3)\\ & =(a+5b,-b,3b)\equiv (x_{1},x_{2},x_{3}),\end{align*}
which also says
\begin{align*} \langle {\bf e}_{1}\rangle_{T} & =\left\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}:
x_{3}=-3x_{2}\right\}\\ & =\left\{(x_{1},x_{2},-3x_{2}):x_{1},x_{2}\in\mathbb{R}\right\}.\end{align*}
Since \(p_{2}(x)=x-2\), we see that \(v_{2}\) is an eigenvector of \(T\) with the corresponding eigenvalue \(2\), i.e., \(T(v_{2})=2v_{2}\). Let \(v_{2}=(x_{1},x_{2},x_{3})\). Then, we have
\begin{align*}
(2x_{1},2x_{2},2x_{3}) & =2v_{2}=T(v_{2})=Bv_{2}\\ & =\left [\begin{array}{rrr}
5 & -6 & -6\\ -1 & 4 & 2\\ 3 & -6 & -4
\end{array}\right ]\left [\begin{array}{c}
x_{1}\\ x_{2}\\ x_{3}
\end{array}\right ]\\
& =\left (5x_{1}-6x_{2}-6x_{3},-x_{1}+4x_{2}+2x_{3},3x_{1}-6x_{2}-4x_{3}\right ),
\end{align*}
which also says
\[\left\{\begin{array}{l}
2x_{1}=5x_{1}-6x_{2}-6x_{3}\\
2x_{2}=-x_{1}+4x_{2}+2x_{3}\\
2x_{3}=3x_{1}-6x_{2}-4x_{3}
\end{array}\right .\]
Therefore, we obtain \(x_{1}=2x_{2}+2x_{3}\), which says that we can take \(v_{2}=(2,1,0)\). Note that \(v_{2}\not\in\langle {\bf e}_{1}\rangle_{T}\), and \(\langle v_{2}\rangle_{T}\) is the one-dimensional subspace spanned by \(v_{2}\). Finally, we can take the ordered basis \(\mathfrak{B}=\{(1,0,0),(5,-1,3),(2,1,0)\}\) such that \(A=[T]_{\mathfrak{B}}\), which is in rational normal form. \(\sharp\)
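The change of basis found above can be verified directly: with \(P\) the matrix whose columns are \({\bf e}_{1}\), \(T({\bf e}_{1})\) and \(v_{2}\), one checks \(P^{-1}BP=A\). A SymPy sketch:

```python
import sympy as sp

B = sp.Matrix([[5, -6, -6], [-1, 4, 2], [3, -6, -4]])
A = sp.Matrix([[0, -2, 0], [1, 3, 0], [0, 0, 2]])

# Change-of-basis matrix: columns e1, T(e1) = (5,-1,3), and the eigenvector v2 = (2,1,0).
P = sp.Matrix([[1, 5, 2],
               [0, -1, 1],
               [0, 3, 0]])

similar = P.inv() * B * P   # should equal the rational normal form A
```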

Example. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a diagonalizable linear operator on \(V\). Let \(\lambda_{1},\cdots ,\lambda_{r}\) be the distinct eigenvalues of \(T\), and let \(V_{i}\) be the space of eigenvectors associated with the eigenvalue \(\lambda_{i}\). Then \(V=V_{1}\oplus\cdots\oplus V_{r}\). If \(d_{i}=\dim (V_{i})\), then the characteristic polynomial for \(T\) is given by
\begin{equation}{\label{laeq141}}\tag{16}
P_{T}(\lambda )=\left (\lambda -\lambda_{1}\right )^{d_{1}}\cdot\left (\lambda -\lambda_{2}\right )^{d_{2}}\cdots\left (\lambda -\lambda_{r}\right )^{d_{r}}.
\end{equation}
Given any \(v\in V\), there exist unique \(u_{i}\in V_{i}\) satisfying
\[v=u_{1}+u_{2}+\cdots +u_{r}.\]
For any polynomial \(f\), since \(T(u_{i})=\lambda_{i}u_{i}\), we have
\[(f(T))(v)=f(\lambda_{1})u_{1}+f(\lambda_{2})u_{2}+\cdots +f(\lambda_{r})u_{r}\in\langle v\rangle_{T}.\]
For any scalars \(c_{1},\cdots ,c_{r}\), there exists a polynomial \(f\) satisfying \(f(\lambda_{i})=c_{i}\). This says that \(\langle v\rangle_{T}\) is the subspace spanned by the vectors \(u_{1},\cdots ,u_{r}\). On the other hand, the above formula for \((f(T))(v)\) says that \((f(T))(v)=\theta\) if and only if \(f(\lambda_{i})=0\) for each \(i\) satisfying \(u_{i}\neq\theta\). By definition, the \(T\)-annihilator of \(v\) is given by
\[\prod_{u_{i}\neq\theta}\left (x-\lambda_{i}\right ).\]
Since \(d_{i}=\dim (V_{i})\), let
\[\mathfrak{B}_{i}=\left\{u_{1}^{(i)},u_{2}^{(i)},\cdots ,u_{d_{i}}^{(i)}\right\}\]
be an ordered basis for \(V_{i}\). Let \(\kappa =\max_{i}d_{i}\). We define the vectors \(v_{1},\cdots ,v_{\kappa}\) by
\[v_{j}=\sum_{\{i:d_{i}\geq j\}}u_{j}^{(i)}.\]
According to the above discussion, the cyclic subspace \(\langle v_{j}\rangle_{T}\) is the subspace spanned by the vectors \(u_{j}^{(i)}\) as \(i\) runs over the indices for which \(d_{i}\geq j\). The \(T\)-annihilator of \(v_{j}\) is given by
\[p_{j}(x)=\prod_{\{i:d_{i}\geq j\}}\left (x-\lambda_{i}\right ).\]
We see that \(p_{j+1}\) divides \(p_{j}\). Since each \(u_{j}^{(i)}\) belongs to one and only one of the subspaces \(\langle v_{1}\rangle_{T},\cdots ,\langle v_{\kappa}\rangle_{T}\) and \(\mathfrak{B}=\bigcup_{i=1}^{r}\mathfrak{B}_{i}\) is a basis for \(V\), we have
\[V=\langle v_{1}\rangle_{T}\oplus\langle v_{2}\rangle_{T}\oplus\cdots\oplus\langle v_{\kappa}\rangle_{T}.\]
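As a quick numerical check of this construction (a sketch in Python with sympy; the diagonal matrix and the vectors below are hypothetical choices, not from the text), take \(T\) given by a diagonal matrix with eigenvalues \(1,1,2\), so that \(d_{1}=2\) and \(d_{2}=1\):

```python
import sympy as sp

# A diagonalizable operator with eigenvalues 1 (multiplicity d_1 = 2) and 2 (d_2 = 1).
A = sp.diag(1, 1, 2)
# v_1 = u_1^(1) + u_1^(2) picks one basis vector from each eigenspace;
# v_2 = u_2^(1) uses the remaining basis vector of the first eigenspace.
v1 = sp.Matrix([1, 0, 1])
v2 = sp.Matrix([0, 1, 0])
# <v_1>_T is spanned by {v_1, A v_1} and <v_2>_T by {v_2};
# together they decompose R^3 as a direct sum of cyclic subspaces.
K = sp.Matrix.hstack(v1, A*v1, v2)
# The T-annihilators are p_1(x) = (x-1)(x-2) and p_2(x) = x-1, with p_2 dividing p_1.
p1_at_A = (A - sp.eye(3)) * (A - 2*sp.eye(3))
```

Here `K` having full rank confirms the direct-sum decomposition, and `p1_at_A` kills \(v_{1}\) while \(A-I\) already kills \(v_{2}\).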

\begin{equation}{\label{laex143}}\tag{17}\mbox{}\end{equation}

Example \ref{laex143}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(N\) be a nilpotent linear operator on \(V\). Since \(W_{0}=\{\theta\}\) is a proper \(N\)-admissible subspace of \(V\), Theorem \ref{lat133} says that there exist \(r\) non-zero vectors \(v_{1},\cdots ,v_{r}\) with \(N\)-annihilators \(p_{1},\cdots ,p_{r}\) satisfying
\begin{equation}{\label{laeq142}}\tag{18}
V=\langle v_{1}\rangle_{N}\oplus\langle v_{2}\rangle_{N}\oplus\cdots\oplus\langle v_{r}\rangle_{N}
\end{equation}
and \(p_{i+1}\) divides \(p_{i}\) for \(i=1,\cdots ,r-1\). Since \(N\) is nilpotent, the minimal polynomial for \(N\) is \(x^{k}\) for some \(k\leq n\). Since each annihilator divides the minimal polynomial, each \(p_{i}\) has the form \(p_{i}(x)=x^{k_{i}}\) with \(k_{i}\geq k_{i+1}\) for \(i=1,\cdots ,r-1\). We also have \(k_{1}=k\) and \(k_{1}+\cdots +k_{r}=n\). The companion matrix of \(p_{i}\) is the \(k_{i}\times k_{i}\) matrix given by
\[J_{i}=\left [\begin{array}{ccccc}
0 & 0 & \cdots & 0 & 0\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots && \vdots &\vdots\\
0 & 0 & \cdots & 1 & 0
\end{array}\right ].\]
Theorem \ref{lat133} also says that there exists an ordered basis \(\mathfrak{B}\) for \(V\) satisfying
\begin{equation}{\label{laeq144}}\tag{19}
[N]_{\mathfrak{B}}=\left [\begin{array}{cccc}
J_{1} & {\bf 0} & \cdots & {\bf 0}\\
{\bf 0} & J_{2} & \cdots & {\bf 0}\\
\vdots & \vdots && \vdots\\
{\bf 0} & {\bf 0} & \cdots & J_{r}
\end{array}\right ].
\end{equation}
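The block form (\ref{laeq144}) can be illustrated with a small sketch in Python (sympy); the \(3\times 3\) nilpotent matrix below is a hypothetical choice, not taken from the text:

```python
import sympy as sp

# A nilpotent matrix with minimal polynomial x^3 (so k = n = 3 and r = 1).
N = sp.Matrix([[0, 1, 1],
               [0, 0, 1],
               [0, 0, 0]])
# Choose v with N^2(v) != 0; then the N-annihilator of v is x^3 and
# {v, N v, N^2 v} is a cyclic ordered basis for R^3.
v = sp.Matrix([0, 0, 1])
P = sp.Matrix.hstack(v, N*v, N**2*v)
J = P.inv() * N * P  # companion matrix of x^3: ones on the subdiagonal
```

Here \(J\) is a single block \(J_{1}\) of the form displayed above.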

Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Suppose that the characteristic polynomial for \(T\) is given by
\[P_{T}(x)=\left (x-\lambda_{1}\right )^{d_{1}}\cdot\left (x-\lambda_{2}\right )^{d_{2}}\cdots\left (x-\lambda_{r}\right )^{d_{r}},\]
where \(\lambda_{1},\cdots ,\lambda_{r}\in {\cal F}\) and \(d_{i}\geq 1\) for \(i=1,\cdots ,r\). Then, the minimal polynomial \(p\) for \(T\) is given by
\[p(x)=\left (x-\lambda_{1}\right )^{t_{1}}\cdot\left (x-\lambda_{2}\right )^{t_{2}}\cdots\left (x-\lambda_{r}\right )^{t_{r}},\]
where \(1\leq t_{i}\leq d_{i}\) for \(i=1,\cdots ,r\). Let \(W_{i}=\mbox{Ker}((T-\lambda_{i}I)^{t_{i}})\). Then, the primary decomposition theorem says
\[V=W_{1}\oplus W_{2}\oplus\cdots\oplus W_{r}\]
and the operator \(T_{i}=T|_{W_{i}}\) has minimal polynomial \((x-\lambda_{i})^{t_{i}}\). Let \(N_{i}\) be the linear operator on \(W_{i}\) defined by \(N_{i}=T_{i}-\lambda_{i}I\). Then \(N_{i}\) is nilpotent and has minimal polynomial \(x^{t_{i}}\) for \(i=1,\cdots ,r\). Suppose we choose a basis \(\mathfrak{B}_{i}\) for the subspace \(W_{i}\) corresponding to the cyclic decomposition for the nilpotent operator \(N_{i}\) as shown in Example \ref{laex143}. By referring to (\ref{laeq144}) and writing \(T_{i}=N_{i}+\lambda_{i}I\), the matrix \([T_{i}]_{\mathfrak{B}_{i}}\) has the form
\[[T_{i}]_{\mathfrak{B}_{i}}=\left [\begin{array}{cccc}
J_{1}^{(i)} & {\bf 0} & \cdots & {\bf 0}\\
{\bf 0} & J_{2}^{(i)} & \cdots & {\bf 0}\\
\vdots & \vdots && \vdots\\
{\bf 0} & {\bf 0} & \cdots & J_{s_{i}}^{(i)}
\end{array}\right ],\]
where each \(J_{h}^{(i)}\) is a \(k_{h}^{(i)}\times k_{h}^{(i)}\) matrix for \(h=1,\cdots ,s_{i}\) such that \(k_{1}^{(i)}+\cdots +k_{s_{i}}^{(i)}=\dim (W_{i})=n_{i}\) and \(n_{1}+\cdots +n_{r}=n\), and has the form
\begin{equation}{\label{laeq145}}\tag{20}
J_{h}^{(i)}=\left [\begin{array}{ccccc}
\lambda_{i} & 0 & \cdots & 0 & 0\\
1 & \lambda_{i} & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots && \vdots &\vdots\\
0 & 0 & \cdots & 1 & \lambda_{i}
\end{array}\right ].
\end{equation}
The matrix of the form (\ref{laeq145}) is also called an elementary Jordan matrix with eigenvalue \(\lambda_{i}\). If we put all the ordered bases for the \(W_{i}\) together to form an ordered basis \(\mathfrak{B}\) for \(V\), then the matrix of \(T\) with respect to \(\mathfrak{B}\) is given by
\[[T]_{\mathfrak{B}}=\left [\begin{array}{cccc}
[T_{1}]_{\mathfrak{B}_{1}} & {\bf 0} & \cdots & {\bf 0}\\
{\bf 0} & [T_{2}]_{\mathfrak{B}_{2}} & \cdots & {\bf 0}\\
\vdots & \vdots && \vdots\\
{\bf 0} & {\bf 0} & \cdots & [T_{r}]_{\mathfrak{B}_{r}}
\end{array}\right ]\]
and is said to be in Jordan form.

The above discussion shows that if \(T\) is a linear operator on a finite-dimensional vector space \(V\) over the scalar field \({\cal F}\) such that the characteristic polynomial for \(T\) factors completely over \({\cal F}\), then there is an ordered basis \(\mathfrak{B}\) for \(V\) such that the matrix \([T]_{\mathfrak{B}}\) associated with \(T\) is in Jordan form. We can show that this matrix is unique up to the order in which the eigenvalues of \(T\) are written down. In other words, if two matrices in Jordan form are similar, then they can differ only in the order of their eigenvalues. More precisely, if \(B\) is an \(n\times n\) matrix over \({\cal F}\) whose characteristic polynomial factors completely over \({\cal F}\), then \(B\) is similar to an \(n\times n\) matrix \(A\) over \({\cal F}\) that is in Jordan form, and \(A\) is unique up to a rearrangement of the order of its eigenvalues. In this case, we call \(A\) the Jordan form of \(B\).

Example. Let \(A\) be a \(3\times 3\) matrix over \(\mathbb{C}\) given below
\[A=\left [\begin{array}{rrr}
2 & 0 & 0\\ a & 2 & 0\\ b & c & -1
\end{array}\right ].\]
The characteristic polynomial for \(A\) can be calculated to obtain \((\lambda -2)^{2}(\lambda +1)\). If the minimal polynomial for \(A\) is \((x-2)^{2}(x+1)\), then \(A\) is similar to the following matrix
\[\left [\begin{array}{rrr}
2 & 0 & 0\\ 1 & 2 & 0\\ 0 & 0 & -1
\end{array}\right ].\]
If the minimal polynomial for \(A\) is \((x-2)(x+1)\), then \(A\) is similar to the following diagonal matrix
\[\left [\begin{array}{rrr}
2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & -1
\end{array}\right ].\]
Since
\[(A-2I)(A+I)=\left [\begin{array}{rrr}
0 & 0 & 0\\ 3a & 0 & 0\\ ac & 0 & 0
\end{array}\right ],\]
it follows that \(A\) is similar to the diagonal matrix if and only if \(a=0\). \(\sharp\)
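The displayed product can be verified symbolically (a sketch in Python with sympy, taking the \((2,1)\) entry of \(A\) to be the parameter \(a\), consistent with the entries \(3a\) and \(ac\) of the product):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[2, 0, 0],
               [a, 2, 0],
               [b, c, -1]])
# (A - 2I)(A + I) vanishes exactly when a = 0,
# which is the condition for A to be diagonalizable.
M = ((A - 2*sp.eye(3)) * (A + sp.eye(3))).applyfunc(sp.expand)
```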

Example. Let \(A\) be a \(4\times 4\) matrix over \(\mathbb{R}\) given below
\[A=\left [\begin{array}{rrrr}
2 & 0 & 0 & 0\\ 1 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & a & 2
\end{array}\right ].\]
The characteristic polynomial for \(A\) is \((\lambda -2)^{4}\). Since \(A\) is the direct sum of two \(2\times 2\) blocks and the first block has minimal polynomial \((x-2)^{2}\), the minimal polynomial for \(A\) is \((x-2)^{2}\). If \(a=0\) or \(a=1\), then \(A\) is in Jordan form. \(\sharp\)
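For instance, taking \(a=1\), the claim about the minimal polynomial can be confirmed directly (a sketch in Python with sympy):

```python
import sympy as sp

A = sp.Matrix([[2, 0, 0, 0],
               [1, 2, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 1, 2]])   # the case a = 1
N = A - 2*sp.eye(4)
# N != 0 but N^2 = 0, so the minimal polynomial of A is (x-2)^2.
```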

As in the case of the Jordan normal form, we introduce the dot diagram. For the Jordan normal form, the dot diagram is devised based on the eigenvalues of the given operator. Now, for the case of the rational normal form, we shall define the dot diagram based on the irreducible monic divisors of the characteristic polynomial of the given operator. Let \(T\) be a linear operator on a finite-dimensional vector space over the scalar field \({\cal F}\) with a rational normal basis \(\mathfrak{B}\). Let \(\phi\) be an irreducible monic divisor of the characteristic polynomial of \(T\). Let \(\mathfrak{B}_{v_{1}},\mathfrak{B}_{v_{2}},\cdots ,\mathfrak{B}_{v_{s}}\) be the cyclic ordered bases of \(\mathfrak{B}\) that are contained in \(V_{\phi}\). For each \(j=1,\cdots ,s\), let \(\phi^{p_{j}}(t)\) be the \(T\)-annihilator of \(v_{j}\). Then, we have
\[\deg\phi^{p_{j}}=p_{j}\cdot\deg\phi .\]
Therefore, \(\mathfrak{B}_{v_{j}}\) contains \(p_{j}\cdot\deg\phi\) vectors. We also assume that the cyclic ordered bases are arranged in decreasing order of size. Therefore, we have
\[p_{1}\geq p_{2}\geq\cdots\geq p_{s}.\]
We define the dot diagram of \(\phi\) as the array consisting of \(s\) columns of dots with \(p_{j}\) dots in the \(j\)th column, which are arranged such that the \(j\)th column begins at the top and terminates after \(p_{j}\) dots. For example, if \(s=3\), \(p_{1}=4\), \(p_{2}=2\) and \(p_{3}=2\), then the dot diagram is given by
\[\begin{array}{ccc}
\bullet & \bullet & \bullet\\
\bullet & \bullet & \bullet\\
\bullet\\
\bullet
\end{array}\]

Example. Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\), and let \(T\) be a linear operator on \(V\). Suppose that the irreducible monic divisors of the characteristic polynomial of \(T\) are
\[f_{1}(t)=t-1,\quad f_{2}(t)=t^{2}+2\mbox{ and }f_{3}(t)=t^{2}+t+1.\]
We also assume that the dot diagrams associated with these divisors are given below
\[\begin{array}{ccc}
\begin{array}{cc}\bullet & \bullet\\ \bullet & \end{array} &
\begin{array}{cc}\bullet & \bullet\end{array} &
\begin{array}{c}\bullet\end{array}\\
\mbox{diagram for $f_{1}$} & \mbox{diagram for $f_{2}$} & \mbox{diagram for $f_{3}$}
\end{array}\]
Since the dot diagram for \(f_{1}\) has two columns, it contributes two companion matrices. Its first column has two dots, which corresponds to the \(2\times 2\) companion matrix of \(f_{1}^{2}(t)=(t-1)^{2}=t^{2}-2t+1\) given by
\[C_{1}=\left [\begin{array}{rr}
0 & -1\\ 1 & 2\end{array}\right ].\]
Its second column has only one dot, which corresponds to the \(1\times 1\) companion matrix of \(f_{1}(t)=t-1\) given by \(C_{2}=[1]\). The dot diagram for \(f_{2}\) also has two columns, each containing a single dot. This says that there are two copies of the \(2\times 2\) companion matrix of \(f_{2}(t)=t^{2}+2\) given by
\[C_{3}=C_{4}=\left [\begin{array}{rr}
0 & -2\\ 1 & 0\end{array}\right ].\]
Finally, the dot diagram for \(f_{3}\) corresponds to a \(2\times 2\) companion matrix of \(f_{3}(t)=t^{2}+t+1\) given by
\[C_{5}=\left [\begin{array}{rr}
0 & -1\\ 1 & -1\end{array}\right ].\]
Therefore, the rational normal form of \(T\) is a \(9\times 9\) matrix given by
\[C=\left [\begin{array}{ccccc}
C_{1} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0}\\
{\bf 0} & C_{2} & {\bf 0} & {\bf 0} & {\bf 0}\\
{\bf 0} & {\bf 0} & C_{3} & {\bf 0} & {\bf 0}\\
{\bf 0} & {\bf 0} & {\bf 0} & C_{4} & {\bf 0}\\
{\bf 0} & {\bf 0} & {\bf 0} & {\bf 0} & C_{5}
\end{array}\right ]
=\left [\begin{array}{rrrrrrrrr}
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -2 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{array}\right ].\]
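As a sanity check, we can assemble \(C\) from its companion blocks and confirm that its characteristic polynomial is the product \((t-1)^{2}\cdot (t-1)\cdot (t^{2}+2)^{2}\cdot (t^{2}+t+1)\) of the annihilators (a sketch in Python with sympy; the helper `companion` is our own illustrative name):

```python
import sympy as sp

t = sp.symbols('t')
f1, f2, f3 = t - 1, t**2 + 2, t**2 + t + 1

def companion(p):
    """Companion matrix of a monic polynomial p(t): subdiagonal 1s and
    negated coefficients (constant term first) in the last column."""
    coeffs = sp.Poly(p, t).all_coeffs()   # leading coefficient first
    n = len(coeffs) - 1
    M = sp.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1
    low_first = coeffs[::-1]              # constant term first
    for i in range(n):
        M[i, n - 1] = -low_first[i]
    return M

C1, C2 = companion(f1**2), companion(f1)
C3 = C4 = companion(f2)
C5 = companion(f3)
C = sp.diag(C1, C2, C3, C4, C5)           # the 9 x 9 rational normal form
```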

We return to the general problem of finding dot diagrams. Let \(T\) be a linear operator defined on a finite-dimensional vector space over the scalar field \({\cal F}\). Let \(\phi (t)\) be an irreducible monic divisor of the characteristic polynomial of \(T\). Let \(U\) be the restriction of the linear operator \(\phi (T)\) to \(V_{\phi}\). By Proposition \ref{lap249}, \(U^{q}\) is the zero operator for some positive integer \(q\). This says that \(U\) is nilpotent, so the characteristic polynomial of \(U\) is \((-1)^{m}\lambda^{m}\), where \(m=\dim (V_{\phi})\). Therefore, \(V_{\phi}\) is the generalized eigenspace of \(U\) corresponding to \(\lambda =0\), and \(U\) has a Jordan normal form. The dot diagram associated with the Jordan normal form of \(U\) provides the key to understanding the dot diagram of \(T\) that is associated with \(\phi\). We now relate the two diagrams. Let \(\mathfrak{B}\) be a rational normal basis for \(T\), and let \(\mathfrak{B}_{v_{1}},\mathfrak{B}_{v_{2}},\cdots ,\mathfrak{B}_{v_{r}}\) be the cyclic ordered bases of \(\mathfrak{B}\) that are contained in \(V_{\phi}\). Suppose that the \(T\)-annihilator of \(v_{j}\) is \(\phi^{d_{j}}\). Let \(\gamma =\deg\phi\). Then \(\mathfrak{B}_{v_{j}}\) consists of \(d_{j}\cdot\gamma\) vectors in \(\mathfrak{B}\). For \(0\leq i<\gamma\), let \(\widehat{\mathfrak{B}}_{i}\) be the cycle of generalized eigenvectors of \(U\)
corresponding to \(\lambda =0\) given by
\[\widehat{\mathfrak{B}}_{i}=\left\{(\phi^{d_{j}-1}(T))(T^{i}(v_{j})),
(\phi^{d_{j}-2}(T))(T^{i}(v_{j})),\cdots ,(\phi (T))(T^{i}(v_{j})),T^{i}(v_{j})\right\},\]

where \(T^{0}(v_{j})=v_{j}\). We see that \(\widehat{\mathfrak{B}}_{i}\) is a linearly independent subset of \(\langle v_{j}\rangle_{T}\). Let

\[\bar{\mathfrak{B}}_{j}=\widehat{\mathfrak{B}}_{0}\cup\widehat{\mathfrak{B}}_{1}\cup\cdots\cup\widehat{\mathfrak{B}}_{\gamma -1}.\]

Then \(\bar{\mathfrak{B}}_{j}\) contains \(d_{j}\cdot\gamma\) vectors.

\begin{equation}{\label{lal253}}\tag{21}\mbox{}\end{equation}
Proposition \ref{lal253}. \(\bar{\mathfrak{B}}_{j}\) is an ordered basis for \(\langle v_{j}\rangle_{T}\). \(\sharp\)

The above proposition says that we can replace \(\mathfrak{B}_{v_{j}}\) by \(\bar{\mathfrak{B}}_{j}\) as a basis for \(\langle v_{j}\rangle_{T}\).

Proposition. The following set
\[\bar{\mathfrak{B}}=\bar{\mathfrak{B}}_{1}\cup\bar{\mathfrak{B}}_{2}\cup\cdots\cup\bar{\mathfrak{B}}_{r}\]
is a Jordan normal basis for \(V_{\phi}\). \(\sharp\)

Now, we are going to relate the dot diagram of \(T\) corresponding to \(\phi\) to the dot diagram of \(U\), where the dot diagram of \(T\) refers to the rational normal form, and the dot diagram of \(U\) refers to the Jordan normal form. For convenience, we designate the dot diagram of \(T\) as \(D_{T}\) and the dot diagram of \(U\) as \(D_{U}\). For each \(j\), the presence of the cyclic ordered basis \(\mathfrak{B}_{v_{j}}\) results in a column of \(d_{j}\) dots in \(D_{T}\). Proposition \ref{lal253} says that this basis \(\mathfrak{B}_{v_{j}}\) can be replaced by the union \(\bar{\mathfrak{B}}_{j}\) of \(\gamma\) cycles of generalized eigenvectors of \(U\), each of length \(d_{j}\), which becomes part of the Jordan normal basis for \(U\). In other words, \(\bar{\mathfrak{B}}_{j}\) determines \(\gamma\) columns, each containing \(d_{j}\) dots, in the dot diagram \(D_{U}\). Therefore, each column in \(D_{T}\) determines \(\gamma\) columns in \(D_{U}\) of the same length. Alternatively, each row in \(D_{U}\) has \(\gamma\) times as many dots as the corresponding row in \(D_{T}\). Since Proposition \ref{lap240} gives us the number of dots in any row of \(D_{U}\), we can divide the appropriate expression in that proposition by \(\gamma\) to obtain the number of dots in the corresponding row of \(D_{T}\), as shown below.

\begin{equation}{\label{lap256}}\tag{22}\mbox{}\end{equation}
Proposition \ref{lap256}. Let \(V\) be an \(n\)-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a linear operator on \(V\). Let \(\phi\) be an irreducible monic divisor of the characteristic polynomial of \(T\) with degree \(\gamma\). Let \(r_{i}\) denote the number of dots in the \(i\)th row of the dot diagram for \(\phi\) with respect to a rational normal basis for \(T\). Then, we have the following formulas:
\begin{equation}{\label{laeq254}}\tag{23}
r_{1}=\frac{1}{\gamma}\cdot\left [\dim (V)-\mbox{rank}(\phi (T))\right ]
\end{equation}
and
\[r_{i}=\frac{1}{\gamma}\cdot\left [\mbox{rank}(\phi^{i-1}(T))-\mbox{rank}(\phi^{i}(T))\right ]\]
for \(i>1\). \(\sharp\)
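These rank formulas can be applied mechanically (a sketch in Python with sympy; the helper names `poly_at` and `dot_rows` are our own, and the sample matrix is a hypothetical choice):

```python
import sympy as sp

def poly_at(A, phi, t):
    """Evaluate the polynomial phi(t) at the square matrix A by Horner's rule."""
    n = A.shape[0]
    M = sp.zeros(n, n)
    for c in sp.Poly(phi, t).all_coeffs():   # leading coefficient first
        M = M * A + c * sp.eye(n)
    return M

def dot_rows(A, phi, t):
    """Row lengths r_1, r_2, ... of the dot diagram for the irreducible divisor phi,
    computed from the ranks of successive powers of phi(A)."""
    n = A.shape[0]
    gamma = sp.Poly(phi, t).degree()
    rows, prev, power = [], n, sp.eye(n)
    while True:
        power = power * poly_at(A, phi, t)
        rk = power.rank()
        if rk == prev:      # ranks have stabilized; no further rows
            break
        rows.append((prev - rk) // gamma)
        prev = rk
    return rows

t = sp.symbols('t')
# For instance, a direct sum of two companion blocks of phi(t) = t^2 + 1
# gives a dot diagram with a single row of two dots.
A = sp.diag(sp.Matrix([[0, -1], [1, 0]]), sp.Matrix([[0, -1], [1, 0]]))
rows = dot_rows(A, t**2 + 1, t)
```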

Example. Let \(V\) be the space of all real-valued functions defined on \(\mathbb{R}\). We consider the following subset of \(V\)
\[\mathfrak{B}=\left\{e^{x}\cos 2x, e^{x}\sin 2x,xe^{x}\cos 2x,xe^{x}\sin 2x\right\}.\]
Let \(W=\mbox{span}(\mathfrak{B})\). Then \(W\) is a \(4\)-dimensional subspace of \(V\), and \(\mathfrak{B}\) is an ordered basis for \(W\). Let \(T\) be a linear operator on \(W\) defined by \(T(f)=f'\), the derivative of \(f\). Then, we have
\[A=[T]_{\mathfrak{B}}=\left [\begin{array}{rrrr}
1 & 2 & 1 & 0\\ -2 & 1 & 0 & 1\\ 0 & 0 & 1 & 2\\ 0 & 0 & -2 & 1
\end{array}\right ].\]
The characteristic polynomial of \(T\) and \(A\) is given by
\[P_{T}(\lambda )=\left (\lambda^{2}-2\lambda +5\right )^{2}.\]
Therefore \(\phi (t)=t^{2}-2t+5\) is the only irreducible monic divisor of the characteristic polynomial of \(T\). Since \(\deg\phi =2\) and \(\dim W=4\), the dot diagram for \(\phi\) contains only \(4/2=2\) dots. After some calculation, we have
\[\phi (A)=\left [\begin{array}{rrrr}
0 & 0 & 0 & 4\\ 0 & 0 & -4 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\
\end{array}\right ],\]
which says
\[\mbox{rank}(\phi (T))=\mbox{rank}(\phi (A))=2.\]
According to (\ref{laeq254}), we obtain
\[r_{1}=\frac{1}{2}\cdot\left [4-\mbox{rank}(\phi (T))\right ]=\frac{1}{2}\cdot (4-2)=1,\]
which says that the first row has only one dot. Therefore, the second dot lies in the second row, and the dot diagram is given by
\[\begin{array}{c}
\bullet\\ \bullet
\end{array}.\]
It follows that \(W\) is a \(T\)-cyclic subspace generated by a vector whose \(T\)-annihilator is
\[\phi^{2}(t)=t^{4}-4t^{3}+14t^{2}-20t+25.\]
Therefore, its rational normal form is the companion matrix of \(\phi^{2}\) given by
\[\left [\begin{array}{rrrr}
0 & 0 & 0 & -25\\ 1 & 0 & 0 & 20\\ 0 & 1 & 0 & -14\\ 0 & 0 & 1 & 4
\end{array}\right ].\]
In order to find the cyclic generator, it suffices to find a function \(g\) in \(W\) satisfying \((\phi (T))(g)\neq 0\). Since \((\phi (A))({\bf e}_{3})\neq {\bf 0}\), it follows \((\phi (T))(xe^{x}\cos 2x)\neq 0\). Therefore, the function \(g(x)=xe^{x}\cos 2x\) can be chosen as the cyclic generator, and the rational normal basis for \(T\) is given by
\[\mathfrak{B}_{g}=\left\{xe^{x}\cos 2x,T(xe^{x}\cos 2x),T^{2}(xe^{x}\cos 2x), T^{3}(xe^{x}\cos 2x)\right\}.\]
We can also show that \(h(x)=xe^{x}\sin 2x\) can be the cyclic generator. This says that the rational normal basis for \(T\) is not unique. \(\sharp\)
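The key computations in this example can be double-checked (a sketch in Python with sympy; `lam` is our symbol name):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2, 1, 0],
               [-2, 1, 0, 1],
               [0, 0, 1, 2],
               [0, 0, -2, 1]])          # [T]_B for T(f) = f'
phiA = A**2 - 2*A + 5*sp.eye(4)         # phi(A) with phi(t) = t^2 - 2t + 5
# g = x e^x cos 2x has coordinate vector e_3; since phi(A) e_3 != 0,
# g generates W as a T-cyclic subspace, so {g, Tg, T^2 g, T^3 g} is a basis.
g = sp.Matrix([0, 0, 1, 0])
K = sp.Matrix.hstack(g, A*g, A**2*g, A**3*g)
```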

Let \(A\) be an \(n\times n\) matrix. The rational normal form of \(A\) is defined to be the rational normal form of the linear operator \(T_{A}\) associated with \(A\). Let \(B\) be a rational normal form of \(A\) (i.e., of \(T_{A}\)). Given a rational normal basis \(\mathfrak{B}\) for \(T_{A}\), we have \(B=[T_{A}]_{\mathfrak{B}}\), i.e., \(A\) is similar to \(B\). If \(P\) is the matrix whose columns are the vectors of \(\mathfrak{B}\) in the same order, then \(P^{-1}AP=B\).

Example. Consider the following matrix
\[A=\left [\begin{array}{rrrrr}
0 & 2 & 0 & -6 & 2\\ 1 & -2 & 0 & 0 & 2\\ 1 & 0 & 1 & -3 & 2\\ 1 & -2 & 1 & -1 & 2\\
1 & -4 & 3 & -3 & 4
\end{array}\right ].\]
We shall find a matrix \(P\) such that \(B=P^{-1}AP\) is a rational normal form of \(A\). The characteristic polynomial of \(A\) is given by
\[P_{A}(\lambda )=\left (\lambda^{2}+2\right )^{2}(\lambda -2).\]
Therefore, the distinct monic irreducible divisors of the characteristic polynomial of \(A\) are \(f_{1}(t)=t^{2}+2\) and \(f_{2}(t)=t-2\). Using part (ii) of Theorem \ref{lat255}, we have
\[\dim (V_{f_{1}})=4\mbox{ and }\dim (V_{f_{2}})=1.\]
Since \(\deg f_{1}=2\), the number of dots in the dot diagram for \(f_{1}\) is
\[\frac{\dim (V_{f_{1}})}{\deg f_{1}}=\frac{4}{2}=2,\]
and the number of dots in the first row is given by
\begin{align*} r_{1} & =\frac{1}{2}\cdot\left [\dim (\mathbb{R}^{5})-\mbox{rank}(f_{1}(A))\right ]
\\ & =\frac{1}{2}\cdot\left [5-\mbox{rank}(A^{2}+2I_{5})\right ]
\\ & =\frac{1}{2}\cdot\left (5-1\right )=2.\end{align*}
This shows that the dot diagram for \(f_{1}\) consists of a single row containing two dots, that is,
\[\begin{array}{cc}
\bullet & \bullet
\end{array}\]
and the companion matrix for \(f_{1}(t)=t^{2}+2\) is given by
\[\left [\begin{array}{rr}
0 & -2\\ 1 & 0
\end{array}\right ].\]
Since \(\dim (V_{f_{2}})=1\) and \(\deg f_{2}=1\), the dot diagram for \(f_{2}\) consists of a single dot, and the corresponding companion matrix is \([2]\). Therefore, the rational normal form of \(A\) is given by

\[\left [\begin{array}{ccccc}
0 & -2 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -2 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 2
\end{array}\right ].\]
From the dot diagram for \(f_{1}\), we see that if \(\mathfrak{B}\) is a rational normal basis for \(T_{A}\), then \(\mathfrak{B}\cap V_{f_{1}}\) is the union of two cyclic ordered bases \(\mathfrak{B}_{v_{1}}\) and \(\mathfrak{B}_{v_{2}}\), where \(v_{1}\) and \(v_{2}\) have the annihilator \(f_{1}\). It says that both \(v_{1}\) and \(v_{2}\) lie in \(\mbox{Ker}(f_{1}(T_{A}))\). We can check that
\[\left\{\left [\begin{array}{r}1\\0\\0\\0\\0\end{array}\right ],
\left [\begin{array}{r}0\\1\\0\\0\\0\end{array}\right ],
\left [\begin{array}{r}0\\0\\2\\1\\0\end{array}\right ],
\left [\begin{array}{r}0\\0\\-1\\0\\1\end{array}\right ]\right\}\]
is a basis for \(\mbox{Ker}(f_{1}(T_{A}))\). Let \(v_{1}={\bf e}_{1}\). Then, we have
\[Av_{1}=\left [\begin{array}{r}0\\1\\1\\1\\1\end{array}\right ].\]
Next, we want to choose \(v_{2}\in V_{f_{1}}=\mbox{Ker}(f_{1}(T_{A}))\) satisfying \(v_{2}\not\in\mbox{span}(\mathfrak{B}_{v_{1}})\), where \(\mathfrak{B}_{v_{1}}=\{v_{1},Av_{1}\}\). Let \(v_{2}={\bf e}_{2}\). Then, we have
\[Av_{2}=\left [\begin{array}{r}2\\-2\\0\\-2\\-4\end{array}\right ].\]
We also see that \(\mathfrak{B}_{v_{1}}\cup\mathfrak{B}_{v_{2}}\) is a basis for \(V_{f_{1}}\). On the other hand, since the dot diagram for \(f_{2}(t)=t-2\) consists of a single dot, any non-zero vector in \(V_{f_{2}}\) is an eigenvector of \(A\) corresponding to the eigenvalue \(\lambda =2\). For example, we take
\[v_{3}=\left [\begin{array}{r}0\\1\\1\\1\\2\end{array}\right ].\]
Using Theorem \ref{lat255}, it follows that
\[\mathfrak{B}=\left\{v_{1},Av_{1},v_{2},Av_{2},v_{3}\right\}\]
is a rational normal basis for \(T_{A}\). Therefore, the matrix \(P\) is given by
\[\left [\begin{array}{rrrrr}
1 & 0 & 0 & 2 & 0\\
0 & 1 & 1 & -2 & 1\\
0 & 1 & 0 & 0 & 1\\
0 & 1 & 0 & -2 & 1\\
0 & 1 & 0 & -4 & 2
\end{array}\right ]\]
such that \(B=P^{-1}AP\). \(\sharp\)
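We can confirm the conclusion by a direct computation (a sketch in Python with sympy):

```python
import sympy as sp

A = sp.Matrix([[0, 2, 0, -6, 2],
               [1, -2, 0, 0, 2],
               [1, 0, 1, -3, 2],
               [1, -2, 1, -1, 2],
               [1, -4, 3, -3, 4]])
P = sp.Matrix([[1, 0, 0, 2, 0],
               [0, 1, 1, -2, 1],
               [0, 1, 0, 0, 1],
               [0, 1, 0, -2, 1],
               [0, 1, 0, -4, 2]])   # columns v_1, A v_1, v_2, A v_2, v_3
B = P.inv() * A * P
# B is block diagonal: two companion blocks of t^2 + 2 and the 1 x 1 block [2].
expected = sp.diag(sp.Matrix([[0, -2], [1, 0]]),
                   sp.Matrix([[0, -2], [1, 0]]),
                   sp.Matrix([[2]]))
```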

Example. Consider the following matrix
\[A=\left [\begin{array}{rrrr}
2 & 1 & 0 & 0\\ 0 & 2 & 1 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 2\\
\end{array}\right ].\]
We shall find a matrix \(P\) such that \(B=P^{-1}AP\) is a rational normal form of \(A\). The characteristic polynomial of \(A\) is
\[P_{A}(\lambda )=(\lambda -2)^{4}.\]
Therefore, the only irreducible monic divisor is \(\phi (t)=t-2\), which also says \(V_{\phi}=\mathbb{R}^{4}\). According to Proposition \ref{lap256}, we have
\begin{align*}
r_{1} & =4-\mbox{rank}(\phi (A))=4-2=2\\
r_{2} & =\mbox{rank}(\phi (A))-\mbox{rank}(\phi^{2}(A))=2-1=1\\
r_{3} & =\mbox{rank}(\phi^{2}(A))-\mbox{rank}(\phi^{3}(A))=1-0=1,
\end{align*}
where \(r_{i}\) is the number of dots in the \(i\)th row of the dot diagram. Since there are \(\dim (\mathbb{R}^{4})=4\) dots in the dot diagram, the dot diagram is given by
\[\begin{array}{cc}
\bullet & \bullet\\ \bullet &\\ \bullet &\end{array}\]
Since the first column has three dots, we consider the companion matrix of \((t-2)^{3}=t^{3}-6t^{2}+12t-8\), which is given by
\[\left [\begin{array}{rrr}
0 & 0 & 8\\ 1 & 0 & -12\\ 0 & 1 & 6\end{array}\right ].\]
Since the second column has a single dot, the companion matrix for \(t-2\) is given by \([2]\). Therefore, the rational normal form of \(A\) is given by
\[B=\left [\begin{array}{cccc}
0 & 0 & 8 & 0\\
1 & 0 & -12 & 0\\
0 & 1 & 6 & 0\\
0 & 0 & 0 & 2
\end{array}\right ].\]
Next, we want to find a rational normal basis for \(T_{A}\). The dot diagram indicates that there are two vectors \(v_{1}\) and \(v_{2}\) in \(\mathbb{R}^{4}\) with annihilators \(\phi^{3}\) and \(\phi\), respectively, such that
\begin{align*} \mathfrak{B} & =\mathfrak{B}_{v_{1}}\cup\mathfrak{B}_{v_{2}}\\ & =\left\{v_{1},Av_{1},A^{2}v_{1},v_{2}\right\}\end{align*}
is a rational normal basis for \(T_{A}\), where \(v_{1}\not\in\mbox{Ker}((T_{A}-2I)^{2})\) and \(v_{2}\in\mbox{Ker}(T_{A}-2I)\). We can also obtain
\[\mbox{Ker}(T_{A}-2I)=\mbox{span}\left (\left\{{\bf e}_{1},{\bf e}_{4}\right\}\right )\]

and

\[\mbox{Ker}\left ((T_{A}-2I)^{2}\right )=\mbox{span}\left (\left\{{\bf e}_{1},{\bf e}_{2},{\bf e}_{4}\right\}\right ).\]
We can also check that \(v_{1}={\bf e}_{3}\) satisfies the required criteria for \(v_{1}\). Therefore, we have
\[Av_{1}=\left [\begin{array}{r}0\\ 1\\ 2\\ 0\end{array}\right ]\mbox{ and }
A^{2}v_{1}=\left [\begin{array}{r}1\\ 4\\ 4\\ 0\end{array}\right ].\]
Now, we want to pick \(v_{2}\in\mbox{Ker}(T_{A}-2I)\) with \(v_{2}\not\in\mbox{span}(\mathfrak{B}_{v_{1}})\). It can also be checked that \(v_{2}={\bf e}_{4}\) satisfies the required criteria for \(v_{2}\). This says that
\[\left\{\left [\begin{array}{r}0\\ 0\\ 1\\ 0\end{array}\right ],
\left [\begin{array}{r}0\\ 1\\ 2\\ 0\end{array}\right ],
\left [\begin{array}{r}1\\ 4\\ 4\\ 0\end{array}\right ],
\left [\begin{array}{r}0\\ 0\\ 0\\ 1\end{array}\right ]\right\}\]
is a rational normal basis for \(T_{A}\). It also says that the matrix \(P\) given by
\[\left [\begin{array}{rrrr}
0 & 0 & 1 & 0\\
0 & 1 & 4 & 0\\
1 & 2 & 4 & 0\\
0 & 0 & 0 & 1
\end{array}\right ]\]
satisfies \(B=P^{-1}AP\). \(\sharp\)
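Again, the conclusion can be confirmed by a direct computation (a sketch in Python with sympy):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 1, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 2]])
P = sp.Matrix([[0, 0, 1, 0],
               [0, 1, 4, 0],
               [1, 2, 4, 0],
               [0, 0, 0, 1]])   # columns v_1, A v_1, A^2 v_1, v_2
B = P.inv() * A * P
# B is the companion matrix of (t-2)^3 together with the 1 x 1 block [2].
expected = sp.Matrix([[0, 0, 8, 0],
                      [1, 0, -12, 0],
                      [0, 1, 6, 0],
                      [0, 0, 0, 2]])
```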

 

Hsien-Chung Wu