Bilinear Mappings


This article consists of the following sections.

\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}

Bilinear Mappings.

Definition. Let \(U\), \(V\) and \(W\) be vector spaces over the scalar field \({\cal F}\). A mapping \(T:U\times V\rightarrow W\) is called a bilinear mapping when the following conditions are satisfied:

  • For all \(u_{1},u_{2}\in U\) and \(v\in V\), we have
    \[T(u_{1}+u_{2},v)=T(u_{1},v)+T(u_{2},v).\]
  • For all \(u\in U\) and \(v_{1},v_{2}\in V\), we have
    \[T(u,v_{1}+v_{2})=T(u,v_{1})+T(u,v_{2}).\]
  • For all \(c\in {\cal F}\), \(u\in U\) and \(v\in V\), we have
    \[T(cu,v)=cT(u,v)=T(u,cv).\]

We can see that if \(T:U\times V\rightarrow W\) is a bilinear mapping, then, given any fixed \(u_{0}\in U\) and \(v_{0}\in V\), the mappings \(v\mapsto T(u_{0},v)\) and \(u\mapsto T(u,v_{0})\) are linear. We denote by \({\cal B}(U\times V,W)\) the set of all bilinear mappings from \(U\times V\) into \(W\).

Proposition. Let \(U\), \(V\) and \(W\) be vector spaces over the scalar field \({\cal F}\). Then, we have the following properties.

(i) If \(T_{1}\) and \(T_{2}\) are bilinear mappings, then \(T_{1}+T_{2}\) is a bilinear mapping.

(ii) Given any \(\alpha\in {\cal F}\), if \(T\) is a bilinear mapping, then \(\alpha T\) is a bilinear mapping.

Proof. For all \(u_{1},u_{2}\in U\) and \(v\in V\), we have
\begin{align*}
(T_{1}+T_{2})(u_{1}+u_{2},v) & =T_{1}(u_{1}+u_{2},v)+T_{2}(u_{1}+u_{2},v)\\
& =T_{1}(u_{1},v)+T_{1}(u_{2},v)+T_{2}(u_{1},v)+T_{2}(u_{2},v)\\
& =(T_{1}+T_{2})(u_{1},v)+(T_{1}+T_{2})(u_{2},v).
\end{align*}
The remaining conditions can be verified by similar arguments, and the proof is complete. \(\blacksquare\)

Let \(\theta_{W}\) be the zero element of \(W\). The zero mapping \(\Theta :U\times V\rightarrow W\) defined by \(\Theta (u,v)=\theta_{W}\) for all \(u\in U\) and \(v\in V\) is the zero element of \({\cal B}(U\times V,W)\), i.e.,
\[T+\Theta =\Theta +T=T\]
for any \(T\in {\cal B}(U\times V,W)\). Given any \(T\in {\cal B}(U\times V,W)\), we have
\begin{align*} ((-1)T+T)(u,v) & =(-1)T(u,v)+T(u,v)\\ & =-T(u,v)+T(u,v)\\ & =\theta_{W},\end{align*}
which says the inverse element of \(T\) is \((-1)T\), i.e., \(-T=(-1)T\).

Example. Let \(A\) be an \(m\times n\) matrix over the scalar field \({\cal F}\). We can define a mapping \(T_{A}:{\cal F}^{m}\times {\cal F}^{n}\rightarrow {\cal F}\) by
\[T_{A}(X,Y)=X^{\top}AY.\]
Since
\[X^{\top}A(Y_{1}+Y_{2})=X^{\top}AY_{1}+X^{\top}AY_{2}\]
and
\[X^{\top}A(cY)=cX^{\top}AY,\]
together with the analogous identities in the first argument, it can be realized that \(T_{A}\) is a bilinear mapping. \(\sharp\)

\begin{equation}{\label{lat42}}\tag{1}\mbox{}\end{equation}
Theorem \ref{lat42}. Let \({\cal F}\) be a scalar field. Given a bilinear mapping \(T:{\cal F}^{m}\times {\cal F}^{n}\rightarrow {\cal F}\), there exists a unique \(m\times n\) matrix \(A\) satisfying \(T(X,Y)=X^{\top}AY\).

Let \(V\) be a vector space over the scalar field \({\cal F}\). The bilinear mapping \(T:V\times V\rightarrow {\cal F}\) is called a bilinear form on \(V\). The set of all bilinear forms on \(V\) is denoted by \({\cal B}(V)\). Let \(A\) be an \(n\times n\) matrix, and let \(T_{A}:{\cal F}^{n}\times {\cal F}^{n}\rightarrow {\cal F}\) be the bilinear form on \({\cal F}^{n}\) defined by \(T_{A}(X,Y)=X^{\top}AY\). According to Theorem \ref{lat42}, the mapping \(A\mapsto T_{A}\) is an isomorphism between \({\cal B}({\cal F}^{n})\) and the space of all \(n\times n\) matrices over \({\cal F}\). In this case, we also say that \(A\) is the matrix representing the bilinear form.

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with a basis \(\{v_{1},\cdots ,v_{n}\}\), and let \(T\) be a bilinear form on \(V\). Given any \(u,v\in V\), we can write
\[u=x_{1}v_{1}+\cdots +x_{n}v_{n}\]
and
\[v=y_{1}v_{1}+\cdots +y_{n}v_{n}.\]
Therefore, we have
\[T(u,v)=\sum_{i=1}^{n}\sum_{j=1}^{n}x_{i}y_{j}T(v_{i},v_{j}).\]
If we let \(a_{ij}=T(v_{i},v_{j})\), then
\[T(u,v)=\sum_{i=1}^{n}\sum_{j=1}^{n}x_{i}y_{j}a_{ij}.\]
This says that the bilinear form can be expressed as \(T(u,v)=X^{\top}AY\) in terms of the coordinate vectors \(X=(x_{1},\cdots ,x_{n})^{\top}\), \(Y=(y_{1},\cdots ,y_{n})^{\top}\) and the matrix \(A=[a_{ij}]\).
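The coordinate formula can be illustrated with a small assumed example: take \(V=\mathbb{R}^{2}\), let \(T\) be the dot product, and use the basis \(\{v_{1},v_{2}\}=\{(1,1),(1,-1)\}\). The check below confirms that \(T(u,v)=\sum_{i}\sum_{j}x_{i}y_{j}a_{ij}\) with \(a_{ij}=T(v_{i},v_{j})\).

```python
# Assumed example: V = R^2, T = dot product, basis {(1,1), (1,-1)}.
dot = lambda u, v: u[0]*v[0] + u[1]*v[1]

v1, v2 = (1, 1), (1, -1)
a = [[dot(v1, v1), dot(v1, v2)],
     [dot(v2, v1), dot(v2, v2)]]        # a_ij = T(v_i, v_j)

# u = 3*v1 + 2*v2 and v = 1*v1 - 4*v2, so X = (3, 2) and Y = (1, -4)
X, Y = (3, 2), (1, -4)
u = (3*v1[0] + 2*v2[0], 3*v1[1] + 2*v2[1])
v = (1*v1[0] - 4*v2[0], 1*v1[1] - 4*v2[1])

lhs = dot(u, v)                          # T(u, v) computed directly
rhs = sum(X[i] * Y[j] * a[i][j] for i in range(2) for j in range(2))
assert lhs == rhs
```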

A bilinear form \(T\) on \(V\) is said to be symmetric when \(T(u,v)=T(v,u)\) for all \(u,v\in V\).

Proposition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(T\) be a symmetric bilinear form on \(V\). Then \(\langle u,v\rangle =T(u,v)\) defines an inner product.

Proposition. An \(n\times n\) matrix \(A\) over the scalar field \({\cal F}\) represents a symmetric bilinear form if and only if \(A\) is a symmetric matrix. \(\sharp\)

Let \(A\) be a square matrix representing a bilinear form \(T\) that is given by \(T(X,Y)=X^{\top}AY\) in terms of the coordinate vectors \(X\) and \(Y\). Suppose that we change the basis in the vector space. Then, there exists a non-singular matrix \(N\) such that \(X=N\widehat{X}\) and \(Y=N\widehat{Y}\), where \(\widehat{X}\) and \(\widehat{Y}\) are the coordinate vectors with respect to the new basis. In this case, we have
\begin{align*} X^{\top}AY & =(N\widehat{X})^{\top}A(N\widehat{Y})\\ & =\widehat{X}^{\top}(N^{\top}AN)\widehat{Y}.\end{align*}
This says that, with respect to the coordinate vectors \(\widehat{X}\) and \(\widehat{Y}\), the matrix representing the bilinear form \(T\) is \(N^{\top}AN\).
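The change-of-basis rule \(N^{\top}AN\) can be verified on a small assumed example: the matrices \(A\) and \(N\) below are arbitrary choices (with \(N\) non-singular), and exact rational arithmetic confirms \(X^{\top}AY=\widehat{X}^{\top}(N^{\top}AN)\widehat{Y}\).

```python
# Check X^T A Y = X̂^T (N^T A N) Ŷ when X = N X̂ and Y = N Ŷ (assumed 2x2 example).
from fractions import Fraction as F

def matmul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

A = [[F(2), F(1)], [F(0), F(3)]]
N = [[F(1), F(2)], [F(1), F(-1)]]       # non-singular (det = -3)
B = matmul(transpose(N), matmul(A, N))  # matrix of the form in the new basis

Xh, Yh = [[F(1)], [F(2)]], [[F(-1)], [F(3)]]   # new coordinates, column vectors
X, Y = matmul(N, Xh), matmul(N, Yh)            # old coordinates

lhs = matmul(transpose(X), matmul(A, Y))[0][0]
rhs = matmul(transpose(Xh), matmul(B, Yh))[0][0]
assert lhs == rhs
```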

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a symmetric bilinear form on \(V\). The quadratic form determined by \(T\) is the mapping \(f:V\rightarrow {\cal F}\) defined by \(f(v)=T(v,v)\). \(\sharp\)

If \(V={\cal F}^{n}\) and \(A\) is a symmetric matrix in \({\cal F}\) representing a symmetric bilinear form \(T\), then the quadratic form determined by \(T\) is given by
\begin{align*} f(X) & =T(X,X)=X^{\top}AX\\ & =\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}x_{i}x_{j}.\end{align*}

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(T\) be a symmetric bilinear form on \(V\). If \(f\) is the quadratic form determined by \(T\), then we have the following relations:
\[T(u,v)=\frac{1}{4}\cdot\left [f(u+v)-f(u-v)\right ]\]
and
\[T(u,v)=\frac{1}{2}\left [f(u+v)-f(u)-f(v)\right ].\]
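Both relations can be checked numerically. This is an assumed illustration on \(\mathbb{R}^{2}\) with the dot product as the symmetric bilinear form \(T\), using exact rational arithmetic.

```python
# Numeric check of the two polarization relations for T(u,v) = u·v on R^2.
from fractions import Fraction as F

T = lambda u, v: u[0]*v[0] + u[1]*v[1]    # symmetric bilinear form (dot product)
f = lambda v: T(v, v)                      # its quadratic form
add = lambda u, v: (u[0]+v[0], u[1]+v[1])
sub = lambda u, v: (u[0]-v[0], u[1]-v[1])

u, v = (F(3), F(-2)), (F(1), F(5))
assert T(u, v) == F(1, 4) * (f(add(u, v)) - f(sub(u, v)))
assert T(u, v) == F(1, 2) * (f(add(u, v)) - f(u) - f(v))
```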

Example. Let \(V=\mathbb{R}^{2}\) and let \(X^{\top}=(x,y)\) denote the elements in \(\mathbb{R}^{2}\). Suppose that the quadratic form is given by
\[f(X)=f(x,y)=2x^{2}+3xy+y^{2}.\]
We shall find the symmetric matrix \(A\) representing its bilinear form. We write
\[A=\left [\begin{array}{cc}
a & b\\ b & d
\end{array}\right ].\]
Since
\begin{align*} f(X) & =f(x,y)=X^{\top}AX\\ & =[x,y]\left [\begin{array}{cc}
a & b\\ b & d
\end{array}\right ]\left [\begin{array}{c}
x\\ y\end{array}\right ]
=ax^{2}+2bxy+dy^{2},\end{align*}
we obtain \(a=2\), \(2b=3\) and \(d=1\); that is, \(a=2\), \(b=3/2\) and \(d=1\), which determines the matrix \(A\). \(\sharp\)
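The example can be checked in exact arithmetic. Taking the symmetric representing matrix with entries \(a=2\), \(b=3/2\), \(d=1\) (the symmetric choice consistent with \(2b=3\)), we confirm \(X^{\top}AX=2x^{2}+3xy+y^{2}\) on a few sample points.

```python
# Check that A = [[2, 3/2], [3/2, 1]] represents f(x, y) = 2x^2 + 3xy + y^2.
from fractions import Fraction as F

A = [[F(2), F(3, 2)],
     [F(3, 2), F(1)]]

def f(x, y):
    # X^T A X for X = (x, y)^T
    X = (x, y)
    return sum(X[i] * A[i][j] * X[j] for i in range(2) for j in range(2))

for x, y in [(F(1), F(0)), (F(0), F(1)), (F(2), F(-3)), (F(5, 2), F(7))]:
    assert f(x, y) == 2*x**2 + 3*x*y + y**2
```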

\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}

Symmetric Operators.

Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). We assume that \(V\) has a fixed non-degenerate symmetric bilinear form, which is considered as an inner product \(\langle\cdot ,\cdot\rangle\). We recall that an operator on \(V\) is a linear mapping \(T:V\rightarrow V\) from \(V\) into itself. In this case, we can define another mapping \(\widehat{T}:V\times V\rightarrow {\cal F}\) by \(\widehat{T}(u,v)=\langle T(u),v\rangle\) for any \(u,v\in V\). We can show that \(\widehat{T}\) is a bilinear form on \(V\).

\begin{equation}{\label{lat43}}\tag{2}\mbox{}\end{equation}
Theorem \ref{lat43}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with a fixed non-degenerate symmetric bilinear form \(\langle\cdot ,\cdot\rangle\). Let \(\widehat{T}\) be any bilinear form on \(V\). Then, there exist unique operators \(T_{1}\) and \(T_{2}\) on \(V\) satisfying
\[\widehat{T}(u,v)=\langle T_{1}(u),v\rangle =\langle u,T_{2}(v)\rangle\]
for all \(u,v\in V\). \(\sharp\)

In Theorem \ref{lat43}, the operator \(T_{2}\) will be called the transpose of operator \(T_{1}\), and it is also denoted by \(T_{1}^{\top}\), i.e., \(T_{2}=T_{1}^{\top}\). Therefore, for any operator \(T\), with respect to a fixed non-degenerate symmetric bilinear form \(\langle\cdot ,\cdot\rangle\), we have the following formula
\[\langle T(u),v\rangle =\langle u,T^{\top}(v)\rangle\]
for all \(u,v\in V\). The operator \(T\) on \(V\) is said to be symmetric (with respect to a fixed non-degenerate symmetric bilinear form \(\langle\cdot ,\cdot\rangle\)) if and only if \(T=T^{\top}\). Therefore, if the operator \(T\) is symmetric, then we have
\[\langle T(u),v\rangle =\langle u,T(v)\rangle\]
for all \(u,v\in V\).

Proposition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with a fixed non-degenerate symmetric bilinear form \(\langle\cdot ,\cdot\rangle\). Let \(T_{1}\) and \(T_{2}\) be operators on \(V\) and let \(\alpha\in {\cal F}\). Then, we have
\begin{align*}
(T_{1}+T_{2})^{\top} & =T_{1}^{\top}+T_{2}^{\top}\\
(T_{1}\circ T_{2})^{\top} & =T_{2}^{\top}\circ T_{1}^{\top}\\
(\alpha T_{1})^{\top} & =\alpha T_{1}^{\top}\\
T_{1}^{\top\top} & =T_{1}.
\end{align*}

Proof. We prove only the second formula. Given any \(u,v\in V\), we have
\begin{align*} \langle (T_{1}\circ T_{2})(u),v\rangle & =\langle T_{1}(T_{2}(u)),v\rangle\\ & =\langle T_{2}(u),T_{1}^{\top}(v)\rangle\\ & =\langle u,T_{2}^{\top}(T_{1}^{\top}(v))\rangle ,\end{align*}
which means \((T_{1}\circ T_{2})^{\top}=T_{2}^{\top}\circ T_{1}^{\top}\). The proofs of other formulas are left as exercises. \(\blacksquare\)
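The formula \((T_{1}\circ T_{2})^{\top}=T_{2}^{\top}\circ T_{1}^{\top}\) can be checked concretely. In this assumed sketch, \(V=\mathbb{R}^{2}\) with the dot product, the operators are given by matrices, and the transpose of the operator given by a matrix \(M\) is given by the transposed matrix.

```python
# Check <(T1∘T2)(u), v> = <u, (T2^T ∘ T1^T)(v)> for assumed 2x2 matrices.
M1 = [[1, 2], [3, 4]]
M2 = [[0, -1], [5, 2]]

apply_ = lambda M, x: (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
tr = lambda M: [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]   # matrix transpose
dot = lambda u, v: u[0]*v[0] + u[1]*v[1]

u, v = (2, -1), (3, 7)
lhs = dot(apply_(M1, apply_(M2, u)), v)                   # <(T1∘T2)(u), v>
rhs = dot(u, apply_(tr(M2), apply_(tr(M1), v)))           # <u, (T2^T∘T1^T)(v)>
assert lhs == rhs
```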

\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}

Hermitian Operators.

Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\). We assume that \(V\) has a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Given an operator \(T:V\rightarrow V\), for each \(u\in V\), we can define a mapping \(L_{u}:V\rightarrow\mathbb{C}\) by \(L_{u}(v)=\langle T(v),u\rangle\) for all \(v\in V\). Then, we can show that \(L_{u}\) is a linear functional.

\begin{equation}{\label{lat45}}\tag{3}\mbox{}\end{equation}
Theorem \ref{lat45}. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Given a linear functional \(L\) on \(V\), there exists a unique \(u\in V\) such that \(L(v)=\langle v,u\rangle\) for all \(v\in V\). \(\sharp\)

Given any \(u\in V\), as shown above, we can define a linear functional \(L_{u}(v)=\langle T(v),u\rangle\). According to Theorem~\ref{lat45}, there exists a unique \(u'\in V\) satisfying
\[\langle T(v),u\rangle =L_{u}(v)=\langle v,u'\rangle .\]
Therefore, we can define a mapping \(T^{*}:V\rightarrow V\) by \(u\mapsto u'\), which also means
\[\langle T(v),u\rangle =\langle v,T^{*}(u)\rangle .\]
We can show that \(T^{*}\) is a linear mapping. In this case, the mapping \(T^{*}\) is called the adjoint of \(T\). The association \(w\mapsto L_{w}\) is not an isomorphism of \(V\) with the dual space \(V^{*}\). In fact, if \(\alpha\in\mathbb{C}\), then \(L_{\alpha w}=\bar{\alpha}L_{w}\). However, this is immaterial for the existence of the element \(u'\). An operator \(T\) is called Hermitian or self-adjoint when \(T^{*}=T\). This means \(\langle T(v),u\rangle =\langle v,T(u)\rangle\) for all \(u,v\in V\).
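On \(\mathbb{C}^{2}\) with the standard Hermitian product \(\langle x,y\rangle =\sum_{i}x_{i}\bar{y}_{i}\), the adjoint of the operator given by a matrix \(M\) is given by the conjugate transpose \(M^{*}\). The sketch below (with an assumed matrix and assumed vectors) checks the defining identity \(\langle T(v),u\rangle =\langle v,T^{*}(u)\rangle\).

```python
# Check <M x, y> = <x, M* y> on C^2, where M* is the conjugate transpose.
M = [[1+2j, 3j], [4, 5-1j]]

apply_ = lambda M, x: (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
herm = lambda u, v: u[0]*v[0].conjugate() + u[1]*v[1].conjugate()
Mstar = [[M[0][0].conjugate(), M[1][0].conjugate()],
         [M[0][1].conjugate(), M[1][1].conjugate()]]      # conjugate transpose

x, y = (1-1j, 2), (3, 1+4j)
assert herm(apply_(M, x), y) == herm(x, apply_(Mstar, y))
```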

Proposition. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Let \(T_{1}\) and \(T_{2}\) be two operators on \(V\), and let \(\alpha\in\mathbb{C}\). Then we have
\begin{align*}
(T_{1}+T_{2})^{*} & =T_{1}^{*}+T_{2}^{*}\\
(T_{1}\circ T_{2})^{*} & =T_{2}^{*}\circ T_{1}^{*}\\
(\alpha T_{1})^{*} & =\bar{\alpha}T_{1}^{*}\\
T_{1}^{**} & =T_{1}.
\end{align*}

Proof. We shall only prove the third formula. Now, we have
\begin{align*} \langle v,(\alpha T)^{*}(u)\rangle & =\langle\alpha T(v),u\rangle\\ & =\alpha\langle T(v),u\rangle\\ & =
\alpha\langle v,T^{*}(u)\rangle\\ & =\langle v,\bar{\alpha}T^{*}(u)\rangle ,\end{align*}
which shows \((\alpha T)^{*}=\bar{\alpha}T^{*}\). The proofs of the other formulas are left as exercises. \(\blacksquare\)

Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Let \(\widehat{T}:V\times V\rightarrow\mathbb{C}\) be a mapping. We call \(\widehat{T}\) a Hermitian form when \(\langle u,v\rangle\equiv\widehat{T}(u,v)\) defines a Hermitian product.

\begin{equation}{\label{lat46}}\tag{4}\mbox{}\end{equation}
Theorem \ref{lat46}. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Let \(\widehat{T}\) be a Hermitian form. Then, there exists a unique Hermitian operator \(T\) satisfying \(\widehat{T}(v,u)=\langle T(v),u\rangle\) for all \(u,v\in V\). \(\sharp\)

In Theorem \ref{lat46}, we say that the operator \(T\) represents the Hermitian form \(\widehat{T}\). We also have the following polarization identities:
\[\langle T(u+v),u+v\rangle -\langle T(u-v),u-v\rangle =2\left [\langle T(v),u\rangle +\langle T(u),v\rangle\right ]\]
and
\begin{equation}{\label{laeq47}}\tag{5}
\langle T(u+v),u+v\rangle -\langle T(u),u\rangle -\langle T(v),v\rangle=\langle T(u),v\rangle +\langle T(v),u\rangle
\end{equation}
for all \(u,v\in V\).
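Identity (\ref{laeq47}) holds for any operator \(T\), by expanding \(\langle T(u+v),u+v\rangle\). The sketch below checks it numerically on \(\mathbb{C}^{2}\) with the standard Hermitian product; the matrix and vectors are arbitrary assumed choices.

```python
# Numeric check of <T(u+v),u+v> - <T(u),u> - <T(v),v> = <T(u),v> + <T(v),u>.
M = [[2j, 1], [0, 1-1j]]                  # arbitrary operator on C^2

apply_ = lambda M, x: (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
herm = lambda u, v: u[0]*v[0].conjugate() + u[1]*v[1].conjugate()
add = lambda u, v: (u[0]+v[0], u[1]+v[1])

u, v = (1+1j, 2), (3j, -1)
w = add(u, v)
lhs = herm(apply_(M, w), w) - herm(apply_(M, u), u) - herm(apply_(M, v), v)
rhs = herm(apply_(M, u), v) + herm(apply_(M, v), u)
assert lhs == rhs
```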

\begin{equation}{\label{lap50}}\tag{6}\mbox{}\end{equation}
Proposition \ref{lap50}. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). If \(T\) is an operator on \(V\) such that \(\langle T(v),v\rangle =0\) for all \(v\in V\), then \(T\) is a zero mapping.

Proof. The left-hand side of (\ref{laeq47}) is zero for all \(u,v\in V\), which says
\begin{equation}{\label{laeq48}}\tag{7}
\langle T(u),v\rangle +\langle T(v),u\rangle =0
\end{equation}
for all \(u,v\in V\). Now, we replace \(v\) by \(iv\). According to the rule of Hermitian product, we obtain
\[-i\langle T(u),v\rangle +i\langle T(v),u\rangle =0,\]
which implies
\begin{equation}{\label{laeq49}}\tag{8}
-\langle T(u),v\rangle +\langle T(v),u\rangle =0
\end{equation}
by multiplying \(-i\) on both sides. By adding (\ref{laeq48}) and (\ref{laeq49}) together, we obtain \(2\langle T(v),u\rangle =0\) for all \(u,v\in V\). Taking \(u=T(v)\) gives \(\langle T(v),T(v)\rangle =0\), so \(T(v)=\theta\) for every \(v\in V\); that is, \(T\) is a zero mapping. This completes the proof. \(\blacksquare\)

Proposition. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). Let \(T\) be an operator on \(V\). Then \(T\) is Hermitian if and only if \(\langle T(v),v\rangle\in\mathbb{R}\) for all \(v\in V\).

Proof. Suppose that \(T\) is Hermitian. Then, we have
\begin{align*} \langle T(v),v\rangle & =\langle v,T(v)\rangle\\ & =\overline{\langle T(v),v\rangle},\end{align*}
which says \(\langle T(v),v\rangle\in\mathbb{R}\) for all \(v\in V\). Conversely, we assume that \(\langle T(v),v\rangle\in\mathbb{R}\) for all \(v\in V\). Then, we have
\begin{align*} \langle T(v),v\rangle & =\overline{\langle T(v),v\rangle}\\ & =\langle v,T(v)\rangle\\ & =\langle T^{*}(v),v\rangle ,\end{align*}
which says \(\langle (T-T^{*})(v),v\rangle =0\) for all \(v\in V\). Using Proposition \ref{lap50}, we conclude that \(T-T^{*}\) is a zero mapping, i.e., \(T=T^{*}\). This completes the proof. \(\blacksquare\)
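The criterion can be illustrated on \(\mathbb{C}^{2}\): for an assumed Hermitian matrix \(H\), the values \(\langle Hv,v\rangle\) come out real for sample vectors, while an assumed non-Hermitian matrix \(K\) produces a non-real value.

```python
# <H v, v> is real when H is Hermitian; need not be otherwise.
H = [[2, 1-3j], [1+3j, -1]]          # equals its conjugate transpose
K = [[1j, 0], [0, 1]]                # not Hermitian

apply_ = lambda M, x: (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
herm = lambda u, v: u[0]*v[0].conjugate() + u[1]*v[1].conjugate()

for v in [(1, 0), (1j, 2), (2-1j, 3+3j)]:
    assert herm(apply_(H, v), v).imag == 0      # real for the Hermitian H

assert herm(apply_(K, (1, 0)), (1, 0)) == 1j    # not real for K
```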

\begin{equation}{\label{d}}\tag{D}\mbox{}\end{equation}

Unitary Operators.

Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\) with a fixed positive definite inner product \(\langle\cdot ,\cdot\rangle\). The linear mapping \(T:V\rightarrow V\) is said to be a real unitary mapping when
\[\langle T(u),T(v)\rangle =\langle u,v\rangle\]
for all \(u,v\in V\). In other words, \(T\) is unitary when \(T\) preserves the inner product.

\begin{equation}{\label{lap51}}\tag{9}\mbox{}\end{equation}
Proposition \ref{lap51}. Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\) with a fixed positive definite inner product \(\langle\cdot ,\cdot\rangle\), and let \(T:V\rightarrow V\) be a linear mapping. Then the following statements are equivalent.

(a) \(T\) is a real unitary mapping.

(b) \(T\) preserves the length of vectors, i.e., \(\parallel T(v)\parallel=\parallel v\parallel\) for all \(v\in V\).

(c) For every unit vector \(v\in V\), \(T(v)\) is also a unit vector.

Proof. The equivalence between (b) and (c) is left as an exercise. If \(T\) is unitary, then we have
\begin{align*} \parallel T(v)\parallel^{2} & =\langle T(v),T(v)\rangle\\ & =\langle v,v\rangle\\ & =\parallel v\parallel^{2},\end{align*}
which says that (a) implies (b). Suppose that (b) holds true. We have
\begin{align*}
\langle u+v,u+v\rangle -\langle u-v,u-v\rangle & =\langle T(u+v),T(u+v)\rangle -\langle T(u-v),T(u-v)\rangle\\
& =4\langle T(u),T(v)\rangle .
\end{align*}
We also have
\[\langle u+v,u+v\rangle -\langle u-v,u-v\rangle =4\langle u,v\rangle .\]
Therefore, we obtain \(4\langle T(u),T(v)\rangle =4\langle u,v\rangle\), which says that (b) implies (a). This completes the proof. \(\blacksquare\)

The statement (c) in Proposition \ref{lap51} says that a unitary mapping maps unit vectors to unit vectors. This is why we call \(T\) a unitary mapping. We also see that a unitary mapping preserves perpendicularity, since \(\langle u,v\rangle =0\) implies
\[\langle T(u),T(v)\rangle =\langle u,v\rangle =0.\]
However, the converse is not true in general; that is, a mapping that preserves perpendicularity is not necessarily a unitary mapping. For instance, over \(\mathbb{R}\), a mapping which sends a vector \(v\) to a vector \(2v\) preserves perpendicularity, but it is not unitary. We also remark that a unitary mapping is invertible. Indeed, if \(T\) is a unitary mapping and \(T(v)=\theta\), then \(v=\theta\), since \(T\) preserves length. This also says that \(\ker (T)=\{\theta\}\); that is, \(T\) is one-to-one and invertible.
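Both remarks can be checked numerically on \(\mathbb{R}^{2}\) with the dot product (an assumed illustration): a rotation preserves the inner product, while \(v\mapsto 2v\) preserves perpendicularity but scales the inner product by \(4\).

```python
# A rotation is unitary; v ↦ 2v preserves perpendicularity but is not unitary.
import math

theta = math.pi / 3
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

apply_ = lambda M, x: (M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1])
dot = lambda u, v: u[0]*v[0] + u[1]*v[1]

u, v = (3.0, -1.0), (2.0, 5.0)
assert abs(dot(apply_(R, u), apply_(R, v)) - dot(u, v)) < 1e-12   # unitary

double = lambda x: (2*x[0], 2*x[1])
p, q = (1.0, 0.0), (0.0, 1.0)                  # perpendicular pair
assert dot(double(p), double(q)) == 0          # perpendicularity preserved
assert dot(double(p), double(p)) != dot(p, p)  # <2p, 2p> = 4<p, p>: not unitary
```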

\begin{equation}{\label{lap53}}\tag{10}\mbox{}\end{equation}
Proposition \ref{lap53}. Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\) with a fixed positive definite inner product \(\langle\cdot ,\cdot\rangle\). The linear mapping \(T:V\rightarrow V\) is unitary if and only if \(T^{\top}T=I\).

Proof. We have
\begin{equation}{\label{laeq52}}\tag{11}
\langle (T^{\top}T)(u),v\rangle =\langle T(u),T(v)\rangle .
\end{equation}
If \(T\) is unitary, then, from (\ref{laeq52}), we immediately have \(\langle (T^{\top}T)(u),v\rangle =\langle u,v\rangle\) for all \(u,v\in V\), which says \(T^{\top}T=I\). Conversely, if \(T^{\top}T=I\), then, from (\ref{laeq52}), we also see that \(T\) is unitary. This completes the proof. \(\blacksquare\)
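Proposition \ref{lap53} can be checked on a concrete rotation matrix (an assumed example): computing \(R^{\top}R\) entrywise gives the identity matrix, up to floating-point error.

```python
# R^T R = I for a rotation matrix R, up to floating-point error.
import math

c, s = math.cos(0.7), math.sin(0.7)
R = [[c, -s], [s, c]]
# (R^T R)_{ij} = sum_k R_{ki} R_{kj}
RtR = [[R[0][0]*R[0][0] + R[1][0]*R[1][0], R[0][0]*R[0][1] + R[1][0]*R[1][1]],
       [R[0][1]*R[0][0] + R[1][1]*R[1][0], R[0][1]*R[0][1] + R[1][1]*R[1][1]]]

I = [[1.0, 0.0], [0.0, 1.0]]
assert all(abs(RtR[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2))
```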

Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). The linear mapping \(T:V\rightarrow V\) is said to be a complex unitary mapping when
\[\langle T(u),T(v)\rangle =\langle u,v\rangle\]
for all \(u,v\in V\). Then, Proposition \ref{lap51} still holds true for complex unitary mappings. On the other hand, the analogue of Proposition \ref{lap53} is shown below.

Proposition. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) with a fixed positive definite Hermitian product \(\langle\cdot ,\cdot\rangle\). The linear mapping \(T:V\rightarrow V\) is unitary if and only if \(T^{*}T=I\).

Proposition. Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\) with an inner product \(\langle\cdot ,\cdot\rangle\) and \(\dim (V)\geq 1\). Let \(\{v_{1},\cdots ,v_{n}\}\) be an orthogonal basis for \(V\). We define
\[V_{0}=\left\{v\in V:\langle v,u\rangle =0\mbox{ for all }u\in V\right\}.\]
Then \(V_{0}\) is a subspace of \(V\), and the number of integers \(i\) such that \(\langle v_{i},v_{i}\rangle =0\) is equal to \(\dim (V_{0})\).

Theorem. (Sylvester's Theorem). Let \(V\) be a finite-dimensional vector space over \(\mathbb{R}\) with an inner product \(\langle\cdot ,\cdot\rangle\) and \(\dim (V)\geq 1\). Then there exists an integer \(r\geq 0\) having the following property: if \(\{v_{1},\cdots ,v_{n}\}\) is an orthogonal basis of \(V\), then there are precisely \(r\) integers \(i\) satisfying \(\langle v_{i},v_{i}\rangle >0\). \(\sharp\)

 

Hsien-Chung Wu