Vector Spaces


This chapter is organized into the following sections: Concepts and Definitions, Subspaces, Independence, Bases and Dimensions, Finite-Dimensional Vector Spaces, and Maximal Sets of Linearly Independent Vectors.

\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}

Concepts and Definitions.

Let \({\cal F}\) be either the set \(\mathbb{R}\) of all real numbers or the set \(\mathbb{C}\) of all complex numbers, and let \(V\) be a given set over the scalar field \({\cal F}\) such that the addition and scalar multiplication are closed in the following sense:

  • (addition) if \(x,y\in V\), then \(x+y\in V\);
  • (scalar multiplication) if \(\alpha\in {\cal F}\) and \(x\in V\), then \(\alpha x\in V\).

Example. We take \(V\) as the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\). Then \({\bf x}\in\mathbb{R}^{n}\) can be expressed as \({\bf x}=(x_{1},x_{2},\cdots ,x_{n})\). For any \({\bf x},{\bf y}\in V=\mathbb{R}^{n}\), the addition is given by\[{\bf x}+{\bf y}=(x_{1},\cdots ,x_{n})+(y_{1},\cdots ,y_{n})=(x_{1}+y_{1},\cdots ,x_{n}+y_{n})\in\mathbb{R}^{n}.\] Let \(\alpha\in\mathbb{R}\). The scalar multiplication is given by \[\alpha {\bf x}=\alpha (x_{1},\cdots ,x_{n})=(\alpha x_{1},\cdots ,\alpha x_{n})\in\mathbb{R}^{n}.\]

The same addition and scalar multiplication can also apply to the \(n\)-tuple space \({\cal F}^{n}\), where \({\cal F}\) is a scalar field. \(\sharp\)
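For a concrete numerical illustration of these operations (with \(n=3\)), we may compute
\[(1,2,3)+(4,5,6)=(5,7,9)\mbox{ and }2(1,2,3)=(2,4,6),\]
and both results indeed remain in \(\mathbb{R}^{3}\).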

\begin{equation}{\label{lad146}}\tag{1}\mbox{}\end{equation}

Definition \ref{lad146}. Let \(V\) be a given set over the scalar field \({\cal F}\) such that the addition and scalar multiplication are closed. We say that \(V\) is a vector space over the scalar field \({\cal F}\) when  the following conditions are satisfied.

  • There is an element \(\theta\in V\) satisfying \[\theta +x=x+\theta =x\] for any \(x\in V\). This element \(\theta\) is called the zero element.
  • For any element \(x\in V\), there exists an element \(y\in V\) satisfying \[x+y=y+x=\theta .\] In this case, we also write \(y=-x\) and call \(-x\) the inverse element of \(x\).
  • For any \(x\in V\), we have \(1x=x\).
  • (associativity) For any elements \(x,y,z\in V\), we have \[(x+y)+z=x+(y+z).\]
  • (commutativity) For any elements \(x,y\in V\), we have \[x+y=y+x.\]
  • For any \(\alpha\in {\cal F}\) and \(x,y\in V\), we have \[\alpha (x+y)=\alpha x+\alpha y.\]
  • For any \(\alpha ,\beta\in {\cal F}\) and \(x\in V\), we have \[(\alpha +\beta )x=\alpha x+\beta x.\]
  • For any \(\alpha ,\beta\in {\cal F}\) and \(x\in V\), we have \[(\alpha\beta )x=\alpha (\beta x).\]

The elements of \({\cal F}\) are called scalars, and the elements of \(V\) are called vectors. \(\sharp\)

Example. We present many well-known vector spaces.

  1. The \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) is a vector space over \(\mathbb{R}\); similarly, \(\mathbb{C}^{n}\) is a vector space over \(\mathbb{C}\).
  2. Let \(V\) be the set of all \(m\times n\) matrices with entries in \({\cal F}\). The vector addition in \(V\) is defined as the matrix addition, and the scalar multiplication in \(V\) is defined as the usual multiplication of matrices by scalars. Then \(V\) forms a vector space over \({\cal F}\).
  3. Let \({\cal F}\) be a scalar field, and let \(S\) be a nonempty set. Let \(V\) be the set of all functions from the set \(S\) into \({\cal F}\). The vector addition in \(V\) is defined as the usual addition of functions, and the scalar multiplication in \(V\) is defined as the pointwise product of scalars and functions. Then \(V\) forms a vector space over \({\cal F}\).
  4. Let \({\cal F}\) be a scalar field, and let \(V\) be the set of all polynomials with coefficients in \({\cal F}\). The elements of \(V\) have the form \[c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{n}x^{n},\] where \(c_{i}\in {\cal F}\) for \(i=0,1,\cdots ,n\). Since the sum of any two polynomials is a polynomial, and the product of a scalar and a polynomial is also a polynomial, the set \(V\) forms a vector space over \({\cal F}\). \(\sharp\)

Proposition. Let \(V\) be a vector space over the scalar field \({\cal F}\). Then, we have the following properties.

(i) The zero element in \(V\) is unique.

(ii) The inverse element of each element in \(V\) is unique.

Proof. To prove part (i), suppose that \(z\) is an element in \(V\) satisfying \(x+z=z+x=x\) for all \(x\in V\). If we take \(x=\theta\), then we have

\[z=\theta +z=z+\theta =\theta ,\]

which proves the uniqueness.

To prove part (ii), given any element \(x\in V\), if there exist \(y,z\in V\) satisfying

\begin{align*} x+y & =y+x\\ & =x+z\\ & =z+x\\ & =\theta ,\end{align*}

then we want to show \(y=z\). Now, we have

\begin{align*} y & =y+\theta\\ & =y+(x+z)\\ & =(y+x)+z\\ & =\theta +z\\ & =z,\end{align*}

which proves the desired result. This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lap3}}\tag{2}\mbox{}\end{equation}

Proposition \ref{lap3}. For \(x\in V\) and \(\alpha\in {\cal F}\), \(\alpha x=\theta\) if and only if \(\alpha =0\) or \(x=\theta\).

Proof. We first show that \(\alpha =0\) or \(x=\theta\) implies \(\alpha x=\theta\). We consider the following cases.

  • If \(\alpha =0\), then
    \begin{align*} x+\alpha x & =1x+0x\\ & =(1+0)x\\ & =1x=x,\end{align*}
    which says that \(x+\alpha x=x\); adding \(-x\) to both sides yields \(\alpha x=\theta\).
  • If \(x=\theta\), then we have
    \begin{align*} \alpha x+\alpha\theta & =\alpha\theta +\alpha\theta\\ & =\alpha (\theta +\theta )\\ & =\alpha\theta ,\end{align*}
    which implies \(\alpha x=\theta\) after adding \(-\alpha\theta\) to both sides.

Suppose that \(\alpha x=\theta\). If \(\alpha =0\), then we are done. Therefore, we assume that \(\alpha\neq 0\). Then, we also have

\begin{align*} \frac{1}{\alpha}(\alpha x) & =\frac{1}{\alpha}\theta\\ & =\theta ,\end{align*}

which implies \((\frac{1}{\alpha}\alpha )x=\theta\). Therefore, we obtain \(1x=x=\theta\). This completes the proof. \(\blacksquare\)

For any element \(x\in V\), the inverse element of \(x\) is denoted by \(-x\). The element \(-1x\) means the scalar multiplication of \(x\) by the scalar \(-1\). We expect the equality \(-1x=-x\), which is guaranteed below.

\begin{equation}{\label{lap149}}\tag{3}\mbox{}\end{equation}

Proposition \ref{lap149}. Let \(V\) be a vector space over the scalar field \({\cal F}\). For any \(x\in V\), we have \(-1x=-x\).

Proof. The idea is to show \(x+(-1x)=\theta\). Now, we have

\begin{align*} x+(-1x) & =1x+(-1)x\\ & =(1+(-1))x\\ & =\theta x\\ & =\theta\end{align*}

by Proposition \ref{lap3}. This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lap147}}\tag{4}\mbox{}\end{equation}

Proposition \ref{lap147}. (Cancellation law for vector addition). Let \(V\) be a vector space over the scalar field \({\cal F}\). Given any \(x,y,z\in V\), suppose that \(x+z=y+z\). Then, we have \(x=y\).

Proof. There exists an inverse element \(v\in V\) of \(z\) satisfying \(v+z=\theta\). Therefore, we have

\begin{align*} x & =x+\theta\\ & =x+(v+z)\\ & =(x+z)+v\\ & =(y+z)+v\\ & =y+(z+v)\\ & =y+\theta =y.\end{align*}

This completes the proof. \(\blacksquare\)

\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}

Subspaces.

The concept of a subspace, and in particular of a spanned subspace, plays an important role in studying the structure of a vector space. For example, the basis and dimension of a vector space encode information about its structure.

Let \(V\) be a vector space over the scalar field \({\cal F}\). A subspace of \(V\) is a subset \(W\) of \(V\) which is itself a vector space over \({\cal F}\) with the operations of vector addition and scalar multiplication on \(V\). In other words, all of the conditions given in Definition \ref{lad146} should be satisfied for \(W\). Since those conditions have been satisfied for \(V\), and \(W\) is a subset of \(V\), it follows that \(W\) is a subspace of \(V\) if and only if the following four conditions are satisfied:

  • \(W\) is closed under vector addition, i.e., \(u+v\in W\) for \(u,v\in W\);
  • \(W\) is closed under scalar multiplication, i.e., \(\alpha u\in W\) for \(\alpha\in {\cal F}\) and \(u\in W\);
  • \(W\) has a zero vector;
  • If \(w\in W\), then \(-w\in W\), i.e., each vector in \(W\) has an additive inverse in \(W\).

\begin{equation}{\label{lar148}}\tag{5}\mbox{}\end{equation}

Remark \ref{lar148}. Let \(V\) be a vector space over the scalar field \({\cal F}\) with the zero vector \(\theta_{V}\). If \(W\) is a subspace of \(V\), then one of the above conditions says that \(W\) has a zero vector \(\theta_{W}\). We claim that \(\theta_{V}=\theta_{W}\). Since \(w+\theta_{W}=w\) for all \(w\in W\), we have

\begin{align*} w+\theta_{W} & =w\\ & =w+\theta_{V}.\end{align*}

According to Proposition \ref{lap147}, it follows that \(\theta_{V}=\theta_{W}\). \(\sharp\)

\begin{equation}{\label{lap150}}\tag{6}\mbox{}\end{equation}

Proposition \ref{lap150}. Let \(V\) be a vector space over the scalar field \({\cal F}\) with zero vector \(\theta\), and let \(W\) be a subset of \(V\). Then \(W\) is a subspace of \(V\) if and only if the following conditions are satisfied:

  • \(W\) is closed under vector addition;
  • \(W\) is closed under scalar multiplication;
  • the zero vector \(\theta\) in \(V\) is also in \(W\).

Proof. From Remark \ref{lar148}, we immediately have that if \(W\) is a subspace of \(V\), then the desired conditions are satisfied. For the converse, from the previous observations (check it!), we just need to claim that each vector in \(W\) has an additive inverse in \(W\). Since \(W\) is closed under scalar multiplication, we must have \(-1w\in W\) for each \(w\in W\). Since \(-1w=-w\) by Proposition \ref{lap149}, we complete the proof. \(\blacksquare\)

\begin{equation}{\label{lap56}}\tag{7}\mbox{}\end{equation}

Proposition \ref{lap56}. Let \(V\) be a vector space over the scalar field \({\cal F}\). A nonempty subset \(W\) of \(V\) is a subspace of \(V\) if and only if \(\alpha u+v\in W\) for any \(u,v\in W\) and \(\alpha\in {\cal F}\).

Proof. Suppose that \(\alpha u+v\in W\) for any \(u,v\in W\) and \(\alpha\in {\cal F}\). Since \(W\) is nonempty, we can choose some \(w\in W\); taking \(\alpha =-1\) and \(u=v=w\), we obtain \(\theta =(-1)w+w\in W\) by Proposition \ref{lap149}. Therefore, we also have \(\alpha u=\alpha u+\theta\in W\) and \(u+v=1u+v\in W\). This shows that \(W\) is a subspace by Proposition \ref{lap150}. Conversely, if \(W\) is a subspace, then \(\alpha u\in W\) for any \(u\in W\) and \(\alpha\in {\cal F}\), which also implies \(\alpha u+v\in W\) for any \(u,v\in W\) and \(\alpha\in {\cal F}\) by Proposition \ref{lap150} again. This completes the proof. \(\blacksquare\)
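To illustrate how Proposition \ref{lap56} is used in practice, consider the subset \(W=\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}+x_{2}=0\}\), which is nonempty since \((0,0)\in W\). For any \(u=(u_{1},u_{2}),v=(v_{1},v_{2})\in W\) and \(\alpha\in\mathbb{R}\), we have
\[(\alpha u_{1}+v_{1})+(\alpha u_{2}+v_{2})=\alpha (u_{1}+u_{2})+(v_{1}+v_{2})=0,\]
which says \(\alpha u+v\in W\). Therefore, \(W\) is a subspace of \(\mathbb{R}^{2}\).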

Example. We present many interesting subspaces.

(i) Let \(V\) be a vector space over the scalar field \({\cal F}\).

  • \(V\) itself is a subspace of \(V\).
  • \(\{\theta\}\) is a subspace of \(V\), which is also called the {\bf zero subspace} of \(V\).

(ii) Let \(V=\mathbb{R}^{n}\), and let \(W\) be the set of vectors in \(V\) whose last coordinate is equal to \(0\). Then \(W\) is a subspace of \(V\). In this case, we could identify \(W\) with \(\mathbb{R}^{n-1}\).

(iii) The space of polynomial functions over the scalar field \({\cal F}\) is a subspace of the space of all functions from \({\cal F}\) into \({\cal F}\).

(iv) In \({\cal F}^{n}\), the subset defined by

\[W=\left\{(0,\alpha_{2},\cdots ,\alpha_{n}):\alpha_{i}\in {\cal F}\mbox{ for }i=2,3,\cdots ,n\right\}\]

is a subspace of \({\cal F}^{n}\).

(v) Let \(V\) be the space of all functions from \(\mathbb{R}\) into \(\mathbb{R}\). The set of all continuous real-valued functions on \(\mathbb{R}\) is a subspace of \(V\).

(vi) Let \(V\) be the space of all \(n\times n\) matrices over the scalar field \({\cal F}\).

  • The set of all symmetric matrices is a subspace of \(V\).
  • The set of all diagonal matrices is a subspace of \(V\). \(\sharp\)

We shall present how we can form a new subspace from some other subspaces. Let \(U\) be a universal set, and let \(\Lambda\) be a set of indices. We consider the collection \(\{W_{\lambda}\}_{\lambda\in\Lambda}\) of subsets of \(U\), which means that \(W_{\lambda}\subseteq U\) for all \(\lambda\in\Lambda\). Then we can define the intersection as follows:

\[W=\bigcap_{\lambda\in\Lambda}W_{\lambda}.\]

An element \(w\) belongs to \(W\) if and only if \(w\in W_{\lambda}\) for all \(\lambda\in\Lambda\).

\begin{equation}{\label{lap57}}\tag{8}\mbox{}\end{equation}

Proposition \ref{lap57}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(\{W_{\lambda}\}_{\lambda\in\Lambda}\) be any collection of subspaces of \(V\). Then \(W=\bigcap_{\lambda\in\Lambda}W_{\lambda}\) is a subspace of \(V\).

Proof. Since each \(W_{\lambda}\) is a subspace, we have \(\theta\in W_{\lambda}\) for all \(\lambda\in\Lambda\), which says that \(W\neq\emptyset\) and \(\theta\in W\). Given any \(u,v\in W\) and \(\alpha\in {\cal F}\), by the definition of \(W\), we have \(u,v\in W_{\lambda}\) for all \(\lambda\in\Lambda\). Since each \(W_{\lambda}\) is a subspace, Proposition \ref{lap56} says \(\alpha u+v\in W_{\lambda}\) for all \(\lambda\in\Lambda\), which implies \(\alpha u+v\in W\). Using Proposition \ref{lap56} again, we conclude that \(W\) is a subspace of \(V\), and the proof is complete. \(\blacksquare\)

Inspired by Proposition \ref{lap57}, we may want to ask whether the union of two subspaces of \(V\) is a subspace of \(V\). The answer is negative. Suppose that \(W_{1}\) and \(W_{2}\) are two subspaces of \(V\). It is easy to show that \(W_{1}\cup W_{2}\) contains the zero vector and is closed under scalar multiplication. However, the union \(W_{1}\cup W_{2}\) may not be closed under vector addition. A counterexample can be easily constructed, and it is left as an exercise.
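As a hint for this exercise, one possible counterexample is given by the two coordinate axes in \(\mathbb{R}^{2}\): taking
\[W_{1}=\{(x,0):x\in\mathbb{R}\}\mbox{ and }W_{2}=\{(0,y):y\in\mathbb{R}\},\]
we have \((1,0)+(0,1)=(1,1)\not\in W_{1}\cup W_{2}\).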

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(\{x_{1},\cdots ,x_{n}\}\) be a finite subset of \(V\). Given \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\), the following expression

\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\]

is called a linear combination of \(\{x_{1},\cdots ,x_{n}\}\). \(\sharp\)

\begin{equation}{\label{lap4}}\tag{9}\mbox{}\end{equation}

Proposition \ref{lap4}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S=\{x_{1},\cdots ,x_{n}\}\) be a finite subset of \(V\). The set of all linear combinations of \(\{x_{1},\cdots ,x_{n}\}\) is a subspace of \(V\).

Proof. Let
\[W=\left\{\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}:\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\right\}.\]
We are going to show that \(W\) is a subspace of \(V\). Let

\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\mbox{ and }\beta_{1}x_{1}+\beta_{2}x_{2}+\cdots +\beta_{n}x_{n}\]

be any two elements in \(W\). Then, we have
\begin{align*}
& \left (\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\right )+\left (\beta_{1}x_{1}+\beta_{2}x_{2}+\cdots +\beta_{n}x_{n}\right )\\ & \quad =\left (\alpha_{1}+\beta_{1}\right )x_{1}+\left (\alpha_{2}+\beta_{2}\right )x_{2}+\cdots +\left (\alpha_{n}+\beta_{n}\right )x_{n}\in W.\end{align*}

On the other hand, given any \(\gamma\in {\cal F}\), we also have
\[\gamma\left (\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}\right )
=(\gamma\alpha_{1})x_{1}+(\gamma\alpha_{2})x_{2}+\cdots +(\gamma\alpha_{n})x_{n}\in W.\]

Finally, the zero element \(\theta\) of \(V\) can be expressed as
\[\theta =0x_{1}+0x_{2}+\cdots +0x_{n},\]
which shows that \(\theta\in W\). This completes the proof. \(\blacksquare\)

In Proposition \ref{lap4}, we say that the subspace \(W\) is generated or spanned by \(S=\{x_{1},\cdots ,x_{n}\}\). In general, we are going to introduce the concept of a subspace spanned by a subset \(S\) that may not be a finite subset of \(V\). Let \(V\) be a vector space over the scalar field \({\cal F}\). Given any subset \(S\) of \(V\), we are going to find the smallest subspace \(W_{S}\) of \(V\) that contains \(S\). This means that if \(W\) is any subspace of \(V\) containing \(S\), then \(W_{S}\subseteq W\). From Proposition \ref{lap57}, this smallest subspace \(W_{S}\) can be obtained by taking the intersection of all subspaces of \(V\) which contain \(S\).

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). The subspace spanned by \(S\) is defined to be the intersection of all subspaces of \(V\) which contain \(S\). In this case, we also write \(\mbox{span}(S)\) to denote the subspace spanned by \(S\). \(\sharp\)

\begin{equation}{\label{lar153}}\tag{10}\mbox{}\end{equation}

Remark \ref{lar153}. Since the intersection of all subspaces containing the empty set \(\emptyset\) is \(\{\theta\}\), we see that the empty set \(\emptyset\) spans \(\{\theta\}\), i.e., \(\mbox{span}(\emptyset )=\{\theta\}\). \(\sharp\)

\begin{equation}{\label{lap58}}\tag{11}\mbox{}\end{equation}

Proposition \ref{lap58}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). The subspace \(\mbox{span}(S)\) consists of all linear combinations of vectors in \(S\).

Proof. Let \(L\) be the set of all linear combinations of vectors in \(S\). We shall prove that \(L=\mbox{span}(S)\). Given any \(x\in L\), there exist \(\alpha_{1},\cdots ,\alpha_{m}\in {\cal F}\) and \(v_{1},\cdots ,v_{m}\in S\) satisfying
\[x=\alpha_{1}v_{1}+\cdots +\alpha_{m}v_{m}.\]
It is clear that \(v_{1},\cdots ,v_{m}\in\mbox{span}(S)\). Since \(\mbox{span}(S)\) is a subspace, it follows that \(x\in\mbox{span}(S)\). Therefore, we obtain the inclusion \(L\subseteq\mbox{span}(S)\). On the other hand, we are going to claim that \(L\) is a subspace of \(V\) containing \(S\). It is obvious that \(L\) contains \(S\). Now, for any \(x,y\in L\) and \(\gamma\in {\cal F}\), we have
\[x=\alpha_{1}v_{1}+\cdots +\alpha_{m}v_{m}\]
for some \(\alpha_{1},\cdots ,\alpha_{m}\in {\cal F}\) and \(v_{1},\cdots ,v_{m}\in S\) and
\[y=\beta_{1}u_{1}+\cdots +\beta_{n}u_{n}\]
for some \(\beta_{1},\cdots ,\beta_{n}\in {\cal F}\) and \(u_{1},\cdots ,u_{n}\in S\). Therefore, we obtain
\[\gamma x+y=\sum_{i=1}^{m}(\gamma\alpha_{i})v_{i}+\sum_{j=1}^{n}\beta_{j}u_{j}\in L.\]
Proposition \ref{lap56} says that \(L\) is a subspace of \(V\). Since \(\mbox{span}(S)\) is the smallest subspace which contains \(S\), we must have \(\mbox{span}(S)\subseteq L\). Therefore, we obtain \(L=\mbox{span}(S)\). This completes the proof. \(\blacksquare\)

Example. In \(\mathbb{R}^{3}\), consider the three vectors \(v_{1}=(1,1,0)\), \(v_{2}=(1,0,1)\) and \(v_{3}=(0,1,1)\). We shall claim \(\mathbb{R}^{3} =\mbox{span}(\{v_{1},v_{2},v_{3}\})\). Given any \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\), there exist \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathbb{R}\) satisfying
\[(x_{1},x_{2},x_{3})=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\alpha_{3}v_{3}.\]
In fact, we can find
\begin{align*} \alpha_{1} & =\frac{1}{2}\left (x_{1}+x_{2}-x_{3}\right )\\
\alpha_{2} & =\frac{1}{2}\left (x_{1}-x_{2}+x_{3}\right )\\
\alpha_{3} & =\frac{1}{2}\left (-x_{1}+x_{2}+x_{3}\right ).\end{align*}
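These formulas can be obtained by solving the linear system \(\alpha_{1}+\alpha_{2}=x_{1}\), \(\alpha_{1}+\alpha_{3}=x_{2}\), \(\alpha_{2}+\alpha_{3}=x_{3}\). As a quick check, taking \((x_{1},x_{2},x_{3})=(2,3,5)\) gives \(\alpha_{1}=0\), \(\alpha_{2}=2\) and \(\alpha_{3}=3\), and indeed
\[0v_{1}+2v_{2}+3v_{3}=(2,0,2)+(0,3,3)=(2,3,5).\] \(\sharp\)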

Example. We take the vector space \(V={\cal F}^{5}\), where \({\cal F}\) is a scalar field. Given three vectors \(v_{1}=(1,2,0,3,0)\), \(v_{2}=(0,0,1,4,0)\) and \(v_{3}=(0,0,0,0,1)\), we shall find the subspace \(W\) spanned by \(\{v_{1},v_{2},v_{3}\}\). According to Proposition \ref{lap58}, we see that \(w\in W=\mbox{span}(\{v_{1},v_{2},v_{3}\})\) if and only if there exist \(\alpha_{1},\alpha_{2},\alpha_{3}\in {\cal F}\) satisfying
\begin{align*} w & =\alpha_{1}v_{1}+\alpha_{2}v_{2}+\alpha_{3}v_{3}\\ & =\left(\alpha_{1},2\alpha_{1},\alpha_{2},3\alpha_{1}+4\alpha_{2},\alpha_{3}\right ).\end{align*}
This says
\begin{align*} W & =\mbox{span}(\{v_{1},v_{2},v_{3}\})\\ & =\left\{\left (\alpha_{1},2\alpha_{1},\alpha_{2},3\alpha_{1}+4\alpha_{2},\alpha_{3}\right ):\alpha_{1},\alpha_{2},\alpha_{3}\in {\cal F}\right\}.\end{align*}
Alternatively, the subspace \(W\) can be described as the set of \(5\)-tuples \(w=(x_{1},x_{2},x_{3},x_{4},x_{5})\) with \(x_{i}\in {\cal F}\)  for \(i=1,\cdots ,5\) satisfying
\[x_{2}=2x_{1}\mbox{ and }x_{4}=3x_{1}+4x_{3}.\]
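For instance, the vector \(w=(1,2,5,23,7)\) satisfies \(x_{2}=2=2x_{1}\) and \(x_{4}=23=3\cdot 1+4\cdot 5=3x_{1}+4x_{3}\), and indeed
\[w=v_{1}+5v_{2}+7v_{3}=(1,2,0,3,0)+(0,0,5,20,0)+(0,0,0,0,7).\] \(\sharp\)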

Example. Let \(V\) be the vector space of all polynomial functions over the scalar field \({\cal F}\). Let \(f_{n}(x)=x^{n}\) for all \(n=0,1,2,\cdots\) and \(S=\{f_{0},f_{1},f_{2},\cdots\}\). Then, we have \(V=\mbox{span}(S)\) according to Proposition \ref{lap58}. \(\sharp\)

Example. Let \(V\) be the space of all \(2\times 2\) matrices over \(\mathbb{R}\). Given four matrices
\[\begin{array}{cccc}
M_{1}=\left [\begin{array}{cc}
1 & 1\\ 1 & 0\end{array}\right ], &
M_{2}=\left [\begin{array}{cc}
1 & 1\\ 0 & 1\end{array}\right ], &
M_{3}=\left [\begin{array}{cc}
1 & 0\\ 1 & 1\end{array}\right ], &
M_{4}=\left [\begin{array}{cc}
0 & 1\\ 1 & 1\end{array}\right ]
\end{array}\]

We shall claim \(V=\mbox{span}(\{M_{1},M_{2},M_{3},M_{4}\})\). Given any \(2\times 2\) matrix \(M\), there exist \(\alpha_{1},\cdots ,\alpha_{4}\in\mathbb{R}\) satisfying
\[M=\left [\begin{array}{cc}
m_{11} & m_{12}\\ m_{21} & m_{22}\end{array}\right ]
=\alpha_{1}M_{1}+\alpha_{2}M_{2}+\alpha_{3}M_{3}+\alpha_{4}M_{4}.\]

In fact, we can find
\begin{align*}
\alpha_{1} & =\frac{1}{3}\left (m_{11}+m_{12}+m_{21}\right )-\frac{2}{3}m_{22}\\
\alpha_{2} & =\frac{1}{3}\left (m_{11}+m_{12}+m_{22}\right )-\frac{2}{3}m_{21}\\
\alpha_{3} & =\frac{1}{3}\left (m_{11}+m_{21}+m_{22}\right )-\frac{2}{3}m_{12}\\
\alpha_{4} & =\frac{1}{3}\left (m_{12}+m_{21}+m_{22}\right )-\frac{2}{3}m_{11}.
\end{align*}
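As a quick check of these formulas, take \(M=I_{2}\), i.e., \(m_{11}=m_{22}=1\) and \(m_{12}=m_{21}=0\). Then \(\alpha_{1}=\alpha_{4}=-\frac{1}{3}\) and \(\alpha_{2}=\alpha_{3}=\frac{2}{3}\), and one can verify entrywise that
\[-\frac{1}{3}M_{1}+\frac{2}{3}M_{2}+\frac{2}{3}M_{3}-\frac{1}{3}M_{4}=\left [\begin{array}{cc}
1 & 0\\ 0 & 1\end{array}\right ].\] \(\sharp\)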

Let \(S_{1},\cdots ,S_{m}\) be subsets of a vector space \(V\) over the scalar field. The sum \(S_{1}+\cdots +S_{m}\) is defined to be
\[S_{1}+\cdots +S_{m}=\left\{s_{1}+\cdots +s_{m}:s_{i}\in S_{i}\mbox{ for all }i=1,\cdots ,m\right\}.\]

Example. Let \(V\) be the vector space of all \(2\times 2\) matrices over the scalar field \({\cal F}\). Let \(W_{1}\) be the subset of \(V\) consisting of all matrices of the form
\[\left [\begin{array}{cc}
x & y\\ z & 0
\end{array}\right ]\]

where \(x,y,z\in {\cal F}\), and let \(W_{2}\) be the subset of \(V\) consisting of all matrices of the form
\[\left [\begin{array}{cc}
x & 0\\ 0 & y
\end{array}\right ]\]

where \(x,y\in {\cal F}\). Then we can show that \(W_{1}\) and \(W_{2}\) are subspaces of \(V\). We also have
\[W_{1}\cap W_{2}=\left\{\left [\begin{array}{cc}
x & 0\\ 0 & 0
\end{array}\right ]:x\in {\cal F}\right\}.\]

It is clear that \(W_{1}+W_{2}\subseteq V\). Since
\[\left [\begin{array}{cc}
a & b\\ c & d
\end{array}\right ]=\left [\begin{array}{cc}
a & b\\ c & 0
\end{array}\right ]+\left [\begin{array}{cc}
0 & 0\\ 0 & d
\end{array}\right ]\in W_{1}+W_{2},\]

we obtain \(V=W_{1}+W_{2}\). \(\sharp\)

\begin{equation}{\label{lap65}}\tag{12}\mbox{}\end{equation}

Proposition \ref{lap65}. Let \(W_{1},\cdots ,W_{m}\) be subspaces of a vector space \(V\) over the scalar field. We define \(W=W_{1}+\cdots +W_{m}\). Then, we have the following properties.

(i) \(W\) is a subspace with \(W_{i}\subset W\) for all \(i=1,\cdots ,m\).

(ii) \(W\) is the subspace spanned by the union of \(W_{1},\cdots ,W_{m}\), i.e.,
\[W=\mbox{span}\left (\bigcup_{i=1}^{m}W_{i}\right ).\]

Proof. For part (i), note that \(W\) is a subspace by Proposition \ref{lap56}: if \(u=u_{1}+\cdots +u_{m}\) and \(v=v_{1}+\cdots +v_{m}\) with \(u_{i},v_{i}\in W_{i}\), then \(\alpha u+v=(\alpha u_{1}+v_{1})+\cdots +(\alpha u_{m}+v_{m})\in W\); moreover, each \(w_{i}\in W_{i}\) lies in \(W\), since \(w_{i}=\theta +\cdots +\theta +w_{i}+\theta +\cdots +\theta\). Part (ii) can be obtained by using arguments similar to those in the proof of Proposition \ref{lap58}. It is left as an exercise. \(\blacksquare\)

\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}

Independence.

Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). We shall introduce the concept of independence for \(S\), which can be used to define the basis and dimension for \(V\).

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(\{x_{1},\cdots ,x_{n}\}\) be a finite subset of \(V\). We say that \(x_{1},\cdots ,x_{n}\) are linearly independent over the scalar field \({\cal F}\) when
\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}=\theta\]
implies \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). We also say that \(x_{1},\cdots ,x_{n}\) are linearly dependent over the scalar field \({\cal F}\) when there exist scalars \(\alpha_{1},\cdots ,\alpha_{n}\), not all zero, satisfying
\[\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots +\alpha_{n}x_{n}=\theta .\]

\begin{equation}{\label{laex5}}\tag{13}\mbox{}\end{equation}

Example \ref{laex5}. We consider the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\). Then, the vectors
\begin{align*}
{\bf e}_{1} & =(1,0,0,\cdots ,0)\\
{\bf e}_{2} & =(0,1,0,\cdots ,0)\\
& \vdots\\
{\bf e}_{n} & =(0,0,0,\cdots ,1)
\end{align*}

are linearly independent. Indeed, if
\begin{align*} \alpha_{1}{\bf e}_{1}+\alpha_{2}{\bf e}_{2}+\cdots +\alpha_{n}{\bf e}_{n} & ={\bf 0}\\ & =(0,0,0,\cdots ,0),\end{align*}
then we obtain \((\alpha_{1},\alpha_{2},\cdots ,\alpha_{n})={\bf 0}\), which says \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). \(\sharp\)

Example. Let \(V\) be the vector space consisting of all polynomials with degree less than or equal to \(n\) and coefficients in \(\mathbb{C}\). Let
\[p_{k}(x)=x^{k}+x^{k+1}+\cdots +x^{n}\]
for \(k=0,1,\cdots ,n\). Then, we are going to claim that the set
\[\left\{p_{0}(x),p_{1}(x),p_{2}(x),\cdots ,p_{n}(x)\right\}\]
is linearly independent in \(V\). For \(\alpha_{0},\alpha_{1},\cdots ,\alpha_{n} \in\mathbb{C}\), if
\[\alpha_{0}p_{0}(x)+\alpha_{1}p_{1}(x)+\alpha_{2}p_{2}(x)+\cdots+\alpha_{n}p_{n}(x)=0,\]
then we have
\[\alpha_{0}+\left (\alpha_{0}+\alpha_{1}\right )x+\left (\alpha_{0}+\alpha_{1}+\alpha_{2}\right )x^{2}+\cdots +
\left (\alpha_{0}+\alpha_{1}+\alpha_{2}+\cdots +\alpha_{n}\right )x^{n}=0.\]

Comparing coefficients, we obtain \(\alpha_{0}=0\), then \(\alpha_{0}+\alpha_{1}=0\), and so on; successively, we must have \(\alpha_{i}=0\) for \(i=0,1,2,\cdots ,n\). This shows the linear independence. \(\sharp\)

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a subset of \(V\). The set \(S\) is called linearly independent when any finitely many distinct elements \(s_{1},\cdots ,s_{n}\) of \(S\) are linearly independent. \(\sharp\)

\begin{equation}{\label{lar154}}\tag{14}\mbox{}\end{equation}

Remark \ref{lar154}. We have some interesting observations for the linearly independent sets.

  • Since a linearly dependent set must be nonempty, the empty set is linearly independent.
  • The singleton set \(S=\{u\}\) with \(u\neq\theta\) is linearly independent. Suppose that it is not. Then there exists \(\alpha\neq 0\) such that \(\alpha u=\theta\). Therefore, we obtain the contradiction
    \[u=\frac{1}{\alpha}(\alpha u)=\frac{1}{\alpha}\theta =\theta .\] \(\sharp\)

\begin{equation}{\label{lap152}}\tag{15}\mbox{}\end{equation}

Remark \ref{lap152}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S_{1}\subseteq S_{2}\subseteq V\). Then, we have the following properties.

  • If \(S_{2}\) is linearly independent, then \(S_{1}\) is linearly independent.
  • If \(S_{1}\) is linearly dependent, then \(S_{2}\) is linearly dependent.

The proofs are left as exercises. \(\sharp\)

\begin{equation}{\label{lap6}}\tag{16}\mbox{}\end{equation}

Proposition \ref{lap6}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(\{x_{1},\cdots ,x_{n}\}\) be a finite set that spans \(V\). Given any finite subset \(\{y_{1},\cdots ,y_{m}\}\) of \(V\), if \(m>n\), then \(y_{1},\cdots ,y_{m}\) are linearly dependent.

Proof. Since \(\{x_{1},\cdots ,x_{n}\}\) spans \(V\), for each \(y_{j}\), there exist scalars \(a_{ij}\in {\cal F}\) satisfying
\[y_{j}=\sum_{i=1}^{n}a_{ij}x_{i}.\]
Given any \(m\) scalars \(\lambda_{1},\cdots ,\lambda_{m}\), we have
\begin{align*} \lambda_{1}y_{1}+\cdots +\lambda_{m}y_{m} & =\sum_{j=1}^{m}\lambda_{j}\left (\sum_{i=1}^{n}a_{ij}x_{i}\right )\\ &  =\sum_{i=1}^{n}\left (\sum_{j=1}^{m}a_{ij}\lambda_{j}\right )x_{i}.\end{align*}
Since \(m>n\), Proposition \ref{lap60} says that there exist scalars \(\lambda_{1},\cdots ,\lambda_{m}\) not all \(0\) satisfying
\[\sum_{j=1}^{m}a_{ij}\lambda_{j}=0\mbox{ for }i=1,\cdots ,n,\]
which also implies \(\lambda_{1}y_{1}+\cdots +\lambda_{m}y_{m}=\theta\). This shows that \(y_{1},\cdots ,y_{m}\) are linearly dependent, and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lal61}}\tag{17}\mbox{}\end{equation}

Proposition \ref{lal61}. Let \(V\) be a vector space over a scalar field \({\cal F}\), and let \(S\) be a linearly independent subset of \(V\). Suppose that \(u\in V\) and \(u\not\in\mbox{span}(S)\). Then, the set \(\{u\}\cup S\) is linearly independent.

Proof. Suppose that \(v_{1},\cdots ,v_{m}\) are distinct vectors in \(S\) satisfying
\begin{equation}{\label{laeq62}}\tag{18}
\lambda_{1}v_{1}+\lambda_{2}v_{2}+\cdots +\lambda_{m}v_{m}+\lambda_{m+1}u=\theta
\end{equation}

for some \(\lambda_{i}\in {\cal F}\), \(i=1,2,\cdots ,m+1\). If \(\lambda_{m+1}\neq 0\), then we have
\[u=\left (-\frac{\lambda_{1}}{\lambda_{m+1}}\right )v_{1}+\left (-\frac{\lambda_{2}}{\lambda_{m+1}}\right )v_{2}+\cdots +
\left (-\frac{\lambda_{m}}{\lambda_{m+1}}\right )v_{m}\in\mbox{span}(S),\]

which contradicts \(u\not\in\mbox{span}(S)\). Therefore, we must have \(\lambda_{m+1}=0\). From (\ref{laeq62}), we also have
\[\lambda_{1}v_{1}+\lambda_{2}v_{2}+\cdots +\lambda_{m}v_{m}=\theta .\]
Since the set \(S\) is linearly independent, we must have \(\lambda_{i}=0\) for \(i=1,\cdots ,m\). This shows that \(\{v_{1},\cdots ,v_{m},u\}\) is linearly independent, and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lap156}}\tag{19}\mbox{}\end{equation}

Proposition \ref{lap156}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a linearly independent subset of \(V\). Suppose that \(v\in V\) and \(v\not\in S\). Then \(S\cup\{v\}\) is linearly dependent if and only if \(v\in\mbox{span}(S)\).

Proof. Suppose that \(S\cup\{v\}\) is linearly dependent. Then there exist distinct vectors \(v_{1},\cdots ,v_{n}\) in \(S\cup\{v\}\) satisfying
\[\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}=\theta\]
for some scalars \(\alpha_{1},\alpha_{2},\cdots ,\alpha_{n}\) in \({\cal F}\), not all zero. If \(v_{i}\in S\) for all \(i=1,\cdots ,n\), then the linear independence of \(S\) says \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\), which is a contradiction. Therefore, one of \(v_{1},\cdots ,v_{n}\) must be \(v\). We may assume \(v_{1}=v\). This says
\[\alpha_{1}v+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}=\theta .\]
In this case, if \(\alpha_{1}=0\), then \(\alpha_{2},\cdots ,\alpha_{n}\) must all be zero, since \(v_{2},\cdots ,v_{n}\) are distinct vectors in the linearly independent set \(S\); this contradicts the fact that not all of the scalars are zero. Hence \(\alpha_{1}\neq 0\), and we obtain
\begin{align*} v & =\frac{1}{\alpha_{1}}\left (-\alpha_{2}v_{2}-\cdots -\alpha_{n}v_{n}\right )\\ & =-\left (\frac{\alpha_{2}}{\alpha_{1}}\right )v_{2}-\cdots-\left (\frac{\alpha_{n}}{\alpha_{1}}\right )v_{n},\end{align*}
which shows \(v\in\mbox{span}(S)\).

Conversely, if \(v\in\mbox{span}(S)\), then there exist distinct vectors \(u_{1},\cdots ,u_{m}\) in \(S\) satisfying
\[v=\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{m}u_{m}\]
for some \(\beta_{1},\cdots ,\beta_{m}\) in \({\cal F}\). This also says
\[-v+\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{m}u_{m}=\theta ,\]
i.e., the set \(\{v,u_{1},\cdots ,u_{m}\}\) is linearly dependent. According to Remark \ref{lap152}, we conclude that \(S\cup\{v\}\) is linearly dependent. This completes the proof. \(\blacksquare\)

Proposition. Let \(A\) be an \(n\times n\) matrix such that the column vectors \(A^{(j)}\), \(j=1,\cdots ,n\), are linearly dependent. Then \(\det (A)=0\). Equivalently, if \(\det (A)\neq 0\), then the column vectors \(A^{(j)}\), \(j=1,\cdots ,n\), are linearly independent.

Corollary. If a system of \(n\) homogeneous linear equations in \(n\) unknowns has a matrix of coefficients whose determinant is not \(0\), then this system has only the trivial solution, as can be determined by Cramer's rule.

\begin{equation}{\label{d}}\tag{D}\mbox{}\end{equation}

Bases and Dimensions.

Now, we shall assign a dimension to certain vector spaces over the scalar field \({\cal F}\). This will be done through the concept of a basis for vector spaces.

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(\mathfrak{B}\) be a subset of \(V\). The set \(\mathfrak{B}\) is called a basis for \(V\) when the set \(\mathfrak{B}\) is linearly independent and \(V=\mbox{span}(\mathfrak{B})\).

\begin{equation}{\label{lar155}}\tag{20}\mbox{}\end{equation}

Remark \ref{lar155}. We have some interesting observations.

  • A basis of a vector space is in general not unique.
  • According to Remarks \ref{lar153} and \ref{lar154}, since the empty set \(\emptyset\) is linearly independent and \(\mbox{span}(\emptyset )=\{\theta\}\), we see that \(\emptyset\) is a basis for the zero vector space \(\{\theta\}\). \(\sharp\)

Example. Let \(V\) be a vector space over the scalar field \({\cal F}\) with a basis \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\). For any element \(v\in V\), according to Proposition \ref{lap58}, there exist \(\alpha_{1},\cdots ,\alpha_{n}\in {\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]

Example. We consider the Euclidean space \(\mathbb{R}^{n}\). Then, the vectors \({\bf e}_{1},{\bf e}_{2},\cdots ,{\bf e}_{n}\) defined in Example \ref{laex5} form a basis of \(\mathbb{R}^{n}\), since they are linearly independent and
\[\mathbb{R}^{n}=\mbox{span}\left (\{{\bf e}_{1},{\bf e}_{2},\cdots ,{\bf e}_{n}\}\right ).\]
This particular basis of \(\mathbb{R}^{n}\) is called the standard basis. \(\sharp\)

Example. Let \(V\) be the vector space consisting of all \(m\times n\) matrices over the scalar field \({\cal F}\). Let \(M_{ij}\) denote the \(m\times n\) matrix whose only non-zero entry is \(1\) in the \(i\)th row and \(j\)th column. Then, the set
\[\mathfrak{B}=\left\{M_{ij}:i\in\{1,2,\cdots ,m\}\mbox{ and }j\in\{1,2,\cdots ,n\}\right\}\]
is a basis for \(V\). \(\sharp\)
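The verification, which is left implicit above, is straightforward: any \(A\in V\) with entries \(a_{ij}\) expands as
\[A=\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}M_{ij},\]
so \(\mathfrak{B}\) spans \(V\); and since the \((i,j)\)-entry of the linear combination \(\sum_{i,j}c_{ij}M_{ij}\) is exactly \(c_{ij}\), the combination equals the zero matrix only when all \(c_{ij}=0\), which gives the linear independence.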

Example. Let \(V\) be the vector space consisting of all polynomials with degree less than or equal to \(n\) and coefficients in \(\mathbb{C}\). Then, the set \(\{1,x,x^{2},\cdots ,x^{n}\}\) is a basis for \(V\). If \(V\) is the vector space consisting of all polynomials with coefficients in \(\mathbb{C}\), then the infinite set \(\{1,x,x^{2},\cdots ,x^{n},\cdots\}\) is a basis for \(V\). \(\sharp\)

Example. Let \({\cal F}=\mathbb{C}\), and let \(V\) be the space of polynomial functions over \(\mathbb{C}\). This says that the element \(f\) in \(V\) has the form of
\begin{equation}{\label{laeq59}}\tag{21}
f(x)=c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{n}x^{n}.
\end{equation}

Let \(f_{k}=x^{k}\) for \(k=0,1,2,\cdots\). Then \(\mathfrak{B}=\{f_{0},f_{1},f_{2},\cdots\}\) is an infinite set. We shall claim that \(\mathfrak{B}\) is a basis of \(V\). From (\ref{laeq59}), since each element \(f\in V\) can be expressed as
\[f=c_{0}f_{0}+c_{1}f_{1}+\cdots +c_{n}f_{n},\]
the set \(\mathfrak{B}\) spans \(V\). To show that the set \(\mathfrak{B}\) is independent, we must show that each finite subset of \(\mathfrak{B}\) is independent. Now, suppose that
\[c_{0}f_{0}+c_{1}f_{1}+\cdots +c_{n}f_{n}=0,\]
which says
\[c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{n}x^{n}=0\]
for all \(x\in\mathbb{C}\). In other words, every \(x\in\mathbb{C}\) is a root of \(f\). Since a non-zero polynomial of degree at most \(n\) can have at most \(n\) roots, we must have \(c_{i}=0\) for all \(i=0,1,2,\cdots ,n\). This shows that the infinite set \(\mathfrak{B}\) is indeed a basis of \(V\). \(\sharp\)

Proposition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a finite subset of \(V\) such that \(V=\mbox{span}(S)\). Then, there exists a subset \(\mathfrak{B}\) of \(S\) such that \(\mathfrak{B}\) is a basis for \(V\). In other words, such a vector space has a finite basis.

Proof. If \(S=\emptyset\) or \(S=\{\theta\}\), then \(V=\{\theta\}\), which says that \(\emptyset\) is a basis for \(V\). Otherwise, \(S\) contains a non-zero vector \(v_{1}\). The singleton set \(\{v_{1}\}\) is linearly independent by Remark \ref{lar154}. Therefore, we can successively choose vectors \(v_{2},\cdots , v_{n}\) in \(S\) such that \(\{v_{1},\cdots ,v_{n}\}\) is linearly independent. Since \(S\) is finite, we eventually reach a maximal linearly independent subset \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) of \(S\). This means that, for each \(u\in S\) with \(u\not\in\mathfrak{B}\), the set \(\mathfrak{B}\cup\{u\}\) is linearly dependent. We claim that this set \(\mathfrak{B}\) is a basis for \(V\). Since \(\mathfrak{B}\) is independent by construction, it remains to show \(\mbox{span}(\mathfrak{B})=V\). Given any \(u\in S\), if \(u\in\mathfrak{B}\), then \(u\in\mbox{span}(\mathfrak{B})\). If \(u\not\in\mathfrak{B}\), the construction of \(\mathfrak{B}\) says that the set \(\mathfrak{B}\cup\{u\}\) is linearly dependent, so Proposition \ref{lap156} gives \(u\in\mbox{span}(\mathfrak{B})\). Therefore, we have the inclusion \(S\subseteq\mbox{span}(\mathfrak{B})\), i.e., \(\mbox{span}(S)\subseteq\mbox{span}(\mbox{span}(\mathfrak{B}))\). Since \(\mbox{span}(\mathfrak{B})\) is a subspace, we must have \(\mbox{span}(S)\subseteq\mbox{span}(\mathfrak{B})\). Finally, the assumption \(V=\mbox{span}(S)\) says that \(\mbox{span}(\mathfrak{B})=V\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{e}}\tag{E}\mbox{}\end{equation}

Finite-Dimensional Vector Spaces.

The vector space \(V\) over the scalar field \({\cal F}\) is called finite-dimensional when \(V\) has a basis \(\mathfrak{B}\) that is a finite set. In order to introduce the concept of dimension, the following result is very interesting and useful.

\begin{equation}{\label{lac7}}\tag{22}\mbox{}\end{equation}

Proposition \ref{lac7}. Let \(V\) be a vector space over the scalar field \({\cal F}\). If \(\{x_{1},\cdots ,x_{n}\}\) and \(\{y_{1},\cdots ,y_{m}\}\) are two bases, then \(m=n\).

Proof. Proposition \ref{lap6} says that \(m>n\) and \(n>m\) are impossible. This shows \(n=m\). \(\blacksquare\)

Proposition \ref{lac7} says that any basis of \(V\) has the same number of elements. Therefore, it allows us to define the dimension of a finite-dimensional vector space.

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). The dimension of \(V\) is the number of elements of a given basis of \(V\). The dimension of \(V\) will be denoted by \(\dim (V)\).

Remark. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\). According to the above properties, we have the following observations.

  • Any subset of \(V\) which contains more than \(n\) elements is linearly dependent.
  • No subset of \(V\) which contains fewer than \(n\) elements can span \(V\). \(\sharp\)

Example. We have the following interesting examples.

(i) The scalar field \({\cal F}\) is itself a vector space over the scalar field \({\cal F}\) with dimension \(1\), in which \(\{1\}\) is a basis of \({\cal F}\).

(ii) If \({\cal F}\) is a scalar field, then \(\dim ({\cal F}^{n})=n\), since the standard basis for \({\cal F}^{n}\) contains \(n\) elements.

(iii) Let \({\cal M}\) be the space of all \(m\times n\) matrices over \({\cal F}\). We have \(\dim ({\cal M})=m\cdot n\), since the \(m\cdot n\) matrices which have \(1\) in the \(ij\)-th entry and \(0\) elsewhere form a basis for \({\cal M}\). \(\sharp\)

Let \(V\) be a vector space over the scalar field \({\cal F}\). Then, the zero subspace of \(V\) is spanned by the zero element \(\theta\). However, the set \(\{\theta\}\) is linearly dependent, which says that it cannot be a basis. For this reason, we shall agree that the zero subspace has dimension zero. Alternatively, we could argue that the empty set is a basis for the zero subspace according to Remark \ref{lar155}.

\begin{equation}{\label{lat159}}\tag{23}\mbox{}\end{equation}

Theorem \ref{lat159}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) be a finite subset of \(V\). Then \(\mathfrak{B}\) is a basis for \(V\) if and only if each \(v\neq\theta\) in \(V\) can be uniquely expressed as
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n},\]
where \(\alpha_{1},\cdots ,\alpha_{n}\) are scalars in \({\cal F}\).

Proof. Let \(\mathfrak{B}\) be a basis for \(V\). By definition, if \(v\in V=\mbox{span}(\mathfrak{B})\), then \(v\) is a linear combination of the vectors in \(\mathfrak{B}\). Next, we want to show the uniqueness. Suppose that
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\mbox{ and }
v=\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}\]

for some \(\alpha_{1},\cdots ,\alpha_{n}\) and \(\beta_{1},\cdots ,\beta_{n}\) in \({\cal F}\). Subtracting one expression from the other, we obtain
\[\theta =(\alpha_{1}-\beta_{1})v_{1}+(\alpha_{2}-\beta_{2})v_{2}+\cdots +(\alpha_{n}-\beta_{n})v_{n}.\]
The independence of \(\mathfrak{B}\) shows that \(\alpha_{i}=\beta_{i}\) for \(i=1,2,\cdots ,n\).

For the converse, since \(\mathfrak{B}\) is a subset of \(V\), we immediately have \(\mbox{span}(\mathfrak{B})\subseteq V\). On the other hand, since each \(v\in V\) can be expressed as a linear combination of the vectors in \(\mathfrak{B}\), we also have \(V\subseteq \mbox{span}(\mathfrak{B})\), which gives the equality \(V=\mbox{span}(\mathfrak{B})\). It remains to claim that \(\mathfrak{B}\) is independent. Suppose that \(\theta =\alpha_{1}v_{1}+\cdots +\alpha_{n}v_{n}\) for some \(\alpha_{1},\cdots ,\alpha_{n}\) in \({\cal F}\). For each \(\alpha_{i}\), there exist \(\beta_{i},\gamma_{i}\in {\cal F}\) satisfying \(\alpha_{i}=\beta_{i}-\gamma_{i}\). Therefore, we obtain
\[\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}=\gamma_{1}v_{1}+\gamma_{2}v_{2}+\cdots +\gamma_{n}v_{n}.\]
Since this expression must be unique, we have \(\beta_{i}=\gamma_{i}\), i.e., \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). This shows the independence of \(\mathfrak{B}\), and the proof is complete. \(\blacksquare\)
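For example, relative to the standard basis \(\{{\bf e}_{1},{\bf e}_{2},{\bf e}_{3}\}\) of \(\mathbb{R}^{3}\), the unique expression of a vector is
\[(x_{1},x_{2},x_{3})=x_{1}{\bf e}_{1}+x_{2}{\bf e}_{2}+x_{3}{\bf e}_{3},\]
so the unique scalars in Theorem \ref{lat159} are precisely the coordinates of the vector with respect to the basis.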

\begin{equation}{\label{lap63}}\tag{24}\mbox{}\end{equation}

Proposition \ref{lap63}. Let \(V\) be a finite-dimensional vector space over a scalar field \({\cal F}\), and let \(W\) be a subspace of \(V\). Let \(S_{0}\) be any linearly independent subset of \(W\). Then, the set \(S_{0}\) is finite, and there exists another subset \(S\) of \(W\) such that \(S_{0}\cup S\) is a basis of \(W\).

Proof. Let \(\dim (V)=n\). It is obvious that \(S_{0}\) is also a linearly independent subset of \(V\), which says that \(S_{0}\) contains no more than \(n\) elements. Next, we want to extend \(S_{0}\) to form a basis for \(W\). If \(S_{0}\) spans \(W\), then \(S_{0}\) is a basis for \(W\), and we are done. If \(S_{0}\) does not span \(W\), using Proposition \ref{lal61}, we can find a \(u_{1}\in W\) such that \(S_{1}\equiv S_{0}\cup\{u_{1}\}\) is linearly independent. If \(S_{1}\) spans \(W\), then we are done. If \(S_{1}\) does not span \(W\), then we can also apply Proposition \ref{lal61} to find another \(u_{2}\in W\) such that
\[S_{2}\equiv S_{1}\cup\{u_{2}\}=S_{0}\cup\{u_{1},u_{2}\}\]
is linearly independent. We can continue in this way, in not more than \(n\) steps, to obtain a set
\[S_{m}=S_{0}\cup\{u_{1},\cdots ,u_{m}\}\]
which is a basis for \(W\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lac64}}\tag{25}\mbox{}\end{equation}

Corollary \ref{lac64}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). We have the following properties.

(i) If \(W\) is a subspace of \(V\), then \(\dim (W)\leq\dim (V)\).

(ii) If \(W\) is a proper subspace of \(V\), then \(\dim (W)<\dim (V)\).

Proof. To prove part (i), if \(W=\{\theta\}\), then \(\dim (W)=0\leq\dim (V)\); hence we can assume that \(W\) contains a vector \(u\neq\theta\). From Proposition \ref{lap63} and its proof, there is a basis of \(W\) which contains \(u\) and contains no more than \(\dim (V)\) elements. Therefore, we must have \(\dim (W)\leq\dim (V)\).

To prove part (ii), suppose that \(\dim (W)=\dim (V)\). Since \(W\) is a proper subspace of \(V\), there exists \(\widehat{u}\in V\) with \(\widehat{u}\not\in W\). If we adjoin \(\widehat{u}\) to a basis of \(W\), we obtain a linearly independent subset of \(V\) with \(1+\dim (W)=1+\dim (V)\) elements, which is impossible, since any subset of \(V\) with more than \(\dim (V)\) elements is linearly dependent. This contradiction says that we must have \(\dim (W)<\dim (V)\), and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lac8}}\tag{26}\mbox{}\end{equation}

Corollary \ref{lac8}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\). Let \(r\in\mathbb{N}\) with \(r<n\), and let \(u_{1},\cdots ,u_{r}\in V\) be linearly independent. Then, there exist \(v_{r+1},\cdots ,v_{n}\in V\) such that \(\{u_{1},\cdots ,u_{r},v_{r+1},\cdots ,v_{n}\}\) forms a basis of \(V\). In other words, every linearly independent subset of \(V\) can be extended to a basis for \(V\).

Corollary. Let \({\cal F}\) be a scalar field, and let \(A\) be an \(n\times n\) matrix over \({\cal F}\). If the row vectors of \(A\) form a linearly independent subset of \({\cal F}^{n}\), then \(A\) is invertible.

Proof. Let \(v_{1},\cdots ,v_{n}\) be the row vectors of \(A\), and let \(W\) be the subspace of \({\cal F}^{n}\) spanned by \(\{v_{1},\cdots ,v_{n}\}\). This says \(\dim (W)=n\). Since \(\dim ({\cal F}^{n})=n\), Corollary \ref{lac64} says \(W={\cal F}^{n}\). Let \(\{e_{1},\cdots ,e_{n}\}\) be the standard basis of \({\cal F}^{n}\). There exist \(b_{ij}\in {\cal F}\) satisfying
\[e_{i}=\sum_{j=1}^{n}b_{ij}v_{j}\]
for \(i=1,\cdots ,n\). For the matrix \(B\) with entries \(b_{ij}\), we have \(BA=I_{n}\). This completes the proof. \(\blacksquare\)
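As a small illustration of this construction, take \({\cal F}=\mathbb{R}\) and let \(A\) have the linearly independent rows \(v_{1}=(1,1)\) and \(v_{2}=(0,1)\). Writing \(e_{1}=v_{1}-v_{2}\) and \(e_{2}=v_{2}\) gives the entries \(b_{ij}\), and indeed
\[BA=\left [\begin{array}{cc}
1 & -1\\ 0 & 1\end{array}\right ]\left [\begin{array}{cc}
1 & 1\\ 0 & 1\end{array}\right ]=\left [\begin{array}{cc}
1 & 0\\ 0 & 1\end{array}\right ]=I_{2}.\]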

Example. We have some interesting examples concerning the dimension of subspaces.

(i) In \(\mathbb{R}^{5}\), we consider the subspace \(W\) of \(\mathbb{R}^{5}\) defined by
\[W=\left\{(x_{1},x_{2},x_{3},x_{4},x_{5}):x_{1}+x_{3}+x_{5}=0\mbox{ and }x_{2}=x_{4}\right\}.\]
We can verify that the following set is a basis for \(W\):
\[\left\{(-1,0,1,0,0),(-1,0,0,0,1),(0,1,0,1,0)\right\}.\]
This says that \(\dim (W)=3\).

(ii) Let \({\cal M}_{n\times n}({\cal F})\) be the vector space consisting of all \(n\times n\) matrices over \({\cal F}\), and let \(W\) be the subspace consisting of all diagonal matrices over \({\cal F}\). Then, we can verify that the following set is a basis for \(W\):
\[\left\{E_{11},E_{22},\cdots ,E_{nn}\right\},\]
where \(E_{ii}\) is the \(n\times n\) matrix in which the only non-zero entry is \(1\) in the \(i\)th row and \(i\)th column. This says that \(\dim (W)=n\).

(iii) Let \(W\) be a subspace consisting of all symmetric matrices in \({\cal M}_{n\times n}({\cal F})\). Then, we can verify that the following set is a basis for \(W\):
\[\left\{A_{ij}:1\leq i\leq j\leq n\right\},\]
where \(A_{ij}\) is the \(n\times n\) matrix having \(1\) in the \(i\)th row and \(j\)th column, \(1\) in the \(j\)th row and \(i\)th column, and \(0\) elsewhere. This says
\[\dim (W)=n+(n-1)+\cdots +1=\frac{n(n+1)}{2}.\]
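For instance, when \(n=2\), this basis consists of the three matrices
\[\left [\begin{array}{cc}
1 & 0\\ 0 & 0\end{array}\right ],\quad\left [\begin{array}{cc}
0 & 0\\ 0 & 1\end{array}\right ],\quad\left [\begin{array}{cc}
0 & 1\\ 1 & 0\end{array}\right ],\]
which matches \(\dim (W)=2\cdot 3/2=3\). \(\sharp\)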

Theorem. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(W_{1}\) and \(W_{2}\) be two subspaces of \(V\). Then
\[\dim (W_{1})+\dim (W_{2})=\dim\left (W_{1}\cap W_{2}\right )+\dim\left (W_{1}+W_{2}\right ).\]

Proof. By Proposition \ref{lap63} and its corollaries, we see that \(W_{1}\cap W_{2}\) has a finite basis \(\{w_{1},\cdots ,w_{r}\}\), and there exist \(u_{1},\cdots ,u_{m}\in W_{1}\) and \(v_{1},\cdots ,v_{n}\in W_{2}\) such that \(\{w_{1},\cdots ,w_{r},u_{1},\cdots ,u_{m}\}\) is a basis for \(W_{1}\), and \(\{w_{1},\cdots ,w_{r},v_{1},\cdots ,v_{n}\}\) is a basis for \(W_{2}\). We shall show that the set
\[\Gamma =\{w_{1},\cdots ,w_{r},u_{1},\cdots ,u_{m},v_{1},\cdots ,v_{n}\}\]
is a basis for \(W_{1}+W_{2}\). By part (ii) of Proposition \ref{lap65}, the subspace \(W_{1}+W_{2}\) is spanned by \(W_{1}\cup W_{2}\). Since \(W_{1}\) is spanned by \(\{w_{1},\cdots ,w_{r},u_{1},\cdots ,u_{m}\}\), and \(W_{2}\) is spanned by \(\{w_{1},\cdots ,w_{r},v_{1},\cdots ,v_{n}\}\), we also see that \(W_{1}+W_{2}\) is spanned by the set \(\Gamma\). Now, we are going to claim that
\[w_{1},\cdots ,w_{r},u_{1},\cdots ,u_{m},v_{1},\cdots ,v_{n}\]
are independent. Suppose that
\begin{equation}{\label{laeq66}}\tag{27}
\sum_{i=1}^{r}\alpha_{i}w_{i}+\sum_{j=1}^{m}\beta_{j}u_{j}+\sum_{k=1}^{n}\gamma_{k}v_{k}=\theta
\end{equation}

for some \(\alpha_{i},\beta_{j},\gamma_{k}\in {\cal F}\). Then, we have
\[-\sum_{k=1}^{n}\gamma_{k}v_{k}=\sum_{i=1}^{r}\alpha_{i}w_{i}+\sum_{j=1}^{m}\beta_{j}u_{j}\in W_{1},\]
which also says \(\sum_{k=1}^{n}\gamma_{k}v_{k}\in W_{1}\). Since \(\sum_{k=1}^{n}\gamma_{k}v_{k}\in W_{2}\), we obtain \(\sum_{k=1}^{n}\gamma_{k}v_{k}\in W_{1}\cap W_{2}\). Therefore, we must have
\[\sum_{k=1}^{n}\gamma_{k}v_{k}=\sum_{i=1}^{r}\lambda_{i}w_{i}\]
for some \(\lambda_{i}\in {\cal F}\), which also says
\[\sum_{k=1}^{n}\gamma_{k}v_{k}-\sum_{i=1}^{r}\lambda_{i}w_{i}=\theta .\]
Since \(w_{1},\cdots ,w_{r},v_{1},\cdots ,v_{n}\) are independent, we obtain \(\gamma_{k}=0\) for all \(k=1,\cdots ,n\). From (\ref{laeq66}), we have
\[\sum_{i=1}^{r}\alpha_{i}w_{i}+\sum_{j=1}^{m}\beta_{j}u_{j}=\theta .\]
Since \(w_{1},\cdots ,w_{r},u_{1},\cdots ,u_{m}\) are independent, we must have \(\alpha_{i}=0\) and \(\beta_{j}=0\) for all \(i\) and \(j\). This shows \(\dim (W_{1}+W_{2})=r+m+n\). Finally, we have
\[\dim (W_{1})+\dim (W_{2})=r+m+r+n=r+(r+m+n)=\dim (W_{1}\cap W_{2})+\dim (W_{1}+W_{2}),\]
and the proof is complete. \(\blacksquare\)
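As a concrete instance of this formula, take \(V=\mathbb{R}^{3}\) with the coordinate planes \(W_{1}=\mbox{span}(\{{\bf e}_{1},{\bf e}_{2}\})\) and \(W_{2}=\mbox{span}(\{{\bf e}_{2},{\bf e}_{3}\})\). Then \(W_{1}\cap W_{2}=\mbox{span}(\{{\bf e}_{2}\})\) and \(W_{1}+W_{2}=\mathbb{R}^{3}\), so
\[\dim (W_{1})+\dim (W_{2})=2+2=1+3=\dim\left (W_{1}\cap W_{2}\right )+\dim\left (W_{1}+W_{2}\right ).\]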

\begin{equation}{\label{f}}\tag{F}\mbox{}\end{equation}

Maximal Set of Linearly Independent Vectors.

The basis for a vector space over the scalar field is strongly related to the concept of a maximal set of linearly independent vectors. Based on this concept, we can guarantee the existence of a basis for a vector space. We first present the maximal set of linearly independent vectors in a finite-dimensional vector space over the scalar field.

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(\{v_{1},\cdots ,v_{n}\}\) be a finite subset of \(V\). We say that \(\{v_{1},\cdots ,v_{n}\}\) forms a maximal set of linearly independent vectors of \(V\) when, given any other vector \(u\) of \(V\), the vectors \(u,v_{1},\cdots ,v_{n}\) are linearly dependent. \(\sharp\)

\begin{equation}{\label{lat7}}\tag{28}\mbox{}\end{equation}

Theorem \ref{lat7}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and let \(\{v_{1},\cdots ,v_{n}\}\) be a maximal set of linearly independent vectors of \(V\). Then \(\{v_{1},\cdots ,v_{n}\}\) forms a basis of \(V\).

Proof. We must show that the set \(\{v_{1},\cdots ,v_{n}\}\) spans \(V\). Given any \(u\in V\), by the definition, there exist \(\lambda_{0},\lambda_{1},\cdots ,\lambda_{n}\) not all zero satisfying
\[\lambda_{0}u+\sum_{i=1}^{n}\lambda_{i}v_{i}=\theta .\]
If \(\lambda_{0}=0\), then \(\lambda_{1}v_{1}+\cdots +\lambda_{n}v_{n}=\theta\), where some of \(\lambda_{1},\cdots ,\lambda_{n}\) are not zero, which contradicts the independence of \(v_{1},\cdots ,v_{n}\). Therefore, we must have \(\lambda_{0}\neq 0\). In this case, we can obtain
\[u=\left (-\frac{\lambda_{1}}{\lambda_{0}}\right )v_{1}+\left (-\frac{\lambda_{2}}{\lambda_{0}}\right )v_{2}+\cdots +
\left (-\frac{\lambda_{n}}{\lambda_{0}}\right )v_{n}.\]

This shows that the set \(\{v_{1},\cdots ,v_{n}\}\) indeed spans \(V\), and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lat26}}\tag{29}\mbox{}\end{equation}

Corollary \ref{lat26}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\). If \(x_{1},\cdots ,x_{n}\) are linearly independent vectors of \(V\), then \(\{x_{1},\cdots ,x_{n}\}\) forms a basis of \(V\).

Proof. According to Proposition \ref{lap6}, we see that \(\{x_{1},\cdots ,x_{n}\}\) is a maximal set of linearly independent vectors of \(V\). Using Theorem \ref{lat7}, we conclude that \(\{x_{1},\cdots ,x_{n}\}\) forms a basis of \(V\), and the proof is complete. \(\blacksquare\)

\begin{equation}{\label{lac161}}\tag{30}\mbox{}\end{equation}

Corollary \ref{lac161}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and let \(W\) be a subspace of \(V\) with \(\dim (W)=n\). Then, we have \(V=W\).

Proof. Let \(\mathfrak{B}\) be a basis for \(W\). Then \(\mathfrak{B}\) consists of \(n\) linearly independent vectors of \(V\), so Corollary \ref{lat26} says that \(\mathfrak{B}\) is also a basis for \(V\). Hence \(V=\mbox{span}(\mathfrak{B})\subseteq W\), and we conclude \(V=W\). \(\blacksquare\)

Next, we consider the maximal linearly independent subset of an infinite-dimensional vector space over the scalar field.

Definition. Let \(U\) be a universal set, and let \({\cal U}\) be a family of subsets of \(U\). A member \(M\in {\cal U}\) is called a maximal set in \({\cal U}\) with respect to set inclusion if and only if \(M\) is not contained in any member of \({\cal U}\) except for \(M\) itself. \(\sharp\)

Example. We present some interesting examples for the maximal sets.

(i) Let \({\cal U}\) be the family of all subsets of \(U\). Then, it is easy to see that \(U\) is the maximal set in \({\cal U}\).

(ii) Let \(S\) and \(T\) be two disjoint subsets of \(U\), i.e., \(S\cap T=\emptyset\). Let \({\cal U}_{S}\) be the family of all subsets of \(S\), and let \({\cal U}_{T}\) be the family of all subsets of \(T\). We consider the family \({\cal U}={\cal U}_{S}\cup {\cal U}_{T}\). Then we can see that \(S\) and \(T\) are the maximal sets in \({\cal U}\).

(iii) Let \(U\) be an infinite universal set, and let \({\cal U}\) be the family of all finite subsets of \(U\). Then there is no maximal set in \({\cal U}\). Suppose that \(M\) is a maximal set in \({\cal U}\). Then \(M\) must be a finite subset of \(U\). Let \(u\in U\) and \(u\not\in M\). We see that \(M\cup\{u\}\) is also a finite subset of \(U\), which means that \(M\cup\{u\}\) is in \({\cal U}\). This contradicts the maximality of \(M\) in \({\cal U}\). \(\sharp\)

Definition. Let \(U\) be a universal set, and let \({\cal N}\) be a family of subsets of \(U\). We say that \({\cal N}\) is a nested family when each pair of sets \(A\) and \(B\) in \({\cal N}\) satisfies either \(A\subseteq B\) or \(B\subseteq A\).

Example. We take the universal set \(U\) as \(\mathbb{N}\), the set of all positive integers. Let \(A_{n}=\{1,2,\cdots ,n\}\). Then the family
\[{\cal N}=\left\{A_{n}:n\in\mathbb{N}\right\}\]
is a nested family, since \(A_{m}\subseteq A_{n}\) for \(m\leq n\). \(\sharp\)

Maximal Principle. Let \(U\) be a universal set, and let \({\cal U}\) be a family of subsets of \(U\). Suppose that, for each nested family \({\cal N}\subseteq {\cal U}\), there exists a member \(F\in {\cal U}\) satisfying \(N\subseteq F\) for all \(N\in {\cal N}\). Then \({\cal U}\) contains a maximal set in \({\cal U}\).

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(U\) be a subset of \(V\). A subset \(M\) of \(U\) is called a maximal linearly independent subset of \(U\) when the following conditions are satisfied:

  • the set \(M\) is linearly independent;
  • given any vector \(u\in U\) and \(u\not\in M\), the set \(M\cup\{u\}\) is linearly dependent; that is, if \(\widehat{M}\) is a linearly independent subset of \(U\) containing \(M\), then \(\widehat{M}=M\).

\begin{equation}{\label{lat158}}\tag{31}\mbox{}\end{equation}

Theorem \ref{lat158}. Let \(V\) be a vector space over the scalar field \({\cal F}\). Then, we have the following properties.

(i) Let \(\mathfrak{B}\) be a basis for a vector space \(V\). Then \(\mathfrak{B}\) is a maximal linearly independent subset of \(V\).

(ii) Let \(M\) be a subset of \(V\) such that \(\mbox{ span}(M)=V\). If \(\mathfrak{B}\) is a maximal linearly independent subset of \(M\), then \(\mathfrak{B}\) is a basis for \(V\).

Proof. To prove part (i), we shall claim the following facts.

  • The set \(\mathfrak{B}\) is linearly independent by definition.
  • Since \(\mbox{span}(\mathfrak{B})=V\), if \(v\in V\) and \(v\not\in\mathfrak{B}\), then \(v\in\mbox{span}(\mathfrak{B})\), and Proposition \ref{lap156} says that \(\mathfrak{B}\cup\{v\}\) is linearly dependent.

Therefore, \(\mathfrak{B}\) is indeed a maximal linearly independent subset of \(V\).

To prove part (ii), since \(\mathfrak{B}\) is linearly independent, we just need to claim \(\mbox{span}(\mathfrak{B})=V\). We first want to show \(M\subseteq\mbox{span}(\mathfrak{B})\). Suppose that it is not. Then, there exists \(v\in M\) with \(v\not\in\mbox{span}(\mathfrak{B})\). Using Proposition \ref{lap156} again, the set \(\mathfrak{B}\cup\{v\}\) is linearly independent, which contradicts the maximality of \(\mathfrak{B}\). Therefore, we must have \(M\subseteq\mbox{span}(\mathfrak{B})\), which also says \(\mbox{span}(M)\subseteq\mbox{span}(\mbox{span}(\mathfrak{B}))\). Since \(\mbox{span}(\mathfrak{B})\) is a subspace of \(V\), we obtain \(V=\mbox{span}(M)\subseteq \mbox{span}(\mathfrak{B})\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lat157}}\tag{32}\mbox{}\end{equation}

Theorem \ref{lat157}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(S\) be a linearly independent subset of \(V\). There exists a maximal linearly independent subset of \(V\) that contains \(S\).

Proof. Let \({\cal U}\) be the family of all linearly independent subsets of \(V\) that contain \(S\). Then \({\cal U}\neq\emptyset\), since \(S\in {\cal U}\). We shall show that \({\cal U}\) contains a maximal set by applying the maximal principle. Therefore, we first need to show that if \({\cal N}\) is a nested family in \({\cal U}\), then there exists a member \(F\in {\cal U}\) satisfying \(N\subseteq F\) for all \(N\in {\cal N}\). We shall claim that \(F\) can be taken as the union of all the members of \({\cal N}\). It is clear that \(N\subseteq F\) for all \(N\in {\cal N}\). Therefore, it suffices to show \(F\in {\cal U}\). Since each \(N\) contains \(S\), it is also obvious that \(F\) contains \(S\). Now, it remains to show that \(F\) is linearly independent. Given any distinct vectors \(v_{1},\cdots ,v_{n}\) in \(F\) and \(\alpha_{1},\cdots ,\alpha_{n}\) in \({\cal F}\), we consider
\[\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}=\theta .\]
Since \(F\) is the union of all the members of \({\cal N}\), for each \(v_{i}\in F\), there exists \(N_{i}\in {\cal N}\) satisfying \(v_{i}\in N_{i}\) for \(i=1,\cdots ,n\). Since \({\cal N}\) is a nested family, there exists \(\widehat{N}\in {\cal N}\) satisfying \(N_{i}\subseteq\widehat{N}\) for all \(i=1,\cdots ,n\), which also says \(v_{i}\in\widehat{N}\) for all \(i=1,\cdots ,n\). Since the set \(\widehat{N}\) is linearly independent, we must have \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). Therefore \(F\) is linearly independent, so \(F\in {\cal U}\), and the maximal principle yields a maximal set in \({\cal U}\), which is a maximal linearly independent subset of \(V\) containing \(S\). This completes the proof. \(\blacksquare\)

Corollary. Let \(V\) be a vector space over the scalar field \({\cal F}\). Then \(V\) has a basis.

Proof. If \(V=\{\theta\}\), then \(\emptyset\) is a basis for \(V\). If \(V\neq\{\theta\}\), then \(V\) contains a non-zero vector \(v\), and the singleton set \(\{v\}\) is linearly independent by Remark \ref{lar154}. Therefore, there always exists a linearly independent subset \(S\) of \(V\). Theorem \ref{lat157} says that there exists a maximal linearly independent subset \(\mathfrak{B}\) of \(V\). Finally, from part (ii) of Theorem \ref{lat158} with \(M=V\), we see that \(\mathfrak{B}\) is a basis for \(V\). This completes the proof. \(\blacksquare\)

Now, we can extend Theorem \ref{lat159} to the case of infinite-dimensional vector spaces.

Theorem. Let \(V\) be a vector space over the scalar field \({\cal F}\) (which is not necessarily finite-dimensional), and let \(\mathfrak{B}\) be a subset of \(V\). Then \(\mathfrak{B}\) is a basis for \(V\) if and only if, for each \(v\neq\theta\) in \(V\), there exist vectors \(v_{1},\cdots ,v_{n}\) in \(\mathfrak{B}\) and non-zero scalars \(\alpha_{1},\cdots ,\alpha_{n}\) in \({\cal F}\), unique up to the order of the terms, satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]

Proof. Let \(\mathfrak{B}\) be a basis for \(V\). If \(v\in V=\mbox{span}(\mathfrak{B})\) with \(v\neq\theta\), then, after discarding the terms with zero coefficients, there exist vectors \(v_{1},\cdots ,v_{n}\) in \(\mathfrak{B}\) and non-zero scalars \(\alpha_{1},\cdots ,\alpha_{n}\) in \({\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
Next, we want to show the uniqueness. Suppose that there exist some other vectors \(u_{1},\cdots ,u_{m}\) in \(\mathfrak{B}\) and non-zero scalars \(\beta_{1},\cdots ,\beta_{m}\) in \({\cal F}\) satisfying
\[v=\beta_{1}u_{1}+\beta_{2}u_{2}+\cdots +\beta_{m}u_{m}.\]
By taking subtraction, we obtain
\begin{equation}{\label{laeq162}}\tag{33}
\theta =\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}-\beta_{1}u_{1}-\beta_{2}u_{2}-\cdots -\beta_{m}u_{m}.
\end{equation}

If there exists \(u_{j}\) with \(u_{j}\neq v_{i}\) for all \(i=1,\cdots ,n\), then (\ref{laeq162}) can be expressed as
\[\theta =\cdots -\beta_{j}u_{j}+\cdots .\]
The independence of \(\mathfrak{B}\) shows that \(\beta_{j}=0\), which is a contradiction. Therefore, we must have \(m=n\) and, after reordering the terms if necessary, \(v_{i}=u_{i}\) for \(i=1,\cdots ,n\). In this case, we also have
\[\theta =\left (\alpha_{1}-\beta_{1}\right )v_{1}+\left (\alpha_{2}-\beta_{2}\right )v_{2}+\cdots +\left (\alpha_{n}-\beta_{n}\right )v_{n},\]
which implies \(\alpha_{i}=\beta_{i}\) for \(i=1,2,\cdots ,n\) by the independence of \(\mathfrak{B}\) again.

For the converse, using the same argument as in the proof of Theorem \ref{lat159}, we can obtain \(V=\mbox{span}(\mathfrak{B})\). Now, we want to show that \(\mathfrak{B}\) is linearly independent. Suppose that \(\theta =\alpha_{1}v_{1}+\cdots +\alpha_{n}v_{n}\) for some distinct \(v_{1},\cdots ,v_{n}\) in \(\mathfrak{B}\) and some \(\alpha_{1},\cdots ,\alpha_{n}\) in \({\cal F}\). Using the same argument as in the proof of Theorem \ref{lat159} again, we have \(\alpha_{i}=0\) for all \(i=1,\cdots ,n\). This shows the independence of \(\mathfrak{B}\), and the proof is complete. \(\blacksquare\)

Proposition. Let \(V\) be a vector space over the scalar field \({\cal F}\) (which is not necessarily finite-dimensional). Let \(S_{1}\) and \(S_{2}\) be subsets of \(V\) with \(S_{1}\subseteq S_{2}\). If \(S_{1}\) is linearly independent and \(\mbox{span}(S_{2})=V\), then there exists a basis \(\mathfrak{B}\) for \(V\) satisfying \(S_{1}\subseteq\mathfrak{B}\subseteq S_{2}\).

Proof. Apply the maximal principle to the family of all linearly independent subsets of \(S_{2}\) that contain \(S_{1}\), and proceed as in the proof of Theorem \ref{lat157}. \(\blacksquare\)

\begin{equation}{\label{g}}\tag{G}\mbox{}\end{equation}

Coordinates.

In \(\mathbb{R}^{n}\), the vector \({\bf x}\in\mathbb{R}^{n}\) is expressed in the form of coordinates \({\bf x}=(x_{1},x_{2},\cdots ,x_{n})\). We can see that \({\bf x}\) is a linear combination of the standard basis in \(\mathbb{R}^{n}\), i.e.,
\[{\bf x}=x_{1}{\bf e}_{1}+x_{2}{\bf e}_{2}+\cdots +x_{n}{\bf e}_{n}.\]
Inspired by this concept, we can also define the coordinates of each element in a finite-dimensional vector space \(V\). Since the basis for \(V\) is not unique, the coordinates will depend on the chosen basis. On the other hand, the coordinates also depend on the ordering of the vectors in the chosen basis, since a set of vectors carries no intrinsic ordering; we must specify which is the “first” vector in the basis, which is the “second” vector, and so on. In other words, the coordinates depend on sequences of vectors rather than sets of vectors.

Definition. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\). An ordered basis for \(V\) is a finite sequence of vectors which is linearly independent and spans \(V\). \(\sharp\)

We can see that an ordered basis is a basis together with a specified ordering. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\) and an ordered basis \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\). Given any \(v\in V\), Theorem \ref{lat159} says that there is a unique \(n\)-tuple \((\alpha_{1},\cdots ,\alpha_{n})\) in \({\cal F}\) satisfying
\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}.\]
In this case, we shall call \(\alpha_{i}\) the \(i\)th coordinate of \(v\) relative to the ordered basis \(\mathfrak{B}\). For the \(n\)-tuple space \({\cal F}^{n}\), the coordinates of a vector \(v\in {\cal F}^{n}\) are usually defined through the linear combination with respect to the standard basis \(\{{\bf e}_{1},\cdots ,{\bf e}_{n}\}\). Now, we can also describe the coordinate vectors under vector addition and scalar multiplication as follows.

  • Let \(v\) and \(u\) have the \(n\)-tuple coordinate vectors \((\alpha_{1},\cdots ,\alpha_{n})\) and \((\beta_{1},\cdots ,\beta_{n})\), respectively; that is,\[v=\alpha_{1}v_{1}+\alpha_{2}v_{2}+\cdots +\alpha_{n}v_{n}\]
    and \[u=\beta_{1}v_{1}+\beta_{2}v_{2}+\cdots +\beta_{n}v_{n}.\] Then, we have
    \[u+v=\left (\alpha_{1}+\beta_{1}\right )v_{1}+\left (\alpha_{2}+\beta_{2}\right )v_{2}+\cdots+\left (\alpha_{n}+\beta_{n}\right )v_{n}.\]
    This says that the coordinate vector of \(u+v\) is given by
    \[\left (\alpha_{1}+\beta_{1},\alpha_{2}+\beta_{2},\cdots ,\alpha_{n}+\beta_{n}\right ).\]
  • Given any \(c\in {\cal F}\), we have
    \[cv=c\alpha_{1}v_{1}+c\alpha_{2}v_{2}+\cdots +c\alpha_{n}v_{n},\]
    which says that the coordinate vector of \(cv\) is given by
    \[\left (c\alpha_{1},c\alpha_{2},\cdots ,c\alpha_{n}\right ).\]

If we change the basis for \(V\), then the coordinate vector of the same vector will generally differ from the old one. In order to obtain the relation between them, it will be more convenient to write the coordinate vector of \(v\) relative to the ordered basis \(\mathfrak{B}\) as a column vector. In this case, we write
\[[v]_{\mathfrak{B}}=\left [\begin{array}{c}
\alpha_{1}\\ \alpha_{2}\\ \vdots \\ \alpha_{n}
\end{array}\right ].\]

Example. Let \({\cal F}\) be a scalar field, and let \(v=(\alpha_{1},\cdots ,\alpha_{n})\) be a vector in \({\cal F}^{n}\). If \(\mathfrak{B}=\{{\bf e}_{1},\cdots ,{\bf e}_{n}\}\) is the standard ordered basis of \({\cal F}^{n}\), then we have
\[v=(\alpha_{1},\cdots ,\alpha_{n})=\alpha_{1}{\bf e}_{1}+\cdots +\alpha_{n}{\bf e}_{n},\]
which says
\[[v]_{\mathfrak{B}}=\left [\begin{array}{c}
\alpha_{1}\\ \alpha_{2}\\ \vdots \\ \alpha_{n}
\end{array}\right ].\]

Example. Let \(V\) be the space of all polynomials with coefficients in \(\mathbb{R}\) and degree less than or equal to \(2\). Then \(\mathfrak{B}_{1}=\{1,x,x^{2}\}\) and \(\mathfrak{B}_{2}=\{x,1,x^{2}\}\) are ordered bases for \(V\). For \(v=f(x)=2-3x+5x^{2}\in V\), we have
\[[v]_{\mathfrak{B}_{1}}=\left [\begin{array}{r}
2\\ -3\\ 5\end{array}\right ]\mbox{ and }
[v]_{\mathfrak{B}_{2}}=\left [\begin{array}{r}
-3\\ 2\\ 5\end{array}\right ].\]

\begin{equation}{\label{lat70}}\tag{34}\mbox{}\end{equation}

Theorem \ref{lat70}. Suppose that \(V\) is a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and that \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\) and \(\mathfrak{B}’=\{v’_{1},\cdots ,v’_{n}\}\) are two ordered bases for \(V\). Then, there exists a unique \(n\times n\) invertible matrix \(P\) satisfying
\[[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}\mbox{ and }[v]_{\mathfrak{B}’}=P^{-1}[v]_{\mathfrak{B}}\]
for any \(v\in V\), where the columns of \(P\) are given by \(P_{\cdot j}=[v’_{j}]_{\mathfrak{B}}\) for \(j=1,\cdots ,n\).

Proof. For each \(v’_{j}\), there exist unique \(p_{ij}\) for \(i,j=1,\cdots ,n\) satisfying
\begin{equation}{\label{laeq67}}\tag{35}
v’_{j}=\sum_{i=1}^{n}p_{ij}v_{i}.
\end{equation}

Let \((\alpha^{\prime}_{1},\cdots ,\alpha^{\prime}_{n})\) be the coordinate vector of a given vector \(v\in V\) in the ordered basis \(\mathfrak{B}’\). Then we have
\begin{align*}
v & =\alpha^{\prime}_{1}v’_{1}+\alpha^{\prime}_{2}v’_{2}+\cdots +\alpha^{\prime}_{n}v’_{n}=\sum_{j=1}^{n}\alpha^{\prime}_{j}v’_{j}\\
& =\sum_{j=1}^{n}\alpha^{\prime}_{j}\left (\sum_{i=1}^{n}p_{ij}v_{i}\right )\mbox{ (by }(\ref{laeq67}))\\
& =\sum_{j=1}^{n}\sum_{i=1}^{n}p_{ij}\alpha^{\prime}_{j}v_{i}\\
& = \sum_{i=1}^{n}\left (\sum_{j=1}^{n}p_{ij}\alpha^{\prime}_{j}\right )v_{i}.
\end{align*}

Let \((\alpha_{1},\cdots ,\alpha_{n})\) be the coordinate vector of \(v\) in the ordered basis \(\mathfrak{B}\). The uniqueness of coordinates says
\begin{equation}{\label{laeq68}}\tag{36}
\alpha_{i}=\sum_{j=1}^{n}p_{ij}\alpha^{\prime}_{j}
\end{equation}

for \(i=1,\cdots ,n\). Let \(P\) be the \(n\times n\) matrix with entries \(p_{ij}\). Then, we have
\[[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}.\]
Since \(\mathfrak{B}\) and \(\mathfrak{B}’\) are linearly independent sets, we have \([v]_{\mathfrak{B}}={\bf 0}\) if and only if \(v=\theta\), if and only if \([v]_{\mathfrak{B}’}={\bf 0}\). Hence \(P{\bf x}={\bf 0}\) only when \({\bf x}={\bf 0}\), and Proposition \ref{lap69} says that \(P\) is invertible. Therefore, we obtain
\[[v]_{\mathfrak{B}’}=P^{-1}[v]_{\mathfrak{B}}.\]
This completes the proof. \(\blacksquare\)

Theorem. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\), and let \(\mathfrak{B}\) be an ordered basis of \(V\). Suppose that \(P\) is an \(n\times n\) invertible matrix over \({\cal F}\). Then there is a unique ordered basis \(\mathfrak{B}’\) of \(V\) satisfying
\[[v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}\mbox{ and }[v]_{\mathfrak{B}’}=P^{-1}[v]_{\mathfrak{B}}\]
for any \(v\in V\).

Proof. Let \(\mathfrak{B}=\{v_{1},\cdots ,v_{n}\}\), and write \(\mathfrak{B}’=\{v’_{1},\cdots ,v’_{n}\}\) for the ordered basis to be constructed. The desired equality \([v]_{\mathfrak{B}}=P[v]_{\mathfrak{B}’}\) forces
\[v’_{j}=\sum_{i=1}^{n}p_{ij}v_{i}.\]
We need to show that the vectors \(v’_{j}\) defined by the above equations form a basis; the uniqueness of \(\mathfrak{B}’\) follows since these equations determine the \(v’_{j}\) completely. Let \(Q=P^{-1}\) with entries \(q_{jk}\). Then, we have
\begin{align*} \sum_{j=1}^{n}q_{jk}v’_{j} &  =\sum_{j=1}^{n}q_{jk}\left(\sum_{i=1}^{n}p_{ij}v_{i}\right)\\ &  =\sum_{j=1}^{n}\sum_{i=1}^{n}p_{ij}q_{jk}v_{i}\\ & =\sum_{i=1}^{n}\left (\sum_{j=1}^{n}p_{ij}q_{jk}\right )v_{i}=v_{k},\end{align*}
which says that the subspace spanned by \(\mathfrak{B}’\) contains \(\mathfrak{B}\) and hence equals \(V\). Since \(\mathfrak{B}’\) consists of \(n\) vectors spanning the \(n\)-dimensional space \(V\), it is a basis for \(V\). From the definition of \(v’_{j}\) and Theorem \ref{lat70}, we can obtain the desired equalities. This completes the proof. \(\blacksquare\)

Example. Let \({\cal F}=\mathbb{R}\), and let \(x\) be a fixed real number. The following matrix
\[P=\left [\begin{array}{rr}
\cos x & -\sin x\\
\sin x & \cos x
\end{array}\right ]\]

is invertible with inverse
\[P^{-1}=\left [\begin{array}{rr}
\cos x & \sin x\\
-\sin x & \cos x
\end{array}\right ].\]

Therefore, for each \(x\), the set \(\mathfrak{B}’\) consisting of the vectors \((\cos x,\sin x)\) and \((-\sin x,\cos x)\) is a basis for \(\mathbb{R}^{2}\). This basis may be described as the one obtained by rotating the standard basis through the angle \(x\). If \(v=(v_{1},v_{2})\) is a vector in \(\mathbb{R}^{2}\), then
\[[v]_{\mathfrak{B}’}=\left [\begin{array}{rr}
\cos x & \sin x\\
-\sin x & \cos x
\end{array}\right ]\left [\begin{array}{c}
v_{1}\\ v_{2}
\end{array}\right ];\]

that is,
\begin{align*}
v’_{1} & =v_{1}\cos x+v_{2}\sin x\\
v’_{2} & =-v_{1}\sin x+v_{2}\cos x.
\end{align*}

Example. Let \({\cal F}=\mathbb{R}\). The following matrix
\[P=\left [\begin{array}{rrr}
-1 & 4 & 5\\
0 & 2 & -3\\
0 & 0 & 8
\end{array}\right ]\]

is invertible with inverse
\[P^{-1}=\left [\begin{array}{rrr}
-1 & 2 & \frac{11}{8}\\
0 & \frac{1}{2} & \frac{3}{16}\\
0 & 0 & \frac{1}{8}
\end{array}\right ].\]

Therefore, the vectors
\begin{eqnarray*}
v’_{1}=(-1,0,0), & v’_{2}=(4,2,0), & v’_{3}=(5,-3,8)
\end{eqnarray*}
form a basis of \(\mathbb{R}^{3}\). The coordinates \(\alpha_{1}^{\prime},\alpha_{2}^{\prime},\alpha_{3}^{\prime}\) of the vector \((\alpha_{1},\alpha_{2},\alpha_{3})\) in the basis \(\mathfrak{B}’\) are given by
\[\left [\begin{array}{c}
\alpha_{1}^{\prime}\\ \alpha_{2}^{\prime}\\ \alpha_{3}^{\prime}\end{array}\right ]
=\left [\begin{array}{rrr}
-1 & 2 & \frac{11}{8}\\
0 & \frac{1}{2} & \frac{3}{16}\\
0 & 0 & \frac{1}{8}
\end{array}\right ]\left [\begin{array}{c}
\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{array}\right ]
=\left [\begin{array}{r}
-\alpha_{1}+2\alpha_{2}+\frac{11}{8}\alpha_{3}\\
\frac{1}{2}\alpha_{2}+\frac{3}{16}\alpha_{3}\\
\frac{1}{8}\alpha_{3}\end{array}\right ].\]

In particular, we can obtain
\[(3,2,-8)=-10v’_{1}-\frac{1}{2}v’_{2}-v’_{3}.\]

\begin{equation}{\label{h}}\tag{H}\mbox{}\end{equation}

Direct Sums and Direct Products.

Let \(V\) be a vector space over the scalar field \({\cal F}\). Let \(U\) and \(W\) be two subspaces of \(V\). We define
\[U+W=\left\{u+w:u\in U\mbox{ and }w\in W\right\}.\]
Then \(U+W\) is a subspace of \(V\), as will be shown below.

Proposition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(U\) and \(W\) be two subspaces of \(V\). Then \(U+W\) is a subspace of \(V\).

Proof. Given any two elements \(u_{1}+w_{1}\) and \(u_{2}+w_{2}\) in \(U+W\), where \(u_{1},u_{2}\in U\) and \(w_{1},w_{2}\in W\), we have
\[(u_{1}+w_{1})+(u_{2}+w_{2})=(u_{1}+u_{2})+(w_{1}+w_{2})\in U+W.\]
Given any \(\alpha\in {\cal F}\), we also have
\[\alpha (u_{1}+w_{1})=\alpha u_{1}+\alpha w_{1}\in U+W.\]
Finally, since \(\theta =\theta +\theta\) with \(\theta\in U\) and \(\theta\in W\), we see \(\theta\in U+W\), and the proof is complete. \(\blacksquare\)

Definition. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(U\) and \(W\) be two subspaces of \(V\). We say that \(V\) is a direct sum of \(U\) and \(W\) when,  for every element \(x\in V\), there exist unique elements \(u\in U\) and \(w\in W\) such that \(x=u+w\). In this case, we write \(V=U\oplus W\). \(\sharp\)

\begin{equation}{\label{lat119}}\tag{37}\mbox{}\end{equation}

Theorem \ref{lat119}. Let \(V\) be a vector space over the scalar field \({\cal F}\), and let \(U\) and \(W\) be two subspaces of \(V\). If \(V=U+W\) and \(U\cap W=\{\theta\}\), then \(V=U\oplus W\).

Proof. It is obvious that, given any \(x\in V\), there exist \(u\in U\) and \(w\in W\) satisfying \(x=u+w\). We want to show that \(u\) and \(w\) are uniquely determined. Let \(u’\in U\) and \(w’\in W\) be such that \(x=u’+w’\). We are going to show \(u=u’\) and \(w=w’\). Since \(x=u+w=u’+w’\), we have \(u-u’=w’-w\).
This also says \(u-u’\in U\) and \(w’-w\in W\). Therefore, we obtain
\[u-u’=w’-w\in U\cap W=\{\theta\},\]
which shows \(u=u’\) and \(w=w’\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lap9}}\tag{38}\mbox{}\end{equation}

Proposition \ref{lap9}. Let \(V\) be a vector space over the scalar field \({\cal F}\) with \(\dim (V)=n\). If \(W\) is a subspace of \(V\), then there exists a subspace \(U\) of \(V\) satisfying \(V=U\oplus W\).

Proof. Using Corollary \ref{lac8}, we can select a basis for \(W\) and extend it to a basis for \(V\). Suppose that \(\dim (W)=r\in\mathbb{N}\) and that \(\{x_{1},\cdots ,x_{r}\}\) is a basis for \(W\), and let \(\{x_{1},\cdots ,x_{r},v_{r+1},\cdots ,v_{n}\}\) be the extended basis for \(V\). Then, we can take \(U\) as the subspace generated by \(\{v_{r+1},\cdots ,v_{n}\}\). Indeed, we have \(V=U+W\), and the linear independence of the extended basis gives \(U\cap W=\{\theta\}\), so Theorem \ref{lat119} yields \(V=U\oplus W\). This completes the proof. \(\blacksquare\)

We remark that, given any subspace \(W\) of \(V\), there will exist many subspaces \(U\) of \(V\) satisfying \(V=U\oplus W\). In other words, the subspace \(U\) determined in Proposition \ref{lap9} is not unique.

\begin{equation}{\label{lap120}}\tag{39}\mbox{}\end{equation}

Proposition \ref{lap120}. Let \(V\) be a finite-dimensional vector space over the scalar field \({\cal F}\), and let \(U\) and \(W\) be two subspaces of \(V\). If \(V=U\oplus W\), then
\[\dim (V)=\dim (U)+\dim (W).\]

Proof. By the definition of direct sum, each element \(x\in V\) has a unique expression \(x=u+w\), where \(u\in U\) and \(w\in W\). Suppose that \(\dim (U)=r\) and \(\dim (W)=s\). Let \(\{u_{1},\cdots ,u_{r}\}\) be a basis for \(U\), and let \(\{w_{1},\cdots ,w_{s}\}\) be a basis for \(W\). Therefore, there exist \(\alpha_{1},\cdots ,\alpha_{r},\alpha_{r+1}, \cdots ,\alpha_{r+s}\in {\cal F}\) satisfying
\[u=\alpha_{1}u_{1}+\cdots +\alpha_{r}u_{r}\]

and
\[w=\alpha_{r+1}w_{1}+\cdots +\alpha_{r+s}w_{s},\]
which says that \(x\) can be uniquely expressed as
\[x=u+w=\alpha_{1}u_{1}+\cdots +\alpha_{r}u_{r}+\alpha_{r+1}w_{1}+\cdots +\alpha_{r+s}w_{s}.\]
We conclude that \(\{u_{1},\cdots ,u_{r},w_{1},\cdots ,w_{s}\}\) forms a basis for \(V\); indeed, the uniqueness of the expression, applied in particular to \(x=\theta\), shows that these vectors are linearly independent. This also says
\[\dim (V)=r+s=\dim (U)+\dim (W).\]
This completes the proof. \(\blacksquare\)

Let \(U\) and \(W\) be any two vector spaces over the same scalar field \({\cal F}\). We define
\[U\times W=\left\{(u,w):u\in U\mbox{ and }w\in W\right\}.\]
The addition and scalar multiplication in \(U\times W\) are defined below.

  • Given any \((u_{1},w_{1}),(u_{2},w_{2})\in U\times W\), the addition is defined by
    \[(u_{1},w_{1})+(u_{2},w_{2})=(u_{1}+u_{2},w_{1}+w_{2})\in U\times W.\]
  • Given any \(\alpha\in {\cal F}\) and \((u,w)\in U\times W\), the scalar multiplication is defined by
    \[\alpha (u,w)=(\alpha u,\alpha w)\in U\times W.\]

The vector space \(U\times W\) is called the direct product of \(U\) and \(W\). When \(U\) and \(W\) are finite-dimensional, we can show
\[\dim (U\times W)=\dim (U)+\dim (W).\]

 
