Sequences and Series of Functions



Uniform Convergence.

We are going to deal with sequences \(\{f_{n}\}_{n=1}^{\infty}\) whose terms are real-valued functions having a common domain. Since the terms of such a sequence are real-valued functions, two concepts of convergence will be introduced, called pointwise convergence and uniform convergence. Given a sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) defined on a metric space \((M,d)\), for each \(x\in M\), \(\{f_{n}(x)\}_{n=1}^{\infty}\) turns into a sequence of real numbers. Let \(S\) denote the set of \(x\) for which the sequence of real numbers \(\{f_{n}(x)\}_{n=1}^{\infty}\) is convergent. Then, the function defined by
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)\mbox{ for }x\in S\]
is called the limit function of the sequence \(\{f_{n}\}_{n=1}^{\infty}\) of real-valued functions. We also say that \(\{f_{n}\}_{n=1}^{\infty}\) converges pointwise to \(f\) on the set \(S\).

Example. We consider the sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) given by

\[f_{n}(x)=\frac{x^{2n}}{1+x^{2n}}\]
for \(x\in\mathbb{R}\). In this case, the limit function of \(\{f_{n}\}_{n=1}^{\infty}\) exists for each \(x\in\mathbb{R}\), and is given by
\[f(x)=\left\{\begin{array}{ll}
0 & \mbox{if \(|x|<1\)}\\
\frac{1}{2} & \mbox{if \(|x|=1\)}\\
1 & \mbox{if \(|x|>1\)}.
\end{array}\right .\]
We see that each function \(f_{n}\) is continuous on \(\mathbb{R}\), but the limit function \(f\) is discontinuous at \(x=1\) and \(x=-1\). \(\sharp\)
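As a quick numerical sketch (an illustration added here, not part of the original argument), one can evaluate \(f_{n}\) at a moderately large \(n\) and watch the three regimes of the limit function emerge:

```python
# Numerical sketch of f_n(x) = x^(2n) / (1 + x^(2n)) for large n:
# the values approach 0 for |x| < 1, equal 1/2 at |x| = 1, and approach 1 for |x| > 1.

def f(n, x):
    t = x ** (2 * n)
    return t / (1 + t)

n = 200
for x in (0.5, 1.0, 1.5):
    print(x, f(n, x))
```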

Example. We consider the sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) given by

\[f_{n}(x)=n^{2}x\left (1-x\right )^{n}\]
for \(x\in\mathbb{R}\). Given any \(x\in [0,1]\), the limit function is given by
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)=0.\]
In this case, we have
\[\int_{0}^{1}f(x)dx=0.\]
However, the integral of \(f_{n}\) is given by
\begin{align*}
\int_{0}^{1}f_{n}(x)dx & =n^{2}\int_{0}^{1}x(1-x)^{n}dx=n^{2}\int_{0}^{1}(1-t)t^{n}dt\\
& =\frac{n^{2}}{n+1}-\frac{n^{2}}{n+2}=\frac{n^{2}}{(n+1)(n+2)},
\end{align*}
which says
\[\lim_{n\rightarrow\infty}\int_{0}^{1}f_{n}(x)dx=1\neq 0.\]
Therefore, we obtain
\[\lim_{n\rightarrow\infty}\int_{0}^{1}f_{n}(x)dx\neq\int_{0}^{1}\left [\lim_{n\rightarrow\infty}f_{n}(x)\right ]dx.\]
In other words, the operations of “limit” and “integration” cannot be interchanged in general. Sufficient conditions guaranteeing the equality will be provided below. \(\sharp\)
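The computation above can also be checked numerically. The sketch below (an added illustration with hypothetical helper names) compares a midpoint Riemann sum of \(\int_{0}^{1}f_{n}\) with the exact value \(n^{2}/((n+1)(n+2))\), which tends to \(1\) even though the limit function integrates to \(0\).

```python
from fractions import Fraction

def exact(n):
    # closed form derived in the text: int_0^1 n^2 x (1-x)^n dx = n^2 / ((n+1)(n+2))
    return Fraction(n * n, (n + 1) * (n + 2))

def midpoint(n, steps=20000):
    # midpoint Riemann sum of n^2 x (1-x)^n over [0, 1]
    h = 1.0 / steps
    return sum(n * n * ((i + 0.5) * h) * (1 - (i + 0.5) * h) ** n for i in range(steps)) * h

print(float(exact(5)), midpoint(5))   # the two values agree to several digits
print(float(exact(1000)))             # close to 1
```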

Example. We consider the sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) given by

\[f_{n}(x)=\frac{\sin nx}{\sqrt{n}}\]
for \(x\in\mathbb{R}\). Then, the limit function is given by
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)=0\]
for all \(x\in\mathbb{R}\). Since
\[f_{n}'(x)=\sqrt{n}\cdot\cos nx,\]
the limit \(\lim_{n\rightarrow\infty}f_{n}'(x)\) does not exist in general (at \(x=0\), for instance, \(f_{n}'(0)=\sqrt{n}\rightarrow\infty\)). Therefore, we obtain
\[\lim_{n\rightarrow\infty}\left [\frac{d}{dx}f_{n}(x)\right ]\neq\frac{d}{dx}\left [\lim_{n\rightarrow\infty}f_{n}(x)\right ].\]
In other words, the operations of “limit” and “differentiation” cannot be interchanged in general. Sufficient conditions guaranteeing the equality will be provided below. \(\sharp\)
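A short numerical sketch of this example (added here for illustration): \(f_{n}\rightarrow 0\) uniformly because \(|f_{n}(x)|\leq 1/\sqrt{n}\), while the derivatives blow up already at \(x=0\).

```python
import math

def f(n, x):
    return math.sin(n * x) / math.sqrt(n)

def fprime(n, x):
    return math.sqrt(n) * math.cos(n * x)

n = 10000
print(abs(f(n, 1.0)))   # at most 1/sqrt(n) = 0.01
print(fprime(n, 0.0))   # sqrt(n) = 100.0, diverging as n grows
```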

Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions which converges pointwise on a set \(S\) to a limit function \(f\). This means that, for every \(x\in S\) and every \(\epsilon >0\), there exists an integer \(N\) (which depends on both \(x\) and \(\epsilon\)) such that
\[n>N\mbox{ implies }|f_{n}(x)-f(x)|<\epsilon.\]
When this number \(N\) is independent of \(x\) (i.e., depending only on \(\epsilon\)), the convergence is said to be uniform on \(S\). The formal definition is given below.

Definition. We say that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions converges uniformly to \(f\) on a set \(S\) when, given any \(\epsilon >0\), there exists an integer \(N\) (which depends only on \(\epsilon\)) such that
\[n>N\mbox{ implies }\left |f_{n}(x)-f(x)\right |<\epsilon\mbox{ for all }x\in S.\]

\begin{equation}{\label{map180}}\tag{1}\mbox{}\end{equation}
Proposition \ref{map180}. Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions converges uniformly to \(f\) on \(S\), and that each function \(f_{n}\) is continuous at \(c\in S\). Then, the limit function \(f\) is also continuous at \(c\).

Proof. Suppose that \(c\) is an isolated point of \(S\). Then \(f\) is automatically continuous at \(c\). Therefore, we assume that \(c\) is an accumulation point of \(S\). The uniform convergence says that, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n\geq N\mbox{ implies }\left |f_{n}(x)-f(x)\right |<\frac{\epsilon}{3}\mbox{ for every }x\in S.\]
Since \(f_{N}\) is continuous at \(c\), there exists \(\delta>0\) such that
\[d(x,c)<\delta\mbox{ implies }\left |f_{N}(x)-f_{N}(c)\right |<\frac{\epsilon}{3}.\]
Therefore, for \(x\in S\) with \(d(x,c)<\delta\), we have
\begin{align*} \left |f(x)-f(c)\right | & \leq\left |f(x)-f_{N}(x)\right |+\left |f_{N}(x)-f_{N}(c)\right |+\left |f_{N}(c)-f(c)\right |
\\ & <\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon.\end{align*}
This completes the proof. \(\blacksquare\)

Given any accumulation point \(c\in S\), Proposition \ref{map180} says
\begin{align*} \lim_{x\rightarrow c}\lim_{n\rightarrow\infty}f_{n}(x) & =\lim_{x\rightarrow c}f(x)=f(c)
\\ & =\lim_{n\rightarrow\infty}f_{n}(c)=\lim_{n\rightarrow\infty}\lim_{x\rightarrow c}f_{n}(x),\end{align*}
which says that the operations of two different “limits” can be interchanged.

Example. Consider a sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) given by

\[f_{n}(x)=\frac{nx}{1+nx}\mbox{ for }x\geq 0.\]
For \(x>0\), we have
\[\lim_{n\rightarrow\infty}\frac{nx}{1+nx}=\lim_{n\rightarrow\infty}\frac{1}{1+1/(nx)}=1.\]
It is clear that \(f_{n}(0)=0\) for all \(n\). Therefore, the limit function is given by
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)=\left\{\begin{array}{ll}
0 & \mbox{if \(x=0\)}\\ 1 & \mbox{if \(x>0\)}
\end{array}\right .\]

Given any \(a>0\), we are going to claim that \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \([a,\infty )\). For \(x\in [a,\infty )\), we have \(f(x)=1\) and
\begin{equation}{\label{sol1}}\tag{2}
\left |f_{n}(x)-f(x)\right |=\left |\frac{nx}{1+nx}-1\right |=\frac{1}{1+nx}\leq\frac{1}{1+na}.
\end{equation}
Since
\[\lim_{n\rightarrow\infty}\frac{1}{1+na}=0,\]
given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n\geq N\mbox{ implies }\frac{1}{1+na}=\left |\frac{1}{1+na}\right |<\epsilon.\]
From (\ref{sol1}), we obtain
\[\left |f_{n}(x)-f(x)\right |\leq\frac{1}{1+na}<\epsilon\]
for all \(x\in [a,\infty )\) and for all \(n\geq N\). It says that \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \([a,\infty )\). Next, we are going to claim that \(\{f_{n}\}_{n=1}^{\infty}\) does not converge uniformly to \(f\) on \([0,\infty )\). Recall that \(\{f_{n}\}_{n=1}^{\infty}\) does not converge uniformly to \(f\) on \([0,\infty )\) when there exists \(\epsilon >0\) such that, given any integer \(N\), there exist \(n_{0}\geq N\) and \(x_{0}\in S\) satisfying \(|f_{n_{0}}(x_{0})-f(x_{0})|\geq\epsilon\). Now, given any integer \(N\), we take \(n_{0}=N\) and \(0<x_{0}<1/n_{0}\). Then, we have \(f(x_{0})=1\) and
\begin{align*} \left |f_{n_{0}}(x_{0})-f(x_{0})\right | & =\left |\frac{n_{0}x_{0}}{1+n_{0}x_{0}}-1\right |\\ & =\frac{1}{1+n_{0}x_{0}}\\ & \geq\frac{1}{1+1}=\frac{1}{2}.\end{align*}
This says that \(\{f_{n}\}_{n=1}^{\infty}\) does not converge uniformly to \(f\) on \([0,\infty )\). \(\sharp\)
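The two claims can be sketched numerically (an added illustration): on \([a,\infty )\) the error is dominated by \(1/(1+na)\), which tends to \(0\), while at the moving points \(x=1/n\) the error never drops below \(1/2\).

```python
def err(n, x):
    # |f_n(x) - f(x)| = 1/(1 + n x) for x > 0, as computed in the text
    return 1.0 / (1.0 + n * x)

a = 0.1
for n in (8, 64, 1024):
    print(n, err(n, a), err(n, 1.0 / n))
# the first column of errors tends to 0; the second stays at 1/2
```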

Example. Consider a sequence of functions \(\{f_{n}\}_{n=1}^{\infty}\) given by

\[f_{n}(x)=\frac{1}{1+x^{n}}\mbox{ for }x\in [0,1].\]
For \(x\in [0,1)\), we have
\[\lim_{n\rightarrow\infty}\frac{1}{1+x^{n}}=1.\]
It is clear that \(f_{n}(1)=1/2\) for all \(n\). Therefore, the limit function is given by
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)=\left\{\begin{array}{ll}
\frac{1}{2} & \mbox{if \(x=1\)}\\ 1 & \mbox{if \(0\leq x<1\)}
\end{array}\right .\]

For \(0<a<1\), we are going to claim that \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \([0,a]\). For \(x\in [0,a]\), we have \(f(x)=1\) and
\begin{align*} \left |f_{n}(x)-f(x)\right | & =\left |\frac{1}{1+x^{n}}-1\right |\\ & =\frac{x^{n}}{1+x^{n}}\leq\frac{a^{n}}{1+a^{n}}.\end{align*}
Since
\[\lim_{n\rightarrow\infty}\frac{a^{n}}{1+a^{n}}=0,\]
it follows that \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \([0,a]\). Next, we are going to claim that \(\{f_{n}\}_{n=1}^{\infty}\) does not converge uniformly to \(f\) on \([0,1]\). Given an integer \(n\) and \(x\in [0,1)\) satisfying \(\sqrt[n]{1/2}<x<1\), we have \(f(x)=1\) and
\begin{align*} \left |f_{n}(x)-f(x)\right | & =\left |\frac{1}{1+x^{n}}-1\right |\\ & =\frac{x^{n}}{1+x^{n}}\\ & \geq\frac{\frac{1}{2}}{1+1}=\frac{1}{4}.\end{align*}
This says that \(\{f_{n}\}_{n=1}^{\infty}\) does not converge uniformly to \(f\) on \([0,1]\). \(\sharp\)

\begin{equation}{\label{mat273}}\tag{3}\mbox{}\end{equation}
Theorem \ref{mat273}. (Cauchy Condition for Sequence of Functions). Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions defined on \(S\). There exists a function \(f\) such that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \(S\) if and only if the Cauchy condition is satisfied: for every \(\epsilon >0\), there exists an integer \(N\) such that
\[m>N\mbox{ and }n>N\mbox{ imply }\left |f_{m}(x)-f_{n}(x)\right |<\epsilon\mbox{ for all }x\in S.\]

Proof. Suppose that \(f_{n}\rightarrow f\) uniformly on \(S\). Then, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |f_{n}(x)-f(x)\right |<\frac{\epsilon}{2}\mbox{ for all }x\in S.\]
Now, we take \(m>N\). Then, we also have
\[\left |f_{m}(x)-f(x)\right |<\frac{\epsilon}{2}\mbox{ for all }x\in S.\]
Therefore, we obtain
\begin{align*} \left |f_{m}(x)-f_{n}(x)\right | & \leq\left |f_{m}(x)-f(x)\right |+\left |f(x)-f_{n}(x)\right |\\ & <\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon\mbox{ for all }x\in S.\end{align*}

Conversely, suppose that the Cauchy condition is satisfied. Then, for each \(x\in S\), the sequence of real numbers \(\{f_{n}(x)\}_{n=1}^{\infty}\) satisfies the Cauchy condition. Using the completeness of \(\mathbb{R}\), the sequence of real numbers \(\{f_{n}(x)\}_{n=1}^{\infty}\) is convergent. Therefore, we can define a limit function
\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)\mbox{ for each }x\in S,\]
which is a pointwise convergence. We want to show that \(f_{n}\rightarrow f\) uniformly on \(S\). Since the Cauchy condition is satisfied, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |f_{n}(x)-f_{n+k}(x)\right |<\frac{\epsilon}{2}\mbox{ for all }k\mbox{ and for every }x\in S.\]
Therefore, we obtain
\begin{align*} \left |f_{n}(x)-f(x)\right | & =\lim_{k\rightarrow\infty}\left |f_{n}(x)-f_{n+k}(x)\right |\\ & \leq
\frac{\epsilon}{2}<\epsilon\mbox{ for every }x\in S.\end{align*}
This completes the proof. \(\blacksquare\)

\begin{equation}{\label{ma122}}\tag{4}\mbox{}\end{equation}

Remark \ref{ma122}. We have the following observations.

  • The sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions converges uniformly to \(f\) on a set \(S\) if and only if, given any \(\epsilon >0\), there exists an integer \(N\) such that
    \[n>N\mbox{ implies }\sup_{x\in S}\left |f_{n}(x)-f(x)\right |\leq\epsilon.\]
  • The Cauchy condition is satisfied if and only if, given any \(\epsilon >0\), there exists an integer \(N\) such that
    \[m>N\mbox{ and }n>N\mbox{ imply }\sup_{x\in S}\left |f_{m}(x)-f_{n}(x)\right |\leq\epsilon.\]

Proposition. The sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions converges uniformly to \(f\) on a set \(S\) if and only if the sequence \(\{a_{n}\}_{n=1}^{\infty}\) defined by
\[a_{n}=\sup_{x\in S}\left |f_{n}(x)-f(x)\right |\]
converges to \(0\).

Proof. Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on a set \(S\). Then, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |f_{n}(x)-f(x)\right |<\frac{\epsilon}{2}\mbox{ for all }x\in S.\]
Therefore, we have that
\[n>N\mbox{ implies }|a_{n}|=\sup_{x\in S}\left |f_{n}(x)-f(x)\right |\leq\frac{\epsilon}{2}<\epsilon,\]
which says that \(\lim_{n\rightarrow\infty}a_{n}=0\). Conversely, suppose that \(\lim_{n\rightarrow\infty}a_{n}=0\). Given any \(\epsilon>0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\sup_{x\in S}\left |f_{n}(x)-f(x)\right |=|a_{n}|<\epsilon,\]
which also says that
\[n>N\mbox{ implies }\left |f_{n}(x)-f(x)\right |<\epsilon\mbox{ for all }x\in S.\]
This completes the proof. \(\blacksquare\)
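As a hedged numerical sketch of this criterion, applied to the earlier example \(f_{n}(x)=1/(1+x^{n})\): one can approximate \(a_{n}\) by a maximum over a finite grid. The grid supremum is small on \([0,0.9]\) but stays near \(1/2\) on a grid approaching \(x=1\).

```python
def a_n(n, grid):
    # grid approximation of sup |f_n(x) - f(x)|, where f_n(x) = 1/(1+x^n) and f = 1 on [0, 1)
    return max(abs(1.0 / (1.0 + x ** n) - 1.0) for x in grid)

grid_a = [0.9 * i / 1000 for i in range(1001)]   # grid for [0, 0.9]
grid_b = [i / 1000 for i in range(1000)]         # grid for [0, 1), approaching 1
print(a_n(50, grid_a))   # small: uniform convergence on [0, 0.9]
print(a_n(50, grid_b))   # near 1/2: no uniform convergence on [0, 1)
```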

Example. Let \(({\cal M}_{S},d)\) be a metric space of all bounded real-valued functions on a nonempty set \(S\) with the metric \(d\) defined by
\[d(f,g)=\sup_{x\in S}\left |f(x)-g(x)\right |.\]
Suppose that \(f_{n}\rightarrow f\) in the metric space \(({\cal M}_{S},d)\). It means that, given any \(\epsilon>0\), there exists an integer \(N\) such that
\begin{equation}{\label{ma123}}\tag{5}
n\geq N\mbox{ implies }d(f_{n},f)=\sup_{x\in S}\left |f_{n}(x)-f(x)\right |<\epsilon.
\end{equation}
From Remark \ref{ma122}, we have \(f_{n}\rightarrow f\) uniformly on \(S\). Conversely, suppose that \(f_{n}\rightarrow f\) uniformly on \(S\). Using (\ref{ma123}) again, we also have \(f_{n}\rightarrow f\) in the metric space \(({\cal M}_{S},d)\). Therefore, we conclude that \(f_{n}\rightarrow f\) in the metric space \(({\cal M}_{S},d)\) if and only if \(f_{n}\rightarrow f\) uniformly on \(S\). In other words, the uniform convergence on \(S\) is the same as ordinary convergence in the metric space \(({\cal M}_{S},d)\). \(\sharp\)

\begin{equation}{\label{ma146}}\tag{6}\mbox{}\end{equation}
Theorem \ref{ma146}. (Dini’s Theorem). Let \(A\) be a closed and bounded (compact) subset of \(\mathbb{R}\), and let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of continuous functions defined on the common domain \(A\) such that the following conditions are satisfied:

  • \(f_{n}(x)\geq 0\) for all \(x\in A\) and all \(n\geq 1\);
  • \(\{f_{n}\}_{n=1}^{\infty}\) converges pointwise to a continuous function \(f\);
  • \(f_{n+1}(x)\leq f_{n}(x)\) for all \(x\in A\) and all \(n\).

Then, the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \(A\).

Proof. Let \(g_{n}=f_{n}-f\) for all \(n\). It suffices to show that the sequence \(\{g_{n}\}_{n=1}^{\infty}\) converges uniformly to \(0\) on \(A\). The second condition says that the sequence \(\{g_{n}\}_{n=1}^{\infty}\) converges pointwise to \(0\). Therefore, given any \(\epsilon>0\), there exists an integer \(N_{x}\equiv N(\epsilon ,x)\) (that depends on \(x\) and \(\epsilon\)) such that
\begin{equation}{\label{ma147}}\tag{7}
n\geq N_{x}\mbox{ implies }|g_{n}(x)|<\frac{\epsilon}{2}.
\end{equation}
The continuity of function \(g_{N_{x}}\) says that there exists a neighborhood \(U_{x}(N_{x})\) of \(x\) such that
\begin{equation}{\label{ma148}}\tag{8}
y\in U_{x}(N_{x})\mbox{ implies }\left |g_{N_{x}}(y)-g_{N_{x}}(x)\right |<\frac{\epsilon}{2}.
\end{equation}
Therefore, the family \(\{U_{x}(N_{x}):x\in A\}\) of neighborhoods forms an open cover of the compact set \(A\). The compactness says that there exists a finite subcover
\[\left\{U_{x_{i}}(N_{x_{i}}):x_{i}\in A\mbox{ for }i=1,\cdots,n\right\}\]
that also covers \(A\). That is to say, we have
\[A\subseteq\bigcup_{i=1}^{n}U_{x_{i}}(N_{x_{i}}).\]
Let
\begin{equation}{\label{ma149}}\tag{9}
N=\max\left\{N_{x_{1}},\cdots,N_{x_{n}}\right\}.
\end{equation}
Then, for any \(x\in A\), there exists an index \(i\) satisfying \(x\in U_{x_{i}}(N_{x_{i}})\). Since the sequence \(\{f_{n}\}_{n=1}^{\infty}\) is decreasing, it follows that the sequence \(\{g_{n}\}_{n=1}^{\infty}\) is also decreasing. Therefore, for \(n\geq N\), using (\ref{ma147}) and (\ref{ma148}), we have
\begin{align*} 0 & \leq g_{n}(x)\leq g_{N_{x_{i}}}(x)\\ & =\left (g_{N_{x_{i}}}(x)-g_{N_{x_{i}}}(x_{i})\right )
+g_{N_{x_{i}}}(x_{i})\\ & <\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.\end{align*}
The integer \(N\) given in (\ref{ma149}) is independent of any \(x\in A\). Therefore, we obtain
\[|g_{n}(x)|<\epsilon\mbox{ for all }x\in A\mbox{ and for all }n\geq N.\]
This completes the proof. \(\blacksquare\)

Example. Given a sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions by

\[f_{n}(x)=\frac{x}{(1+x)^{n}}\mbox{ for }1\leq x\leq 2,\]
using Dini's Theorem \ref{ma146} (the hypotheses hold: each \(f_{n}\) is continuous and nonnegative, \(f_{n+1}(x)\leq f_{n}(x)\) since \(1+x\geq 2\), and \(f_{n}\) converges pointwise to \(0\)), the sequence \(\{f_{n}\}_{n=1}^{\infty}\) is uniformly convergent to \(0\) on \([1,2]\). \(\sharp\)
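A grid-based sketch (an added illustration, not from the text) of the uniform convergence asserted by Dini's theorem: for \(n\geq 2\) the maximum of \(f_{n}\) on \([1,2]\) is \(f_{n}(1)=1/2^{n}\), which tends to \(0\).

```python
def f(n, x):
    return x / (1 + x) ** n

grid = [1 + i / 1000 for i in range(1001)]
sups = [max(f(n, x) for x in grid) for n in (1, 5, 10, 20)]
print(sups)  # decreasing toward 0
```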

In what follows, we are going to study the uniform convergence of infinite series of real-valued functions.

Definition. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions defined on \(S\). For each \(x\in S\), we define
\begin{equation}{\label{maeq274}}\tag{10}
s_{n}(x)=\sum_{k=1}^{n}f_{k}(x).
\end{equation}
If there exists a function \(f\) satisfying \(s_{n}\rightarrow f\) uniformly on \(S\), we say that the series \(\sum_{k=1}^{\infty}f_{k}(x)\) of functions converges uniformly on \(S\), and we write
\[\sum_{k=1}^{\infty}f_{k}(x)=f(x)\]
uniformly on \(S\). \(\sharp\)

\begin{equation}{\label{map286}}\tag{11}\mbox{}\end{equation}
Proposition \ref{map286}. Suppose that the infinite series \(\sum_{k=1}^{\infty}f_{k}(x)\) converges uniformly to \(f\) on \(S\), and that each function \(f_{k}\) is continuous at \(c\in S\). Then \(f\) is also continuous at \(c\).

Proof. Let \(s_{n}\) be given in (\ref{maeq274}). Continuity of each \(f_{k}\) at \(c\) implies continuity of \(s_{n}\) at \(c\). The result follows from Proposition \ref{map180} immediately. \(\blacksquare\)

\begin{equation}{\label{mat275}}\tag{12}\mbox{}\end{equation}
Theorem \ref{mat275}. (Cauchy Condition for Uniform Convergence of Series). The infinite series \(\sum_{k=1}^{\infty}f_{k}(x)\) of functions converges uniformly on \(S\) if and only if, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |\sum_{k=n+1}^{n+p}f_{k}(x)\right |<\epsilon\mbox{ for every }x\in S\mbox{ and for all }p.\]

Proof. Let \(s_{n}\) be given in (\ref{maeq274}). Then, the result follows immediately from Theorem \ref{mat273}. \(\blacksquare\)

\begin{equation}{\label{map151}}\tag{13}\mbox{}\end{equation}
Lemma \ref{map151}. (Cauchy condition for series of real numbers). The series \(\sum_{k=1}^{\infty}a_{k}\) is convergent if and only if, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |a_{n+1}+\cdots +a_{n+p}\right |<\epsilon\]
for all \(p\).

For the proof of Lemma \ref{map151}, refer to the page Infinite Series of Real Numbers.

\begin{equation}{\label{mat284}}\tag{14}\mbox{}\end{equation}
Theorem \ref{mat284}. (Weierstrass M-test). Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions defined on \(S\), and let \(\{M_{n}\}_{n=1}^{\infty}\) be a sequence of nonnegative numbers satisfying
\[0\leq\left |f_{n}(x)\right |\leq M_{n}\mbox{ for each }x\in S\mbox{ and for all }n.\]
Suppose that the infinite series \(\sum_{k=1}^{\infty}M_{k}\) is convergent. Then, the infinite series \(\sum_{k=1}^{\infty}f_{k}(x)\) of functions is convergent uniformly on \(S\).

Proof. Since we have the following inequality
\begin{align*} \left |\sum_{k=n+1}^{n+p}f_{k}(x)\right | & \leq\sum_{k=n+1}^{n+p}\left |f_{k}(x)\right |\\ & \leq\sum_{k=n+1}^{n+p}M_{k},\end{align*}
the result follows immediately from Lemma \ref{map151} and Theorem \ref{mat275}. \(\blacksquare\)

Example. Given a series of functions by

\[\sum_{n=1}^{\infty}\frac{\cos^{2}(nx)}{n^{2}}\]
on \(\mathbb{R}\), let \(M_{n}=1/n^{2}\). Then, we have
\[\left |\frac{\cos^{2}(nx)}{n^{2}}\right |\leq\frac{1}{n^{2}}=M_{n}\mbox{ for all }n.\]
Since the \(p\)-series \(\sum_{n=1}^{\infty}(1/n^{2})\) with \(p=2\) is convergent, the Weierstrass M-test says that the series is uniformly convergent on \(\mathbb{R}\). \(\sharp\)
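The M-test bound can be sampled numerically (an added sketch that truncates the tail at a large finite index): the tail \(\sum_{k>n}\cos^{2}(kx)/k^{2}\) stays below \(\sum_{k>n}1/k^{2}<1/n\) for every sampled \(x\).

```python
import math

def tail(n, x, last=100000):
    # truncated tail sum_{k=n+1}^{last} cos^2(kx)/k^2, bounded by sum_{k>n} 1/k^2 < 1/n
    return sum(math.cos(k * x) ** 2 / k ** 2 for k in range(n + 1, last + 1))

for x in (0.0, 1.0, 2.5):
    print(x, tail(100, x))  # each value lies below 1/100, uniformly in x
```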

Example. Given a series of functions by

\[\sum_{n=1}^{\infty}\frac{x^{n}}{n!}\]
on \(\mathbb{R}\), we are going to claim that this series cannot be uniformly convergent on \(\mathbb{R}\). We consider the sequence \(\{s_{n}\}_{n=1}^{\infty}\) of partial sums, where
\[s_{n}=\sum_{k=1}^{n}\frac{x^{k}}{k!}.\]
Let us recall that the Cauchy condition is not satisfied when there exists \(\epsilon >0\) such that, for any given integer \(N\), there exist \(m\geq N\), \(n\geq N\), and \(x\in\mathbb{R}\) satisfying
\[\left |s_{m}(x)-s_{n}(x)\right |\geq\epsilon ;\]
in particular, this happens when
\[\sup_{x\in\mathbb{R}}\left |s_{m}(x)-s_{n}(x)\right |>\epsilon.\]
Now, for any \(N\geq 1\), we have
\begin{align*} \sup_{x\in\mathbb{R}}\left |s_{N+1}(x)-s_{N}(x)\right | & =\sup_{x\in\mathbb{R}}
\left |f_{N+1}(x)\right |\\ & \geq\left |f_{N+1}(N+1)\right |\\ & =\frac{(N+1)^{N+1}}{(N+1)!}\geq 1>0.9,\end{align*}
which says that the Cauchy condition for \(\{s_{n}\}_{n=1}^{\infty}\) is not satisfied by taking \(\epsilon =0.9\), \(m=N+1\) and \(n=N\). Therefore, the sequence \(\{s_{n}\}_{n=1}^{\infty}\) cannot be uniformly convergent on \(\mathbb{R}\), which also says that the series cannot be uniformly convergent on \(\mathbb{R}\). \(\sharp\)
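The lower bound used above, \((N+1)^{N+1}/(N+1)!\geq 1\), is easy to check numerically (an added sketch; the ratio in fact grows rapidly):

```python
from math import factorial

def gap(N):
    # (N+1)^(N+1) / (N+1)!, the lower bound for sup |s_{N+1} - s_N| used in the text
    n = N + 1
    return n ** n / factorial(n)

for N in (1, 5, 10, 20):
    print(N, gap(N))  # never drops below 1
```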

Example. Given a series of functions by

\[f(x)=\sum_{n=1}^{\infty}\left (\frac{x^{n}}{n!}\right )^{2},\]
we are going to claim that \(f\) is continuous on \(\mathbb{R}\). Given any fixed \(t>0\), we first claim that the series is uniformly convergent on \([-t,t]\). Let
\[M_{n}=\left (\frac{t^{n}}{n!}\right )^{2}.\]
Since
\begin{align*} \lim_{n\rightarrow\infty}\frac{M_{n+1}}{M_{n}} & =\lim_{n\rightarrow\infty}
\left [\frac{t^{n+1}}{(n+1)!}\right ]^{2}\cdot\left (\frac{n!}{t^{n}}\right )^{2}
\\ & =\lim_{n\rightarrow\infty}\left (\frac{t}{n+1}\right )^{2}=0<1,\end{align*}
the ratio test says that the series of real numbers \(\sum_{n=1}^{\infty}M_{n}\) is convergent. Using the Weierstrass M-test, it follows that the series is uniformly convergent on \([-t,t]\). Since each function
\[f_{n}(x)=\left (\frac{x^{n}}{n!}\right )^{2}\]
is continuous on \([-t,t]\), the following function
\[f(x)=\sum_{n=1}^{\infty}\left (\frac{x^{n}}{n!}\right )^{2}=\lim_{n\rightarrow\infty}s_{n}(x)\]
is continuous on \([-t,t]\) by Proposition \ref{map286}. Since \(t>0\) is arbitrary, it follows that \(f\) is continuous on \(\mathbb{R}\). \(\sharp\)

Example. Given a sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions by

\[f_{n}(x)=\frac{x}{(1+x)^{n}}\mbox{ for }1\leq x\leq 2,\]
we are going to claim that the series \(\sum_{n=1}^{\infty}f_{n}\) is uniformly convergent on \([1,2]\). We first claim that the series is pointwise convergent for \(x\in [1,2]\). For \(x\in [1,2]\), we have \(1+x\geq 2\), which says \(|1/(1+x)|\leq 1/2<1\). Therefore, we have
\begin{align*} \sum_{n=1}^{\infty}\frac{x}{(1+x)^{n}} & =x\cdot\sum_{n=1}^{\infty}\left (\frac{1}{1+x}\right )^{n}\\ & =\frac{x}{1+x}\cdot\frac{1}{1-\frac{1}{x+1}}=1,\end{align*}
which shows that the series is pointwise convergent to \(1\) for \(x\in [1,2]\). Now, we have
\begin{align*} f_{n}^{\prime}(x) & =\frac{(1+x)^{n}-nx(1+x)^{n-1}}{(1+x)^{2n}}\\ & =\frac{(1+x)^{n-1}\cdot [1+(1-n)x]}{(1+x)^{2n}}.\end{align*}
Since \(1\leq x\leq 2\), there exists an integer \(N\) satisfying \(f_{n}^{\prime}(x)\leq 0\) for \(n\geq N\) and for all \(x\in [1,2]\), which also says that \(f_{n}\) is decreasing on \([1,2]\) for \(n\geq N\). We define
\[M_{n}=f_{n}(1)\mbox{ for }n\geq N\] and \[M_{n}=\sup_{x\in [1,2]}f_{n}(x)\mbox{ for }n<N.\]
Then, we have \(|f_{n}(x)|\leq M_{n}\) for all \(n\) and all \(x\in [1,2]\). Since \(\sum_{n=1}^{\infty}f_{n}(1)=1\) and each term is positive, it follows that \(\sum_{n=N}^{\infty}f_{n}(1)<1\). Therefore, the infinite series
\begin{align*} \sum_{n=1}^{\infty}M_{n} & =\sum_{n=1}^{N-1}M_{n}+\sum_{n=N}^{\infty}M_{n}
\\ & =\sum_{n=1}^{N-1}M_{n}+\sum_{n=N}^{\infty}f_{n}(1)\\ & <1+\sum_{n=1}^{N-1}M_{n}\end{align*}
is convergent. Using the Weierstrass M-test, we conclude that the infinite series \(\sum_{n=1}^{\infty}f_{n}\) is uniformly convergent to \(1\) on \([1,2]\). \(\sharp\)
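A numeric sketch of the conclusion (added here): the partial sums of \(\sum_{n}x/(1+x)^{n}\) approach \(1\) at a uniform rate on \([1,2]\), since the tail equals \(1/(1+x)^{n}\leq 2^{-n}\) there.

```python
def partial(n, x):
    # partial sum of the geometric series sum_k x/(1+x)^k
    return sum(x / (1 + x) ** k for k in range(1, n + 1))

for x in (1.0, 1.5, 2.0):
    print(x, 1 - partial(60, x))  # residual 1/(1+x)^60 <= 2^-60 on [1, 2]
```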

\begin{equation}{\label{ma150}}\tag{15}\mbox{}\end{equation}
Theorem \ref{ma150}. (Weierstrass Approximation Theorem). Suppose that \(f\) is a continuous function on a closed interval \([a,b]\). Then, there exists a sequence of polynomials that converge uniformly to \(f\) on \([a,b]\). \(\sharp\)

Example. Let \(f\) be a continuous function on \([a,b]\). Suppose that
\[\int_{a}^{b}x^{n}f(x)dx=0\mbox{ for all }n.\]
We are going to claim that \(f(x)=0\) for all \(x\in [a,b]\). We consider the following polynomials
\[P_{n}(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n}.\]
Using the assumption, we have
\[\int_{a}^{b}f(x)\cdot P_{n}(x)dx=a_{0}\int_{a}^{b}f(x)dx+a_{1}\int_{a}^{b}x\cdot f(x)dx
+\cdots +a_{n}\int_{a}^{b}x^{n}\cdot f(x)dx=0.\]
The Weierstrass Approximation Theorem \ref{ma150} says that there exists a sequence \(\{P_{n}\}_{n=1}^{\infty}\) of polynomials that converges uniformly to \(f\) on \([a,b]\). Since \(f\) is bounded on \([a,b]\), it follows that the sequence \(\{f\cdot P_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f^{2}\) on \([a,b]\). Therefore, we obtain
\[\int_{a}^{b}f^{2}(x)dx=\lim_{n\rightarrow\infty}\int_{a}^{b}f(x)\cdot P_{n}(x)dx=0.\]
Since \(f^{2}\) is continuous and nonnegative on \([a,b]\), it follows that \(f^{2}(x)=0\) for all \(x\in [a,b]\), which also says that \(f(x)=0\) for all \(x\in [a,b]\). \(\sharp\)
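Theorem \ref{ma150} is stated without proof here. One classical constructive route (an illustration, not the text's argument) uses Bernstein polynomials on \([0,1]\), whose sup-norm error shrinks as the degree grows:

```python
from math import comb

def bernstein(f, n, x):
    # degree-n Bernstein polynomial of f on [0, 1]:
    # B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(n + 1))

def f(x):
    return abs(x - 0.5)   # continuous but not differentiable at 1/2

grid = [i / 100 for i in range(101)]

def sup_err(n):
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

print(sup_err(10), sup_err(100))  # the error decreases as the degree grows
```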

Definition. We say that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) of functions is uniformly bounded on \(S\) when there exists a constant \(M>0\) satisfying
\[|f_{n}(x)|\leq M\mbox{ for all }x\in S\mbox{ and for all }n.\]
The number \(M\) is called a uniform bound for \(\{f_{n}\}_{n=1}^{\infty}\). \(\sharp\)

It is easy to show that if each function \(f_{n}\) is bounded and if \(f_{n}\rightarrow f\) uniformly on \(S\), then the sequence \(\{f_{n}\}_{n=1}^{\infty}\) is uniformly bounded on \(S\).

Lemma. Given two sequences \(\{a_{n}\}_{n=1}^{\infty}\) and \(\{b_{n}\}_{n=1}^{\infty}\), let the \(n\)th partial sum of \(\{a_{n}\}_{n=1}^{\infty}\) be given by \(s_{n}=a_{1}+\cdots +a_{n}\). Then, we have the following identity
\begin{equation}{\label{ma22}}\tag{16}
\sum_{k=1}^{n}a_{k}b_{k}=s_{n}b_{n+1}-\sum_{k=1}^{n}s_{k}\left (b_{k+1}-b_{k}\right ).
\end{equation}
Therefore, if the series \(\sum_{k=1}^{\infty}s_{k}(b_{k+1}-b_{k})\) and the sequence \(\{s_{n}b_{n+1}\}_{n=1}^{\infty}\) are convergent, then the series \(\sum_{k=1}^{\infty}a_{k}b_{k}\) is convergent.

Proof. Let \(s_{0}=0\). Then, we have
\begin{align*} \sum_{k=1}^{n}a_{k}b_{k} & =\sum_{k=1}^{n}(s_{k}-s_{k-1})b_{k}\\ & =\sum_{k=1}^{n}s_{k}b_{k}
-\sum_{k=1}^{n}s_{k}b_{k+1}+s_{n}b_{n+1}\\ & =\sum_{k=1}^{n}s_{k}\left (b_{k}-b_{k+1}\right )+s_{n}b_{n+1}.\end{align*}
This completes the proof. \(\blacksquare\)

For more details on the above lemma, refer to the page Infinite Series of Real Numbers.
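A quick numeric verification of identity (\ref{ma22}) on random data (an added sketch):

```python
import random

random.seed(1)
n = 10
a = [random.uniform(-1, 1) for _ in range(n)]        # a_1, ..., a_n
b = [random.uniform(-1, 1) for _ in range(n + 1)]    # b_1, ..., b_{n+1}

s = []                                               # partial sums s_1, ..., s_n
total = 0.0
for x in a:
    total += x
    s.append(total)

lhs = sum(a[k] * b[k] for k in range(n))
rhs = s[n - 1] * b[n] - sum(s[k] * (b[k + 1] - b[k]) for k in range(n))
print(abs(lhs - rhs))  # agrees up to floating-point rounding
```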

Theorem. (Dirichlet’s Test). Let \(\{g_{n}\}_{n=1}^{\infty}\) be a decreasing sequence of nonnegative functions satisfying \(g_{n}\rightarrow 0\) uniformly on \(S\). Let \(s_{n}\) denote the \(n\)th partial sum of the infinite series \(\sum_{k=1}^{\infty}f_{k}\) of functions, where each function \(f_{k}\) is defined on \(S\). Suppose that \(\{s_{n}\}_{n=1}^{\infty}\) is uniformly bounded on \(S\). Then, the infinite series \(\sum_{k=1}^{\infty}f_{k}\cdot g_{k}\) of functions is uniformly convergent on \(S\).

Proof. Using (\ref{ma22}), we define the following partial sums
\begin{align*} t_{n}(x) & =\sum_{k=1}^{n}f_{k}(x)g_{k}(x)\\ & =\sum_{k=1}^{n}s_{k}(x)\left [g_{k}(x)-g_{k+1}(x)\right ]+g_{n+1}(x)s_{n}(x).\end{align*}
Therefore, for \(n>m\), we have
\[t_{n}(x)-t_{m}(x)=\sum_{k=m+1}^{n}s_{k}(x)\left [g_{k}(x)-g_{k+1}(x)\right ]+g_{n+1}(x)s_{n}(x)-g_{m+1}(x)s_{m}(x).\]
Since each \(g_{n}\) is nonnegative and \(\{s_{n}\}_{n=1}^{\infty}\) is uniformly bounded by a constant \(M\), we have
\begin{align}
\left |t_{n}(x)-t_{m}(x)\right | & \leq M\sum_{k=m+1}^{n}\left [g_{k}(x)-g_{k+1}(x)\right ]+M\cdot g_{n+1}(x)+M\cdot g_{m+1}(x)\label{maeq276}\tag{17}\\
& =M\cdot\left [g_{m+1}(x)-g_{n+1}(x)\right ]+M\cdot g_{n+1}(x)+M\cdot g_{m+1}(x)\nonumber\\
& =2M\cdot g_{m+1}(x).\nonumber
\end{align}
Since \(g_{n}\rightarrow 0\) uniformly on \(S\), given any \(\epsilon>0\), there exists an integer \(N\) such that \(n>N\) implies \(|g_{n}(x)|<\epsilon /(2M)\) for each \(x\in S\). Therefore, for \(n>m>N\), from (\ref{maeq276}), we obtain \(|t_{n}(x)-t_{m}(x)|<\epsilon\) for each \(x\in S\); that is, the sequence \(\{t_{n}\}_{n=1}^{\infty}\) satisfies the Cauchy condition. Theorem \ref{mat273} says that the infinite series \(\sum_{k=1}^{\infty}f_{k}(x)g_{k}(x)\) converges uniformly on \(S\). This completes the proof. \(\blacksquare\)
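As a hedged illustration of Dirichlet's test (an example chosen here, not taken from the text): on \([0,1]\), \(f_{k}(x)=(-1)^{k}\) has partial sums uniformly bounded by \(M=1\), and \(g_{k}(x)=x^{k}/k\) decreases to \(0\) uniformly, so \(\sum_{k}(-1)^{k}x^{k}/k\) converges uniformly; the proof's estimate caps \(|t_{n}(x)-t_{m}(x)|\) by \(2Mg_{m+1}(x)\leq 2/(m+1)\).

```python
def t(n, x):
    # partial sum of sum_k (-1)^k x^k / k
    return sum((-1) ** k * x ** k / k for k in range(1, n + 1))

m, n = 50, 200
worst = max(abs(t(n, i / 100) - t(m, i / 100)) for i in range(101))
print(worst)  # below the bound 2/(m+1)
```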

Theorem. (Abel’s Test). Let \(\{g_{n}\}_{n=1}^{\infty}\) be a decreasing sequence of functions. Suppose that \(\{g_{n}\}_{n=1}^{\infty}\) is uniformly bounded on \(S\), and that the infinite series \(\sum_{k=1}^{\infty}f_{k}\) of functions converges uniformly on \(S\). Then, the infinite series \(\sum_{k=1}^{\infty}f_{k}\cdot g_{k}\) converges uniformly on \(S\). \(\sharp\)

\begin{equation}{\label{ma3}}\tag{18}\mbox{}\end{equation}
Lemma \ref{ma3}. Suppose that the real-valued function \(\alpha\) is increasing on \([a,b]\). Given any two functions \(f\) and \(g\) defined on \([a,b]\), we have
\[U({\cal P},f+g,\alpha )\leq U({\cal P},f,\alpha )+U({\cal P},g,\alpha )\]
and
\[L({\cal P},f+g,\alpha )\geq L({\cal P},f,\alpha )+L({\cal P},g,\alpha ).\]
We also have
\[U({\cal P},f-g,\alpha )\leq U({\cal P},f,\alpha )-L({\cal P},g,\alpha )\]
and
\[L({\cal P},f-g,\alpha )\geq L({\cal P},f,\alpha )-U({\cal P},g,\alpha ).\]

For the proof of Lemma \ref{ma3}, refer to the page The Riemann-Stieltjes Integrals.

\begin{equation}{\label{mat18}}\tag{19}\mbox{}\end{equation}
Lemma \ref{mat18}. Let \(f\) be a real-valued function defined on \([a,b]\). Then \(f\) is of bounded variation on \([a,b]\) if and only if \(f\) can be expressed as the difference of two increasing functions.

For the proof of Lemma \ref{mat18}, refer to the page Functions of Bounded Variation.

\begin{equation}{\label{map40}}\tag{20}\mbox{}\end{equation}
Lemma \ref{map40}. Suppose that \(f\in R(\alpha )\) and \(f\in R(\beta )\) on \([a,b]\). Then \(f\in R(c_{1}\alpha+c_{2}\beta )\) on \([a,b]\) for any constants \(c_{1}\) and \(c_{2}\). Moreover, we have
\[\int_{a}^{b}fd\left (c_{1}\alpha +c_{2}\beta\right )=c_{1}\int_{a}^{b}fd\alpha +c_{2}\int_{a}^{b}fd\beta .\]

For the proof of Lemma \ref{map40}, refer to the page The Riemann-Stieltjes Integrals.

\begin{equation}{\label{mat277}}\tag{21}\mbox{}\end{equation}
Theorem \ref{mat277}. Let \(\alpha\) be of bounded variation on \([a,b]\), and let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions. Suppose that \(f_{n}\rightarrow f\) uniformly on \([a,b]\), and that \(f_{n}\in R(\alpha )\) for all \(n\). Define
\[g_{n}(x)=\int_{a}^{x}f_{n}(t)d\alpha (t)\mbox{ for }x\in [a,b].\]
Then, we have the following properties:

(i) \(f\in R(\alpha )\) on \([a,b]\);

(ii) \(g_{n}\rightarrow g\) uniformly on \([a,b]\), where
\[g(x)=\int_{a}^{x}f(t)d\alpha (t);\]

that is,
\[\lim_{n\rightarrow\infty}\int_{a}^{x}f_{n}(t)d\alpha (t)=\int_{a}^{x}\left [\lim_{n\rightarrow\infty}f_{n}(t)\right ]d\alpha (t).\]
In other words, a uniformly convergent sequence of functions can be integrated term by term.

Proof. We first assume that \(\alpha\) is increasing. When \(\alpha (a)=\alpha (b)\), the desired result is obvious. Therefore, we assume \(\alpha (a)\neq\alpha (b)\), i.e., \(\alpha (a)<\alpha (b)\). To prove part (i), from Theorem~\ref{mat62}, we need to show that \(f\) satisfies the Riemann’s condition. Since \(f_{n}\rightarrow f\) uniformly on \([a,b]\), given any \(\epsilon >0\), there exists an integer \(N\) satisfying
\[\left |f_{N}(x)-f(x)\right |<\frac{\epsilon}{3[\alpha (b)-\alpha (a)]}\mbox{ for all }x\in [a,b],\]
which also says
\[\sup_{x\in [x_{k-1},x_{k}]}\left |f(x)-f_{N}(x)\right |\leq\frac{\epsilon}{3[\alpha (b)-\alpha (a)]}.\]
Therefore, given any partition \({\cal P}\) of \([a,b]\), since \(\Delta\alpha_{k}\geq 0\) for all \(k\), we have
\begin{align*}
\left |U({\cal P},f_{N}-f,\alpha )\right | & =\left |\sum_{k=1}^{n}M_{k}(f_{N}-f)\cdot\Delta\alpha_{k}\right |\\ & \leq\sum_{k=1}^{n}\left |M_{k}(f_{N}-f)\right |\cdot\Delta\alpha_{k}\\
& =\sum_{k=1}^{n}\left |\sup_{x\in [x_{k-1},x_{k}]}\left [f_{N}(x)-f(x)\right ]\right |\cdot\Delta\alpha_{k}
\\ & \leq\sum_{k=1}^{n}\sup_{x\in [x_{k-1},x_{k}]}\left |f_{N}(x)-f(x)\right |\cdot\Delta\alpha_{k}\\
& \leq\sum_{k=1}^{n}\frac{\epsilon}{3[\alpha (b)-\alpha (a)]}\cdot\Delta\alpha_{k}=\frac{\epsilon}{3}.
\end{align*}
We can similarly obtain
\[\left |L({\cal P},f-f_{N},\alpha )\right |=\left |L({\cal P},f_{N}-f,\alpha )\right |\leq\frac{\epsilon}{3}.\]
For this integer \(N\), since \(f_{N}\in R(\alpha )\), using Riemann’s condition, we can choose a partition \({\cal P}(\epsilon )\) such that
\[{\cal P}\supseteq{\cal P}(\epsilon )\mbox{ implies }U({\cal P},f_{N},\alpha )-L({\cal P},f_{N},\alpha )<\frac{\epsilon}{3}.\]
Therefore, using Lemma \ref{ma3}, we have
\begin{align*}
U({\cal P},f,\alpha )-L({\cal P},f,\alpha ) & \leq U({\cal P},f_{N}-f,\alpha )-L({\cal P},f_{N}-f,\alpha )+U({\cal P},f_{N},\alpha )-L({\cal P},f_{N},\alpha )\\
& <|U({\cal P},f-f_{N},\alpha )|+|L({\cal P},f-f_{N},\alpha )|+\frac{\epsilon}{3}\leq\epsilon ,
\end{align*}
which shows that \(f\) satisfies Riemann’s condition, i.e., \(f\in R(\alpha )\) on \([a,b]\). Now, we assume that \(\alpha\) is of bounded variation. Lemma \ref{mat18} says that \(\alpha=\alpha_{1}-\alpha_{2}\) for some increasing functions \(\alpha_{1}\) and \(\alpha_{2}\) on \([a,b]\). It follows that \(f\in R(\alpha_{1})\) and \(f\in R(\alpha_{2})\) on \([a,b]\). Lemma \ref{map40} says that \(f\in R(\alpha_{1}-\alpha_{2})=R(\alpha )\) on \([a,b]\).

To prove part (ii), we first assume that \(\alpha\) is increasing. The uniform convergence of the sequence \(\{f_{n}\}_{n=1}^{\infty}\) on \([a,b]\) says that, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n>N\mbox{ implies }\left |f_{n}(t)-f(t)\right |<\frac{\epsilon}{2\left [\alpha (b)-\alpha (a)\right ]}\mbox{ for all }t\in [a,b].\]
Then, we have
\begin{align*} \left |g_{n}(x)-g(x)\right | & \leq\int_{a}^{x}\left |f_{n}(t)-f(t)\right |d\alpha (t)\\ & \leq\frac{\alpha (x)-\alpha (a)}{\alpha (b)-\alpha (a)}\cdot\frac{\epsilon}{2}\\ & \leq\frac{\epsilon}{2}\\ & <\epsilon\mbox{ for all }x\in [a,b],\end{align*}
which shows \(g_{n}\rightarrow g\) uniformly on \([a,b]\). Now, we assume that \(\alpha\) is of bounded variation. Then \(\alpha=\alpha_{1}-\alpha_{2}\) for some increasing functions \(\alpha_{1}\) and \(\alpha_{2}\) on \([a,b]\). For \(x\in [a,b]\), we define
\[g_{n}^{(1)}(x)=\int_{a}^{x}f_{n}(t)d\alpha_{1}(t)\mbox{ and }g_{n}^{(2)}(x)=\int_{a}^{x}f_{n}(t)d\alpha_{2}(t).\]
The previous result says \(g_{n}^{(1)}\rightarrow g_{1}\) uniformly on \([a,b]\) and \(g_{n}^{(2)}\rightarrow g_{2}\) uniformly on \([a,b]\), where
\[g_{1}(x)=\int_{a}^{x}f(t)d\alpha_{1}(t)\mbox{ and }g_{2}(x)=\int_{a}^{x}f(t)d\alpha_{2}(t).\]
We also have \(g_{n}^{(1)}-g_{n}^{(2)}\rightarrow g_{1}-g_{2}\) uniformly on \([a,b]\). Using Lemma \ref{map40}, we have
\begin{align*}
g_{n}^{(1)}(x)-g_{n}^{(2)}(x) & =\int_{a}^{x}f_{n}(t)d\alpha_{1}(t)-\int_{a}^{x}f_{n}(t)d\alpha_{2}(t)\\
& =\int_{a}^{x}f_{n}(t)d(\alpha_{1}(t)-\alpha_{2}(t))\\ & =\int_{a}^{x}f_{n}(t)d\alpha(t)=g_{n}(x)
\end{align*}
and
\begin{align*}
g_{1}(x)-g_{2}(x) & =\int_{a}^{x}f(t)d\alpha_{1}(t)-\int_{a}^{x}f(t)d\alpha_{2}(t)\\
& =\int_{a}^{x}f(t)d(\alpha_{1}(t)-\alpha_{2}(t))\\ & =\int_{a}^{x}f(t)d\alpha(t)=g(x).
\end{align*}
It follows that \(g_{n}\rightarrow g\) uniformly on \([a,b]\). This completes the proof. \(\blacksquare\)
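As a numerical sketch of Theorem \ref{mat277} in the Riemann case \(\alpha (t)=t\) (the particular sequence, step count, and grid below are illustrative assumptions, not part of the theorem), take \(f_{n}(t)=\cos t+t/n\), which converges uniformly to \(f(t)=\cos t\) on \([0,1]\); the indefinite integrals \(g_{n}\) then converge uniformly to \(g(x)=\sin x\):

```python
import math

def f_n(n, t):
    # Illustrative sequence: f_n(t) = cos(t) + t/n -> cos(t) uniformly on [0, 1]
    return math.cos(t) + t / n

def integral(func, a, x, steps=2000):
    # Composite midpoint rule for the Riemann integral (alpha(t) = t)
    h = (x - a) / steps
    return sum(func(a + (i + 0.5) * h) for i in range(steps)) * h

def sup_gap(n, samples=50):
    # Approximate sup over x in (0, 1] of |g_n(x) - g(x)|, where g(x) = sin(x)
    return max(
        abs(integral(lambda t: f_n(n, t), 0.0, i / samples) - math.sin(i / samples))
        for i in range(1, samples + 1)
    )

gaps = [sup_gap(n) for n in (1, 10, 100)]  # shrinks roughly like 1/(2n)
```

Here the exact gap is \(g_{n}(x)-g(x)=x^{2}/(2n)\), so the computed suprema decrease like \(1/(2n)\), matching the uniform convergence asserted in part (ii).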

\begin{equation}{\label{ma14}}\tag{22}\mbox{}\end{equation}
Theorem \ref{ma14}. Let \(\alpha\) be of bounded variation on \([a,b]\). Suppose that the infinite series \(\sum_{k=1}^{\infty}f_{k}\) of functions satisfies \(f_{n}\in R(\alpha )\) for all \(n\), and that the infinite series \(\sum_{k=1}^{\infty}f_{k}\) converges uniformly to \(f\) on \([a,b]\). Define
\[g_{n}(x)=\int_{a}^{x}f_{n}(t)d\alpha (t)\mbox{ for }x\in [a,b].\]
Then, we have the following properties:

(i) \(f\in R(\alpha )\) on \([a,b]\);

(ii) \(\sum_{k=1}^{\infty}g_{k}\) converges uniformly to \(g\) on \([a,b]\), where
\[g(x)=\int_{a}^{x}f(t)d\alpha (t);\]
that is,
\[\int_{a}^{x}\left [\sum_{k=1}^{\infty}f_{k}(t)\right ]d\alpha (t)=\sum_{k=1}^{\infty}\int_{a}^{x}f_{k}(t)d\alpha (t).\]
In other words, a uniformly convergent infinite series of functions can be integrated term by term.

Proof. The results follow immediately by considering the partial sums of the infinite series \(\sum_{k=1}^{\infty}f_{k}\) and using Theorem \ref{mat277}. \(\blacksquare\)

Example. Evaluate the convergent series

\[\sum_{k=1}^{\infty}\frac{1}{k^{2}}\mbox{ and }\sum_{k=1}^{\infty}\frac{1}{(2k-1)^{2}}.\]
For \(x,y\in (0,1)\), we have the series
\[\sum_{k=0}^{\infty}(xy)^{k}=\frac{1}{1-xy}.\]
Using Theorem \ref{ma14}, we have
\begin{align*}
\int_{0}^{1}\!\!\!\!\int_{0}^{1}\frac{1}{1-xy}dxdy & =\int_{0}^{1}\!\!\!\!\int_{0}^{1}
\sum_{k=0}^{\infty}(xy)^{k}dxdy=\sum_{k=0}^{\infty}\int_{0}^{1}\!\!\!\!\int_{0}^{1}(xy)^{k}dxdy\\
& =\sum_{k=0}^{\infty}\left (\int_{0}^{1}x^{k}dx\right )\left (\int_{0}^{1}y^{k}dy\right )
\\ & =\sum_{k=0}^{\infty}\frac{1}{(k+1)^{2}}=\sum_{k=1}^{\infty}\frac{1}{k^{2}}.
\end{align*}
Therefore, it remains to evaluate the iterated integral using the change of variables
\[u=\frac{x+y}{2}\mbox{ and }v=\frac{y-x}{2},\]
which implies
\[x=u-v\mbox{ and }y=u+v.\]
Therefore, the Jacobian is \(|J|=2\). After some algebraic calculation, we obtain
\[\sum_{k=1}^{\infty}\frac{1}{k^{2}}=\int_{0}^{1}\!\!\!\!\int_{0}^{1}\frac{1}{1-xy}dxdy=\frac{\pi^{2}}{6}.\]
Now, we have
\[s_{2n}=\sum_{k=1}^{2n}\frac{1}{k^{2}}=\sum_{k=1}^{n}\frac{1}{(2k)^{2}}+\sum_{k=1}^{n}\frac{1}{(2k-1)^{2}},\]
which implies
\[\sum_{k=1}^{\infty}\frac{1}{k^{2}}=\lim_{n\rightarrow\infty}s_{2n}=
\frac{1}{4}\sum_{k=1}^{\infty}\frac{1}{k^{2}}+\sum_{k=1}^{\infty}\frac{1}{(2k-1)^{2}}.\]
Therefore, we obtain
\[\sum_{k=1}^{\infty}\frac{1}{(2k-1)^{2}}=\frac{3}{4}\sum_{k=1}^{\infty}\frac{1}{k^{2}}=\frac{3}{4}\cdot\frac{\pi^{2}}{6}=\frac{\pi^{2}}{8}.\]
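The two sums above can be checked numerically; the truncation level below is an arbitrary choice, and the tails of both series are of order \(1/N\):

```python
import math

N = 200_000  # illustrative truncation level

# Partial sums of sum 1/k^2 (limit pi^2/6) and sum 1/(2k-1)^2 (limit pi^2/8)
s_all = sum(1.0 / k**2 for k in range(1, N + 1))
s_odd = sum(1.0 / (2 * k - 1) ** 2 for k in range(1, N + 1))
```

The computed values also satisfy the relation \(\sum 1/(2k-1)^{2}=\frac{3}{4}\sum 1/k^{2}\) up to the truncation error.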

Uniform convergence is a sufficient condition, but not a necessary condition, for term-by-term integration, as shown in the following example.

\begin{equation}{\label{maex181}}\tag{23}\mbox{}\end{equation}

Example \ref{maex181}. We consider the functions \(f_{n}(x)=x^{n}\) for \(x\in [0,1]\). The limit function is given by

\[f(x)=\lim_{n\rightarrow\infty}f_{n}(x)=\left\{\begin{array}{ll}
0 & \mbox{if \(x\in [0,1)\)}\\
1 & \mbox{if \(x=1\)}.
\end{array}\right .\]
The convergence is not uniform on \([0,1]\). However, we still have
\begin{align*} \lim_{n\rightarrow\infty}\int_{0}^{1}f_{n}(x)dx & =\lim_{n\rightarrow\infty}\int_{0}^{1}x^{n}dx\\ & =\lim_{n\rightarrow\infty}\frac{1}{n+1}=0\\ & =\int_{0}^{1}f(x)dx=\int_{0}^{1}\left [\lim_{n\rightarrow\infty}f_{n}(x)\right ]dx.\end{align*}
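A small computation illustrates the example (the sample points \(x=1-1/n\) are an illustrative choice): the deviation of \(f_{n}\) from its pointwise limit stays away from \(0\) near \(x=1\), so the convergence is not uniform, while the integrals \(1/(n+1)\) still tend to \(0\):

```python
# Deviation of f_n(x) = x^n from its pointwise limit (0 on [0, 1)) at the
# points x = 1 - 1/n: (1 - 1/n)^n -> 1/e, so sup |f_n - f| does not vanish.
devs = [(1.0 - 1.0 / n) ** n for n in (10, 100, 1000)]

# Exact integrals: int_0^1 x^n dx = 1/(n+1) -> 0 = int_0^1 f(x) dx.
ints = [1.0 / (n + 1) for n in (10, 100, 1000)]
```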

By a step function, we mean a function \(\psi\) which has the form
\[\psi (x)=c_{i}\mbox{ for }x_{i-1}<x<x_{i}\]
for some partition \({\cal P}\in\mathfrak{P}[a,b]\) and some set of constants \(\{c_{1},\cdots ,c_{n}\}\). Suppose that we define
\[\int_{a}^{b}\psi (x)dx=\sum_{i=1}^{n}c_{i}\left (x_{i}-x_{i-1}\right ).\]
Recall that the upper and lower Riemann integrals are given by
\[\overline{\int}_{a}^{b}f(x)dx=\inf_{\{\psi:\psi\geq f\}}\int_{a}^{b}\psi (x)dx\]
and
\begin{equation}{\label{raeq241}}\tag{24}
\underline{\int}_{a}^{b}f(x)dx=\sup_{\{\phi:\phi\leq f\}}\int_{a}^{b}\phi (x)dx.
\end{equation}
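As a sketch of the lower integral in (\ref{raeq241}), the step function \(\phi\) below takes the infimum of \(f\) on each subinterval of a uniform partition; for the illustrative choice \(f(x)=x^{2}\) on \([0,1]\), the lower sums increase toward \(\int_{0}^{1}x^{2}dx=1/3\):

```python
def lower_sum(f, a, b, n):
    # Integral of the step function phi taking the infimum of f on each
    # subinterval; for increasing f the infimum sits at the left endpoint.
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

# Refining the uniform partition pushes the lower sums up toward 1/3.
vals = [lower_sum(lambda x: x * x, 0.0, 1.0, n) for n in (10, 100, 1000)]
```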

\begin{equation}{\label{ma159}}\tag{25}\mbox{}\end{equation}
Theorem \ref{ma159}. (Luxemburg Monotone Convergence Theorem). Let \(\{f_{n}\}_{n=1}^{\infty}\) be a decreasing sequence of nonnegative continuous functions defined on the compact interval \([a,b]\). Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges pointwise to \(0\) on \([a,b]\). Then, we have
\[\lim_{n\rightarrow\infty}\underline{\int}_{a}^{b}f_{n}(x)dx=0.\]

Proof. From (\ref{raeq241}), since each \(f_{n}\) is nonnegative and continuous, we have
\begin{align}
\int_{a}^{b}f_{n}(x)dx & =\underline{\int}_{a}^{b}f_{n}(x)dx=\sup_{\{\phi:\phi\leq f_{n}\}}\int_{a}^{b}\phi (x)dx\nonumber\\
& =\sup_{\{\phi:0\leq\phi\leq f_{n}\}}\int_{a}^{b}\phi (x)dx.\label{ma162}\tag{26}
\end{align}
Since \(f_{n+1}\leq f_{n}\), it follows
\[\left\{\phi:0\leq\phi\leq f_{n+1}\right\}\subseteq\left\{\phi:0\leq\phi\leq f_{n}\right\},\]
which implies
\begin{equation}{\label{ma163}}\tag{27}
\int_{a}^{b}f_{n+1}(x)dx\leq\int_{a}^{b}f_{n}(x)dx.
\end{equation}
Using Dini’s Theorem \ref{ma146}, we see that \(\{f_{n}\}_{n=1}^{\infty}\) decreases uniformly to zero on \([a,b]\). Hence, given any \(\epsilon >0\), for all sufficiently large \(n\), every step function \(\phi\) with \(0\leq\phi\leq f_{n}\) satisfies \(\int_{a}^{b}\phi (x)dx\leq (b-a)\sup_{x\in [a,b]}f_{n}(x)<\epsilon\). Therefore, using (\ref{ma162}) and (\ref{ma163}), the sequence \(\{\int_{a}^{b}f_{n}(x)dx\}_{n=1}^{\infty}\) decreases to zero, and the proof is complete. \(\blacksquare\)
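A concrete instance of Theorem \ref{ma159} (the particular sequence is an illustrative assumption): \(f_{n}(x)=x^{n}(1-x)\) is nonnegative, continuous, decreasing in \(n\) for each fixed \(x\), and converges pointwise to \(0\) on \([0,1]\), and its integrals decrease to \(0\):

```python
# Illustrative sequence: f_n(x) = x^n (1 - x) on [0, 1].  It is nonnegative,
# continuous, decreasing in n (x^n decreases and 1 - x >= 0), and converges
# pointwise to 0, including at x = 1 where every f_n(1) = 0.
# Exact integrals: int_0^1 x^n (1 - x) dx = 1/(n+1) - 1/(n+2).
ints = [1.0 / (n + 1) - 1.0 / (n + 2) for n in (1, 10, 100)]
```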

We are going to present a more general theorem without needing the uniform convergence, called Arzel\'{a}’s theorem, which can include the type shown in Example \ref{maex181}.

\begin{equation}{\label{ma160}}\tag{28}\mbox{}\end{equation}
Theorem \ref{ma160}. (Arzel\'{a}’s Theorem). Let \(\{f_{n}\}_{n=1}^{\infty}\) be a pointwise convergent sequence on \([a,b]\). Suppose that \(\{f_{n}\}_{n=1}^{\infty}\) is uniformly bounded on \([a,b]\), that each \(f_{n}\) is continuous on \([a,b]\), and that the limit function \(f\) is also continuous
on \([a,b]\). Then, we have
\begin{align*} \lim_{n\rightarrow\infty}\int_{a}^{b}f_{n}(x)dx & =\int_{a}^{b}f(x)dx\\ & =\int_{a}^{b}\left [\lim_{n\rightarrow\infty}f_{n}(x)\right ]dx.\end{align*}

Proof. The continuity of the functions says that they are Riemann-integrable on \([a,b]\). Let \(g_{n}=|f_{n}-f|\) for all \(n\). Then, the sequence \(\{g_{n}\}_{n=1}^{\infty}\) converges pointwise to \(0\). Let
\[h_{n}(x)=\sup_{k\geq n}g_{k}(x)\mbox{ for }x\in [a,b].\]
Then, we have
\begin{align*} \lim_{n\rightarrow\infty}h_{n}(x) & =\lim_{n\rightarrow\infty}\sup_{k\geq n}g_{k}(x)=\inf_{n\geq 1}\sup_{k\geq n}g_{k}(x)
\\ & =\limsup_{n\rightarrow\infty}g_{n}(x)=\lim_{n\rightarrow\infty}g_{n}(x)=0,\end{align*}
which says that the sequence \(\{h_{n}\}_{n=1}^{\infty}\) is decreasing and converges pointwise to \(0\). Since each \(h_{n}\) is continuous on \([a,b]\), using the
Luxemburg monotone convergence Theorem \ref{ma159}, we have
\[\lim_{n\rightarrow\infty}\int_{a}^{b}h_{n}(x)dx=0.\]
Since \(0\leq g_{n}\leq h_{n}\) for all \(n\), we have
\begin{align*} 0 & \leq\lim_{n\rightarrow\infty}\int_{a}^{b}g_{n}(x)dx\\ & \leq\lim_{n\rightarrow\infty}\int_{a}^{b}h_{n}(x)dx=0,\end{align*}
which implies
\[\lim_{n\rightarrow\infty}\int_{a}^{b}g_{n}(x)dx=0.\]
Now, we have
\begin{align*} 0 & \leq\left |\int_{a}^{b}\left [f_{n}(x)-f(x)\right ]dx\right |\\ & \leq\int_{a}^{b}\left |f_{n}(x)-f(x)\right |dx=\int_{a}^{b}g_{n}(x)dx,\end{align*}
which implies
\[\lim_{n\rightarrow\infty}\left |\int_{a}^{b}\left [f_{n}(x)-f(x)\right ]dx\right |=0;\]
that is,
\[\lim_{n\rightarrow\infty}\int_{a}^{b}\left [f_{n}(x)-f(x)\right ]dx=0.\]
This completes the proof. \(\blacksquare\)

Next, we study the uniform convergence and differentiation.

\begin{equation}{\label{mat278}}\tag{29}\mbox{}\end{equation}
Theorem \ref{mat278}. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions such that each \(f_{n}\) has a finite derivative at each point of an open interval \((a,b)\). Suppose that there exists a point \(x_{0}\in (a,b)\) such that the sequence \(\{f_{n}(x_{0})\}_{n=1}^{\infty}\) of real numbers is convergent, and that there exists a function \(g\) satisfying \(f'_{n}\rightarrow g\) uniformly on \((a,b)\). Then, we have the following results.

(i) There exists a function \(f\) such that \(f_{n}\rightarrow f\) uniformly on \((a,b)\).

(ii) For each \(x\in (a,b)\), the derivative \(f'(x)\) exists and equals \(g(x)\).

Proof. Given any fixed \(c\in (a,b)\), we define a new sequence \(\{g_{n}\}_{n=1}^{\infty}\) given by
\begin{equation}{\label{ma124}}\tag{30}
g_{n}(x)=\left\{\begin{array}{ll}
{\displaystyle \frac{f_{n}(x)-f_{n}(c)}{x-c}} & \mbox{if \(x\neq c\)}\\
f'_{n}(c) & \mbox{if \(x=c\)},
\end{array}\right .
\end{equation}
where the sequence \(\{g_{n}\}_{n=1}^{\infty}\) depends on \(c\). Since \(f'_{n}\rightarrow g\) uniformly on \((a,b)\) and \(g_{n}(c)=f'_{n}(c)\), it follows that the sequence \(\{g_{n}(c)\}_{n=1}^{\infty}\) of real numbers is convergent. Let \(h=f_{n}-f_{m}\). For \(x\neq c\), we have
\begin{align*} g_{n}(x)-g_{m}(x) & =\frac{f_{n}(x)-f_{n}(c)-f_{m}(x)+f_{m}(c)}{x-c}\\ & =\frac{h(x)-h(c)}{x-c}.\end{align*}
Since \(h'(x)=f'_{n}(x)-f'_{m}(x)\) exists for each \(x\in (a,b)\), using the mean-value theorem for differentiation, we obtain
\begin{align*} g_{n}(x)-g_{m}(x) & =h'(x^{*})\\ & =f'_{n}(x^{*})-f'_{m}(x^{*})\end{align*}
for some \(x^{*}\) between \(x\) and \(c\); in particular, \(x^{*}\in (a,b)\). Since \(\{f'_{n}\}_{n=1}^{\infty}\) is uniformly convergent on \((a,b)\), using the Cauchy condition in Theorem~\ref{mat273}, it follows that the sequence \(\{g_{n}\}_{n=1}^{\infty}\) also satisfies the Cauchy condition on \((a,b)\), which shows that \(\{g_{n}\}_{n=1}^{\infty}\) is uniformly convergent on \((a,b)\). Next, we prove part (i). Since the sequence \(\{f_{n}(x_{0})\}_{n=1}^{\infty}\) of real numbers is convergent, we take \(c=x_{0}\). Then, using (\ref{ma124}), we have
\[f_{n}(x)=f_{n}(x_{0})+(x-x_{0})g_{n}(x)\mbox{ for }x\in (a,b),\]
which implies
\[f_{n}(x)-f_{m}(x)=f_{n}(x_{0})-f_{m}(x_{0})+(x-x_{0})\left [g_{n}(x)-g_{m}(x)\right ].\]
Since \(\{g_{n}\}_{n=1}^{\infty}\) is uniformly convergent on \((a,b)\), using the Cauchy condition in Theorem \ref{mat273}, it follows that the sequence
$\{f_{n}\}_{n=1}^{\infty}$ also satisfies the Cauchy condition on \((a,b)\), which shows that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) is uniformly convergent on \((a,b)\).

To prove part (ii), since \(\{g_{n}\}_{n=1}^{\infty}\) is uniformly convergent on \((a,b)\), we write
\[G(x)=\lim_{n\rightarrow\infty}g_{n}(x).\]
Since each derivative \(f'_{n}(c)\) exists, it follows that
\[\lim_{x\rightarrow c}g_{n}(x)=f'_{n}(c)=g_{n}(c),\]
which says that each \(g_{n}\) is continuous at \(c\). Since \(\{g_{n}\}_{n=1}^{\infty}\) converges uniformly to \(G\) on \((a,b)\), it follows that \(G\) is also continuous at \(c\). For \(x\neq c\), we have
\begin{align*} G(x) & =\lim_{n\rightarrow\infty}g_{n}(x)=\lim_{n\rightarrow\infty}\frac{f_{n}(x)-f_{n}(c)}{x-c}\\ & =\frac{f(x)-f(c)}{x-c}.\end{align*}
Since \(G\) is continuous at \(c\), it follows
\[G(c)=\lim_{x\rightarrow c}G(x)=\lim_{x\rightarrow c}\frac{f(x)-f(c)}{x-c}=f'(c).\]
We also have
\[G(c)=\lim_{n\rightarrow\infty}g_{n}(c)=\lim_{n\rightarrow\infty}f'_{n}(c)=g(c).\]
Therefore, we conclude that \(f'(c)=g(c)\). Since \(c\) can be any number in the open interval \((a,b)\), the proof is complete. \(\blacksquare\)
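Theorem \ref{mat278} can be sketched numerically with the illustrative sequence \(f_{n}(x)=x+\sin (nx)/n^{2}\): here \(f'_{n}(x)=1+\cos (nx)/n\rightarrow g(x)=1\) uniformly and \(f_{n}(0)\) converges, so the theorem gives \(f_{n}\rightarrow f(x)=x\) uniformly with \(f'=g\). The sample grid on \([0,1]\) is an arbitrary choice:

```python
import math

def fprime_n(n, x):
    # Derivative of the illustrative f_n(x) = x + sin(n x)/n^2
    return 1.0 + math.cos(n * x) / n

xs = [i / 100 for i in range(101)]  # sample grid inside the interval

# f_n' -> g = 1 uniformly (gap 1/n); f_n -> f(x) = x uniformly (gap <= 1/n^2)
gap_deriv = [max(abs(fprime_n(n, x) - 1.0) for x in xs) for n in (1, 10, 100)]
gap_func = [
    max(abs((x + math.sin(n * x) / n**2) - x) for x in xs) for n in (1, 10, 100)
]
```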

The case of infinite series is also given below.

Theorem. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of functions such that each \(f_{n}\) has a finite derivative at each point of an open interval \((a,b)\). Suppose that there exists a point \(x_{0}\in (a,b)\) such that the infinite series \(\sum_{k=1}^{\infty}f_{k}(x_{0})\) is convergent, and that there exists a function \(g\) such that \(\sum_{k=1}^{\infty}f'_{k}\) converges uniformly to \(g\) on \((a,b)\). Then, we have the following results.

(i) There exists a function \(f\) such that the infinite series \(\sum_{k=1}^{\infty}f_{k}\) converges uniformly to \(f\) on \((a,b)\).

(ii) For each \(x\in (a,b)\), the derivative \(f'(x)\) exists and equals \(\sum_{k=1}^{\infty}f'_{k}(x)\).

Proof. The results follow from Theorem~\ref{mat278} by considering the partial sums. \(\blacksquare\)

Given a sequence of functions, we can consider pointwise convergence and uniform convergence. There is another type of convergence, based on the Riemann integral, called mean square convergence.

Definition. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of Riemann-integrable functions defined on the compact interval \([a,b]\). The sequence \(\{f_{n}\}_{n=1}^{\infty}\) is said to converge in the mean square to \(f\) on \([a,b]\) when \(f\) is Riemann-integrable on \([a,b]\) and
\[\lim_{n\rightarrow\infty}\int_{a}^{b}\left [f_{n}(x)-f(x)\right ]^{2}dx=0.\]

Proposition. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of Riemann-integrable functions defined on \([a,b]\). Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges uniformly to \(f\) on \([a,b]\). Then, the sequence \(\{f_{n}\}_{n=1}^{\infty}\) also converges in the mean square to \(f\) on \([a,b]\).

Proof. Given any \(\epsilon >0\), the uniform convergence says that there exists an integer \(N\) such that
\[n>N\mbox{ implies }|f_{n}(x)-f(x)|<\sqrt{\frac{\epsilon}{2(b-a)}}\mbox{ for all }x\in [a,b].\]
Therefore, we have
\[\int_{a}^{b}\left [f_{n}(x)-f(x)\right ]^{2}dx\leq\int_{a}^{b}\frac{\epsilon}{2(b-a)}dx=\frac{\epsilon}{2}<\epsilon.\]
This completes the proof. \(\blacksquare\)
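The converse of the proposition fails; as a numerical sketch with the illustrative sequence \(f_{n}(x)=x^{n}\) on \([0,1]\), the mean square deviation from the Riemann-integrable function \(f\equiv 0\) is \(1/(2n+1)\rightarrow 0\), while the deviation near \(x=1\) stays close to \(1/e\), so the convergence is not uniform:

```python
# Illustrative sequence f_n(x) = x^n on [0, 1], with f the zero function:
# int_0^1 (f_n - f)^2 dx = int_0^1 x^(2n) dx = 1/(2n+1) -> 0, so f_n -> f
# in mean square; yet f_n(1 - 1/n) = (1 - 1/n)^n stays near 1/e, so
# sup |f_n - f| does not tend to 0 and the convergence is not uniform.
mean_square = [1.0 / (2 * n + 1) for n in (1, 10, 100)]
sup_dev = [(1.0 - 1.0 / n) ** n for n in (10, 100, 1000)]
```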

\begin{equation}{\label{mat280}}\tag{31}\mbox{}\end{equation}
Theorem \ref{mat280}. (Cauchy-Schwarz Inequality). Let \(\alpha\) be increasing on \([a,b]\). Suppose that \(f\in R(\alpha )\), \(f^{2}\in R(\alpha )\), \(g\in R(\alpha )\) and \(g^{2}\in R(\alpha )\) on \([a,b]\). Then, we have
\[\left (\int_{a}^{b}f(x)g(x)d\alpha (x)\right )^{2}\leq\left (\int_{a}^{b}f^{2}(x)d\alpha (x)\right )\left (\int_{a}^{b}g^{2}(x)d\alpha (x)\right ).\]

\begin{equation}{\label{map283}}\tag{32}\mbox{}\end{equation}
Proposition \ref{map283}. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a sequence of Riemann-integrable functions defined on \([a,b]\). Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges in the mean square to \(f\) on \([a,b]\), and that \(g\) is Riemann-integrable on \([a,b]\). For \(x\in [a,b]\), we define
\[h(x)=\int_{a}^{x}f(t)g(t)dt\mbox{ and }h_{n}(x)=\int_{a}^{x}f_{n}(t)g(t)dt.\]
Then, the sequence \(\{h_{n}\}_{n=1}^{\infty}\) converges uniformly to \(h\) on \([a,b]\).

Proof. Define
\[A=1+\int_{a}^{b}g^{2}(t)dt>0.\]
By the definition of mean square convergence, given any \(\epsilon >0\), there exists an integer \(N\) such that
\[n\geq N\mbox{ implies }\int_{a}^{b}\left [f(t)-f_{n}(t)\right ]^{2}dt<\frac{\epsilon^{2}}{A}.\]
Using the Cauchy-Schwarz inequality in Theorem \ref{mat280}, we have
\begin{align*}
0 & \leq\left (\int_{a}^{x}\left |f(t)-f_{n}(t)\right |\cdot\left |g(t)\right |dt\right )^{2}\\ & \leq\left (\int_{a}^{x}\left [f(t)-f_{n}(t)\right ]^{2}dt\right )
\left (\int_{a}^{x}g^{2}(t)dt\right )\\
& <\frac{\epsilon^{2}}{A}\cdot (A-1)<\epsilon^{2},
\end{align*}
which says that
\[n\geq N\mbox{ implies }0\leq\left |h(x)-h_{n}(x)\right |\leq\int_{a}^{x}\left |f(t)-f_{n}(t)
\right |\cdot\left |g(t)\right |dt<\epsilon\mbox{ for every }x\in [a,b].\]
This completes the proof. \(\blacksquare\)

A more general result is given below.

Theorem. Let \(\{f_{n}\}_{n=1}^{\infty}\) and \(\{g_{n}\}_{n=1}^{\infty}\) be two sequences of Riemann-integrable functions defined on \([a,b]\). Suppose that the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges in the mean square to \(f\) on \([a,b]\), and that the sequence \(\{g_{n}\}_{n=1}^{\infty}\) converges in the mean square to \(g\) on \([a,b]\). For \(x\in [a,b]\), we define
\[h(x)=\int_{a}^{x}f(t)g(t)dt\mbox{ and }h_{n}(x)=\int_{a}^{x}f_{n}(t)g_{n}(t)dt.\]
Then, the sequence \(\{h_{n}\}_{n=1}^{\infty}\) converges uniformly to \(h\) on \([a,b]\).

Proof. We have
\begin{align*}
h_{n}(x)-h(x) & =\int_{a}^{x}\left [f(t)-f_{n}(t)\right ]\left [g(t)-g_{n}(t)\right ]dt\\ & \quad +\left (\int_{a}^{x}f_{n}(t)g(t)dt-\int_{a}^{x}f(t)g(t)dt\right )\\ & \quad +\left (\int_{a}^{x}g_{n}(t)f(t)dt-\int_{a}^{x}f(t)g(t)dt\right ).\end{align*}
From Proposition \ref{map283}, we have
\[\int_{a}^{x}f_{n}(t)g(t)dt\rightarrow\int_{a}^{x}f(t)g(t)dt\mbox{ uniformly on }[a,b]\]
and
\[\int_{a}^{x}f(t)g_{n}(t)dt\rightarrow\int_{a}^{x}f(t)g(t)dt\mbox{ uniformly on }[a,b].\]
Given any \(\epsilon >0\), there exists a common integer \(N_{1}\) such that \(n\geq N_{1}\) implies
\[\left |\int_{a}^{x}f_{n}(t)g(t)dt-\int_{a}^{x}f(t)g(t)dt\right |<\frac{\epsilon}{3}\] and
\[\left |\int_{a}^{x}f(t)g_{n}(t)dt-\int_{a}^{x}f(t)g(t)dt\right |<\frac{\epsilon}{3}\]
for every \(x\in [a,b]\). Using the Cauchy-Schwarz inequality in Theorem \ref{mat280}, we obtain
\begin{align*}
0 & \leq\left (\int_{a}^{x}\left |f(t)-f_{n}(t)\right |\cdot\left |g(t)-g_{n}(t)\right |dt\right )^{2}\\
& \leq\left (\int_{a}^{x}\left [f(t)-f_{n}(t)\right ]^{2}dt\right )\left (\int_{a}^{x}\left [g(t)-g_{n}(t)\right ]^{2}dt\right ).
\end{align*}
Since the sequence \(\{f_{n}\}_{n=1}^{\infty}\) converges in the mean square to \(f\) on \([a,b]\) and the sequence \(\{g_{n}\}_{n=1}^{\infty}\) converges in the mean square to \(g\) on \([a,b]\), given any \(\epsilon >0\), there exists a common integer \(N_{2}\) such that \(n\geq N_{2}\) implies
\[\int_{a}^{b}\left [f(t)-f_{n}(t)\right ]^{2}dt<\frac{\epsilon}{3}\] and
\[\int_{a}^{b}\left [g(t)-g_{n}(t)\right ]^{2}dt<\frac{\epsilon}{3}.\]
Therefore, we obtain
\[\int_{a}^{x}\left |f(t)-f_{n}(t)\right |\cdot\left |g(t)-g_{n}(t)\right |dt<\frac{\epsilon}{3}\mbox{ for every }x\in [a,b].\]
Let \(N=\max\{N_{1},N_{2}\}\). Then \(n\geq N\) implies
\begin{align*} 0 & \leq\left |h(x)-h_{n}(x)\right |\\ & <\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}\\ & =\epsilon\mbox{ for every }x\in [a,b].\end{align*}
This completes the proof. \(\blacksquare\)

\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}

Power Series.

We consider the infinite series of the following form
\[a_{0}+\sum_{k=1}^{\infty}a_{k}(x-x_{0})^{k},\]
which can be rewritten as
\begin{equation}{\label{maeq184}}\tag{33}
\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}
\end{equation}
and is called a power series in \(x-x_{0}\). For infinite series in the form of (\ref{maeq184}), there is an interval of convergence such that the power series converges absolutely for every \(x\) in the interval and diverges for every \(x\) outside the interval. The center of the interval is at \(x_{0}\) and its radius is called the radius of convergence of the power series. We denote by \(I(x_{0};r)\) the interval of convergence with center \(x_{0}\) and radius \(r\).
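The radius of convergence can be estimated numerically through the \(n\)-th roots \(\sqrt[n]{|a_{n}|}\) appearing in the root test; the two coefficient sequences below are illustrative assumptions:

```python
import math

def nth_roots(a, n_max):
    # Sequence of n-th roots |a_n|^(1/n) used in the root test
    return [abs(a(n)) ** (1.0 / n) for n in range(1, n_max + 1)]

# a_n = 2^n: the roots are all 2, so lambda = 2 and r = 1/2.
roots_geo = nth_roots(lambda n: 2.0 ** n, 50)

# a_n = 1/n!: the roots tend to 0, so lambda = 0 and r = +infinity.
roots_fact = nth_roots(lambda n: 1.0 / math.factorial(n), 50)
```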

\begin{equation}{\label{mat168}}\tag{34}\mbox{}\end{equation}
Lemma \ref{mat168}. (Root Test). Given a series \(\sum_{k=1}^{\infty}a_{k}\), we define
\[\rho =\limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}|}.\]
Then, we have the following results.

(i) Suppose that \(\rho <1\). Then, the series \(\sum_{k=1}^{\infty}a_{k}\) is absolutely convergent.

(ii) Suppose that \(\rho >1\). Then, the series \(\sum_{k=1}^{\infty}a_{k}\) is divergent.

(iii) Suppose that \(\rho =1\). Then, the test is inconclusive. \(\sharp\)

For the above Lemma \ref{mat168}, refer to the page Infinite Series of Real Numbers.

\begin{equation}{\label{mat285}}\tag{35}\mbox{}\end{equation}
Theorem \ref{mat285}. Given a power series \(\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}\), let
\[\lambda =\limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}|}\mbox{ and }r=\frac{1}{\lambda},\]
where \(r=0\) if \(\lambda =+\infty\) and \(r=+\infty\) if \(\lambda =0\).

  • Suppose that \(|x-x_{0}|<r\). Then, the power series is absolutely convergent.
  • Suppose that \(|x-x_{0}|>r\). Then, the power series is divergent.

Moreover, the power series is uniformly convergent on every closed and bounded subset of the interval \(I(x_{0};r)\) of convergence.

Proof. We have
\begin{align*} \limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}(x-x_{0})^{n}|} & =\limsup_{n\rightarrow\infty}
\sqrt[n]{|a_{n}|}\cdot\sqrt[n]{|x-x_{0}|^{n}}\\ & =\frac{|x-x_{0}|}{r}.\end{align*}
Applying the root test given in Lemma \ref{mat168}, we see that the power series \(\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}\) is absolutely convergent when
\[\frac{|x-x_{0}|}{r}=\limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}(x-x_{0})^{n}|}<1,\]
i.e., \(|x-x_{0}|<r\). Also, the power series is divergent when
\[\frac{|x-x_{0}|}{r}=\limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}(x-x_{0})^{n}|}>1,\]
i.e., \(|x-x_{0}|>r\). Let \(T\) be a closed and bounded subset of \(I(x_{0};r)\), i.e., a compact subset of \(I(x_{0};r)\). Since the function \(f(x)=|x-x_{0}|\) is continuous on the compact set \(T\), it follows that the supremum \(\sup_{x\in T}|x-x_{0}|\) is attained; that is to say, there exists \(x^{*}\in T\) satisfying
\[\left |x-x_{0}\right |\leq\left |x^{*}-x_{0}\right |<r\mbox{ for any }x\in T,\]
which says
\[\left |a_{n}(x-x_{0})^{n}\right |\leq\left |a_{n}(x^{*}-x_{0})^{n}\right |\]
for each \(x\in T\). Since \(|x^{*}-x_{0}|<r\), the previous result says that the series \(\sum_{n=1}^{\infty}a_{n}(x^{*}-x_{0})^{n}\) is absolutely convergent, i.e., the series \(\sum_{n=1}^{\infty}|a_{n}(x^{*}-x_{0})^{n}|\) is convergent. Using the Weierstrass M-test given in Theorem~\ref{mat284}, we conclude that the power series \(\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}\) is uniformly convergent on \(T\). This completes the proof. \(\blacksquare\)

Example. The two series

\[\sum_{k=0}^{\infty}x^{k}\mbox{ and }\sum_{k=1}^{\infty}\frac{x^{k}}{k^{2}}\]
have the same radius of convergence with \(r=1\). On the boundary of the interval of convergence, i.e., \(x=\pm 1\), it is clear that the first power series is not convergent at \(x=\pm 1\). However, the second power series is absolutely convergent at \(x=\pm 1\), since
\[\sum_{k=1}^{\infty}\left |\frac{x^{k}}{k^{2}}\right |=\sum_{k=1}^{\infty}\frac{1}{k^{2}}\]
for \(x=\pm 1\). \(\sharp\)

\begin{equation}{\label{ma4}}\tag{36}\mbox{}\end{equation}
Proposition \ref{ma4}. The function defined by
\begin{equation}{\label{maeq185}}\tag{37}
f(x)=\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}
\end{equation}
is continuous on its interval \(I(x_{0};r)\) of convergence.

Proof. Given any point \(y\in I(x_{0};r)\), there exists a closed and bounded subset \(T\) of \(I(x_{0};r)\) satisfying \(y\in T\). From Theorem~\ref{mat285}, the power series in (\ref{maeq185}) is uniformly convergent on \(T\). Since the functions \(f_{k}(x)=a_{k}(x-x_{0})^{k}\) are continuous on \(T\) for all \(k\), Theorem \ref{map286} says that the function \(f\) defined in (\ref{maeq185}) is also continuous at \(y\). This completes the proof. \(\blacksquare\)

The series in (\ref{maeq185}) is called a power series expansion of \(f\) about \(x_{0}\). Therefore, functions having power series expansions are continuous inside the interval of convergence.

\begin{equation}{\label{map192}}\tag{38}\mbox{}\end{equation}
Proposition \ref{map192}. We define a function
\[f(x)=\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}\]
on some open subset \(S\) of \(I(x_{0};r)\). Then, for each \(x^{*}\in S\), there exists \(I(x^{*};r^{*})\subseteq S\) such that \(f\) has a power series expansion of the following form
\begin{equation}{\label{maeq290}}\tag{39}
f(x)=\sum_{k=0}^{\infty}b_{k}(x-x^{*})^{k},
\end{equation}
where
\begin{equation}{\label{maeq289}}\tag{40}
b_{k}=\sum_{n=k}^{\infty}\left (\begin{array}{c}n\\ k\end{array}\right )a_{n}(x^{*}-x_{0})^{n-k}.
\end{equation}
Proof. Given any \(x\in S\), we have
\begin{align}
f(x) & =\sum_{n=0}^{\infty}a_{n}(x-x_{0})^{n}=\sum_{n=0}^{\infty}a_{n}(x-x^{*}+x^{*}-x_{0})^{n}\nonumber\\
& = \sum_{n=0}^{\infty}\left [a_{n}\sum_{k=0}^{n}\left (\begin{array}{c}n\\ k\end{array}\right )(x-x^{*})^{k}(x^{*}-x_{0})^{n-k}\right ]
=\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}c_{n}(k),\label{maeq287}\tag{41}
\end{align}
where
\[c_{n}(k)=\left\{\begin{array}{ll}
\left (\begin{array}{c}n\\ k\end{array}\right )a_{n}(x-x^{*})^{k}(x^{*}-x_{0})^{n-k},
& \mbox{if \(k\leq n\)}\\
0, & \mbox{if \(k>n\)}.
\end{array}\right .\]
Now, we take a real number \(r^{*}\) satisfying \(I(x^{*};r^{*})\subseteq S\), and assume that \(x\in I(x^{*};r^{*})\). Then, by referring to (\ref{maeq287}), we have
\begin{equation}{\label{maeq288}}\tag{42}
\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}|c_{n}(k)|=\sum_{n=0}^{\infty}|a_{n}|\left (|x-x^{*}|
+|x^{*}-x_{0}|\right )^{n}=\sum_{n=0}^{\infty}|a_{n}|\left (\hat{x}-x_{0}\right )^{n},
\end{equation}
where
\begin{equation}{\label{ma125}}\tag{43}
\hat{x}=x_{0}+\left |x-x^{*}\right |+\left |x^{*}-x_{0}\right |.
\end{equation}
Since
\[I(x^{*};r^{*})\subseteq S\subseteq I(x_{0};r),\]
it follows that \(|x^{*}-x_{0}|\leq r-r^{*}\). Using (\ref{ma125}), we obtain
\[\left |\hat{x}-x_{0}\right |=\left |x-x^{*}\right |+\left |x^{*}-x_{0}\right |<r^{*}+\left |x^{*}-x_{0}\right |\leq r,\]
which says that \(\hat{x}\in I(x_{0};r)\). Therefore, the power series \(\sum_{n=0}^{\infty}|a_{n}|(\hat{x}-x_{0})^{n}\) is convergent. From (\ref{maeq288}), we see that the power series \(\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}c_{n}(k)\) is absolutely convergent. Therefore, by Theorem \ref{mat270}, we can interchange the order of summation to obtain
\begin{align*}
f(x) & =\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}c_{n}(k)=\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}c_{n}(k)\\
& =\sum_{k=0}^{\infty}\sum_{n=k}^{\infty}\left (\begin{array}{c}n\\ k\end{array}\right )a_{n}(x-x^{*})^{k}(x^{*}-x_{0})^{n-k}=\sum_{k=0}^{\infty}b_{k}(x-x^{*})^{k},
\end{align*}
where \(b_{k}\) is given in (\ref{maeq289}). This completes the proof. \(\blacksquare\)
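Proposition \ref{map192} can be checked numerically for the illustrative series \(f(x)=1/(1-x)=\sum_{k=0}^{\infty}x^{k}\) (so \(a_{n}=1\) and \(x_{0}=0\)) recentred at \(x^{*}=1/2\); the formula (\ref{maeq289}) should reproduce the known expansion \(1/(1-x)=\sum_{k=0}^{\infty}2^{k+1}(x-1/2)^{k}\):

```python
from math import comb

def b_k(k, x_star, a, n_max=200):
    # Truncation of (40): b_k = sum_{n >= k} C(n, k) a_n (x* - x0)^(n - k)
    return sum(comb(n, k) * a(n) * x_star ** (n - k) for n in range(k, n_max + 1))

# For a_n = 1 and x* = 1/2, the recentred coefficients should be b_k = 2^(k+1).
coeffs = [b_k(k, 0.5, lambda n: 1.0) for k in range(5)]
```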

Theorem. The function defined by
\begin{equation}{\label{maeq186}}\tag{44}
f(x)=\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}
\end{equation}
on \(I(x_{0};r)\) has a derivative \(f'(x)\) for each \(x\in \mbox{int}(I(x_{0};r))\), which is given by
\begin{equation}{\label{maeq187}}\tag{45}
f'(x)=\sum_{k=1}^{\infty}ka_{k}(x-x_{0})^{k-1}.
\end{equation}
Moreover, the power series in (\ref{maeq186}) and (\ref{maeq187}) have the same radius of convergence.

Proof. We are going to use Proposition \ref{map192} by taking \(S=\mbox{int}(I(x_{0};r))\). Given any \(x^{*}\in\mbox{int}(I(x_{0};r))\), we can expand \(f\) in a power series about \(x^{*}\) as shown in (\ref{maeq290}). For \(x\in I(x^{*};r^{*})\) with \(x\neq x^{*}\), from (\ref{maeq290}), since \(b_{0}=f(x^{*})\), we have
\[f(x)=f(x^{*})+b_{1}(x-x^{*})+\sum_{k=2}^{\infty}b_{k}(x-x^{*})^{k},\]
which implies
\[\frac{f(x)-f(x^{*})}{x-x^{*}}=b_{1}+\sum_{k=1}^{\infty}b_{k+1}(x-x^{*})^{k}\equiv g(x),\]
where \(g\) is continuous by Proposition \ref{ma4}. Therefore, we obtain
\[\lim_{x\rightarrow x^{*}}\left [b_{1}+\sum_{k=1}^{\infty}b_{k+1}(x-x^{*})^{k}\right ]=\lim_{x\rightarrow x^{*}}g(x)=g(x^{*})=b_{1},\]
which implies
\begin{align*} f'(x^{*}) & =\lim_{x\rightarrow x^{*}}\frac{f(x)-f(x^{*})}{x-x^{*}}\\ & =\lim_{x\rightarrow x^{*}}
\left [b_{1}+\sum_{k=1}^{\infty}b_{k+1}(x-x^{*})^{k}\right ]=b_{1}.\end{align*}
Using (\ref{maeq289}), we also have
\[f'(x^{*})=b_{1}=\sum_{n=1}^{\infty}na_{n}(x^{*}-x_{0})^{n-1}.\]
Since \(x^{*}\) is an arbitrary point of \(\mbox{int}(I(x_{0};r))\), this proves (\ref{maeq187}). Finally, since \(\sqrt[n]{n}\rightarrow 1\) as \(n\rightarrow\infty\), we have
\[\limsup_{n\rightarrow\infty}\sqrt[n]{n|a_{n}|}
=\limsup_{n\rightarrow\infty}\left (\sqrt[n]{n}\cdot\sqrt[n]{|a_{n}|}\right )
=\limsup_{n\rightarrow\infty}\sqrt[n]{|a_{n}|},\]
which says that the two series have the same radius of convergence. This completes the proof. \(\blacksquare\)
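Term-by-term differentiation can be sketched numerically for the illustrative geometric series \(\sum_{k=0}^{\infty}x^{k}=1/(1-x)\), whose derivative series \(\sum_{k=1}^{\infty}kx^{k-1}\) should sum to \(1/(1-x)^{2}\) inside the interval of convergence; the evaluation point and truncation level are arbitrary choices:

```python
x = 0.3  # a point inside the interval of convergence |x| < 1
N = 200  # truncation level; the tail is of order x^N and is negligible here

f_val = sum(x ** k for k in range(N + 1))                # sum x^k = 1/(1 - x)
fp_val = sum(k * x ** (k - 1) for k in range(1, N + 1))  # term-by-term derivative
```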

By repeated application of (\ref{maeq187}), we have that the derivative \(f^{(k)}(x)\) exists on \(\mbox{int}(I(x_{0};r))\) for all \(k\), and the power series is given by
\begin{equation}{\label{maeq291}}\tag{46}
f^{(k)}(x)=\sum_{n=k}^{\infty}\frac{n!}{(n-k)!}a_{n}(x-x_{0})^{n-k}.
\end{equation}
By taking \(x=x_{0}\) in (\ref{maeq291}), we obtain the following formula
\begin{equation}{\label{maeq292}}\tag{47}
f^{(k)}(x_{0})=k!a_{k}.
\end{equation}
The equation (\ref{maeq292}) says that if two power series
\[\sum_{k=0}^{\infty}a_{k}(x-x_{0})^{k}\mbox{ and }\sum_{k=0}^{\infty}b_{k}(x-x_{0})^{k}\]
both represent the same function in \(I(x_{0};r)\), then
\[a_{k}=b_{k}=\frac{f^{(k)}(x_{0})}{k!}\mbox{ for all }k.\]
In other words, the power series expansion of a function \(f\) about a given point \(x_{0}\) is uniquely determined and is given by
\[f(x)=\sum_{k=0}^{\infty}\frac{f^{(k)}(x_{0})}{k!}(x-x_{0})^{k}.\]
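As a quick numerical sanity check (a Python sketch of our own, not part of the argument), formula (\ref{maeq291}) can be compared against the closed form for the geometric series \(f(x)=1/(1-x)=\sum_{n=0}^{\infty}x^{n}\), whose \(k\)-th derivative is \(k!/(1-x)^{k+1}\):

```python
import math

def fk_closed(x, k):
    # k-th derivative of f(x) = 1/(1-x), namely k!/(1-x)^(k+1)
    return math.factorial(k) / (1 - x) ** (k + 1)

def fk_series(x, k, terms=200):
    # term-by-term differentiated series: sum_{n>=k} n!/(n-k)! * x^(n-k)
    return sum(math.factorial(n) // math.factorial(n - k) * x ** (n - k)
               for n in range(k, k + terms))

x = 0.3
for k in range(5):
    assert abs(fk_closed(x, k) - fk_series(x, k)) < 1e-9
```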

\begin{equation}{\label{mat293}}\tag{48}\mbox{}\end{equation}
Lemma \ref{mat293}. Suppose that the series \(\sum_{k=0}^{\infty}a_{k}\) is absolutely convergent with sum \(A\), and that the series \(\sum_{k=0}^{\infty}b_{k}\) is convergent with sum \(B\). Then, the Cauchy product of the two series \(\sum_{k=0}^{\infty}a_{k}\) and \(\sum_{k=0}^{\infty}b_{k}\) is convergent with sum \(A\cdot B\). \(\sharp\)

For a proof of Lemma \ref{mat293}, see the page Infinite Series of Real Numbers.

\begin{equation}{\label{maeq303}}\tag{49}\mbox{}\end{equation}
Proposition \ref{maeq303}. Given two power series expansions about the origin
\[f(x)=\sum_{k=0}^{\infty}a_{k}x^{k}\mbox{ for }x\in I(0;r_{1})\]
and
\[g(x)=\sum_{k=0}^{\infty}b_{k}x^{k}\mbox{ for }x\in I(0;r_{2}),\]
the product \(fg\) is given by the power series
\[f(x)g(x)=\sum_{k=0}^{\infty}c_{k}x^{k}\mbox{ for }x\in I(0;r_{1})\cap I(0;r_{2}),\]
where
\[c_{k}=\sum_{n=0}^{k}a_{n}b_{k-n}\mbox{ for }k=0,1,2,\cdots .\]

Proof. The Cauchy product of the two given series is
\[\sum_{n=0}^{\infty}\left (\sum_{k=0}^{n}a_{k}x^{k}b_{n-k}x^{n-k}\right )=\sum_{n=0}^{\infty}c_{n}x^{n}.\]
Therefore, the results follow immediately from Lemma \ref{mat293}. \(\blacksquare\)
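As an illustration (a numerical sketch in Python; the choice \(f(x)=1/(1-x)\) and \(g(x)=e^{x}\) is ours, not from the text), the convolution formula for \(c_{k}\) reproduces the product \(f(x)g(x)=e^{x}/(1-x)\):

```python
import math

def cauchy_coeffs(a, b):
    # c_k = sum_{n=0}^{k} a_n * b_{k-n}
    return [sum(a[n] * b[k - n] for n in range(k + 1))
            for k in range(min(len(a), len(b)))]

N = 60
a = [1.0] * N                                      # f(x) = 1/(1-x)
b = [1.0 / math.factorial(k) for k in range(N)]    # g(x) = e^x
c = cauchy_coeffs(a, b)

x = 0.4
approx = sum(ck * x ** k for k, ck in enumerate(c))
assert abs(approx - math.exp(x) / (1 - x)) < 1e-10
```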

Suppose that the two power series are identical. Then, using Proposition \ref{maeq303}, we have
\[f^{2}(x)=\sum_{k=0}^{\infty}c_{k}x^{k},\]
where
\[c_{k}=\sum_{n=0}^{k}a_{n}a_{k-n}=\sum_{m_{1}+m_{2}=k}a_{m_{1}}a_{m_{2}}.\]
Similarly, for any positive integer \(p\), we have
\[f^{p}(x)=\sum_{k=0}^{\infty}c_{k}(p)x^{k},\]
where
\[c_{k}(p)=\sum_{m_{1}+\cdots +m_{p}=k}a_{m_{1}}\cdots a_{m_{p}}.\]
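For instance (again a hypothetical Python check of our own): with \(a_{k}=1\), i.e. \(f(x)=1/(1-x)\), the coefficient \(c_{k}(p)\) counts the solutions of \(m_{1}+\cdots +m_{p}=k\) in nonnegative integers, which equals \(\binom{k+p-1}{p-1}\), the known coefficient of \(x^{k}\) in \((1-x)^{-p}\):

```python
from math import comb

def convolve(a, b):
    # Cauchy-product coefficients of two (truncated) power series
    return [sum(a[n] * b[k - n] for n in range(k + 1))
            for k in range(min(len(a), len(b)))]

N, p = 20, 4
a = [1] * N             # a_k = 1, i.e. f(x) = 1/(1-x)
ck = a
for _ in range(p - 1):  # convolve p - 1 times to get the coefficients of f^p
    ck = convolve(ck, a)

assert ck == [comb(k + p - 1, p - 1) for k in range(N)]
```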

\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}

Taylor’s Series.

\begin{equation}{\label{mat187}}\tag{50}\mbox{}\end{equation}
Theorem \ref{mat187}. (Taylor’s Theorem). Given any integer \(n\in\mathbb{N}\), suppose that the real-valued function \(f\) has finite derivatives of order \(n+1\) everywhere on an open interval \((a,b)\), and that \(f^{(n)}\) is continuous on the closed interval \([a,b]\). Given \(c\in [a,b]\), for every \(x\in [a,b]\) with \(x\neq c\), we have
\[f(x)=f(c)+f'(c)(x-c)+\frac{f''(c)}{2!}(x-c)^{2}+\cdots +\frac{f^{(n)}(c)}{n!}(x-c)^{n}+R_{n}(x),\]
where the remainder term \(R_{n}(x)\) is given by
\[R_{n}(x)=\frac{f^{(n+1)}(x_{0})}{(n+1)!}(x-c)^{n+1}\]
for some \(x_{0}\) in the interior of the interval joining \(c\) and \(x\). \(\sharp\)
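A quick numerical illustration (our own Python sketch, with \(f=\exp\) and \(c=0\)): since \(f^{(n+1)}(t)=e^{t}\), the ratio \(R_{n}(x)/\bigl (x^{n+1}/(n+1)!\bigr )\) must equal \(e^{x_{0}}\) for some \(x_{0}\in (0,x)\), hence lie strictly between \(1\) and \(e^{x}\):

```python
import math

x, n = 0.8, 5
# R_n(x) = f(x) minus the Taylor polynomial of degree n at c = 0, for f = exp
Rn = math.exp(x) - sum(x ** k / math.factorial(k) for k in range(n + 1))
ratio = Rn / (x ** (n + 1) / math.factorial(n + 1))
# Lagrange form of the remainder: ratio = e^{x0} for some 0 < x0 < x
assert 1.0 < ratio < math.exp(x)
```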

With the help of Taylor’s formula given in Theorem \ref{mat187}, we can study the Taylor’s series generated by a function. Let \(f:[a,b]\rightarrow\mathbb{R}\) be a real-valued function defined on a compact interval \([a,b]\). If \(f\) has derivatives of every order at each point of \([a,b]\), we write \(f\in C^{\infty}([a,b])\).

Definition. Given \(f\in C^{\infty}([a,b])\), for \(c\in [a,b]\), the power series

\[\sum_{k=0}^{\infty}\frac{f^{(k)}(c)}{k!}(x-c)^{k}\]
is called the Taylor’s series about \(c\) generated by \(f\). \(\sharp\)

Given \(f\in C^{\infty}([a,b])\), the Taylor’s formula says that, for \(c\in [a,b]\), we have
\[f(x)=\sum_{k=0}^{n-1}\frac{f^{(k)}(c)}{k!}(x-c)^{k}+\frac{f^{(n)}(\zeta )}{n!}(x-c)^{n},\]
where \(\zeta\) is some point between \(x\) and \(c\). The point \(\zeta\) depends on \(x,c\) and \(n\). Therefore, a necessary and sufficient condition for the Taylor’s series to converge to \(f(x)\) is
\begin{equation}{\label{maeq188}}\tag{51}
\lim_{n\rightarrow\infty}\frac{f^{(n)}(\zeta )}{n!}(x-c)^{n}=0.
\end{equation}
In practice, it may be difficult to deal with this limit directly, since the position of \(\zeta\) between \(x\) and \(c\) is unknown. Instead, one usually obtains a suitable bound for \(f^{(n)}(\zeta )\) and then shows that the limit is zero. The following example is useful.

\begin{equation}{\label{ma126}}\tag{52}\mbox{}\end{equation}

Example \ref{ma126}. For each real number \(x\), find the limit
\[\lim_{n\rightarrow\infty}\frac{x^{n}}{n!}.\]
Fix any real number \(x\) and choose an integer \(k\) satisfying \(k>|x|\). For \(n>k+1\),
\begin{align*} \frac{k^{n}}{n!} & =\left (\frac{k^{k}}{k!}\right )\left [\frac{k}{k+1}
\frac{k}{k+2}\cdots\frac{k}{n-1}\right ]\left (\frac{k}{n}\right )\\ & <\left (\frac{k^{k+1}}{k!}\right )\left (\frac{1}{n}\right )
\mbox{ (the middle term is less than \(1\))}.\end{align*}
Since \(k>|x|\), we have
\[0<\frac{|x|^{n}}{n!}<\frac{k^{n}}{n!}<\left (\frac{k^{k+1}}{k!}\right )\left (\frac{1}{n}\right ).\]
Since \(k\) is fixed and \(\lim_{n\rightarrow\infty} 1/n=0\), it follows from the pinching theorem that
\[\lim_{n\rightarrow\infty}\frac{|x|^{n}}{n!}=0\mbox{ and thus }\lim_{n\rightarrow\infty}\frac{x^{n}}{n!}=0.\]
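The limit above is easy to observe numerically (a small Python check, with \(x=10\) as our own test value):

```python
import math

x = 10.0
vals = [x ** n / math.factorial(n) for n in range(0, 101, 10)]
# the terms x^n/n! grow until n is about x and then decay rapidly to 0
assert vals[-1] < 1e-50
assert vals[-1] < vals[-2] < vals[-3]
```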

Theorem. Given \(f\in C^{\infty}([a,b])\) and \(c\in [a,b]\), suppose that there is an interval \(I(c)\) containing \(c\) and a constant \(M\) satisfying
\[\left |f^{(n)}(x)\right |\leq M^{n}\mbox{ for all }x\in I(c)\cap [a,b]\mbox{ and for all }n.\]
Then, for each \(x\in I(c)\cap [a,b]\), we have
\[f(x)=\sum_{k=0}^{\infty}\frac{f^{(k)}(c)}{k!}(x-c)^{k}.\]

Proof. Since \(a^{n}/n!\rightarrow 0\) as \(n\rightarrow\infty\) for every real number \(a\) by Example \ref{ma126}, and since \(\left |f^{(n)}(\zeta )(x-c)^{n}\right |\leq\left (M|x-c|\right )^{n}\) by the assumption, the limit (\ref{maeq188}) holds by taking \(a=M|x-c|\). This completes the proof. \(\blacksquare\)
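For example, \(f=\sin\) satisfies \(\left |f^{(n)}(x)\right |\leq 1\) for all \(x\) and all \(n\) (so \(M=1\) works on every interval), and its Taylor partial sums about \(c=0\) converge everywhere; a Python sketch:

```python
import math

def sin_taylor(x, n_terms):
    # partial sum of the Taylor series of sin about c = 0
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# all derivatives of sin are bounded by M = 1, so the Taylor series
# converges to sin(x) for every real x
for x in (0.5, 3.0, 10.0):
    assert abs(sin_taylor(x, 40) - math.sin(x)) < 1e-9
```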

Theorem. Suppose that \(f\) has a continuous derivative of order \(n+1\) in some open interval \(I\) containing a point \(c\). We define a function \(E_{n}(x)\) for \(x\in I\) according to the following equation
\begin{equation}{\label{maeq297}}\tag{53}
f(x)=\sum_{k=0}^{n}\frac{f^{(k)}(c)}{k!}(x-c)^{k}+E_{n}(x).
\end{equation}
Then \(E_{n}\) is given by
\begin{equation}{\label{maeq189}}\tag{54}
E_{n}(x)=\frac{1}{n!}\int_{c}^{x}(x-t)^{n}f^{(n+1)}(t)dt.
\end{equation}

Proof. We are going to prove this theorem by induction on \(n\). For \(n=1\), from (\ref{maeq297}), we have
\begin{align*} E_{1}(x) & =f(x)-f(c)-f'(c)(x-c)\\ & =\int_{c}^{x}\left [f'(t)-f'(c)\right ]dt=\int_{c}^{x}u(t)dv(t),\end{align*}
where
\[u(t)=f'(t)-f'(c)\mbox{ and }v(t)=t-x.\]
Using integration by parts, we also have
\begin{align*} \int_{c}^{x}u(t)dv(t) & =u(x)v(x)-u(c)v(c)-\int_{c}^{x}v(t)du(t)\\ & =\int_{c}^{x}(x-t)f''(t)dt,\end{align*}
which proves (\ref{maeq189}) for \(n=1\). Now, we assume (\ref{maeq189}) is true for \(n\) and prove it for \(n+1\). From (\ref{maeq297}), we have
\begin{equation}{\label{maeq298}}\tag{55}
E_{n+1}(x)=E_{n}(x)-\frac{f^{(n+1)}(c)}{(n+1)!}(x-c)^{n+1}.
\end{equation}
Using the induction hypothesis and the fact of
\[(x-c)^{n+1}=(n+1)\int_{c}^{x}(x-t)^{n}dt,\]
the equation (\ref{maeq298}) implies
\begin{align*}
E_{n+1}(x) & =\frac{1}{n!}\int_{c}^{x}(x-t)^{n}f^{(n+1)}(t)dt-\frac{f^{(n+1)}(c)}{n!}\int_{c}^{x}(x-t)^{n}dt\\
& =\frac{1}{n!}\int_{c}^{x}(x-t)^{n}\left [f^{(n+1)}(t)-f^{(n+1)}(c)\right ]dt\\ & =\frac{1}{n!}\int_{c}^{x}u(t)dv(t),
\end{align*}
where
\[u(t)=f^{(n+1)}(t)-f^{(n+1)}(c)\mbox{ and }v(t)=-\frac{(x-t)^{n+1}}{n+1}.\]
Using integration by parts, we obtain
\begin{align*} E_{n+1}(x) & =-\frac{1}{n!}\int_{c}^{x}v(t)du(t)\\ & =\frac{1}{(n+1)!}\int_{c}^{x}(x-t)^{n+1}f^{(n+2)}(t)dt.\end{align*}
This completes the proof. \(\blacksquare\)
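The integral form (\ref{maeq189}) can be checked numerically (a Python sketch using the midpoint rule; the test case \(f=\exp\), \(c=0\) is our own choice):

```python
import math

def E_n(x, n, c=0.0, steps=20000):
    # E_n(x) = (1/n!) * integral_c^x (x-t)^n f^{(n+1)}(t) dt for f = exp,
    # approximated by the midpoint rule
    h = (x - c) / steps
    total = sum((x - (c + (i + 0.5) * h)) ** n * math.exp(c + (i + 0.5) * h)
                for i in range(steps))
    return total * h / math.factorial(n)

x, n = 1.5, 4
taylor = sum(x ** k / math.factorial(k) for k in range(n + 1))
# the integral remainder matches f(x) minus the Taylor polynomial
assert abs((math.exp(x) - taylor) - E_n(x, n)) < 1e-6
```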

The change of variable \(t=x+(c-x)u\) transforms the integral in (\ref{maeq189}) to the following form
\begin{equation}{\label{maeq299}}\tag{56}
E_{n}(x)=\frac{(x-c)^{n+1}}{n!}\int_{0}^{1}u^{n}f^{(n+1)}\left (x+(c-x)u\right )du,
\end{equation}
which will be used in the proof of the following theorem.

Theorem. (Bernstein’s Theorem). Suppose that \(f\) and all its derivatives are nonnegative on a compact interval \([0,r]\). Given \(0\leq x<r\), the following Taylor’s series
\[\sum_{k=0}^{\infty}\frac{f^{(k)}(0)}{k!}x^{k}=f(0)+\sum_{k=1}^{\infty}\frac{f^{(k)}(0)}{k!}x^{k}\]
converges to \(f(x)\). In other words, for \(0\leq x<r\), we have
\[f(x)=f(0)+\sum_{k=1}^{\infty}\frac{f^{(k)}(0)}{k!}x^{k}.\]

Proof. For \(x=0\), the result is obvious. Therefore, we assume \(0<x<r\). Using (\ref{maeq297}) by taking \(c=0\), we have
\begin{equation}{\label{maeq301}}\tag{57}
f(x)=\sum_{k=0}^{n}\frac{f^{(k)}(0)}{k!}x^{k}+E_{n}(x).
\end{equation}
We are going to prove that the error term satisfies the following inequality
\begin{equation}{\label{maeq300}}\tag{58}
0\leq E_{n}(x)\leq\left (\frac{x}{r}\right )^{n+1}f(r).
\end{equation}
In this case, since \((x/r)^{n+1}\rightarrow 0\) for \(0<x<r\), we have \(E_{n}(x)\rightarrow 0\) as \(n\rightarrow\infty\). We plan to apply (\ref{maeq299}) to prove the inequalities (\ref{maeq300}). By taking \(c=0\) in (\ref{maeq299}), we have
\[E_{n}(x)=\frac{x^{n+1}}{n!}\int_{0}^{1}u^{n}f^{(n+1)}(x-xu)du\mbox{ for all }x\in (0,r).\]
Since \(x\neq 0\), we define
\[F_{n}(x)=\frac{E_{n}(x)}{x^{n+1}}=\frac{1}{n!}\int_{0}^{1}u^{n}f^{(n+1)}(x-xu)du.\]
Since \(f^{(n+2)}\) is nonnegative on \([0,r]\) by assumption, \(f^{(n+1)}\) is increasing on \([0,r]\). Therefore, for \(0\leq u\leq 1\), we have
\begin{align*} f^{(n+1)}(x-xu) & =f^{(n+1)}(x(1-u))\\ & \leq f^{(n+1)}(r(1-u)),\end{align*}
which implies \(F_{n}(x)\leq F_{n}(r)\) for \(0<x<r\). In other words, we obtain
\[\frac{E_{n}(x)}{x^{n+1}}\leq\frac{E_{n}(r)}{r^{n+1}},\]
which implies
\begin{equation}{\label{maeq302}}\tag{59}
E_{n}(x)\leq\left (\frac{x}{r}\right )^{n+1}E_{n}(r).
\end{equation}
When we take \(x=r\) in (\ref{maeq301}), we obtain \(E_{n}(r)\leq f(r)\), since each term in the sum is nonnegative. Therefore, the inequality (\ref{maeq300}) can be obtained from (\ref{maeq302}). This completes the proof. \(\blacksquare\)
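With \(f(x)=e^{x}\), every derivative of which is nonnegative on \([0,r]\), the error bound (\ref{maeq300}) can be observed numerically (a Python sketch; the values \(r=2\), \(x=1.2\) are our own choices):

```python
import math

r, x = 2.0, 1.2  # f(x) = e^x has nonnegative derivatives of every order
for n in range(1, 15):
    # E_n(x) = e^x minus the Taylor polynomial of degree n about 0
    E_n = math.exp(x) - sum(x ** k / math.factorial(k) for k in range(n + 1))
    # Bernstein error bound: 0 <= E_n(x) <= (x/r)^{n+1} * f(r)
    assert 0 <= E_n <= (x / r) ** (n + 1) * math.exp(r)
```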

For \(-1<x<1\), we have the convergent geometric series
\[\frac{1}{1-x}=\sum_{k=0}^{\infty}x^{k}.\]
Using Theorem \ref{ma14}, we can take integration term by term. Therefore, we obtain
\[\int_{0}^{x}\frac{1}{1-t}dt=\sum_{k=0}^{\infty}\int_{0}^{x}t^{k}dt,\]
which implies
\begin{equation}{\label{maeq193}}\tag{60}
\ln (1-x)=-\sum_{k=1}^{\infty}\frac{x^{k}}{k}\mbox{ for }-1<x<1.
\end{equation}
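The expansion (\ref{maeq193}) is easy to confirm numerically for \(|x|<1\) (a Python check):

```python
import math

def log1m_series(x, terms=2000):
    # ln(1 - x) = -sum_{k>=1} x^k / k for -1 < x < 1
    return -sum(x ** k / k for k in range(1, terms + 1))

for x in (-0.9, -0.5, 0.5, 0.9):
    assert abs(log1m_series(x) - math.log(1 - x)) < 1e-8
```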
The following theorem shows that the equality (\ref{maeq193}) remains valid when we put \(x=-1\).

\begin{equation}{\label{mat304}}\tag{61}\mbox{}\end{equation}
Theorem \ref{mat304}. (Abel’s Limit Theorem). Let
\begin{equation}{\label{maeq194}}\tag{62}
f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}\mbox{ for }-r<x<r.
\end{equation}
We have the following properties.

(i) Suppose that the series \(\sum_{n=0}^{\infty}a_{n}r^{n}\) is convergent. Then, we have the left-limit
\[\lim_{x\rightarrow r-}f(x)=\sum_{n=0}^{\infty}a_{n}r^{n}.\]

(ii) Suppose that the series \(\sum_{k=0}^{\infty}a_{k}(-r)^{k}\) is convergent. Then, we have the right-limit
\[\lim_{x\rightarrow -r+}f(x)=\sum_{k=0}^{\infty}a_{k}(-r)^{k}.\]

Proof. To prove part (i), it suffices to assume \(r=1\) by considering a change in scale. In this case, we are going to prove that if the series \(\sum_{n=0}^{\infty}a_{n}\) is convergent then we have
\[\lim_{x\rightarrow 1-}f(x)=\sum_{n=0}^{\infty}a_{n}\equiv R.\]
Now, for \(|x|<1\), by referring to Proposition \ref{maeq303}, we have
\begin{align*} \frac{1}{1-x}f(x) & =\left (\sum_{n=0}^{\infty}x^{n}\right )\left (\sum_{n=0}^{\infty}a_{n}x^{n}
\right )\\ & =\sum_{n=0}^{\infty}c_{n}x^{n},\mbox{ where }c_{n}=\sum_{k=0}^{n}a_{k}.\end{align*}
Therefore, we obtain
\begin{align}
f(x)-R & =(1-x)\sum_{n=0}^{\infty}c_{n}x^{n}-(1-x)R\cdot\frac{1}{1-x}\nonumber\\
& =(1-x)\sum_{n=0}^{\infty}c_{n}x^{n}-(1-x)R\cdot\sum_{n=0}^{\infty}x^{n}\nonumber\\
& =(1-x)\sum_{n=0}^{\infty}\left (c_{n}-R\right )x^{n}\mbox{ for }|x|<1.\label{ma127}\tag{63}
\end{align}
Since
\[\lim_{n\rightarrow\infty}c_{n}=\lim_{n\rightarrow\infty}\sum_{k=0}^{n}a_{k}=\sum_{k=0}^{\infty}a_{k}=R,\]
given any \(\epsilon>0\), there exists an integer \(N\) such that
\[n\geq N\mbox{ implies }\left |c_{n}-R\right |<\frac{\epsilon}{2}.\]
For this integer \(N\), using (\ref{ma127}), we have
\[f(x)-R=(1-x)\sum_{n=0}^{N-1}\left (c_{n}-R\right )x^{n}+(1-x)\sum_{n=N}^{\infty}\left (c_{n}-R\right )x^{n}.\]
Let
\[M=\max\left\{\left |c_{0}-R\right |,\left |c_{1}-R\right |,\cdots,\left |c_{N-1}-R\right |\right\}\]
and let \(\delta=\epsilon/(2NM)\). Then, we show that \(0<1-x<\delta\) with \(0<x<1\) implies \(|f(x)-R|<\epsilon\). Indeed, since \(x^{n}\leq 1\), we obtain
\begin{align*}
\left |f(x)-R\right | & \leq (1-x)\sum_{n=0}^{N-1}\left |c_{n}-R\right |x^{n}+(1-x)\sum_{n=N}^{\infty}\left |c_{n}-R\right |x^{n}\\
& \leq (1-x)NM+(1-x)\frac{\epsilon}{2}\sum_{n=N}^{\infty}x^{n}\\
& =(1-x)NM+(1-x)\frac{\epsilon}{2}\cdot\frac{x^{N}}{1-x}\\ & <(1-x)NM+\frac{\epsilon}{2}\\
& <\frac{\epsilon}{2NM}NM+\frac{\epsilon}{2}=\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,
\end{align*}
which shows
\[\lim_{x\rightarrow 1-}f(x)=R=\sum_{n=0}^{\infty}a_{n}.\]

To prove part (ii), using (\ref{maeq194}), we have
\begin{align*} g(y) & \equiv f(-y)=\sum_{k=0}^{\infty}a_{k}(-y)^{k}\\ & =\sum_{k=0}^{\infty}b_{k}y^{k}\mbox{ for }-r<y<r,\end{align*}
where \(b_{k}=a_{k}\cdot (-1)^{k}\). Since the series
\[\sum_{k=0}^{\infty}b_{k}r^{k}=\sum_{k=0}^{\infty}a_{k}(-r)^{k}\]
is convergent, using part (i), we have
\begin{align*} \lim_{y\rightarrow r-}f(-y) & =\lim_{y\rightarrow r-}g(y)\\ & =\sum_{k=0}^{\infty}b_{k}r^{k}=\sum_{k=0}^{\infty}a_{k}(-r)^{k},\end{align*}
which implies
\[\lim_{x\rightarrow -r+}f(x)=\lim_{y\rightarrow r-}f(-y)=\sum_{k=0}^{\infty}a_{k}(-r)^{k}.\]
This completes the proof. \(\blacksquare\)

Example. We put \(x=-1\) in the right-hand side of (\ref{maeq193}). Then, we obtain a convergent alternating series
\[\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}.\]
Using Abel’s limit theorem, we obtain
\[\ln 2=\lim_{x\rightarrow -1+}\ln (1-x)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}.\]
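Numerically, the alternating harmonic series does converge to \(\ln 2\), though slowly: by the alternating series estimate, the error after \(N\) terms is at most \(1/(N+1)\). A Python check:

```python
import math

N = 100000
partial = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
# alternating series estimate: |partial - ln 2| <= 1/(N+1)
assert abs(partial - math.log(2)) <= 1 / (N + 1)
```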

Proposition. Let \(\sum_{k=0}^{\infty}a_{k}\) and \(\sum_{k=0}^{\infty}b_{k}\) be two convergent series, and let \(\sum_{k=0}^{\infty}c_{k}\) denote their Cauchy product. Suppose that \(\sum_{k=0}^{\infty}c_{k}\) is convergent. Then, we have
\[\sum_{k=0}^{\infty}c_{k}=\left (\sum_{k=0}^{\infty}a_{k}\right )\left (\sum_{k=0}^{\infty}b_{k}\right ).\]

Proof. The two power series \(\sum_{k=0}^{\infty}a_{k}x^{k}\) and \(\sum_{k=0}^{\infty}b_{k}x^{k}\) are both convergent for \(x=1\). Therefore, they converge absolutely in the open interval \((-1,1)\). For \(|x|<1\), using Proposition \ref{maeq303}, we can write
\[\sum_{k=0}^{\infty}c_{k}x^{k}=\left (\sum_{k=0}^{\infty}a_{k}x^{k}\right )\left (\sum_{k=0}^{\infty}b_{k}x^{k}\right ).\]
We take \(x\rightarrow 1-\) and apply Abel’s Theorem \ref{mat304}. The proof is complete. \(\blacksquare\)

The converse of Abel’s theorem is false in general. In other words, if \(f\) is given by (\ref{maeq194}), the left-limit \(f(r-)\) may exist and the series \(\sum_{k=0}^{\infty}a_{k}r^{k}\) may fail to converge. For example, we take \(a_{n}=(-1)^{n}\). Then, we have
\begin{align*} f(x) & =\frac{1}{1+x}=\sum_{n=0}^{\infty}(-x)^{n}=\sum_{n=0}^{\infty}(-1)^{n}x^{n}\\ & =\sum_{n=0}^{\infty}a_{n}x^{n}\mbox{ for }|x|<1.\end{align*}
We have
\[\lim_{x\rightarrow 1-}f(x)=\frac{1}{2}.\]
However, the infinite series
\[\sum_{n=0}^{\infty}a_{n}=\sum_{n=0}^{\infty}(-1)^{n}\]
is divergent. The following theorem presents a partial converse by placing a further restriction on the coefficients \(a_{k}\).

Theorem. (Tauber’s Theorem). Let
\[f(x)=\sum_{k=0}^{\infty}a_{k}x^{k}\mbox{ for }-1<x<1.\]
Suppose that
\[\lim_{n\rightarrow\infty}na_{n}=0\mbox{ and }\lim_{x\rightarrow 1-}f(x)=S.\]
Then, we have
\[\sum_{k=0}^{\infty}a_{k}=S.\]

Proof. Let
\[\sigma_{n}=\frac{1}{n}\sum_{k=0}^{n}k|a_{k}|.\]
Since \(\lim_{n\rightarrow\infty}n|a_{n}|=0\), Proposition \ref{ma128} says that \(\lim_{n\rightarrow\infty}\sigma_{n}=0\). Since \(\lim_{x\rightarrow 1-}f(x)=S\), we also have
\[\lim_{n\rightarrow\infty}f(x_{n})=S\mbox{ for }x_{n}=1-\frac{1}{n}.\]
Therefore, given any \(\epsilon>0\), there exists an integer \(N\) such that \(n\geq N\) implies
\[\left |f(x_{n})-S\right |<\frac{\epsilon}{3},\quad\sigma_{n}<\frac{\epsilon}{3}\mbox{ and }n|a_{n}|<\frac{\epsilon}{3}.\]
For \(-1<x<1\), we have
\begin{align}
& f(x)-S+\sum_{k=0}^{n}a_{k}\left (1-x^{k}\right )-\sum_{k=n+1}^{\infty}a_{k}x^{k}\nonumber\\ & \quad =\sum_{k=0}^{\infty}a_{k}x^{k}-S+\sum_{k=0}^{n}a_{k}-\sum_{k=0}^{n}a_{k}x^{k}-\sum_{k=n+1}^{\infty}a_{k}x^{k}\nonumber\\
& \quad=\sum_{k=0}^{n}a_{k}-S.\label{ma130}\tag{64}
\end{align}
Now, for \(0<x<1\), we also have
\[1-x^{k}=(1-x)(1+x+\cdots+x^{k-1})\leq k(1-x).\]
Using (\ref{ma130}), for \(n\geq N\) and \(0<x<1\), we obtain
\begin{align*}
\left |\sum_{k=0}^{n}a_{k}-S\right | & \leq |f(x)-S|+\sum_{k=0}^{n}\left |a_{k}\right |
\left (1-x^{k}\right )+\sum_{k=n+1}^{\infty}\left |a_{k}\right |x^{k}\\
& \leq |f(x)-S|+(1-x)\sum_{k=0}^{n}k\left |a_{k}\right |+\frac{\epsilon}{3}\sum_{k=n+1}^{\infty}\frac{x^{k}}{k},
\end{align*}
where we used \(k\left |a_{k}\right |<\epsilon /3\) for \(k\geq n+1\geq N\).
Since
\[\sum_{k=n+1}^{\infty}\frac{x^{k}}{k}\leq\sum_{k=n+1}^{\infty}\frac{x^{k}}{n}\leq\frac{1}{n}\sum_{k=0}^{\infty}x^{k}=\frac{1}{n(1-x)},\]
it follows
\[\left |\sum_{k=0}^{n}a_{k}-S\right |\leq |f(x)-S|+(1-x)\sum_{k=0}^{n}k\left |a_{k}\right |+\frac{\epsilon}{3n(1-x)}.\]
We take
\[x=x_{n}=1-\frac{1}{n}.\]
Then, we obtain
\begin{align*}
\left |\sum_{k=0}^{n}a_{k}-S\right | & \leq |f(x_{n})-S|+\frac{1}{n}\sum_{k=0}^{n}k\left |a_{k}\right |+\frac{\epsilon}{3}\\ & =|f(x_{n})-S|+\sigma_{n}+\frac{\epsilon}{3}\\
& <\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon.
\end{align*}
This completes the proof. \(\blacksquare\)
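As an illustration of Tauber’s theorem (a Python sketch; the coefficients \(a_{k}=(-1)^{k}/\bigl ((k+1)\ln (k+2)\bigr )\) are our own choice, picked so that \(ka_{k}\rightarrow 0\)), the sum \(\sum a_{k}\) and the Abel limit of \(f(x)\) agree:

```python
import math

def a(k):
    # k * a(k) -> 0 as k -> infinity, so Tauber's theorem applies
    return (-1) ** k / ((k + 1) * math.log(k + 2))

N = 200000
S = sum(a(k) for k in range(N))             # convergent alternating series
x = 1 - 1e-4
f_x = sum(a(k) * x ** k for k in range(N))  # f(x) for x close to 1-
assert abs(f_x - S) < 1e-2
```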

 

 

Hsien-Chung Wu