Let \({\cal B}\) be the Borel \(\sigma\)-field on \(\mathbb{R}\). A set \(B\subseteq\mathbb{R}\) is called a Borel set if \(B\in {\cal B}\).
Definition. Random variables \(X_{1},X_{2},\cdots ,X_{n}\) are independent when
\[\mathbb{P}(X_{1}\in B_{1},\cdots ,X_{n}\in B_{n})=\prod_{i=1}^{n}\mathbb{P}(X_{i}\in B_{i})\]
for all Borel sets \(B_{1},\cdots ,B_{n}\). A sequence of random variables \(\{X_{n}\}_{n=1}^{\infty}\) is independent when every finite subset of \(\{X_{n}\}_{n=1}^{\infty}\) is independent. \(\sharp\)
Definition. Random variables that are independent and have the same distribution function are called i.i.d. (independent and identically distributed).
Proposition. Let the random variables \(X_{1},\cdots ,X_{n}\) be independent, and let \(g_{i}:(\mathbb{R},{\cal B})\rightarrow (\mathbb{R},{\cal B})\) be measurable functions for \(i=1,\cdots ,n\) such that \(g_{i}(X_{i})\) are random variables for \(i=1,\cdots ,n\). Then, the random variables \(g_{i}(X_{i})\) for \(i=1,\cdots ,n\) are also independent. \(\sharp\)
Proposition. The random variables \(X_{1},\cdots ,X_{n}\) are independent if and only if any one of the following conditions holds.
- \(F_{X_{1},\cdots ,X_{n}}(x_{1},\cdots ,x_{n})=\prod_{i=1}^{n}F_{X_{i}}(x_{i})\);
- \(f_{X_{1},\cdots ,X_{n}}(x_{1},\cdots ,x_{n})=\prod_{i=1}^{n}f_{X_{i}}(x_{i})\);
- \(\phi_{X_{1},\cdots ,X_{n}}(x_{1},\cdots ,x_{n})=\prod_{i=1}^{n}\phi_{X_{i}}(x_{i})\).
Proposition. Let the random variables \(X_{1},\cdots ,X_{n}\) be independent, and let \(g_{i}:(\mathbb{R},{\cal B})\rightarrow (\mathbb{R},{\cal B})\) be measurable functions for \(i=1,\cdots ,n\) such that \(g_{i}(X_{i})\) are random variables for \(i=1,\cdots ,n\). Then, we have
\[\mathbb{E}\left [\prod_{i=1}^{n}g_{i}(X_{i})\right ]=\prod_{i=1}^{n}\mathbb{E}[g_{i}(X_{i})].\]
However, the converse need not be true.
Proposition. Let \(X\) and \(Y\) be independent. Then they are uncorrelated, provided they have finite second moments. The converse need not be true.
Proof. By independence, we have
\[\mbox{Cov}(X,Y)=\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]=0,\]
which says \(\rho =0\). This completes the proof. \(\blacksquare\)
Example. Let \(X_{1},X_{2},X_{3}\) be a random sample from a distribution with p.d.f. \(f(x)=e^{-x}\) for \(0<x<\infty\). The joint p.d.f. of these three random variables is given by
\[e^{-x_{1}}\cdot e^{-x_{2}}\cdot e^{-x_{3}}=e^{-x_{1}-x_{2}-x_{3}}\]
for \(0<x_{i}<\infty\) and \(i=1,2,3\). The independence of \(X_{1},X_{2},X_{3}\) gives the probability
\begin{align*} & \mathbb{P}(0<X_{1}<1,2<X_{2}<4,3<X_{3}<7)\\ & \quad =\left (\int_{0}^{1}e^{-x_{1}}dx_{1}\right )\left (\int_{2}^{4}e^{-x_{2}}dx_{2}\right )\left (\int_{3}^{7}e^{-x_{3}}dx_{3}\right )\\ & \quad =(1-e^{-1})(e^{-2}-e^{-4})(e^{-3}-e^{-7}).\end{align*}
Since the expectations satisfy
\[\mathbb{E}(X_{1})=\mathbb{E}(X_{2})=\mathbb{E}(X_{3})=1,\]
we have
\begin{align*} \mathbb{E}(X_{1}X_{2}X_{3}) & =\mathbb{E}(X_{1})\mathbb{E}(X_{2})\mathbb{E}(X_{3})\\ & =\left (\int_{0}^{\infty}xe^{-x}dx\right )^{3}=1.\end{align*}
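The closed-form values above can be checked numerically; the following Python sketch (standard library only; the integration grid is an arbitrary choice) evaluates the probability and approximates \(\mathbb{E}[X]\) for the unit exponential.

```python
import math

# P(0 < X1 < 1, 2 < X2 < 4, 3 < X3 < 7) as the product of one-dimensional
# probabilities under the density f(x) = e^{-x}.
p = (1 - math.exp(-1)) * (math.exp(-2) - math.exp(-4)) * (math.exp(-3) - math.exp(-7))

# E[X] for the unit exponential by a left Riemann sum of x e^{-x};
# the exact value is 1, so E[X1 X2 X3] = 1^3 = 1 by independence.
n, upper = 100_000, 50.0
h = upper / n
mean = sum(x * math.exp(-x) for x in (i * h for i in range(n))) * h
print(p, mean)
```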
Proposition. Let \(X\) and \(Y\) be independent, nonnegative integer-valued random variables. Then, for each \(n\), we have
\[\mathbb{P}(X+Y=n)=\sum_{k=0}^{n}\mathbb{P}(X=k)\cdot \mathbb{P}(Y=n-k).\]
Proof. Using the independence of \(X\) and \(Y\), we have
\begin{align*} \mathbb{P}(X+Y=n) & =\mathbb{P}\left (\bigcup_{k=0}^{n}\{X=k,Y=n-k\}\right )\\
& =\sum_{k=0}^{n}\mathbb{P}(X=k,Y=n-k)\\ & =\sum_{k=0}^{n}\mathbb{P}(X=k)\cdot \mathbb{P}(Y=n-k).\end{align*}
This completes the proof. \(\blacksquare\)
Example. Let \(X\) and \(Y\) be independent and have Poisson distributions with parameters \(\lambda_{X}\) and \(\lambda_{Y}\), respectively. Then, we have
\begin{align*} \mathbb{P}(X+Y=n) & =\sum_{k=0}^{n}\mathbb{P}(X=k)\cdot \mathbb{P}(Y=n-k)\\
& =\sum_{k=0}^{n}e^{-\lambda_{X}}\cdot\frac{\lambda_{X}^{k}}{k!}\cdot
e^{-\lambda_{Y}}\cdot\frac{\lambda_{Y}^{n-k}}{(n-k)!}\\ & =e^{-(\lambda_{X}+\lambda_{Y})}\cdot\frac{(\lambda_{X}+\lambda_{Y})^{n}}{n!}.\end{align*}
This shows that \(X+Y\) has the Poisson distribution with parameter \(\lambda_{X}+\lambda_{Y}\). \(\sharp\)
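The convolution identity can be verified numerically; in the sketch below the parameter values \(\lambda_{X}=2\), \(\lambda_{Y}=3\), \(n=5\) are arbitrary choices.

```python
import math

def pois(lam, k):
    # Poisson p.m.f.: P(X = k) = e^(-lam) * lam^k / k!
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam_x, lam_y, n = 2.0, 3.0, 5
# Convolution sum for P(X + Y = n) under independence
conv = sum(pois(lam_x, k) * pois(lam_y, n - k) for k in range(n + 1))
# Direct Poisson(lam_x + lam_y) p.m.f. at n
direct = pois(lam_x + lam_y, n)
print(conv, direct)
```

The two printed numbers agree, reflecting the algebraic identity derived above.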
Proposition. Let \(X\) and \(Y\) be independent and continuous random variables. Then \(X+Y\) is a continuous random variable with p.d.f.
\[f_{X+Y}(t)=\int_{-\infty}^{\infty}f_{X}(t-s)f_{Y}(s)ds.\]
Proof. For each \(t\), we consider the distribution function
\[F_{X+Y}(t)=\mathbb{P}(X+Y\leq t).\]
Then, we have
\begin{align*} \mathbb{P}(X+Y\leq t) & =\int\!\int_{\{(u,s):u+s\leq t\}}f(u,s)duds\\ & =\int\!\int_{\{(u,s):u+s\leq t\}}f_{X}(u)f_{Y}(s)duds\\
& =\int_{-\infty}^{\infty}\left [\int_{-\infty}^{t-s}f_{X}(u)du\right ]f_{Y}(s)ds\\ & =\int_{-\infty}^{\infty}\left [\int_{-\infty}^{t}f_{X}(v-s)dv
\right ]f_{Y}(s)ds\\ & =\int_{-\infty}^{t}\left [\int_{-\infty}^{\infty}f_{X}(v-s)f_{Y}(s)ds\right ]dv.\end{align*}
Differentiating with respect to \(t\) and using the fundamental theorem of calculus completes the proof. \(\blacksquare\)
Example. Suppose that \(X\) and \(Y\) are independent, \(X\) has \(N(\mu_{X},\sigma_{X}^{2})\), and \(Y\) has \(N(\mu_{Y},\sigma_{Y}^{2})\). Then, we have
\begin{align*} f_{X+Y}(t) & =\int_{-\infty}^{\infty}f_{X}(t-s)f_{Y}(s)ds\\
& =\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma_{X}^{2}}}\cdot\exp [-(t-s-\mu_{X})^{2}/2\sigma_{X}^{2}]\cdot\frac{1}
{\sqrt{2\pi\sigma_{Y}^{2}}}\cdot\exp [-(s-\mu_{Y})^{2}/2\sigma_{Y}^{2}]ds\\
& =\frac{1}{\sqrt{2\pi (\sigma_{X}^{2}+\sigma_{Y}^{2})}}\cdot\exp [-(t-(\mu_{X}+\mu_{Y}))^{2}/2(\sigma_{X}^{2}+\sigma_{Y}^{2})],\end{align*}
which says that \(X+Y\) has \(N(\mu_{X}+\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2})\). In general, when \(X_{1},\cdots ,X_{n}\) are independent and normally distributed, every linear combination \(\sum_{i=1}^{n}c_{i}X_{i}\) is normally distributed. \(\sharp\)
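The convolution integral above can be approximated numerically and compared with the \(N(\mu_{X}+\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2})\) density; the means, variances, evaluation point, and grid below are arbitrary choices.

```python
import math

def npdf(x, mu, var):
    # Normal density with mean mu and variance var
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu_x, var_x, mu_y, var_y, t = 1.0, 2.0, -0.5, 3.0, 0.7
# Riemann-sum approximation of the convolution integral over s
lo, hi, n = -30.0, 30.0, 100_000
h = (hi - lo) / n
conv = sum(npdf(t - (lo + i * h), mu_x, var_x) * npdf(lo + i * h, mu_y, var_y)
           for i in range(n)) * h
direct = npdf(t, mu_x + mu_y, var_x + var_y)
print(conv, direct)
```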
Proposition. Consider random variables \(X_{1},\cdots ,X_{n}\) with \(\mbox{Var}(X_{i})=\sigma_{i}^{2}\) for \(i=1,\cdots ,n\) and correlation coefficients \(\rho_{X_{i}X_{j}}=\rho_{ij}\) for \(i\neq j\). Then, we have
\[\mbox{Var}\left (\sum_{i=1}^{n}c_{i}X_{i}\right )=\sum_{i=1}^{n}c_{i}^{2}\sigma_{i}^{2}+\sum_{i\neq j}c_{i}c_{j}\rho_{ij}\sigma_{i}\sigma_{j}.\]
In particular, when the random variables \(X_{1},\cdots ,X_{n}\) are independent, or only pairwise uncorrelated, we have
\[\mbox{Var}\left (\sum_{i=1}^{n}c_{i}X_{i}\right )=\sum_{i=1}^{n}c_{i}^{2}\sigma_{i}^{2}.\]
Proof. Write \(Y=\sum_{i=1}^{n}a_{i}X_{i}\), using \(a_{i}\) for the coefficients \(c_{i}\). Since the expected value of a sum is the sum of the expected values (i.e. \(\mathbb{E}\) is a linear operator), we have
\begin{align*} \mu_{Y} & =\mathbb{E}[Y]\\ & =\mathbb{E}\left (\sum_{i=1}^{n} a_{i}X_{i}\right )\\ & =\sum_{i=1}^{n}a_{i}\mathbb{E}[X_{i}]\\ & =\sum_{i=1}^{n} a_{i}\mu_{i}.\end{align*}
We also have
\begin{align*} \sigma^{2}_{Y} & =\mathbb{E}[(Y-\mu_{Y})^{2}]\\ & =\mathbb{E}\left [\left (\sum_{i=1}^{n}a_{i}X_{i}-\sum_{i=1}^{n} a_{i}\mu_{i}\right )^{2}\right ]\\
& =\mathbb{E}\left\{\left [\sum_{i=1}^{n} a_{i}(X_{i}-\mu_{i})\right ]^{2}\right\}\\ & =\mathbb{E}\left [\sum_{i=1}^{n}\sum_{j=1}^{n} a_{i}a_{j}(X_{i}-\mu_{i})(X_{j}-\mu_{j})\right ].\end{align*}
Using the fact that \(\mathbb{E}\) is a linear operator, we obtain
\begin{align*} \sigma^{2}_{Y} & =\sum_{i=1}^{n}\sum_{j=1}^{n} a_{i}a_{j}\mathbb{E}[(X_{i}-\mu_{i})(X_{j}-\mu_{j})]\\ & =\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i}a_{j}\mbox{Cov}(X_{i},X_{j}).\end{align*}
For \(i\neq j\), using the independence of \(X_{i}\) and \(X_{j}\), we have
\begin{align*} & \mathbb{E}[(X_{i}-\mu_{i})(X_{j}-\mu_{j})]\\ & \quad =\mathbb{E}[X_{i}-\mu_{i}]
\mathbb{E}[X_{j}-\mu_{j}]\\ & \quad =(\mu_{i}-\mu_{i})(\mu_{j}-\mu_{j})=0.\end{align*}
Therefore, the variance can be written as
\begin{align*} \sigma^{2}_{Y} & =\sum_{i=1}^{n} a^{2}_{i}\mathbb{E}[(X_{i}-\mu_{i})^{2}]\\ & =\sum_{i=1}^{n} a^{2}_{i}\sigma^{2}_{i}.\end{align*}
This completes the proof. \(\blacksquare\)
Example. Let \(X_{1},X_{2},\cdots ,X_{n}\) be a random sample of size \(n\) from a distribution with mean \(\mu\) and variance \(\sigma^{2}\). Consider the sample mean
\[\bar{X}=\frac{X_{1}+X_{2}+\cdots +X_{n}}{n},\]
which is a linear function with each \(a_{i}=1/n\). Then, we have
\begin{align*} \mu_{\bar{X}} & =\sum_{i=1}^{n}\frac{1}{n}\cdot\mu\\ & =\mu\end{align*}
and
\begin{align*} \sigma^{2}_{\bar{X}} & =\sum_{i=1}^{n} \left (\frac{1}{n}\right )^{2}\sigma^{2}\\ & =\frac{\sigma^{2}}{n}.\end{align*}
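The general variance formula, and its uncorrelated special case, can be checked numerically; the coefficients, standard deviations, and correlations in this sketch are hypothetical.

```python
# Variance of a linear combination:
# Var(sum c_i X_i) = sum_{i,j} c_i c_j rho_ij sigma_i sigma_j, with rho_ii = 1.
c = [1.0, -2.0, 0.5]
sigma = [1.0, 2.0, 3.0]
rho = [[1.0, 0.3, -0.2],
       [0.3, 1.0, 0.1],
       [-0.2, 0.1, 1.0]]
var = sum(c[i] * c[j] * rho[i][j] * sigma[i] * sigma[j]
          for i in range(3) for j in range(3))
# Keeping only the diagonal terms gives the uncorrelated-case formula.
var_uncorr = sum(ci ** 2 * si ** 2 for ci, si in zip(c, sigma))
print(var, var_uncorr)
```

The gap between the two printed values is exactly the contribution of the cross-correlation terms.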
Example. Let \(X_{1},X_{2},\cdots ,X_{n}\) denote the outcomes of \(n\) independent Bernoulli trials with success probability \(p\). The moment-generating function of \(X_{i}\) for \(i=1,2,\cdots ,n\) is \(M(t)=q+pe^{t}\), where \(q=1-p\). If \(Y=\sum_{i=1}^{n} X_{i}\), then we have
\begin{align*} M_{Y}(t) & =\prod_{i=1}^{n} (q+pe^{t})\\ & =(q+pe^{t})^{n},\end{align*}
which says that \(Y\) is \(B(n,p)\). \(\sharp\)
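As a numerical check of this moment-generating-function argument, the product form can be compared with a direct evaluation of \(\mathbb{E}[e^{tY}]\) under the binomial p.m.f.; the values of \(p\), \(n\), and \(t\) below are arbitrary.

```python
import math

p, n, t = 0.3, 6, 0.4
q = 1 - p
# Product of the n identical Bernoulli m.g.f.s
mgf_product = (q + p * math.exp(t)) ** n
# Direct E[e^{tY}] under the binomial(n, p) p.m.f.
mgf_direct = sum(math.comb(n, k) * p ** k * q ** (n - k) * math.exp(t * k)
                 for k in range(n + 1))
print(mgf_product, mgf_direct)
```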
Example. Let \(X_{1},X_{2},\cdots ,X_{n}\) be the observations of a random sample of size \(n\) from the exponential distribution with mean \(\theta\). The moment-generating function of each \(X_{i}\) is \(M(t)=1/(1-\theta t)\) for \(t<1/\theta\). The moment-generating function of \(Y=X_{1}+X_{2}+\cdots +X_{n}\) is then given by
\begin{align*} M_{Y}(t) & =[(1-\theta t)^{-1}]^{n}\\ & =(1-\theta t)^{-n}\end{align*}
for \(t<1/\theta\), which is the moment-generating function of a gamma distribution with parameters \(\alpha =n\) and \(\theta\). Therefore \(Y\) has this gamma distribution. On the other hand, the moment-generating function of \(\bar{X}\) is given by
\begin{align*} M_{\bar{X}}(t) & =\left [\left (1-\frac{\theta t}{n}\right )^{-1}\right ]^{n}\\ & =\left (1-\frac{\theta t}{n}\right )^{-n}\end{align*}
for \(t<n/\theta\). Therefore, the distribution of \(\bar{X}\) is gamma with parameters \(\alpha =n\) and \(\theta /n\). \(\sharp\)
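The closed form \((1-\theta t)^{-n}\) can be checked against a numerical evaluation of \(\mathbb{E}[e^{tY}]\) for the gamma density; the parameter values and the integration grid below are arbitrary choices.

```python
import math

theta, alpha, t = 2.0, 4, 0.1      # requires t < 1/theta
closed_form = (1 - theta * t) ** (-alpha)

def dens(x):
    # Gamma density with shape alpha and scale theta
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

# Left Riemann sum for E[e^{tY}], Y ~ gamma(alpha, theta)
m, hi = 200_000, 150.0
h = hi / m
mgf_num = sum(math.exp(t * (i * h)) * dens(i * h) for i in range(1, m)) * h
print(closed_form, mgf_num)
```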
Proposition. Let \(X_{1},X_{2},\cdots ,X_{n}\) be \(n\) mutually independent normal random variables with means \(\mu_{1},\mu_{2},\cdots ,\mu_{n}\) and variances \(\sigma_{1}^{2}, \sigma_{2}^{2},\cdots ,\sigma_{n}^{2}\), respectively. Then the linear function \(Y=\sum_{i=1}^{n} c_{i}X_{i}\) has the normal distribution
\[N\left (\sum_{i=1}^{n} c_{i}\mu_{i},\sum_{i=1}^{n} c_{i}^{2}\sigma_{i}^{2}\right ).\]
Proof. Since
\[M_{X_{i}}(t)=\exp (\mu_{i}t+\sigma_{i}^{2}t^{2}/2)\]
for \(i=1,\cdots ,n\), we have
\begin{align*} M_{Y}(t) & =\prod_{i=1}^{n} M_{X_{i}}(c_{i}t)\\ & =\prod_{i=1}^{n}\exp (\mu_{i}c_{i}t+\sigma_{i}^{2}c_{i}^{2}t^{2}/2).\end{align*}
Therefore, we obtain
\[M_{Y}(t)=\exp\left [\left (\sum_{i=1}^{n} c_{i}\mu_{i}\right )t+\left (\sum_{i=1}^{n}c_{i}^{2}\sigma_{i}^{2}\right )\left (\frac{t^{2}}{2}\right )\right ],\]
which is the moment-generating function of a distribution given by
\[N\left (\sum_{i=1}^{n} c_{i}\mu_{i},\sum_{i=1}^{n} c_{i}^{2}\sigma_{i}^{2}\right ).\]
This shows that \(Y\) has this normal distribution, and the proof is complete. \(\blacksquare\)
We see that the difference of two independent normally distributed random variables, say \(Y=X_{1}-X_{2}\), has the normal distribution \(N(\mu_{1}-\mu_{2},\sigma_{1}^{2}+\sigma_{2}^{2})\).
Example. Let \(X_{1}\) and \(X_{2}\) equal the number of pounds of butterfat produced by two Holstein cows (one selected at random from those on the Koopman farm and one selected at random from those on the Vliestra farm, respectively) during the 305-day lactation period following the births of calves. Assume that the distribution of \(X_{1}\) is \(N(693.2,22820)\) and the distribution of \(X_{2}\) is \(N(631.7,19205)\). Assuming further that \(X_{1}\) and \(X_{2}\) are independent, we shall find \(\mathbb{P}(X_{1}>X_{2})\), that is, the probability that the butterfat produced by the Koopman farm cow exceeds that produced by the Vliestra farm cow. If we let \(Y=X_{1}-X_{2}\), then the distribution of \(Y\) is \(N(693.2-631.7,22820+19205)\). Therefore, we have
\begin{align*} \mathbb{P}(X_{1}>X_{2}) & =\mathbb{P}(Y>0)\\ & =\mathbb{P}\left (\frac{Y-61.5}{\sqrt{42025}}>\frac{0-61.5}{205}\right )\\ & =\mathbb{P}(Z>-0.3)=0.6179.\end{align*}
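The tail probability in this example can be reproduced with the error function, since \(\mathbb{P}(Z>z)=\frac{1}{2}(1-\mbox{erf}(z/\sqrt{2}))\) for a standard normal \(Z\); this is a short Python sketch using only the standard library.

```python
import math

def norm_sf(z):
    # P(Z > z) for standard normal Z, via the error function
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

mu = 693.2 - 631.7           # mean of Y = X1 - X2
var = 22820 + 19205          # variance of Y; sqrt(42025) = 205
z = (0 - mu) / math.sqrt(var)
prob = norm_sf(z)
print(round(prob, 4))
```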
Proposition. Let \(X_{1},X_{2},\cdots ,X_{n}\) be observations of a random sample of size \(n\) from the normal distribution \(N(\mu ,\sigma^{2})\). Then, the distribution of the sample mean
\[\bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_{i}\]
is \(N(\mu ,\sigma^{2}/n)\). \(\sharp\)
Proposition. Let the distributions of \(X_{1},X_{2},\cdots ,X_{n}\) be \(\chi^{2}(r_{1}),\chi^{2}(r_{2}),\cdots ,\chi^{2}(r_{n})\), respectively. Suppose that \(X_{1},X_{2},\cdots ,X_{n}\) are independent. Then \(Y=X_{1}+\cdots +X_{n}\) is \(\chi^{2}(r_{1}+r_{2}+\cdots +r_{n})\).
Proof. It suffices to prove the case \(n=2\); the general case then follows by induction. The moment-generating function of \(Y\) is given by
\begin{align*} M_{Y}(t) & =\mathbb{E}[e^{tY}]\\ & =\mathbb{E}[e^{t(X_{1}+X_{2})}]\\ & =\mathbb{E}[e^{tX_{1}}e^{tX_{2}}].\end{align*}
Since \(X_{1}\) and \(X_{2}\) are independent, we have
\begin{align*} M_{Y}(t) & =\mathbb{E}[e^{tX_{1}}]\mathbb{E}[e^{tX_{2}}]\\ & =(1-2t)^{-r_{1}/2}(1-2t)^{-r_{2}/2}\\ & =(1-2t)^{-(r_{1}+r_{2})/2}\end{align*}
for \(t<1/2\), which is the moment-generating function for a chi-square distribution with \(r=r_{1}+r_{2}\) degrees of freedom. The uniqueness of the moment-generating
function implies \(Y\) is \(\chi^{2}(r_{1}+r_{2})\). This completes the proof. \(\blacksquare\)
Proposition. Let \(Z_{1},Z_{2},\cdots ,Z_{n}\) have standard normal distributions, \(N(0,1)\). Suppose that these random variables are independent. Then \(W=Z^{2}_{1}+Z^{2}_{2}+
\cdots +Z^{2}_{n}\) has a distribution \(\chi^{2}(n)\).
Proof. Since \(Z^{2}_{i}\) is \(\chi^{2}(1)\) for \(i=1,2,\cdots ,n\) (a fact obtained from the distribution-function technique for transformations of random variables), it follows that \(W\) is \(\chi^{2}(n)\) by the previous proposition. This completes the proof. \(\blacksquare\)
Proposition. Let \(X_{1},X_{2},\cdots ,X_{n}\) be independent and have normal distributions \(N(\mu_{i},\sigma^{2}_{i})\), \(i=1,2,\cdots ,n\), respectively. Then, the distribution of
\[W=\sum_{i=1}^{n}\frac{(X_{i}-\mu_{i})^{2}}{\sigma^{2}_{i}}\]
is \(\chi^{2}(n)\).
Proof. Since \(Z_{i}=(X_{i}-\mu_{i})/\sigma_{i}\) is \(N(0,1)\) for \(i=1,2,\cdots ,n\), using the previous proposition, the proof is complete. \(\blacksquare\)
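A Monte Carlo sanity check of this proposition is straightforward: under independence, \(W\) should have mean \(n\) and variance \(2n\), the moments of \(\chi^{2}(n)\). The means, standard deviations, seed, and sample size in this sketch are hypothetical choices.

```python
import random

random.seed(0)
# Monte Carlo check: W = sum ((X_i - mu_i)/sigma_i)^2 for independent normal
# X_i should be chi-square with n degrees of freedom: E[W] = n, Var(W) = 2n.
mus = [1.0, -2.0, 0.5, 3.0]
sds = [1.0, 2.0, 0.5, 3.0]
n, trials = len(mus), 100_000
ws = [sum(((random.gauss(m, s) - m) / s) ** 2 for m, s in zip(mus, sds))
      for _ in range(trials)]
mean_w = sum(ws) / trials
var_w = sum((w - mean_w) ** 2 for w in ws) / trials
print(mean_w, var_w)
```

The two printed estimates should be close to \(n=4\) and \(2n=8\), up to Monte Carlo error.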
Theorem. Let \(X_{1},X_{2},\cdots ,X_{n}\) be observations of a random sample of size \(n\) from the normal distribution \(N(\mu ,\sigma^{2})\). Given
\[\bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_{i}\]
and
\[S^{2}=\frac{1}{n-1}\sum_{i=1}^{n} (X_{i}-\bar{X})^{2},\]
we have the following properties.
(i) The random variables \(\bar{X}\) and \(S^{2}\) are independent.
(ii) We have that
\[\frac{(n-1)S^{2}}{\sigma^{2}}=\frac{\sum_{i=1}^{n} (X_{i}-\bar{X})^{2}}{\sigma^{2}}\]
is \(\chi^{2}(n-1)\).
Proof. We just prove part (ii). We first have
\begin{align}
\sum_{i=1}^{n}\left (\frac{X_{i}-\mu}{\sigma}\right )^{2} & =\sum_{i=1}^{n}\left [\frac{(X_{i}-\bar{X})+(\bar{X}-\mu )}{\sigma}\right ]^{2}\nonumber\\ & =\sum_{i=1}^{n}\left (\frac{X_{i}-\bar{X}}{\sigma}\right )^{2}+\frac{n(\bar{X}-\mu )^{2}}{\sigma^{2}},\label{staleceq1}\tag{1}\end{align}
where the cross-product term is equal to
\begin{align*} & 2\sum_{i=1}^{n}\frac{(\bar{X}-\mu )(X_{i}-\bar{X})}{\sigma^{2}}\\ & \quad =\frac{2(\bar{X}-\mu )}{\sigma^{2}}\sum_{i=1}^{n} (X_{i}-\bar{X})=0.\end{align*}
The random variables \(Y_{i}=(X_{i}-\mu )/\sigma\) for \(i=1,\cdots ,n\) are independent standard normal random variables, so \(W=\sum_{i=1}^{n} Y^{2}_{i}\) is \(\chi^{2}(n)\) by the previous proposition. Moreover, since \(\bar{X}\) is \(N(\mu ,\sigma^{2}/n)\), we have that
\begin{align*} Z^{2} & =\left (\frac{\bar{X}-\mu}{\sigma /\sqrt{n}}\right )^{2}\\ & =\frac{n(\bar{X}-\mu )^{2}}{\sigma^{2}}\end{align*}
is \(\chi^{2}(1)\) by the previous proposition. With this notation, equation (\ref{staleceq1}) becomes
\[W=\frac{(n-1)S^{2}}{\sigma^{2}}+Z^{2}.\]
Since \(\bar{X}\) and \(S^{2}\) are independent by part (i), it follows that \(Z^{2}\) and \(S^{2}\) are also independent. For the moment-generating function of \(W\), we have
\begin{align*} \mathbb{E}\left [e^{tW}\right ] & =\mathbb{E}\left [e^{t((n-1)S^{2}/\sigma^{2}+Z^{2})}\right ]\\ & =
\mathbb{E}\left [e^{t(n-1)S^{2}/\sigma^{2}}e^{tZ^{2}}\right ]\\ & =
\mathbb{E}\left [e^{t(n-1)S^{2}/\sigma^{2}}\right ]\mathbb{E}\left [e^{tZ^{2}}\right ].\end{align*}
Since \(W\) and \(Z^{2}\) have chi-square distributions, we can substitute their moment-generating functions to obtain
\[(1-2t)^{-n/2}=\mathbb{E}\left [e^{t(n-1)S^{2}/\sigma^{2}}\right ](1-2t)^{-1/2}.\]
Equivalently, we have
\[\mathbb{E}\left [e^{t(n-1)S^{2}/\sigma^{2}}\right ]=(1-2t)^{-(n-1)/2}\]
for \(t<1/2\), which is the moment-generating function of a \(\chi^{2}(n-1)\) random variable. This shows that \((n-1)S^{2}/\sigma^{2}\) has this distribution, and the proof is complete. \(\blacksquare\)
Combining the above results, we see that, when sampling from a normal distribution,
\[U=\sum_{i=1}^{n}\frac{(X_{i}-\mu )^{2}}{\sigma^{2}}\mbox{ is }\chi^{2}(n)\]
and
\[W=\sum_{i=1}^{n}\frac{(X_{i}-\bar{X})^{2}}{\sigma^{2}}\mbox{ is }\chi^{2}(n-1).\]
Therefore, when the population mean \(\mu\) in \(\sum_{i=1}^{n} (X_{i}-\mu )^{2}\) is replaced by the sample mean \(\bar{X}\), one degree of freedom is lost.
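This lost degree of freedom shows up numerically: in repeated sampling, \(\sum (X_{i}-\mu )^{2}/\sigma^{2}\) averages \(n\) while \(\sum (X_{i}-\bar{X})^{2}/\sigma^{2}\) averages \(n-1\). The sketch below checks this by simulation; the population parameters, seed, and sample size are hypothetical.

```python
import random

random.seed(1)
# Monte Carlo check of the lost degree of freedom: with n observations,
# E[sum (X_i - mu)^2 / sigma^2] = n, while E[sum (X_i - Xbar)^2 / sigma^2] = n - 1.
mu, sigma, n, trials = 5.0, 2.0, 6, 100_000
acc_u = acc_w = 0.0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    acc_u += sum((x - mu) ** 2 for x in xs) / sigma ** 2
    acc_w += sum((x - xbar) ** 2 for x in xs) / sigma ** 2
print(acc_u / trials, acc_w / trials)
```

The first average should be near \(n=6\) and the second near \(n-1=5\), up to Monte Carlo error.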


