The topics are
- Concepts and Examples
- Optimality Principles
- Mixed Extension
- Existence of Nash Equilibrium
- Properties of Optimal Solutions
- Equilibrium in Joint Mixed Strategies
- Approximate Nash Equilibrium
- Different Approach I
- Different Approach II
\begin{equation}{\label{a}}\tag{A}\mbox{}\end{equation}
Concepts and Examples.
The system \(\Gamma =(N,\{X_{i}\}_{i\in N},\{u_{i}\}_{i\in N})\) is called a noncooperative game, where \(N=\{1,2,\cdots ,n\}\) is the set of players, \(X_{i}\) is the strategy set of player \(i\), and \(u_{i}\) is the payoff function of player \(i\), defined on the Cartesian product of the strategy sets \({\bf X}=\prod_{i=1}^{n}X_{i}\). A noncooperative \(n\)-person game is played as follows. The players simultaneously and independently choose their strategies \(x_{i}\) from the strategy sets \(X_{i}\) for \(i=1,\cdots ,n\), which generates a policy \({\bf x}=(x_{1},\cdots ,x_{n})\). Each player \(i\) then receives the amount \(u_{i}({\bf x})\).
The noncooperative two-person game \(\Gamma\) is defined by the system \(\Gamma =(X_{1},X_{2},u_{1},u_{2})\), where \(X_{1}\) is the strategy set of player 1, \(X_{2}\) is the strategy set of player 2, and \(u_{1}:X_{1}\times X_{2}\rightarrow\mathbb{R}\) and \(u_{2}:X_{1}\times X_{2}\rightarrow\mathbb{R}\) are the payoff functions of players 1 and 2, respectively. When the strategy sets are finite, the noncooperative two-person game is also called a bimatrix game. This is due to the fact that the payoff functions can be written in the form of two matrices
\[u_{1}\equiv A=\left [\begin{array}{ccc}
a_{11} & \cdots & a_{1n}\\
\vdots & \ddots & \vdots\\
a_{m1} & \cdots & a_{mn}
\end{array}\right ]\mbox{ and }
u_{2}\equiv B=\left [\begin{array}{ccc}
b_{11} & \cdots & b_{1n}\\
\vdots & \ddots & \vdots\\
b_{m1} & \cdots & b_{mn}
\end{array}\right ],\]
where the elements \(a_{ij}\) and \(b_{ij}\) of the matrices \(A\) and \(B\) are the payoffs to players 1 and 2 in the policy \((i,j)\), respectively. The bimatrix game is played as follows. Player 1 chooses a number \(i\) (the row) and player 2, simultaneously and independently, chooses a number \(j\) (the column). Then player 1 receives the amount \(a_{ij}=u_{1}(x_{i},y_{j})\) and player 2 receives the amount \(b_{ij}=u_{2}(x_{i},y_{j})\). The game determined by the matrices \(A\) and \(B\) will be denoted by \(\Gamma (A,B)\). If the noncooperative two-person game \(\Gamma\) satisfies \(u_{1}(x,y)=-u_{2}(x,y)\) for all \(x\in X_{1}\) and \(y\in X_{2}\), then \(\Gamma\) reduces to the two-person zero-sum game discussed before.
Example. Consider a bimatrix game given by
\[A=\left [\begin{array}{cc}
4 & 0\\ 0 & 1
\end{array}\right ]\mbox{ and }B=\left [\begin{array}{cc}
1 & 0\\ 0 & 4
\end{array}\right ].\]
The husband (player 1) and the wife (player 2) each choose one of two evening entertainments: the football match \((x_{1},y_{1})\) or the theater \((x_{2},y_{2})\). If they have different desires, \((x_{1},y_{2})\) or \((x_{2},y_{1})\), then they stay at home. The husband prefers the football match, while his wife prefers the theater. However, it is more important for them to spend the evening together than to be alone at the entertainment. \(\sharp\)
\begin{equation}{\label{b}}\tag{B}\mbox{}\end{equation}
Optimality Principles.
Suppose that \({\bf x}=(x_{1},\cdots ,x_{i-1},x_{i},x_{i+1},\cdots ,x_{n})\) is an arbitrary policy in the noncooperative game \(\Gamma\), where \(x_{i}\) is a strategy of player \(i\). We construct another policy \((x_{1},\cdots ,x_{i-1},\widehat{x}_{i},x_{i+1},\cdots ,x_{n})\) that differs from \({\bf x}\) only in the strategy of player \(i\), and denote it by \(({\bf x}|\widehat{x}_{i})\). It is evident that if \(x_{i}=\widehat{x}_{i}\), then \(({\bf x}|\widehat{x}_{i})={\bf x}\). The policy \({\bf x}^{*}=(x_{1}^{*},\cdots ,x_{i}^{*},\cdots ,x_{n}^{*})\) is called a Nash equilibrium when, for all \(x_{i}\in X_{i}\) and \(i=1,\cdots ,n\), we have
\begin{equation}{\label{peteq6}}\tag{1}
u_{i}({\bf x}^{*})\geq u_{i}({\bf x}^{*}|x_{i}).
\end{equation}
The strategy \(x_{i}^{*}\in X_{i}\) is called an {\bf equilibrium strategy} if it appears in at least one Nash equilibrium.
For the noncooperative two-person game \(\Gamma =(X_{1},X_{2},u_{1},u_{2})\), the policy \((x^{*},y^{*})\) is a Nash equilibrium if and only if the following inequalities
\[u_{1}(x,y^{*})\leq u_{1}(x^{*},y^{*})\mbox{ and }u_{2}(x^{*},y)\leq u_{2}(x^{*},y^{*})\]
hold for all \(x\in X_{1}\) and \(y\in X_{2}\). In particular, for the bimatrix game \(\Gamma (A,B)\), the pair \((i^{*},j^{*})\) is a Nash equilibrium if and only if the following inequalities
\[a_{ij^{*}}\leq a_{i^{*}j^{*}}\mbox{ and }b_{i^{*}j}\leq b_{i^{*}j^{*}}\]
hold for all the rows \(i\) and columns \(j\). Recall that for the two-person zero-sum game \(\Gamma =(X_{1},X_{2},H)\), the pair \((x^{*},y^{*})\in X_{1}\times X_{2}\) is a Nash equilibrium if and only if
\[H(x,y^{*})\leq H(x^{*},y^{*})\leq H(x^{*},y)\]
for all \(x\in X_{1}\) and \(y\in X_{2}\).
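For a finite bimatrix game, the pure-strategy equilibrium conditions above can be checked by brute force over all pairs of rows and columns. A minimal Python sketch (the function name is ours; the matrices are those of the earlier example):

```python
# Brute-force search for pure Nash equilibria of Gamma(A, B):
# (i*, j*) is an equilibrium iff a_{i j*} <= a_{i* j*} for every row i
# and b_{i* j} <= b_{i* j*} for every column j.
def pure_nash_equilibria(A, B):
    m, n = len(A), len(A[0])
    equilibria = []
    for i_star in range(m):
        for j_star in range(n):
            row_ok = all(A[i][j_star] <= A[i_star][j_star] for i in range(m))
            col_ok = all(B[i_star][j] <= B[i_star][j_star] for j in range(n))
            if row_ok and col_ok:
                equilibria.append((i_star, j_star))
    return equilibria

# The "battle of the sexes" matrices from the example above:
A = [[4, 0], [0, 1]]
B = [[1, 0], [0, 4]]
print(pure_nash_equilibria(A, B))  # -> [(0, 0), (1, 1)]
```

Both coordinated outcomes (football, football) and (theater, theater) are equilibria, while the mismatched outcomes are not.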
Let \(S\subseteq N\) be a subset of players and let \({\bf x}=(x_{1},\cdots ,x_{n})\) be a policy in the noncooperative game \(\Gamma\). We denote by \(({\bf x}|\widehat{\bf x}^{S})\) the policy which is obtained from the original policy \({\bf x}\) by replacing the strategies \(x_{i}\) with the strategies \(\widehat{x}_{i}\in X_{i}\) for \(i\in S\). In other words, the players appearing in the coalition \(S\) replace their strategies \(x_{i}\) by \(\widehat{x}_{i}\). If \({\bf x}^{*}\) is a Nash equilibrium, then (\ref{peteq6}) does not necessarily imply
\begin{equation}{\label{petrq7}}\tag{2}
u_{i}({\bf x}^{*})\geq u_{i}({\bf x}^{*}|\widehat{\bf x}^{S})\mbox{ for all }i\in S.
\end{equation}
Therefore, the policy \({\bf x}^{*}\) is called a strong equilibrium when, for any coalition \(S\subseteq N\) and \({\bf x}^{S}\in\prod_{i\in S}X_{i}\), there is a player \(i_{0}\in S\) such that the following inequality is satisfied
\begin{equation}{\label{peteq8}}\tag{3}
u_{i_{0}}({\bf x}^{*})>u_{i_{0}}({\bf x}^{*}|{\bf x}^{S}).
\end{equation}
Any strong equilibrium is a Nash equilibrium.
Consider the set of vectors \(\{{\bf u}({\bf x})\}=\{(u_{1}({\bf x}),\cdots ,u_{n}({\bf x}))\}\) with \({\bf x}\in {\bf X}=\prod_{i=1}^{n}X_{i}\), i.e., the set of possible values of the vector payoffs over all possible policies \({\bf x}\in {\bf X}\). The policy \(\widehat{{\bf x}}\) in the noncooperative game \(\Gamma\) is called a Pareto optimal policy if and only if there is no policy \({\bf x}\in {\bf X}\) for which the following inequalities hold:
\[u_{i}({\bf x})\geq u_{i}(\widehat{{\bf x}})\mbox{ for all }i\in N\mbox{ and }
u_{i_{0}}({\bf x})>u_{i_{0}}(\widehat{{\bf x}})\mbox{ for at least one }i_{0}\in N.\]
The set of all Pareto optimal policies will be denoted by \({\bf X}^{P}\). Conceptually, membership of the policy \(\widehat{{\bf x}}\) in the set \({\bf X}^{P}\) means that there is no other policy \({\bf x}\) which every player would prefer to \(\widehat{{\bf x}}\). Note the difference between the two optimality principles: agreement on a fixed Nash equilibrium makes it unprofitable for any individual player to deviate from it, whereas under a Pareto optimal policy a deviating player can occasionally obtain an essentially greater payoff. Of course, a strong equilibrium is also a Pareto optimal policy.
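Pareto optimality of a policy can be tested by comparing payoff vectors directly. A small Python sketch (the function name and the encoding of policies as a list of payoff pairs are ours), applied to the pure policies of the earlier example:

```python
# A payoff vector is Pareto optimal iff no other payoff vector weakly
# improves every player's payoff and strictly improves at least one.
def pareto_optimal(payoffs):
    """Return the indices of the non-dominated payoff vectors."""
    def dominates(u, v):  # u Pareto-dominates v
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))
    return [k for k, v in enumerate(payoffs)
            if not any(dominates(u, v) for u in payoffs)]

# Pure policies of the "battle of the sexes" game, listed as (u1, u2)
# in the order (1,1), (1,2), (2,1), (2,2):
payoffs = [(4, 1), (0, 0), (0, 0), (1, 4)]
print(pareto_optimal(payoffs))  # -> [0, 3]
```

Only the two coordinated outcomes survive: each of the stay-at-home outcomes is dominated by both of them.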
\begin{equation}{\label{c}}\tag{C}\mbox{}\end{equation}
Mixed Extension.
For simplicity, we now assume that the sets of strategies \(X_{i}\) are finite. Let \(\Gamma =(N,\{X_{i}\}_{i\in N},\{u_{i}\}_{i\in N})\) be an arbitrary finite noncooperative game. Suppose that player \(i\) in the game \(\Gamma\) has \(m_{i}=|X_{i}|\) strategies. We denote by \(\mathbb{P}_{i}\) an arbitrary mixed strategy of player \(i\), i.e., some probability distribution over the set of strategies \(X_{i}\). Also, we denote by \(\mathbb{P}_{i}(x_{i})\) the probability that \(\mathbb{P}_{i}\) assigns to the strategy \(x_{i}\in X_{i}\). The set of all mixed strategies of player \(i\) will be denoted by \(\widehat{X}_{i}\).
Suppose that each player \(i\in N\) uses the mixed strategy \(\mathbb{P}_{i}\). The probability that a policy \({\bf x}=(x_{1},\cdots ,x_{n})\) may arise is equal to the product of the probabilities of choosing its component strategies, i.e.
\[\mathbb{P}({\bf x})=\mathbb{P}_{1}(x_{1})\cdot\mathbb{P}_{2}(x_{2})\cdots\mathbb{P}_{n}(x_{n}).\]
The above formula defines the probability distribution over the set of all policies \({\bf X}=\prod_{i=1}^{n}X_{i}\) determined by the mixed strategies \(\mathbb{P}_{1},\mathbb{P}_{2},\cdots ,\mathbb{P}_{n}\). The \(n\)-tuple \(\mathbb{P}=(\mathbb{P}_{1},\cdots ,\mathbb{P}_{n})\) is also called a policy in mixed strategies. Under \(\mathbb{P}\), the payoff of each player becomes a discrete random variable, so we take its expectation as the payoff. For \(i\in N\), the expected payoff of player \(i\) is given by
\[\Lambda_{i}(\mathbb{P})\equiv\sum_{{\bf x}\in {\bf X}}u_{i}({\bf x})\mathbb{P}({\bf x})=\sum_{x_{1}\in X_{1}}\cdots\sum_{x_{n}\in X_{n}}u_{i}(x_{1},\cdots ,x_{n})\cdot\mathbb{P}_{1}(x_{1})\cdots\mathbb{P}_{n}(x_{n}).\]
The game \(\widehat{\Gamma}=(N,\{\widehat{X}_{i}\}_{i\in N},\{\Lambda_{i}\}_{i\in N})\) is called a mixed extension of the game \(\Gamma\).
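The expected payoff \(\Lambda_{i}(\mathbb{P})\) can be computed directly from its definition by summing over all pure policies. A Python sketch for finite games (the dictionary encoding of the payoff table and the function name are our illustration):

```python
from itertools import product

# Lambda_i(P) = sum over all policies x of u_i(x) * prod_k P_k(x_k).
def expected_payoff(u_i, mixed):
    """u_i: dict mapping a pure-policy tuple to player i's payoff;
    mixed: one probability vector per player."""
    total = 0.0
    for x in product(*(range(len(p)) for p in mixed)):
        prob = 1.0
        for k, x_k in enumerate(x):
            prob *= mixed[k][x_k]  # independent choices multiply
        total += u_i[x] * prob
    return total

# Player 1's payoffs in the earlier 2x2 example, both players mixing 50/50:
u1 = {(0, 0): 4, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(expected_payoff(u1, [[0.5, 0.5], [0.5, 0.5]]))  # -> 1.25
```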
We introduce the notation
\begin{equation}{\label{peteq9}}\tag{4}
\Lambda_{i}(\mathbb{P}|\widehat{x}_{j})=\sum_{x_{1}\in X_{1}}\cdots\sum_{x_{j-1}
\in X_{j-1}}\sum_{x_{j+1}\in X_{j+1}}\cdots\sum_{x_{n}\in X_{n}}
u_{i}({\bf x}|\widehat{x}_{j})\cdot\prod_{k\neq j}\mathbb{P}_{k}(x_{k}).
\end{equation}
Let \(\widehat{\mathbb{P}}_{j}\) be an arbitrary mixed strategy for player \(j\) in the game \(\Gamma\). Multiplying (\ref{peteq9}) by \(\widehat{\mathbb{P}}_{j}(\widehat{x}_{j})\) and summing over all \(\widehat{x}_{j}\in X_{j}\), we obtain
\[\sum_{\widehat{x}_{j}\in X_{j}}\widehat{\mathbb{P}}_{j}(\widehat{x}_{j})\cdot
\Lambda_{i}(\mathbb{P}|\widehat{x}_{j})=\Lambda_{i}(\mathbb{P}|\widehat{\mathbb{P}}_{j}).\]
If the inequality \(\Lambda_{i}(\mathbb{P}|\widehat{x}_{j})\leq a\) holds for any strategy \(\widehat{x}_{j}\in X_{j}\) of player \(j\), then the inequality \(\Lambda_{i}(\mathbb{P}|\widehat{\mathbb{P}}_{j})\leq a\) holds for any mixed strategy \(\widehat{\mathbb{P}}_{j}\). The policy \(\mathbb{P}^{*}\) is called a Nash equilibrium of the game \(\Gamma\) in mixed strategies when, for any player \(i=1,\cdots ,n\) and any mixed strategy \(\mathbb{P}_{i}\), the following inequality holds
\[\Lambda_{i}(\mathbb{P}^{*}|\mathbb{P}_{i})\leq\Lambda_{i}(\mathbb{P}^{*}).\]
For the bimatrix game \(\Gamma (A,B)\) we may define the respective sets of mixed strategies \({\bf X}_{1}\) and \({\bf X}_{2}\) for players 1 and 2 to be
\[{\bf X}_{1}=\left\{{\bf x}\in\mathbb{R}^{m}:\sum_{i=1}^{m}x_{i}=1\mbox{ and }{\bf x}\geq {\bf 0}\right\}\mbox{ and }
{\bf X}_{2}=\left\{{\bf y}\in\mathbb{R}^{n}:\sum_{j=1}^{n}y_{j}=1\mbox{ and }{\bf y}\geq {\bf 0}\right\}.\]
We also define the payoffs \(\Lambda_{1}\) and \(\Lambda_{2}\) at \(({\bf x},{\bf y})\) in mixed strategies to be the expectations
\[\Lambda_{1}({\bf x},{\bf y})\equiv {\bf x}^{\top}A{\bf y}\mbox{ and }
\Lambda_{2}({\bf x},{\bf y})\equiv {\bf x}^{\top}B{\bf y}\]
for \({\bf x}\in {\bf X}_{1}\) and \({\bf y}\in {\bf X}_{2}\). Thus we have formally constructed a mixed extension \(\widehat{\Gamma}(A,B)\) of the game \(\Gamma (A,B)\), i.e., the noncooperative two-person game \(\widehat{\Gamma}(A,B)=({\bf X}_{1},{\bf X}_{2},\Lambda_{1},\Lambda_{2})\).
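In the bimatrix case the expected payoffs reduce to the single bilinear forms above. A short numpy sketch (the function name is ours), agreeing with the general-sum computation for the 50/50 mixture:

```python
import numpy as np

def mixed_payoffs(A, B, x, y):
    """Lambda_1(x, y) = x^T A y and Lambda_2(x, y) = x^T B y."""
    return x @ A @ y, x @ B @ y

A = np.array([[4.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])
x = np.array([0.5, 0.5])  # player 1 mixes 50/50
y = np.array([0.5, 0.5])  # player 2 mixes 50/50
l1, l2 = mixed_payoffs(A, B, x, y)
print(float(l1), float(l2))  # 1.25 1.25
```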
\begin{equation}{\label{d}}\tag{D}\mbox{}\end{equation}
Existence of Nash Equilibrium.
In the theory of zero-sum games, the continuity of the payoff function and the compactness of the strategy sets sufficed for the existence of an equilibrium in mixed strategies. It turns out that these conditions also suffice for the existence of a Nash equilibrium in mixed strategies in the case of a noncooperative two-person game.
Theorem. Let \(\Gamma (A,B)\) be an \(m\times n\) bimatrix game. Then, there are mixed strategies \({\bf x}^{*}\in {\bf X}_{1}\) and \({\bf y}^{*}\in {\bf X}_{2}\) for players 1 and 2, respectively, such that the pair \(({\bf x}^{*},{\bf y}^{*})\) is a Nash equilibrium. \(\sharp\)
Theorem. Let \(\Gamma =(X_{1},X_{2},u_{1},u_{2})\) be a noncooperative two-person game. Suppose that the following conditions are satisfied.
- The strategy sets \(X_{1}\subseteq\mathbb{R}^{m}\) and \(X_{2}\subseteq\mathbb{R}^{n}\) are convex and compact subsets and the set \(X_{1}\times X_{2}\) has a nonempty interior.
- The payoff functions \(u_{1}(x,y)\) and \(u_{2}(x,y)\) are continuous in \(X_{1}\times X_{2}\) with \(u_{1}(x,y)\) being concave in \(x\) at every fixed \(y\) and \(u_{2}(x,y)\) being concave in \(y\) at every fixed \(x\).
Then, the noncooperative game \(\Gamma\) has the Nash equilibrium \((x^{*},y^{*})\). \(\sharp\)
Theorem. Let \(\Gamma =(X_{1},X_{2},u_{1},u_{2})\) be a noncooperative two-person game. Suppose that the following conditions are satisfied.
- The functions \(u_{1}\) and \(u_{2}\) are continuous on \(X_{1}\times X_{2}\).
- The strategy sets \(X_{1}\) and \(X_{2}\) are compact subsets in finite-dimensional Euclidean spaces.
Then, the two-person noncooperative game \(\Gamma\) has a Nash equilibrium \(\mathbb{P}^{*}=(\mathbb{P}_{1}^{*},\mathbb{P}_{2}^{*})\) in mixed strategies. \(\sharp\)
Theorem. Given any finite \(n\)-person noncooperative game \(\Gamma\), there exists at least one Nash equilibrium in mixed strategies. \(\sharp\)
\begin{equation}{\label{e}}\tag{E}\mbox{}\end{equation}
Properties of Optimal Solutions.
Proposition. In order for a mixed strategy situation \((\mu^{*},\nu^{*})\) in the game \(\Gamma =(X_{1},X_{2},u_{1},u_{2})\) to be an equilibrium, it is necessary and sufficient that, for all pure strategies \(x\in X_{1}\) and \(y\in X_{2}\) of the players, the following inequalities be satisfied
\[\Lambda_{1}(x,\nu^{*})\leq \Lambda_{1}(\mu^{*},\nu^{*})\mbox{ and }
\Lambda_{2}(\mu^{*},y)\leq \Lambda_{2}(\mu^{*},\nu^{*}).\] \(\sharp\)
For the bimatrix \((m\times n)\) game \(\Gamma (A,B)\) the above inequalities become
\[\Lambda_{1}(i,{\bf y}^{*})\equiv {\bf a}_{i}{\bf y}^{*}\leq
{\bf x}^{*}A{\bf y}^{*}=\Lambda_{1}({\bf x}^{*},{\bf y}^{*})\mbox{ and }
\Lambda_{2}({\bf x}^{*},j)\equiv {\bf x}^{*}{\bf b}_{j}\leq
{\bf x}^{*}B{\bf y}^{*}=\Lambda_{2}({\bf x}^{*},{\bf y}^{*}),\]
where \({\bf a}_{i}\) (resp. \({\bf b}_{j}\)) are rows (resp. columns) of the matrix \(A\) (resp. \(B\)) for \(i=1,\cdots ,m\) and \(j=1,\cdots ,n\).
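The proposition above reduces the equilibrium check to pure deviations only, which is easy to test numerically. A hedged numpy sketch (the function name and tolerance are ours):

```python
import numpy as np

# (x, y) is a mixed equilibrium iff no pure deviation helps:
# max_i (A y)_i <= x^T A y and max_j (x^T B)_j <= x^T B y.
def is_mixed_equilibrium(A, B, x, y, tol=1e-9):
    v1, v2 = x @ A @ y, x @ B @ y
    return bool((A @ y).max() <= v1 + tol and (x @ B).max() <= v2 + tol)

A = np.array([[4.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])
x_star = np.array([0.8, 0.2])  # candidate completely mixed strategies
y_star = np.array([0.2, 0.8])
print(is_mixed_equilibrium(A, B, x_star, y_star))  # -> True
```

Here every pure row yields player 1 exactly \(0.8\) against \({\bf y}^{*}\), and every pure column yields player 2 exactly \(0.8\) against \({\bf x}^{*}\), so neither inequality is violated.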
\begin{equation}{\label{petp15}}\tag{5}\mbox{}\end{equation}
Proposition \ref{petp15}. Let \(\Gamma (A,B)\) be a bimatrix \((m\times n)\) game and let \(({\bf x},{\bf y})\in Z(\Gamma )\) be a Nash equilibrium in mixed strategies. Then the equations
\[\Lambda_{1}(i,{\bf y})=\Lambda_{1}({\bf x},{\bf y})\mbox{ and }\Lambda_{2}({\bf x},j)=\Lambda_{2}({\bf x},{\bf y})\]
hold for all \(i\in M_{\bf x}\) and \(j\in N_{\bf y}\), where \(M_{\bf x}\) and \(N_{\bf y}\) are the spectra of the mixed strategies \({\bf x}\) and \({\bf y}\), respectively. \(\sharp\)
Strategy \({\bf x}\) (resp. \({\bf y}\)) of player 1 (resp. player 2) is completely mixed if its spectrum \(M_{\bf x}=M\) (resp. \(N_{\bf y}=N\)).
Proposition \ref{petp15} provides a means of finding equilibrium strategies of players in the game \(\Gamma (A,B)\). Indeed, suppose we are looking for an equilibrium \(({\bf x},{\bf y})\) with the strategy spectra \(M_{\bf x}\) and \(N_{\bf y}\) being given. The optimal strategies must then satisfy a system of linear equations
\begin{equation}{\label{peteq12}}\tag{6}
{\bf y}{\bf a}_{i}^{T}=v_{1}\mbox{ and }{\bf x}{\bf b}_{j}^{T}=v_{2},
\end{equation}
where \(i\in M_{\bf x}\), \(j\in N_{\bf y}\), and \(v_{1}\) and \(v_{2}\) are some numbers. If, however, the equilibrium \(({\bf x},{\bf y})\) is completely mixed, then the system (\ref{peteq12}) becomes
\[A{\bf y}=v_{1}{\bf u}\mbox{ and }{\bf x}B=v_{2}{\bf w},\]
where \({\bf u}=(1,\cdots ,1)\) and \({\bf w}=(1,\cdots ,1)\) are the vectors of suitable dimensions composed of unit elements, and the numbers \(v_{1}={\bf x}A{\bf y}\) and \(v_{2}={\bf x}B{\bf y}\) are the players’ payoffs in the situation \(({\bf x},{\bf y})\).
Proposition. Let \(\Gamma (A,B)\) be a bimatrix \((m\times m)\) game, where \(A\) and \(B\) are nonsingular matrices. If the game \(\Gamma\) has a completely mixed equilibrium, then it is unique and is defined by the formulas
\begin{equation}{\label{peteq14}}\tag{7}
{\bf x}=v_{2}{\bf u}B^{-1}\mbox{ and }{\bf y}=v_{1}A^{-1}{\bf u},
\mbox{ where }v_{1}=1/({\bf u}A^{-1}{\bf u})\mbox{ and }v_{2}=1/({\bf u}B^{-1}{\bf u}).
\end{equation}
Conversely, if \({\bf x}\geq {\bf 0}\) and \({\bf y}\geq {\bf 0}\) for the vectors \({\bf x},{\bf y}\in\mathbb{R}^{m}\) defined by (\ref{peteq14}), then the pair \(({\bf x},{\bf y})\) forms an equilibrium in mixed strategies in the game \(\Gamma (A,B)\) with the equilibrium payoff vector \((v_{1},v_{2})\). \(\sharp\)
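Formula (7) can be evaluated directly. A numpy sketch (the function name is ours), using the battle-of-the-sexes matrices from the first example, for which both payoff matrices happen to be nonsingular:

```python
import numpy as np

# Completely mixed equilibrium via (7):
# v1 = 1/(u A^{-1} u), v2 = 1/(u B^{-1} u), y = v1 A^{-1} u, x = v2 u B^{-1}.
def completely_mixed_equilibrium(A, B):
    u = np.ones(A.shape[0])
    A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)
    v1 = 1.0 / (u @ A_inv @ u)
    v2 = 1.0 / (u @ B_inv @ u)
    return v2 * (u @ B_inv), v1 * (A_inv @ u), v1, v2

A = np.array([[4.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])
x, y, v1, v2 = completely_mixed_equilibrium(A, B)
print(x, y, v1, v2)  # x = [0.8 0.2], y = [0.2 0.8], v1 = v2 = 0.8
```

Both \({\bf x}\) and \({\bf y}\) come out nonnegative, so by the converse part of the proposition the pair is an equilibrium with payoff vector \((0.8,0.8)\).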
Proposition. Let \(\Gamma (A,B)\) be a bimatrix \((m\times n)\) game. Then the following assertion is true for almost all \((m\times n)\) games (except for no more than a countable set of games): Nash equilibria in mixed strategies that are not equilibria in the original game are not Pareto optimal in the mixed extension. \(\sharp\)
\begin{equation}{\label{f}}\tag{F}\mbox{}\end{equation}
Equilibrium in Joint Mixed Strategies.
A joint mixed strategy of the players is a probability distribution over the set of all possible pairs \((i,j)\) (the situations in pure strategies), which is not necessarily generated by the players’ independent random choices of pure strategies. Such strategies can be realized by a mediator before the game starts. Denote by \(M\) a joint mixed strategy in the game \(\Gamma (A,B)\). If this strategy is played by players 1 and 2, their expected payoffs \(K_{1}(M)\) and \(K_{2}(M)\), respectively, are
\[K_{1}(M)=\sum_{i,j}a_{ij}\mu_{ij}\mbox{ and }K_{2}(M)=\sum_{i,j}b_{ij}\mu_{ij},\]
where \(A=(a_{ij})\) and \(B=(b_{ij})\) are the players’ payoff matrices, \(M=(\mu_{ij})\), and \({\bf u}M{\bf w}=1\), \(M\geq 0\), \({\bf u}=(1,\cdots, 1)\in \mathbb{R}^{m}\), \({\bf w}=(1,\cdots ,1)\in\mathbb{R}^{n}\). Geometrically, the set of vector payoffs corresponding to the joint mixed strategies is the convex hull of the set of possible vector payoffs in pure strategies.
Definition. For the bimatrix \((m\times n)\) game \(\Gamma (A,B)\), denote by \(M=(\mu_{ij})\) the joint probability distribution over the pairs \((i,j)\) for \(i=1,\cdots ,m\) and \(j=1,\cdots ,n\). Denote by \(\mu_{i}(j)\) the conditional probability of realizing strategy \(j\) provided strategy \(i\) has been realized. Similarly, denote by \(\nu_{j}(i)\) the conditional probability of realizing strategy \(i\) provided strategy \(j\) has been realized. Then
\[\mu_{i}(j)\equiv\left\{\begin{array}{ll}
\mu_{ij}/\sum_{j=1}^{n}\mu_{ij}, & \mbox{if }\sum_{j=1}^{n}\mu_{ij}\neq 0\\
0, & \mbox{if }\sum_{j=1}^{n}\mu_{ij}=0;
\end{array}\right .\]
\[\nu_{j}(i)\equiv\left\{\begin{array}{ll}
\mu_{ij}/\sum_{i=1}^{m}\mu_{ij}, & \mbox{if }\sum_{i=1}^{m}\mu_{ij}\neq 0\\
0, & \mbox{if }\sum_{i=1}^{m}\mu_{ij}=0.
\end{array}\right .\]
We say that \(M^{*}=(\mu_{ij}^{*})\) is an equilibrium in joint mixed strategies in the game \(\Gamma (A,B)\) if the inequalities
\begin{equation}{\label{peteq17}}\tag{8}
\sum_{j=1}^{n}a_{ij}\mu_{i}^{*}(j)\geq\sum_{j=1}^{n}a_{i'j}\mu_{i}^{*}(j)
\mbox{ and }
\sum_{i=1}^{m}b_{ij}\nu_{j}^{*}(i)\geq\sum_{i=1}^{m}b_{ij'}\nu_{j}^{*}(i)
\end{equation}
hold for all \(i,i'\in\{1,2,\cdots ,m\}\) and \(j,j'\in\{1,2,\cdots ,n\}\). \(\sharp\)
The game \(\Gamma (A,B)\) in joint mixed strategies can be interpreted as follows. Suppose the players have reached an agreement on joint strategy \(M^{*}=(\mu_{ij})\) and a chance device has yielded the pair \((i,j)\), i.e., player 1 (resp. player 2) has received the strategy number \(i\) (resp. \(j\)). Note that each player knows only his (her) own course of action. In general, he (she) may not agree on the realization \(i\) (resp. \(j\)) of the joint mixed strategy and choose the strategy \(i'\) (resp. \(j'\)). If \(M^{*}\) is an equilibrium, then it is disadvantageous for each player to deviate from the proposed realization \(i\) (resp. \(j\)), which follows from (\ref{peteq17}), where the left-hand sides of the inequalities coincide with the expected payoff to player 1 (resp. player 2) provided he (she) agrees on the realization \(i\) (resp. \(j\)).
Suppose the strategy \(i\) of player 1 is such that \(\mu_{ij}=0\) for all \(j=1,\cdots ,n\). Then the first of the inequalities (\ref{peteq17}) is trivially satisfied. Similarly, if \(\mu_{ij}=0\) for all \(i=1,\cdots ,m\), then the second inequality in (\ref{peteq17}) is trivially satisfied. We substitute the expressions for \(\mu_{i}(j)\) and \(\nu_{j}(i)\) in terms of \(\mu_{ij}\) into (\ref{peteq17}). It follows that the necessary and sufficient condition for the situation \(M^{*}=(\mu_{ij}^{*})\) to be an equilibrium is that the inequalities
\[\sum_{j=1}^{n}a_{ij}\mu_{ij}^{*}\geq\sum_{j=1}^{n}a_{i'j}\mu_{ij}^{*}\mbox{ and }
\sum_{i=1}^{m}b_{ij}\mu_{ij}^{*}\geq\sum_{i=1}^{m}b_{ij'}\mu_{ij}^{*}\]
hold for all \(i,i'\in\{1,\cdots ,m\}\) and \(j,j'\in\{1,\cdots ,n\}\), together with \(\mu_{ij}^{*}\geq 0\) and \(\sum_{i=1}^{m}\sum_{j=1}^{n}\mu_{ij}^{*}=1\).
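These inequalities can be checked mechanically for any candidate \(M^{*}\). A Python sketch (the function name and the mediator distribution below are our illustration), applied to the battle-of-the-sexes matrices with a mediator who flips a fair coin between the two coordinated outcomes:

```python
# Equilibrium check in joint mixed strategies:
# for all i, i': sum_j a_{ij} mu*_{ij} >= sum_j a_{i'j} mu*_{ij}, and
# for all j, j': sum_i b_{ij} mu*_{ij} >= sum_i b_{ij'} mu*_{ij}.
def is_joint_equilibrium(A, B, M, tol=1e-9):
    m, n = len(A), len(A[0])
    for i in range(m):
        lhs = sum(A[i][j] * M[i][j] for j in range(n))
        if any(sum(A[i2][j] * M[i][j] for j in range(n)) > lhs + tol
               for i2 in range(m)):
            return False
    for j in range(n):
        lhs = sum(B[i][j] * M[i][j] for i in range(m))
        if any(sum(B[i][j2] * M[i][j] for i in range(m)) > lhs + tol
               for j2 in range(n)):
            return False
    return True

A = [[4, 0], [0, 1]]
B = [[1, 0], [0, 4]]
M = [[0.5, 0.0], [0.0, 0.5]]  # fair coin between (1,1) and (2,2)
print(is_joint_equilibrium(A, B, M))  # -> True
```

Note that this coin-flip distribution is not a product of independent mixed strategies, yet neither player gains by disobeying the mediator's recommendation.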
Denote by \(Z_{c}(\Gamma )\) the set of equilibria in joint mixed strategies.
Proposition. We have the following properties.
(i) The set \(Z_{c}(\Gamma )\) of equilibria in joint mixed strategies in the bimatrix \((m\times n)\) game \(\Gamma (A,B)\) is a nonempty convex compact set in the space \(\mathbb{R}^{mn}\).
(ii) If \((x,y)\) is a situation in mixed strategies in the game \(\Gamma (A,B)\), then the joint mixed strategy situation \(M=(\mu_{ij})\) generated by \((x,y)\) is an equilibrium if and only if \((x,y)\) is a Nash equilibrium in mixed strategies in the game \(\Gamma (A,B)\). \(\sharp\)
\begin{equation}{\label{g}}\tag{G}\mbox{}\end{equation}
Approximate Nash Equilibrium.
Of course, for many noncooperative games, a Nash equilibrium does not exist. In this case, we are interested in obtaining a so-called \(\epsilon\)-Nash equilibrium. Given a noncooperative game \(\Gamma =(N,\{X_{i}\}_{i\in N},\{u_{i}\}_{i\in N})\) and \(\epsilon >0\), we say that \({\bf x}^{*}\in {\bf X}\) is an \(\epsilon\)-Nash equilibrium when
\[u_{i}({\bf x}^{*}|x_{i})\leq u_{i}({\bf x}^{*})+\epsilon\]
for all \(x_{i}\in X_{i}\) and for all \(i\in N\). This means that a unilateral deviation by a player cannot improve the payoff by more than \(\epsilon\). For any \({\bf x}\in {\bf X}\), we also write
\[{\bf x}_{(-i)}=(x_{j})_{j\in N\setminus\{i\}}\in\prod_{j\in N\setminus\{i\}}X_{j}\mbox{ and }{\bf x}=(x_{i},{\bf x}_{(-i)}).\]
In this case, we see that \({\bf x}^{*}\in {\bf X}\) is an \(\epsilon\)-Nash equilibrium if and only if
\[u_{i}(x_{i},{\bf x}_{(-i)}^{*})\leq u_{i}({\bf x}^{*})+\epsilon\]
for all \(x_{i}\in X_{i}\) and for all \(i\in N\).
Given \(\epsilon >0\), for each \(i\in N\), the \(\epsilon\)-best response set-valued function \(B_{i,\epsilon}:\prod_{j\in N\setminus\{i\}}X_{j}\rightarrow 2^{X_{i}}\) is defined by
\[B_{i,\epsilon}({\bf x}_{(-i)})=\left\{x_{i}\in X_{i}:u_{i}(x_{i},{\bf x}_{(-i)})\geq\sup_{t_{i}\in X_{i}}
u_{i}(t_{i},{\bf x}_{(-i)})-\epsilon\right\}.\]
The aggregate \(\epsilon\)-best response set-valued function \(B_{\epsilon}:{\bf X}\rightarrow 2^{\bf X}\) is defined by
\[B_{\epsilon}({\bf x})=\prod_{i\in N}B_{i,\epsilon}({\bf x}_{(-i)}).\]
We can see that \({\bf x}^{*}\in B_{\epsilon}({\bf x}^{*})\) if and only if \({\bf x}^{*}\) is an \(\epsilon\)-Nash equilibrium. Therefore, if the set-valued function \(B_{\epsilon}\) has a fixed point, then we have an \(\epsilon\)-Nash equilibrium.
Let \(V\) be a real Banach space and let \(X\) be a subset of \(V\). Given a set-valued function \(F:X\rightarrow 2^{X}\) and \(\epsilon >0\),
we say that \(x_{0}\in X\) is an \(\epsilon\)-fixed point of \(F\) if and only if
\[d(x_{0},F(x_{0}))=\inf_{x\in F(x_{0})}\parallel x-x_{0}\parallel\leq\epsilon .\]
\begin{equation}{\label{gat71}}\tag{9}\mbox{}\end{equation}
Theorem \ref{gat71}. Let \(\Gamma =(N,\{X_{i}\}_{i\in N},\{u_{i}\}_{i\in N})\) be a noncooperative game such that the following conditions are satisfied.
- For each \(i\in N\), the strategy set \(X_{i}\) is endowed with a metric \(d_{i}\).
- The payoff functions \(\{u_{i}\}_{i\in N}\) are uniformly continuous on \({\bf X}\), which is also endowed with a metric \(d\) defined by
\begin{equation}{\label{ga65}}\tag{10}
d({\bf x},{\bf y})=\sum_{i\in N}d_{i}(x_{i},y_{i})
\end{equation}
for all \({\bf x},{\bf y}\in {\bf X}\).
- For each \(\epsilon >0\) and \(\delta >0\), the aggregate \(\epsilon\)-best response set-valued function \(B_{\epsilon}\) has at least one \(\delta\)-fixed point.
Then, there exists at least one \(\epsilon\)-Nash equilibrium.
Proof. For \(\epsilon >0\), by the second assumption, there exists \(\eta >0\) such that \(d({\bf x},{\bf y})<\eta\) implies \(|u_{i}({\bf x})-u_{i}({\bf y})|<\epsilon /2\) for each \(i\in N\). By the third assumption, there exists an \(\eta /2\)-fixed point \({\bf x}^{*}\) of \(B_{\epsilon /2}\). In this case, by the definition of infimum, there also exists \(\hat{\bf x}\in B_{\epsilon /2}({\bf x}^{*})\) satisfying \(d({\bf x}^{*},\hat{\bf x})<\eta\), which implies \(d_{i}(x_{i}^{*},\hat{x}_{i})<\eta\) for each \(i\in N\). Therefore, we obtain
\[d\left ((x_{i}^{*},{\bf x}_{(-i)}^{*}),(\hat{x}_{i},{\bf x}_{(-i)}^{*})\right )
=d_{i}(x_{i}^{*},\hat{x}_{i})+\sum_{j\neq i}d_{j}(x_{j}^{*},x_{j}^{*})=d_{i}(x_{i}^{*},\hat{x}_{i})<\eta ,\]
which implies
\begin{equation}{\label{ga66}}\tag{11}
u_{i}\left (x_{i}^{*},{\bf x}_{(-i)}^{*}\right )\geq u_{i}\left (\hat{x}_{i},{\bf x}_{(-i)}^{*}\right )-\frac{\epsilon}{2}
\end{equation}
for all \(i\in N\). On the other hand, \(\hat{\bf x}\in B_{\epsilon /2}({\bf x}^{*})\) implies
\begin{equation}{\label{ga67}}\tag{12}
u_{i}\left (\hat{x}_{i},{\bf x}_{(-i)}^{*}\right )\geq\sup_{t_{i}\in X_{i}}u_{i}\left (t_{i},{\bf x}_{(-i)}^{*}\right )-\frac{\epsilon}{2}
\end{equation}
for all \(i\in N\). Combining (\ref{ga66}) and (\ref{ga67}), we obtain
\[u_{i}\left (x_{i}^{*},{\bf x}_{(-i)}^{*}\right )\geq\sup_{x_{i}\in X_{i}}u_{i}\left (x_{i},{\bf x}_{(-i)}^{*}\right )-\epsilon\]
for all \(i\in N\), which says that \(u_{i}({\bf x}^{*})+\epsilon\geq u_{i}(x_{i},{\bf x}_{(-i)}^{*})\) for all \(x_{i}\in X_{i}\). This shows that \({\bf x}^{*}\) is an \(\epsilon\)-Nash equilibrium, and the proof is complete. \(\blacksquare\)
Now, we present some interesting results about the existence of \(\epsilon\)-fixed point of set-valued functions.
\begin{equation}{\label{ga72}}\tag{13}\mbox{}\end{equation}
Proposition \ref{ga72}. Let \(V\) be a real reflexive Banach space and let \(X\) be a bounded and convex subset of \(V\) with nonempty interior. Given the set-valued function \(F:X\rightarrow 2^{X}\) such that \(F(x)\neq\emptyset\) is a convex subset of \(X\) and is closed with respect to the weak topology, then, given any \(\epsilon >0\), the \(\epsilon\)-fixed point of \(F\) exists. \(\sharp\)
Proposition. Let \(V\) be a real reflexive and separable Banach space and let \(X\) be a bounded and convex subset of \(V\) with nonempty interior. Given the set-valued function \(F:X\rightarrow 2^{X}\) such that \(F\) is upper semicontinuous with respect to the weak topology and \(F(x)\neq\emptyset\) is a convex subset of \(X\), then, given any \(\epsilon >0\), the \(\epsilon\)-fixed point of \(F\) exists. \(\sharp\)
The \(\epsilon\)-fixed point can also be obtained based on the strong topology.
Proposition. Let \(V\) be a real Banach space and let \(X\) be a totally bounded and convex subset of \(V\) with nonempty interior. Suppose that the set-valued function \(F:X\rightarrow 2^{X}\) satisfies the following conditions.
- \(F(x)\neq\emptyset\) is a convex subset of \(X\).
- \(F\) is upper semicontinuous or \(F(x)\) is closed for each \(x\in X\).
Then, given any \(\epsilon >0\), the \(\epsilon\)-fixed point of \(F\) exists. \(\sharp\)
Let \(V\) be a real normed space and let \(X\subseteq V\) with \(\theta\in X\). The set-valued function \(F:X\rightarrow 2^{X}\) is called a tame set-valued function when, for each \(\epsilon >0\), there exists \(r>0\) such that, for each \(x\in B(\theta ,r)\cap X\), the set \(F(x)\cap B(\theta ,r+\epsilon )\neq\emptyset\), where \(B(\theta ,r)=\{x\in V:\parallel x\parallel\leq r\}\).
\begin{equation}{\label{ga69}}\tag{14}\mbox{}\end{equation}
Proposition \ref{ga69}. Let \(V\) be a real reflexive Banach space and let \(X\) be a convex subset of \(V\) with \(\theta\in X\) and nonempty interior. Suppose that the set-valued function \(F:X\rightarrow 2^{X}\) satisfies the following conditions:
- \(F(x)\neq\emptyset\) is a convex subset of \(X\) and is closed with respect to the weak topology;
- \(F\) is a tame set-valued function.
Then, given any \(\epsilon >0\), the \(\epsilon\)-fixed point of \(F\) exists. \(\sharp\)
\begin{equation}{\label{ga70}}\tag{15}\mbox{}\end{equation}
Proposition \ref{ga70}. Let \(V\) be a real reflexive and separable Banach space and let \(X\) be a convex subset of \(V\) with \(\theta\in X\) and nonempty interior. Suppose that the set-valued function \(F:X\rightarrow 2^{X}\) satisfies the following conditions.
- \(F(x)\neq\emptyset\) is a convex subset of \(X\) and is closed with respect to the weak topology.
- \(F\) is a tame and weakly upper semicontinuous set-valued function.
Then, given any \(\epsilon >0\), the \(\epsilon\)-fixed point of \(F\) exists. \(\sharp\)
Example. The set-valued function \(F:\mathbb{R}_{+}\rightarrow 2^{\mathbb{R}_{+}}\) defined by
\[F(x)=\left [x+\frac{1}{x+1},\infty\right )\]
is a tame set-valued function on the unbounded set \(\mathbb{R}_{+}\). Moreover, \(F\) has \(\epsilon\)-fixed points for each \(\epsilon >0\) by Propositions \ref{ga69} and \ref{ga70}. \(\sharp\)
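For this \(F\), the distance from \(x\) to \(F(x)\) can be written down explicitly, so the claim can be checked numerically. A small Python sketch (the function name is ours):

```python
# For F(x) = [x + 1/(x+1), infinity) on the nonnegative reals, the infimum
# inf_{z in F(x)} |z - x| is attained at the left endpoint, so
# d(x, F(x)) = 1/(x + 1), which falls below any epsilon once x >= 1/epsilon - 1.
def distance_to_image(x):
    return 1.0 / (x + 1.0)

eps = 0.01
print(distance_to_image(0.0))           # 1.0: the origin is not a 0.01-fixed point
print(distance_to_image(100.0) <= eps)  # True: any x >= 99 is a 0.01-fixed point
```

Thus \(\epsilon\)-fixed points exist for every \(\epsilon >0\), even though \(F\) has no exact fixed point.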
Example. Let \(V\) be a real normed space and let \(F:V\rightarrow 2^{V}\) be a set-valued function with \(F(x)\neq\emptyset\) for each \(x\in V\). Suppose that the image \(F(V)=\bigcup_{x\in V}F(x)\) is bounded. Then \(F\) is a tame set-valued function, where we can take
\[r=1+\sup_{y\in F(V)}\parallel y\parallel\]
for each \(\epsilon >0\). This also says that, given a bounded subset \(X\) of \(V\), the set-valued function \(F:X\rightarrow 2^{X}\) is a tame set-valued function when \(F(x)\neq\emptyset\) for each \(x\in X\). \(\sharp\)
Example. Let \(\Gamma =((0,1),(0,1),u_{1},u_{2})\) be a two-person noncooperative game such that the payoff functions \(u_{1}\) and \(u_{2}\) are uniformly continuous. We also assume that \(u_{1}\) is concave in the first coordinate and \(u_{2}\) is concave in the second coordinate. For each \(\epsilon >0\), according to Theorem \ref{gat71}, the game has an \(\epsilon\)-Nash equilibrium. Indeed, the first and second conditions in Theorem \ref{gat71} are satisfied by taking the standard metric on \((0,1)\), and the third condition in Theorem \ref{gat71} follows from Proposition \ref{ga72} by considering the set-valued function \(B_{\epsilon}\). \(\sharp\)
Example. Let \(A\) and \(B\) be \(m\times n\) matrices of real numbers. Consider the two-person game \((X_{1},X_{2},u_{1},u_{2})\) defined by \(u_{1}({\bf p},{\bf q})={\bf p}^{\top}A{\bf q}\) and \(u_{2}({\bf p},{\bf q})={\bf p}^{\top}B{\bf q}\) for all \({\bf p}\in X_{1}\) and \({\bf q}\in X_{2}\), where
\[X_{1}=\left\{{\bf p}\in\mathbb{R}^{m}:\sum_{i=1}^{m}p_{i}=1\mbox{ for \(p_{i}>0\) and \(i=1,\cdots ,m\)}\right\}\]
and
\[X_{2}=\left\{{\bf q}\in\mathbb{R}^{n}:\sum_{j=1}^{n}q_{j}=1\mbox{ for \(q_{j}>0\) and \(j=1,\cdots ,n\)}\right\}.\]
Then, for each \(\epsilon >0\), this game has an \(\epsilon\)-Nash equilibrium. Such an \(\epsilon\)-Nash equilibrium is called completely mixed, since both players use each of their pure strategies with a positive probability. The proof follows from Theorem \ref{gat71} and
Proposition \ref{ga72} by taking the standard Euclidean metric. \(\sharp\)
Example. Let \(X\) be a real normed space such that there exists \(a\in X\) with \(a\neq\theta\). We consider the two-person noncooperative game \(\Gamma =(X,X,u_{1},u_{2})\) defined by
\[u_{1}(x_{1},x_{2})=-\parallel x_{1}-x_{2}\parallel\mbox{ and }
u_{2}(x_{1},x_{2})=-\left |\!\left |x_{1}-x_{2}-\frac{a}{1+\parallel x_{1}\parallel}\right |\!\right |\]
for all \((x_{1},x_{2})\in X\times X\). For each \(\epsilon >0\), we have
\[B_{1,\epsilon}(x_{2})=\{x_{2}\}\mbox{ and }B_{2,\epsilon}(x_{1})=
\left\{x_{1}-\frac{a}{1+\parallel x_{1}\parallel}\right\},\]
which says that
\[B_{\epsilon}(x_{1},x_{2})=\left\{\left (x_{2},x_{1}-\frac{a}{1+\parallel x_{1}\parallel}\right )\right\}.\]
For each \(\delta >0\), there exists at least one \(\delta\)-fixed point of \(B_{\epsilon}\), since one can take \(x\in X\) with \(\parallel x\parallel\geq\parallel a\parallel /\delta\). In this case, we see that \((x,x)\) is a \(\delta\)-fixed point of \(B_{\epsilon}\), since
\[\left |\!\left |(x,x)-\left (x,x-\frac{a}{1+\parallel x\parallel}\right )\right |\!\right |=\frac{\parallel a\parallel}{1+\parallel x\parallel}
\leq\frac{\parallel a\parallel}{\parallel x\parallel}\leq\delta .\]
Moreover, the payoff functions \(u_{1}\) and \(u_{2}\) are uniformly continuous on \(X\times X\). In fact, we have
\begin{align*}
|u_{2}(x_{1},x_{2})-u_{2}(y_{1},y_{2})| & \leq\left |\!\left |(x_{1}-y_{1})-(x_{2}-y_{2})+\frac{\parallel x_{1}\parallel -
\parallel y_{1}\parallel}{(1+\parallel x_{1}\parallel )(1+\parallel y_{1}\parallel )}a\right |\!\right |\\
& \leq\left (\parallel x_{1}-y_{1}\parallel +\parallel x_{2}-y_{2}\parallel\right )\left (1+\parallel a\parallel\right ).
\end{align*}
Therefore, by Theorem \ref{gat71}, we can conclude that, for each \(\epsilon >0\), the \(\epsilon\)-Nash equilibrium exists. Indeed, for \(\parallel x\parallel\) sufficiently large, since
\[u_{2}(x,x_{2})-u_{2}(x,x)\leq\frac{\parallel a\parallel}{1+\parallel x\parallel},\]
the element \((x,x)\) is an \(\epsilon\)-Nash equilibrium. \(\sharp\)
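A numerical illustration of this example (taking the concrete choices \(X=\mathbb{R}\) and \(a=1\), which the text allows since any nonzero \(a\) works): \((x,x)\) with \(|x|\) large is an \(\epsilon\)-Nash equilibrium even though no exact equilibrium exists.

```python
# X = R with a = 1 (a concrete choice for this sketch): the largest
# unilateral gain at (x, x) is |a|/(1+|x|), which vanishes as |x| grows.

a = 1.0

def u1(x1, x2):
    return -abs(x1 - x2)

def u2(x1, x2):
    return -abs(x1 - x2 - a / (1 + abs(x1)))

def eps_nash_defect(x1, x2):
    """Largest unilateral gain available to either player at (x1, x2)."""
    gain1 = 0.0 - u1(x1, x2)   # player 1's best reply is x1 = x2, value 0
    gain2 = 0.0 - u2(x1, x2)   # player 2's best reply is x2 = x1 - a/(1+|x1|)
    return max(gain1, gain2)

x = 1.0e4
print(eps_nash_defect(x, x) <= 1e-3)   # True: defect = 1/(1+10^4) < 10^-3
```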
\begin{equation}{\label{h}}\tag{H}\mbox{}\end{equation}
Different Approach I.
This approach follows from Abdou (1995) \cite{abd95}. A two-player game form is an array \(\Gamma =(X_{1},X_{2},A,g)\), where \(X_{1}\) is the strategy set of player 1, \(X_{2}\) is the strategy set of player 2, \(A\) is the alternative set and \(g:X_{1}\times X_{2}\rightarrow A\) is the outcome function. Throughout this section, all the sets involved are finite, strategies are always pure and the outcome function \(g\) is assumed to be onto.
$2^{A}$ will denote the set of all subsets of \(A\). A preference relation is a linear order \(R\) on \(A\), that is, a binary relation which is complete, transitive and antisymmetric. We denote by \(L(A)\) the set of all preference relations on \(A\). We adopt the following notation
\begin{align*}
g(x_{1},X_{2}) & =\{g(x_{1},x_{2}):x_{2}\in X_{2}\}\mbox{ for every }x_{1}\in X_{1}\\
g(X_{1},x_{2}) & =\{g(x_{1},x_{2}):x_{1}\in X_{1}\}\mbox{ for every }x_{2}\in X_{2}\\
P(a,R) & =\{x\in A:xRa,x\neq a\}\mbox{ for every }a\in A\mbox{ and }R\in L(A).\\
P^{c}(a,R) & =A\setminus P(a,R)=\{x\in A:aRx\}\mbox{ (since antisymmetric) for every }a\in A\mbox{ and }R\in L(A).
\end{align*}
A preference assignment is an ordered pair of preference relations \(R=(R_{1},R_{2})\in L(A)\times L(A)\). Given the game form \(\Gamma\), every preference assignment \(R\) gives rise to a game in strategic form \(\Gamma (R)=(X_{1},X_{2},A,g,R)\). \((x_{1},x_{2})\in X_{1}\times X_{2}\) is a Nash equilibrium of \(\Gamma (R)\) if for any \((y_{1},y_{2})\in X_{1}\times X_{2}\), \(g(x_{1},x_{2})R_{1}g(y_{1},x_{2})\) and \(g(x_{1},x_{2})R_{2}g(x_{1},y_{2})\). \((x_{1},x_{2})\in X_{1}\times X_{2}\) is a Pareto optimum for \(\Gamma (R)\) if for any \((y_{1},y_{2})\in X_{1}\times X_{2}\) either \(g(x_{1},x_{2})R_{1}g(y_{1},y_{2})\) or \(g(x_{1},x_{2})R_{2} g(y_{1},y_{2})\). \((x_{1},x_{2})\) is a strong equilibrium of \(\Gamma (R)\) if \((x_{1},x_{2})\) is both a Nash equilibrium and a Pareto optimum. \(a\in A\) is a Nash outcome (resp. Pareto outcome, strong outcome) of \(\Gamma (R)\) if for some Nash equilibrium (resp. Pareto optimum, strong equilibrium) \((x_{1},x_{2})\in X_{1}\times X_{2}\) one has \(g(x_{1},x_{2})=a\). \(a\in A\) is in the {\bf core} (rigorously, the beta-core) of \(\Gamma (R)\) if \(a\) is a Pareto outcome such that: \(\exists x_{2}\in X_{2}\), $\forall x_{1}\in X_{1}$, \(aR_{1}g(x_{1},x_{2})\) and \(\exists x_{1}\in X_{1}\), \(\forall x_{2}\in X_{2}\), \(aR_{2}g(x_{1},x_{2})\).
Denote by \(NE(\Gamma ,R)\), \(PA(\Gamma ,R)\) and \(SE(\Gamma ,R)\) respectively the set of Nash equilibria, Pareto optima and strong equilibria of \(\Gamma (R)\) and by \(NEO(\Gamma ,R)\), \(PAO(\Gamma ,R)\), \(SEO(\Gamma ,R)\) and \(CO(\Gamma ,R)\) respectively the set of Nash outcomes, Pareto outcomes, strong outcomes, and the core of \(\Gamma (R)\). \(\Gamma\) is said to be Nash-consistent (resp. strongly consistent, stable) if \(NE(\Gamma ,R)\) (resp. \(SE(\Gamma ,R)\), \(CO(\Gamma ,R)\)) is nonempty for every \(R\in L(A)\times L(A)\).
Let \(a\in A\). The players are said to be jointly effective at \(a\) for the ordered pair of subsets \((B_{1},B_{2})\in 2^{A}\times 2^{A}\) if there exists \((x_{1},x_{2})\in X_{1}\times X_{2}\) satisfying \(g(x_{1},x_{2})=a\), \(g(x_{1},X_{2})\subset B_{1}\) and \(g(X_{1},x_{2})\subset B_{2}\). We denote by \(J_{\Gamma}(a)\) the set of couples of subsets for which the players are jointly effective at \(a\). \(J_{\Gamma}(a)\) will be called the joint effectivity set of \(\Gamma\) at \(a\). The collection \(J_{\Gamma}=\{J_{\Gamma}(a):a\in A\}\) will be called the joint effectivity structure associated with the game form \(\Gamma\).
We shall also need a reduced version of these notions. Let \({\cal B}=\{(B_{1},B_{2})\in 2^{A}\times 2^{A}:B_{1}\cup B_{2}=A\}\). The set \(\bar{J}_{\Gamma}(a)=J_{\Gamma}(a)\cap {\cal B}\), where \(a\in A\), is called the reduced joint effectivity set of \(\Gamma\) at \(a\) and \(\bar{J}_{\Gamma}=\{\bar{J}_{\Gamma}(a):a\in A\}\) is called the reduced joint effectivity structure of \(\Gamma\).
\begin{equation}{\label{abd95l21}}\tag{15}\mbox{}\end{equation}
Proposition \ref{abd95l21}. Let \(R\in L(A)\times L(A)\) and \(a\in A\). Then, we have the following properties.
(i) \(a\) is a Nash outcome of \(\Gamma (R)\) if and only if \((P^{c}(a,R_{2}),P^{c}(a,R_{1}))\in J_{\Gamma}(a)\).
(ii) \(a\) is a strong outcome of \(\Gamma (R)\) if and only if \((P^{c}(a,R_{2}),P^{c}(a,R_{1}))\in\bar{J}_{\Gamma}(a)\). \(\sharp\)
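Since all the sets involved are finite, Proposition \ref{abd95l21}(i) can be verified by brute force. The sketch below (the \(2\times 2\) outcome function \(g\) and the two preference orders are illustrative choices, not from the text) computes the Nash outcomes both directly from the definition and via the joint effectivity characterization:

```python
# Brute-force check of the characterization: a is a Nash outcome of
# Gamma(R) iff some (x1, x2) with g(x1, x2) = a satisfies
# g(x1, X2) <= P^c(a, R2) and g(X1, x2) <= P^c(a, R1).
# The game form and preferences below are illustrative.

from itertools import product

X1 = X2 = [0, 1]
g = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}
A = set(g.values())

# A linear order given as a ranked list, best first.
R1 = ['a', 'b', 'c', 'd']
R2 = ['d', 'c', 'b', 'a']

def weakly_prefers(R, x, y):           # x R y
    return R.index(x) <= R.index(y)

def Pc(a, R):                          # P^c(a, R) = {x in A : a R x}
    return {x for x in A if weakly_prefers(R, a, x)}

def nash_outcomes_direct():
    out = set()
    for x1, x2 in product(X1, X2):
        if all(weakly_prefers(R1, g[x1, x2], g[y1, x2]) for y1 in X1) and \
           all(weakly_prefers(R2, g[x1, x2], g[x1, y2]) for y2 in X2):
            out.add(g[x1, x2])
    return out

def nash_outcomes_via_J():
    out = set()
    for a in A:
        for x1, x2 in product(X1, X2):
            if g[x1, x2] == a and \
               {g[x1, y2] for y2 in X2} <= Pc(a, R2) and \
               {g[y1, x2] for y1 in X1} <= Pc(a, R1):
                out.add(a)
    return out

print(nash_outcomes_direct() == nash_outcomes_via_J())   # True
```

With these (opposed) preferences the unique Nash outcome is the compromise alternative \(b\).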
It follows from Proposition \ref{abd95l21} that two game forms \(\Gamma_{1}\) and \(\Gamma_{2}\) on \(A\) have the same Nash outcome correspondence if and only if \(J_{\Gamma_{1}}=J_{\Gamma_{2}}\); and they have the same strong outcome correspondence if and only if \(\bar{J}_{\Gamma_{1}}=\bar{J}_{\Gamma_{2}}\). Any property of \(NEO(\Gamma ,\cdot)\) (resp. \(SEO(\Gamma ,\cdot)\)) can be found in \(J_{\Gamma}\) (resp. \(\bar{J}_{\Gamma}\)). All notions of effectivity defined here are byproducts of the joint effectivity structure.
The purpose is to characterize Nash and strong consistency through properties involving the joint effectivity structure. The \(\alpha\)-effectivity function is the ordered pair \(E_{\Gamma ,\alpha}=(E_{\Gamma ,\alpha}(1),E_{\Gamma ,\alpha}(2))\), where \[E_{\Gamma ,\alpha}(1)=\{B\subset A:\exists x_{1}\in X_{1}, g(x_{1},X_{2})\subset B\}\]
and
\[E_{\Gamma ,\alpha}(2)=\{B\subset A:\exists x_{2}\in X_{2}, g(X_{1},x_{2})\subset B\}.\]
The \(\beta\)-effectivity function is the ordered pair \(E_{\Gamma ,\beta}=(E_{\Gamma ,\beta}(1),E_{\Gamma ,\beta}(2))\), where \(E_{\Gamma ,\beta}(1)=\{B\subset A:\forall x_{2}\in X_{2}, g(X_{1},x_{2})\cap B\neq\emptyset\}\) and \(E_{\Gamma ,\beta}(2)=\{B\subset A:\forall x_{1}\in X_{1}, g(x_{1},X_{2})\cap B\neq\emptyset\}\).
Proposition. Let \(R\in L(A)\times L(A)\). \(a\in A\) is in the core of \(\Gamma (R)\) if and only if \((P^{c}(a,R_{2}),P^{c}(a,R_{1}))\in (E_{\Gamma ,\alpha}(1)\times E_{\Gamma ,\alpha}(2))\cap {\cal B}\). \(\sharp\)
$E_{\Gamma ,\alpha}$ plays with respect to the core the same role as the one played by \(J_{\Gamma}\) with respect to Nash outcomes and \(\bar{J}_{\Gamma}\) with respect to strong outcomes. Clearly, the various notions of effectivity defined above are not independent of each other. One can exhibit the following relation:
\[(B_{1},B_{2})\in E_{\Gamma ,\alpha}(1)\times E_{\Gamma ,\alpha}(2)
\mbox{ if and only if there exists \(a\in A\) such that }(B_{1},B_{2})\in J_{\Gamma}(a)\]
which can be stated otherwise as
\[E_{\Gamma ,\alpha}(1)\times E_{\Gamma ,\alpha}(2)=\bigcup_{a\in A}J_{\Gamma}(a).\]
In view of this relation, the alpha-effectivity function can be computed if the local joint effectivity structure is given, but the converse need not be true. The following relation between the alpha- and the beta-effectivity functions is rather classical:
\[B\in E_{\Gamma ,\alpha}(i)\mbox{ if and only if }B^{c}\not\in E_{\Gamma ,\beta}(j)\mbox{ for }i,j=1,2;i\neq j.\]
The game form \(\Gamma\) is said to be tight if for all \(B\in 2^{A}\), \(B\in E_{\Gamma ,\alpha}(1)\) or \(B^{c}\in E_{\Gamma ,\alpha}(2)\). It is easy to see that \(\Gamma\) is tight if and only if \(E_{\Gamma ,\alpha}=E_{\Gamma ,\beta}\).
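Tightness is directly checkable on a finite game form. The following sketch (the \(2\times 2\) game form is an illustrative choice, not from the text) computes \(E_{\Gamma ,\alpha}\) and \(E_{\Gamma ,\beta}\) for both players and confirms that the tightness condition holds exactly when the two effectivity functions coincide:

```python
# Enumerate all B <= A and test: tight iff E_alpha = E_beta.
# The 2x2 game form g below is illustrative.

from itertools import product

X1 = X2 = [0, 1]
g = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'b', (1, 1): 'c'}
A = frozenset(g.values())

def subsets(S):
    S = list(S)
    return [frozenset(c for c, bit in zip(S, combo) if bit)
            for combo in product([0, 1], repeat=len(S))]

def E_alpha(player):
    # Sets a player can force: exists an own strategy whose reachable
    # outcomes all lie in B.
    if player == 1:
        own = [frozenset(g[x1, y] for y in X2) for x1 in X1]
    else:
        own = [frozenset(g[y, x2] for y in X1) for x2 in X2]
    return {B for B in subsets(A) if any(r <= B for r in own)}

def E_beta(player):
    # Sets a player can reach whatever the opponent does.
    if player == 1:
        opp = [frozenset(g[y, x2] for y in X1) for x2 in X2]
    else:
        opp = [frozenset(g[x1, y] for y in X2) for x1 in X1]
    return {B for B in subsets(A) if all(c & B for c in opp)}

tight = all(B in E_alpha(1) or (A - B) in E_alpha(2) for B in subsets(A))
print(tight == (E_alpha(1) == E_beta(1) and E_alpha(2) == E_beta(2)))  # True
```

For this particular \(g\) the form is not tight (take \(B=\{b\}\)), and accordingly \(E_{\Gamma ,\alpha}\neq E_{\Gamma ,\beta}\).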
Theorem. Let \(\Gamma\) be a two-player game form. The following statements are equivalent.
(a) \(\Gamma\) is Nash-consistent.
(b) \(\Gamma\) is tight.
(c) \(\Gamma\) is stable. \(\sharp\)
Only the effectivity function is needed to know whether a game form is Nash-consistent or not. It is not true, however, that the effectivity function characterizes the Nash correspondence. A stronger statement is that, even restricted to the class of tight game forms, the effectivity function does not fully determine the Nash correspondence. (This can be seen by an example.)
In order to formulate a characterization of strongly consistent game forms we need another effectivity notion that we now present. The players are said to be jointly exactly effective for the couple of subsets \((B_{1},B_{2})\in 2^{A}\times 2^{A}\) if \(B_{1}\cup B_{2}=A\), \(B_{1}\cap B_{2}\neq\emptyset\) and for every \(a\in B_{1}\cap B_{2}\) there exists \((x_{1},x_{2})\in X_{1}\times X_{2}\) such that \(g(x_{1},x_{2})=a\), \(g(x_{1},X_{2})\subset B_{1}\) and \(g(X_{1},x_{2})\subset B_{2}\). We denote by \(JX_{\Gamma}\) the set of ordered pairs of subsets for which the players are jointly exactly effective. \(JX\) will be called the joint exact effectivity set of \(\Gamma\). By definition, one has
\begin{equation}{\label{abd95eq5}}\tag{16}
\mbox{$(B_{1},B_{2})\in JX$ if and only if
for all \(a\in B_{1}\cap B_{2}\), \((B_{1},B_{2})\in\bar{J}_{\Gamma}(a)\)}.
\end{equation}
It follows that, theoretically, \(JX\) conveys less information (in a broad sense) than \(J\). On the other hand, note that \(JX\) conveys more information (in a broad sense) than the effectivity functions; indeed, the latter can be computed from the former: \(E_{\Gamma ,\alpha}(1)=\{B:\exists B_{1}\subset B\mbox{ such that }(B_{1},A)\in JX\}\) and similarly for player 2.
The game form \(\Gamma\) is said to be {\bf jointly exact} if \(JX=(E_{\Gamma ,\alpha}(1)\times E_{\Gamma ,\alpha}(2))\cap {\cal B}\). Since \(E_{\Gamma ,\alpha}\) can be computed from \(JX\), it follows that joint exactness and tightness are properties of \(JX\). Two game forms with the same \(JX\) will both be jointly exact or both not, and similarly for tightness.
Theorem. Let \(\Gamma\) be a two-player game form. The following statements are equivalent.
(a) \(\Gamma\) is strongly consistent.
(b) \(\Gamma\) is tight and joint exact.
(c) \(\Gamma\) is stable (tight) and the strong equilibrium outcome correspondence coincides with the core correspondence. \(\sharp\)
Nash consistency depends on the effectivity functions, and strong consistency depends on the joint exact effectivity set, but here we have more. Although we do not know whether, on the whole class of two-player game forms on \(A\), the joint exact effectivity set \(JX\) fully determines the strong equilibrium correspondence, it nevertheless does determine it on the subclass of tight and jointly exact game forms: If \(\Gamma_{1}\) and \(\Gamma_{2}\) are two tight and jointly exact game forms and if \(JX_{\Gamma_{1}}=JX_{\Gamma_{2}}\), then \(SEO(\Gamma_{1},\cdot)=SEO(\Gamma_{2},\cdot)\). This is an easy consequence of joint exactness and Proposition \ref{abd95l21}(ii). One can also remark that on that subclass of game forms there is a converse to relation (\ref{abd95eq5}): \(\bar{J}_{\Gamma}(a)=JX\cap\{(B_{1},B_{2}):a\in B_{1}\cap B_{2}\}\) for \(a\in A\).
A two-player social choice correspondence is a mapping \(H:L(A)\times L(A)\rightarrow 2^{A}\setminus\{\emptyset\}\). A two-player social choice correspondence \(H\) is said to be strongly implementable if there exists some two-player game form \(\Gamma\) such that for any \((R_{1},R_{2})\in L(A)\times L(A)\) we have the equality \(SEO(\Gamma ,R_{1},R_{2})=H(R_{1},R_{2})\). Such a game form is said to strongly implement \(H\).
We recall that a two-player (abstract) effectivity function is an ordered pair \(E=(E(1),E(2))\), where for \(i=1,2\), \(E(i)\) is a nonempty subset of \(2^{A}\setminus\{\emptyset\}\) such that \(B\in E(i)\) and \(B\subset C\) implies \(C\in E(i)\). \(E\) is {\bf regular} if \(B\in E(1)\) implies \(B^{c}\not\in E(2)\). \(E\) is {\bf maximal} if \(B\not\in E(1)\) implies \(B^{c}\in E(2)\). Given \((R_{1},R_{2})\in L(A)\times L(A)\), the core of \(E\) for \((R_{1},R_{2})\) is the following set
\[C(E,R_{1},R_{2})=\{a\in A:P(a,R_{i})\not\in E(i),i=1,2\}\cap PAO(R_{1},R_{2}).\]
$E$ is said to be stable if for any \((R_{1},R_{2})\in L(A)\times L(A)\), \(C(E,R_{1},R_{2})\neq\emptyset\). It is easy to see that \(E\) is stable if and only if \(E\) is regular. For a game form \(\Gamma\), \(E_{\Gamma ,\alpha}\) and \(E_{\Gamma ,\beta}\) are special cases of effectivity functions. Given a social choice correspondence \(H\), another effectivity function, denoted by \(E_{H}^{*}\), is defined as follows: For \(i\in\{1,2\}\), \(B\in E_{H}^{*}(i)\) if and only if for any \((R_{1},R_{2})\in L(A)\times L(A)\) satisfying \(bR_{i}c\) for all \(b\in B\) and \(c\in B^{c}\), one has \(H(R_{1},R_{2})\subset B\).
It is well-known \cite{mou82} that the core correspondence of a maximal and stable effectivity function is strongly implementable. Actually this result holds for any finite number of players. However, in the case of two players, any strongly implementable social choice correspondence is of that form, as can be seen in the following theorem.
Theorem. Let \(H\) be a two-player social choice correspondence. The following statements are equivalent.
(a) \(H\) is strongly implementable.
(b) \(H\) is the core correspondence of some maximal and stable (maximal and regular) effectivity function.
(c) \(E_{H}^{*}\) is maximal and \(H\) is the core correspondence of \(E_{H}^{*}\). \(\sharp\)
It follows that if \(H_{1}\) and \(H_{2}\) are two strongly implementable social choice correspondences such that \(H_{1}(R)\subset H_{2}(R)\) for all \(R\in L(A)\times L(A)\), then necessarily \(H_{1}=H_{2}\). It follows also that, given any regular and maximal effectivity function \(E\), one can construct a strongly consistent game form \(\Gamma\) such that \(E_{\Gamma ,\alpha}=E\) (true for \(n\) players in \cite{mou82}), and moreover two such game forms have the same joint exact effectivity set \(JX\) and even the same reduced joint effectivity structure \(\bar{J}\).
\begin{equation}{\label{i}}\tag{I}\mbox{}\end{equation}
Different Approach II.
This approach follows from Tan et al. (1995) \cite{tan95}. Let \(N=\{1,\cdots ,n\}\) be the set of players. A noncooperative \(n\)-person game is an ordered \(2n\)-tuple \((X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) where for each player \(i\in N\), the nonempty set \(X_{i}\) is his (pure) strategy space and \(f_{i}:X=\prod_{i=1}^{n}X_{i}\rightarrow\mathbb{R}\) is his payoff function. The set \(X\) is called the outcome space. For each \(i\in N\), denote \(X^{(i)}=\prod_{j\in N\setminus\{i\}}X_{j}\). If \({\bf x}=(x_{1},\cdots , x_{n})\in X\), we shall write \({\bf x}^{(i)}=(x_{1},\cdots ,x_{i-1},x_{i+1},\cdots ,x_{n})\in X^{(i)}\). If \(x_{i}\in X_{i}\) and \({\bf x}^{(i)}\in X^{(i)}\), we shall use \((x_{i},{\bf x}^{(i)})\) to denote the point \({\bf y}=(y_{1},\cdots ,y_{n})\in X\) satisfying \(y_{i}=x_{i}\) and \({\bf y}^{(i)}={\bf x}^{(i)}\). Let \(\epsilon_{1},\cdots ,\epsilon_{n}\) be nonnegative real numbers. Then, we shall call a point \((x_{1}^{*},\cdots ,x_{n}^{*})\in X\) an \((\epsilon_{1},\cdots ,\epsilon_{n})\)-equilibrium point of the game \(\Gamma\) if for each \(i\in N\), we have \(f_{i}(x_{i}^{*},{\bf x}^{*}_{\widehat{i}})\geq\sup_{u_{i}\in X_{i}}f_{i}(u_{i},{\bf x}^{*}_{\widehat{i}})-\epsilon_{i}\). A \((0,\cdots ,0)\)-equilibrium point of \(\Gamma\) is also called a Nash equilibrium point of the game \(\Gamma\). Nikaido and Isoda \cite [Theorem 3.1]{nik} proved the following existence theorem for Nash equilibrium points.
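For a finite game, the \((\epsilon_{1},\cdots ,\epsilon_{n})\)-equilibrium condition can be checked directly. In the sketch below the payoff tables are illustrative choices (not from \cite{tan95}); \(f[i]\) maps each strategy profile to player \(i\)'s payoff:

```python
# Direct check of the (eps_1, ..., eps_n)-equilibrium condition for a
# finite game; the two-player payoff tables are illustrative.

strategies = [[0, 1], [0, 1]]          # two players, two pure strategies each
f = [
    {(0, 0): 3, (0, 1): 0, (1, 0): 0, (1, 1): 2},   # player 1
    {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 3},   # player 2
]

def is_eps_equilibrium(profile, eps):
    for i, X_i in enumerate(strategies):
        # Best payoff player i can get by a unilateral deviation.
        best = max(f[i][profile[:i] + (u,) + profile[i + 1:]] for u in X_i)
        if f[i][profile] < best - eps[i]:
            return False
    return True

print(is_eps_equilibrium((0, 0), (0, 0)))    # True: an exact Nash equilibrium
print(is_eps_equilibrium((1, 0), (0, 0)))    # False
```

Note that \((1,0)\), which is not a Nash equilibrium, becomes a \((3,3)\)-equilibrium point once the tolerances absorb both players' maximal gains.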
\begin{equation}{\label{tanta}}\tag{17}\mbox{}\end{equation}
Theorem \ref{tanta}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty compact convex subset of a Hausdorff topological vector space \(E_{i}\).
- For each \(i\in N\), \(f_{i}\) is continuous on \(X\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is concave on \(X_{i}\).
Then \(\Gamma\) has a Nash equilibrium point. \(\sharp\)
Recall that a family \({\cal F}\) of real-valued functions on a set \(X\) is said to be upper bounded if there is \(c\in\mathbb{R}\) such that \(f(x)\leq c\) for all \(x\in X\) and \(f\in {\cal F}\). Tijs \cite [Theorem 4.7]{tij} proved the following existence theorem for \(\epsilon\)-equilibrium points
\begin{equation}{\label{tantb}}\tag{18}\mbox{}\end{equation}
Theorem \ref{tantb}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty convex subset of a Hausdorff topological vector space \(E_{i}\).
- For each \(i\in N\setminus\{n\}\), \(X_{i}\) is compact.
- For each \(i\in N\), \(f_{i}\) is continuous on \(X\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is concave on \(X_{i}\).
- The family \({\cal F}=\{f_{n}(x_{n},\cdot):x_{n}\in X_{n}\}\) is an upper bounded and equi-continuous family of functions on \(X_{\widehat{n}}\).
Then \(\Gamma\) has a \((0,\cdots ,0,\epsilon )\)-equilibrium point for each \(\epsilon >0\). \(\sharp\)
Proposition. Let \(X\) and \(Y\) be two Hausdorff topological spaces and \(X\) be compact, and let \(f\) be a real-valued function defined on \(X\times Y\) satisfying the following conditions.
- \(f\) is upper semicontinuous on \(X\times Y\).
- for each fixed \(x\in X\), the function \(y\mapsto f(x,y)\) is lower semicontinuous.
Then, the function \(\psi :Y\rightarrow\mathbb{R}\) defined by \(\psi (y)=\max_{u\in X}f(u,y)\) is continuous on \(Y\). \(\sharp\)
Recall that a real-valued function \(f\) on a nonempty convex subset \(X\) of a vector space is said to be quasi-concave (resp. quasi-convex) on \(X\) if the set \(\{x\in X:f(x)\geq\lambda\}\) (resp. \(\{x\in X:f(x)\leq\lambda\}\)) is convex for each \(\lambda\in\mathbb{R}\), or equivalently, if the set \(\{x\in X:f(x)>\lambda\}\) (resp. \(\{x\in X:f(x)<\lambda\}\)) is convex for each \(\lambda\in\mathbb{R}\).
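As a quick illustration (an example added here, not from \cite{tan95}), the cubic shows that quasi-concavity is strictly weaker than concavity: the function \(f(x)=x^{3}\) on \(\mathbb{R}\) is both quasi-concave and quasi-convex, since for each \(\lambda\in\mathbb{R}\)
\[\{x\in\mathbb{R}:x^{3}\geq\lambda\}=[\lambda^{1/3},\infty )\mbox{ and }\{x\in\mathbb{R}:x^{3}\leq\lambda\}=(-\infty ,\lambda^{1/3}]\]
are intervals, hence convex; yet \(f\) is neither concave nor convex on \(\mathbb{R}\).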
\begin{equation}{\label{tant21}}\tag{19}\mbox{}\end{equation}
Theorem \ref{tant21}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty compact convex subset of a Hausdorff topological vector space \(E_{i}\).
- For each \(i\in N\), \(f_{i}\) is upper semicontinuous on \(X=\prod_{j\in N}X_{j}\).
- For each \(i\in N\) and each fixed \(x_{i}\in X_{i}\), the function \(u_{\widehat{i}}\mapsto f_{i}(x_{i},{\bf u}_{\widehat{i}})\) is lower semicontinuous on \(X^{(i)}\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is quasi-concave on \(X_{i}\).
Then \(\Gamma\) has a Nash equilibrium point. \(\sharp\)
\begin{equation}{\label{tant22}}\tag{20}\mbox{}\end{equation}
Theorem \ref{tant22}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty compact convex subset of a Hausdorff topological vector space \(E_{i}\).
- \(\sum_{i=1}^{n}f_{i}\) is upper semicontinuous on \(X\).
- For each \(i\in N\) and each fixed \(x_{i}\in X_{i}\), the function \(u_{\widehat{i}}\mapsto f_{i}(x_{i},{\bf u}_{\widehat{i}})\) is lower semicontinuous on \(X^{(i)}\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is concave on \(X_{i}\).
Then \(\Gamma\) has a Nash equilibrium point. \(\sharp\)
Theorems \ref{tant21} and \ref{tant22} both generalize Theorem \ref{tanta}. In order to obtain the existence theorems for \(\epsilon\)-equilibrium points, we need the concept of \(\epsilon\)-domination introduced by Tijs \cite{tij}. Let \(\epsilon >0\) and let \({\cal F}\) and \({\cal G}\) be two families of real-valued functions on a set \(K\). We shall say that \({\cal G}\) \(\epsilon\)-dominates \({\cal F}\) if for each \(f\in {\cal F}\), there is a \(g\in {\cal G}\) such that \(f(x)\leq g(x)+\epsilon\) for all \(x\in K\). Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game and let \(Y_{1},\cdots ,Y_{n}\) be nonempty subsets of \(X_{1},\cdots ,X_{n}\),
respectively. Then the game \(\Gamma_{1}=(Y_{1},\cdots ,Y_{n};f_{1},\cdots ,f_{n})\) in which the payoff functions are the restrictions of the payoff functions in \(\Gamma\) to the set \(\prod_{i=1}^{n}Y_{i}\) is called a sub-game of \(\Gamma\). Let \(\boldsymbol{\epsilon}=(\epsilon_{1}, \cdots ,\epsilon_{n})\). We shall say that \(\Gamma_{1}\) \(\boldsymbol{\epsilon}\)-dominates \(\Gamma\) when, for each \(i\in N\) and each \(x_{i}\in X_{i}\), there exists \(y_{i}\in Y_{i}\) such that \(f_{i}(x_{i},u_{\widehat{i}})\leq f_{i}(y_{i},u_{\widehat{i}})+\epsilon_{i}\) for all \(u_{\widehat{i}}\in X^{(i)}\).
\begin{equation}{\label{tant23}}\tag{21}\mbox{}\end{equation}
Theorem \ref{tant23}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty convex subset of a Hausdorff topological vector space \(E_{i}\).
- For each \(i\in N\setminus\{n\}\), \(X_{i}\) is compact.
- For each \(i\in N\), \(f_{i}\) is upper semicontinuous on \(X\) and for fixed \(x_{i}\in X_{i}\), the function \(u_{\widehat{i}}\mapsto f_{i}(x_{i}, u_{\widehat{i}})\) is lower semicontinuous on \(X^{(i)}\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is concave on \(X_{i}\).
- The family \({\cal F}=\{f_{n}(x_{n},\cdot):x_{n}\in X_{n}\}\) is an upper bounded and equicontinuous family of functions on \(X_{\widehat{n}}\).
Then \(\Gamma\) has a \((0,\cdots ,0,\epsilon )\)-equilibrium point for each \(\epsilon >0\). \(\sharp\)
\begin{equation}{\label{tant24}}\tag{22}\mbox{}\end{equation}
Theorem \ref{tant24}. Let \(\Gamma =(X_{1},\cdots ,X_{n};f_{1},\cdots ,f_{n})\) be an \(n\)-person game satisfying the following conditions.
- For each \(i\in N\), \(X_{i}\) is a nonempty convex subset of a Hausdorff topological vector space \(E_{i}\).
- For each \(i\in N\setminus\{n\}\), \(X_{i}\) is compact.
- \(\sum_{i=1}^{n}f_{i}\) is upper semicontinuous on \(X\) and for fixed \(i\in N\) and each \(x_{i}\in X_{i}\), the function \(u_{\widehat{i}}\mapsto f_{i}(x_{i},u_{\widehat{i}})\) is lower semicontinuous on \(X^{(i)}\).
- For each \(i\in N\) and each fixed \({\bf x}^{(i)}\in X^{(i)}\), the function \(u_{i}\mapsto f_{i}(u_{i},{\bf x}^{(i)})\) is concave on \(X_{i}\).
- The family \({\cal F}=\{f_{n}(x_{n},\cdot):x_{n}\in X_{n}\}\) is an upper bounded and equicontinuous family of functions on \(X_{\widehat{n}}\).
Then \(\Gamma\) has a \((0,\cdots ,0,\epsilon )\)-equilibrium point for each \(\epsilon >0\). \(\sharp\)
Theorems \ref{tant23} and \ref{tant24} both generalize Theorem \ref{tantb}.
Corollary. Let \(X\) be a nonempty convex subset of a Hausdorff topological vector space \(E\), and let \(Y\) be a nonempty compact convex subset of a Hausdorff topological vector space \(F\). Let \(f\) be a real-valued continuous function defined on \(X\times Y\) which is quasi-concave in its first variable and quasi-convex in its second variable and satisfies \(\sup_{x\in X}\min_{y\in Y}f(x,y)<\infty\). Let the family \(\{f(x,\cdot):x\in X\}\) of real-valued continuous functions on \(Y\) be equicontinuous and closed in the Banach space \(C(Y)\) of all real-valued continuous functions on \(Y\) with uniform norm. Then \(f\) has a saddle point \((x^{*},y^{*})\in X\times Y\); that is,
\[\max_{x\in X}f(x,y^{*})=f(x^{*},y^{*})=\min_{y\in Y}f(x^{*},y).\]
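The saddle-point conclusion can be illustrated numerically. The toy function below is a choice made here for the sketch (not from \cite{tan95}): \(f(x,y)=-x^{2}+y^{2}\) on \([-1,1]\times [-1,1]\) is concave in \(x\), convex in \(y\), and has the saddle point \((0,0)\):

```python
# Grid verification of the saddle-point identity
# max_x f(x, y*) = f(x*, y*) = min_y f(x*, y) at (x*, y*) = (0, 0)
# for the illustrative function f(x, y) = -x^2 + y^2.

def f(x, y):
    return -x * x + y * y

grid = [k / 100 for k in range(-100, 101)]   # grid of points of [-1, 1]
xs, ys = 0.0, 0.0

max_over_x = max(f(x, ys) for x in grid)     # best deviation of the maximizer
min_over_y = min(f(xs, y) for y in grid)     # best deviation of the minimizer
print(max_over_x == f(xs, ys) == min_over_y)   # True: all three equal 0
```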


