Vector Optimization Problems in Near Vector Space

H.-C. Wu, Vector Optimization Problems in Near Vector Space, Optimization, to appear

Abstract.

Conventional vector optimization problems optimize objective functions that are defined on a vector space and take values in another vector space. In this paper, we study vector optimization problems by minimizing objective functions that are defined on a vector space and take values in a so-called near vector space. The concept of near vector space is axiomatically defined, and it does not necessarily contain additive inverse elements. A so-called pointed-like convex cone is introduced to propose the concept of minimizer for this new type of vector optimization problem. Its corresponding scalar optimization problem is also formulated. Finally, we show that an optimal solution of the corresponding scalar optimization problem is also a minimizer of the original vector optimization problem based on the near vector space.

Keywords: Minimal element; Maximal element; Near vector space; Pointed-like convex cone; Partial ordering.

1 Introduction.

The vector optimization problem is more general than the multiobjective optimization problem. Its main feature is that the objective values are assumed to lie in a vector space, which is not necessarily a finite-dimensional Euclidean space. In other words, the objective functions are defined on a vector space and take values in another vector space. In this paper, the objective values are assumed to lie in a so-called near vector space, which is a more general concept than that of a (conventional) vector space. Roughly speaking, a near vector space is a universal set endowed with the vector structure (i.e. the vector addition and scalar multiplication) that satisfies some elementary laws of operation. The formal definition will be given below. The main characteristic of a near vector space is that additive inverse elements may not exist. In this case, the concept of null set will be introduced to play the role of the “zero element”.

We present three well-known spaces that are not (conventional) vector spaces, but can be checked to be near vector spaces.

  • Let \({\cal I}\) be the space of all bounded and closed intervals in \(\mathbb{R}\). The addition of intervals \([x_{1},x_{2}]\oplus [y_{1},y_{2}]\) and the scalar multiplication of an interval \(\lambda [x_{1},x_{2}]\) can be treated as the vector addition and scalar multiplication. Subtracting an interval from itself, i.e. \([x_{1},x_{2}]\ominus [x_{1},x_{2}]\), does not yield a zero element in \({\cal I}\). In this case, we cannot consider the concept of inverse elements in \({\cal I}\).
  • Let \({\cal F}_{cc}(\mathbb{R})\) be the space of all fuzzy numbers in \(\mathbb{R}\). The addition of fuzzy numbers \(\tilde{x}\oplus\tilde{y}\) and the scalar multiplication of a fuzzy number \(\lambda\tilde{x}\) can be treated as the vector addition and scalar multiplication. Subtracting a fuzzy number from itself, i.e. \(\tilde{x}\ominus\tilde{x}\), does not yield a zero element in \({\cal F}_{cc}(\mathbb{R})\). In this case, we cannot consider the concept of inverse elements in \({\cal F}_{cc}(\mathbb{R})\).
  • Let \(U\) be a (conventional) vector space, and let \({\cal P}(U)\) be the collection of all subsets of \(U\). The collection \({\cal P}(U)\) is also called a hyperspace. The addition of sets \(A\oplus B\) and the scalar multiplication of a set \(\lambda A\) can be treated as the vector addition and scalar multiplication. Subtracting a set from itself, i.e. \(A\ominus A\), does not yield a zero element in \({\cal P}(U)\). In this case, we cannot consider the concept of inverse elements in \({\cal P}(U)\).
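For illustration, the interval and hyperspace operations above can be sketched numerically. This is a rough sketch under our own encoding (intervals as tuples, sets of reals as frozensets; the helper names are ours, not notation from the paper):

```python
# Intervals [a, b] are modeled as tuples; sets of reals as frozensets.

def i_add(x, y):
    # [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2]
    return (x[0] + y[0], x[1] + y[1])

def i_scale(k, x):
    # k[a, b] = [ka, kb] if k >= 0, and [kb, ka] if k < 0
    return (k * x[0], k * x[1]) if k >= 0 else (k * x[1], k * x[0])

def i_sub(x, y):
    # x ⊖ y = x ⊕ (-1)y
    return i_add(x, i_scale(-1, y))

x = (1.0, 3.0)
print(i_sub(x, x))  # (-2.0, 2.0): not a zero element of the space

def s_add(A, B):
    # A ⊕ B = {a + b : a in A, b in B} (Minkowski sum)
    return frozenset(a + b for a in A for b in B)

def s_scale(k, A):
    # kA = {ka : a in A}
    return frozenset(k * a for a in A)

A = frozenset({0, 1})
print(sorted(s_add(A, s_scale(-1, A))))  # [-1, 0, 1]: not {0}
```

In both spaces, an element minus itself produces a "symmetric" object rather than a zero element, which is exactly why inverse elements are unavailable.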

The scalarization technique is frequently used in solving vector optimization problems and multiobjective programming problems. Many interesting results can be found in [1]-[14] and the references therein. In this paper, we also use the scalarization technique to solve the proposed problem, which is based on the so-called near vector space. The methodology proposed in this paper is a completely new attempt to solve vector optimization problems whose objective values are assumed to lie in a so-called near vector space. Set optimization problems and interval-valued optimization problems can be treated as special cases of the vector optimization problems proposed in this paper. A more detailed description is given in Section 6.

The comparison of sets in optimization problems with set-valued objective functions was first proposed by Kuroiwa, Tanaka and Ha \ref{kur96}\ref{kur01}\ref{kur03}\ref{kur97}, which also initiated the study of set optimization problems; see Jahn and Ha \ref{jah11}. Using the Demyanov difference, Jahn \ref{jahb}\ref{jahc} also studied directional derivatives in set optimization. On the other hand, Jahn \ref{jah} considered vectorization by transforming a set optimization problem into a vector optimization problem, and Jahn \ref{jahc} studied the Karush-Kuhn-Tucker conditions in set optimization. Alonso, Rodríguez-Marín and Hernández \ref{alo05}\ref{alo09}\ref{her07}\ref{her07b} studied optimality conditions, existence theorems and nonconvex scalarization in set optimization problems. Löhne and Schrage \ref{loh} designed an algorithm to solve so-called polyhedral convex set optimization problems. Following the well-posedness results for scalar and vector optimization problems, Long and Peng \ref{lon13} discussed well-posedness for set optimization problems. Wu \ref{wu19} used the concept of null set to study set optimization problems.

The duality theorems and optimality conditions for interval-valued optimization problems were studied by Wu \ref{wu}\ref{wu08}\ref{wu08b}\ref{wu10} using the so-called Hukuhara derivative. Chalco-Cano et al. \ref{cha13} and Osuna-Gómez et al. \ref{gom} extended this study of optimality conditions by using the so-called generalized Hukuhara derivative. Also, Jayswal et al. \ref{jay} studied duality theorems and optimality conditions by using the concept of generalized convexity. Many other interesting articles regarding interval-valued optimization problems can be found in the references therein. On the other hand, Wu \ref{wu18} used the concept of null set to study interval-valued optimization problems.

Since the near vector space still possesses the vector structure, we can propose an analogous concept of convex cone by considering only the vector addition and scalar multiplication. The notions of convex cone and partial ordering on a vector space are essentially equivalent. This inspires us to introduce orderings on a near vector space using an analogous notion of convex cone in the near vector space. Also, a scalarization technique is proposed to obtain the minimizers of the original vector optimization problem. In particular, interval-valued linear programming problems take objective values in the space of all bounded and closed intervals in \(\mathbb{R}\), and linear set optimization problems take objective values in the hyperspace; both can be solved using the methodology proposed in this paper.

In Section 2, we introduce the concept of near vector space, in which the additive inverse element does not necessarily exist. In Section 3, we define the concept of convex cone in a near vector space. Based on this concept, two orderings are proposed, and some useful order-preserving properties are obtained for the purpose of finding minimizers. In Section 4, we formulate the vector optimization problem whose objective values are assumed to lie in a near vector space; two solution concepts are proposed by considering the two orderings of Section 3. In Section 5, we apply the scalarization technique to solve the vector optimization problem formulated in Section 4. In Section 6, a special type of linear problem is introduced; in particular, interval-valued linear programming problems and linear set optimization problems are solved using the proposed methodology.

2 Near Vector Space.

The near vector space proposed in this paper is completely different from the quasilinear space. The concept of quasilinear space can be found in Assev \ref{ass}, Lupulescu and O’Regan \ref{lup} and the references therein. A quasilinear space owns a zero element, which means that the additive inverse element can be naturally realized. However, a near vector space does not necessarily own a zero element, which also means that the concept of additive inverse element is not considered. The formal definition of near vector space provided below shows that every quasilinear space is a near vector space, while the converse is not true.

Given a universal set \(X\), we assume that \(X\) is endowed with the vector addition \(x\oplus y\) and scalar multiplication \(\alpha x\) for any \(x,y\in X\) and \(\alpha\in\mathbb{R}\) satisfying \(x\oplus y\in X\) and \(\alpha x\in X\); that is, the universal set \(X\) is closed under the vector addition and scalar multiplication. In this case, we also say that \(X\) is a universal set over \(\mathbb{R}\).

In a conventional vector space, the additive inverse element of \(x\) is denoted by \(-x\), and it can also be shown that \(-x=(-1)x\), where \((-1)x\) means the scalar multiplication. Here, we shall not consider the concept of inverse elements. However, for convenience, we still adopt the notation \(-x=(-1)x\).

\begin{equation}{\label{lsv100}}\tag{1}\mbox{}\end{equation}

Example \ref{lsv100}. Let \({\cal I}\) be a collection of all bounded closed intervals in \(\mathbb{R}\).
The vector addition is defined by
\[[a,b]\oplus [c,d]=[a+c,b+d],\]
and the scalar multiplication is defined by
\[k[a,b]=\left\{\begin{array}{ll}\mbox{$[ka,kb]$} & \mbox{if \(k\geq 0\)}\\ \mbox{$[kb,ka]$} & \mbox{if \(k<0\).}\end{array}\right .\]
The above definitions show
\[-[a,b]=-1[a,b]=[-b,-a].\]
We also have
\[[a,b]\ominus [a,b]=[a,b]\oplus [-b,-a]=[a-b,b-a],\]
which is not a zero element unless \(a=b\); that is, the additive inverse element does not exist in general. One of the reasons is that the concept of zero element of \({\cal I}\) is not clear. In other words, \({\cal I}\) is not a (conventional) vector space under the above vector addition and scalar multiplication. \(\sharp\)

Given any \(x\) and \(y\) in the universal set \(X\) over \(\mathbb{R}\), the subtraction is denoted and defined by
\[x\ominus y=x\oplus (-y),\]
where \(-y\) means the scalar multiplication \((-1)y\). Given any \(x,y\in X\), if there exists a unique \(z\in X\) satisfying
\[x=y\oplus z=z\oplus y,\]
then \(z\) is called the Hukuhara difference between \(x\) and \(y\). In this case, we also write \(z=x\ominus_{H}y\). It is not necessary that \(x\ominus_{H}y=x\ominus y\) holds true. However, if \(X\) happens to be a real vector space, then the concepts of subtraction and Hukuhara difference are equivalent.

\begin{equation}{\label{lsvex1}}\tag{2}\mbox{}\end{equation}

Example \ref{lsvex1}. Continued from Example \ref{lsv100}, assume that
\[I_{x}=[x_{1},x_{2}],\quad I_{y}=[y_{1},y_{2}]\mbox{ and }I_{z}=[z_{1},z_{2}]\]
are three bounded closed intervals in \(\mathbb{R}\). For \(I_{x}=I_{y}\oplus I_{z}\), we have
\[[x_{1},x_{2}]=[y_{1}+z_{1},y_{2}+z_{2}].\]
Therefore, we obtain
\[z_{1}=x_{1}-y_{1}\mbox{ and }z_{2}=x_{2}-y_{2},\]
which shows that if \(x_{1}-y_{1}\leq x_{2}-y_{2}\), i.e. \(x_{1}-x_{2}\leq y_{1}-y_{2}\), then the Hukuhara difference \(I_{x}\ominus_{H}I_{y}\) exists and
\[I_{x}\ominus_{H}I_{y}=[x_{1}-y_{1},x_{2}-y_{2}].\]
Since \(-I_{y}=[-y_{2},-y_{1}]\), we also have
\[I_{x}\ominus I_{y}=I_{x}\oplus (-I_{y})=[x_{1}-y_{2},x_{2}-y_{1}]\]
which is not equal to \(I_{x}\ominus_{H}I_{y}\). \(\sharp\)
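The computations in this example can be sketched numerically. The tuple encoding and helper names below are ours for illustration; `None` is used to signal that the Hukuhara difference does not exist:

```python
def i_add(x, y):
    # [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2]
    return (x[0] + y[0], x[1] + y[1])

def i_neg(x):
    # -[a, b] = (-1)[a, b] = [-b, -a]
    return (-x[1], -x[0])

def i_sub(x, y):
    # x ⊖ y = x ⊕ (-y)
    return i_add(x, i_neg(y))

def hukuhara(x, y):
    # x ⊖_H y = [x1 - y1, x2 - y2], which exists when x1 - y1 <= x2 - y2
    z = (x[0] - y[0], x[1] - y[1])
    return z if z[0] <= z[1] else None  # None signals non-existence

Ix, Iy = (2.0, 7.0), (1.0, 3.0)
print(hukuhara(Ix, Iy))  # (1.0, 4.0)
print(i_sub(Ix, Iy))     # (-1.0, 6.0), which differs from the Hukuhara difference
```

Note that the subtraction \(I_{x}\ominus I_{y}\) always exists, while the Hukuhara difference may not, and the two generally disagree.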

Given a universal set \(X\) over \(\mathbb{R}\) and any \(x\in X\), the subtraction \(x\ominus x\) is not necessarily a zero element by referring to Example \ref{lsv100}. Therefore, we propose the following definition.

Definition. Given a universal set \(X\) over \(\mathbb{R}\), the following set

\[\Omega =\{x\ominus x:x\in X\}\]
is called the null set of \(X\). We say that the null set \(\Omega\) satisfies the neutral condition when \(\omega\in\Omega\) implies \(-\omega\in\Omega\).
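For the interval space \({\cal I}\) of Example \ref{lsv100}, the null set can be computed explicitly: \(x\ominus x=[a-b,b-a]\) for \(x=[a,b]\), so \(\Omega\) consists of the symmetric intervals \([-k,k]\) with \(k\geq 0\), and the neutral condition holds since \(-[-k,k]=[-k,k]\). A small sanity check of this observation (the encoding is ours):

```python
def i_sub_self(x):
    # x ⊖ x = [a - b, b - a] for x = [a, b]
    a, b = x
    return (a - b, b - a)

def i_neg(x):
    # -[a, b] = [-b, -a]
    return (-x[1], -x[0])

for x in [(1.0, 4.0), (-2.0, 5.0), (3.0, 3.0)]:
    w = i_sub_self(x)
    k = w[1]
    assert w == (-k, k) and k >= 0  # Ω = {[-k, k] : k >= 0}
    assert i_neg(w) == w            # neutral condition: -ω ∈ Ω
print("null elements of the interval space are symmetric intervals")
```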

Definition. Let \(X\) be a universal set over \(\mathbb{R}\).

  • We say that the commutative law for vector addition holds true in \(X\) when
    \[x\oplus y=y\oplus x\]
    for any \(x,y\in X\).
  • We say that the associative law for vector addition holds true in \(X\) when
    \[(x\oplus y)\oplus z=x\oplus (y\oplus z)\]
    for any \(x,y,z\in X\). \(\sharp\)

Now, we are in a position to define the concept of near vector space.

Definition. Let \(X\) be a universal set over \(\mathbb{R}\). We say that \(X\) is a near vector space when the following conditions are satisfied:

  • \(1x=x\) for any \(x\in X\);
  • \(x=y\) implies \(x\oplus z=y\oplus z\) and \(\alpha x=\alpha y\) for any \(x,y,z\in X\) and \(\alpha\in\mathbb{R}\);
  • the commutative and associative laws for vector addition hold true in \(X\). \(\sharp\)

It is obvious that if \(X\) is a (conventional) vector space, then it is also a near vector space. From Example \ref{lsv100}, although \({\cal I}\) is not a vector space, we can see that \({\cal I}\) is a near vector space.

\begin{equation}{\label{null29}}\tag{3}\mbox{}\end{equation}

Example \ref{null29}. Let \(X\) be a (conventional) vector space, and let \({\cal P}(X)\) be the family of all subsets of \(X\). Given any \(A,B\in {\cal P}(X)\) and \(\lambda\in\mathbb{R}\), the vector addition and scalar multiplication are defined by
\[A\oplus B=\left\{a+b:a\in A,b\in B\right\}\mbox{ and }\lambda A=\left\{\lambda a:a\in A\right\}.\]
The null set of \({\cal P}(X)\) is given by
\[\Omega =\left\{A\ominus A=A\oplus (-A):A\in {\cal P}(X)\right\}.\]
Although \({\cal P}(X)\) is not a vector space, we can show that \({\cal P}(X)\) is a near vector space. \(\sharp\)
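The near-vector-space conditions can be spot-checked on small finite subsets of \(\mathbb{R}\). This is a sketch under our own frozenset encoding; spot checks on particular sets are of course not a proof:

```python
def s_add(A, B):
    # A ⊕ B = {a + b : a in A, b in B}
    return frozenset(a + b for a in A for b in B)

def s_scale(k, A):
    # kA = {ka : a in A}
    return frozenset(k * a for a in A)

A, B, C = frozenset({0, 1}), frozenset({2}), frozenset({-1, 3})
assert s_scale(1, A) == A                              # 1A = A
assert s_add(A, B) == s_add(B, A)                      # commutative law
assert s_add(s_add(A, B), C) == s_add(A, s_add(B, C))  # associative law
assert s_add(A, s_scale(-1, A)) != frozenset({0})      # A ⊖ A is not {0}
```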

\begin{equation}{\label{null171}}\tag{4}\mbox{}\end{equation}

Proposition \ref{null171}. Let \(X\) be a near vector space. Assume
\[-(x\oplus y)=(-x)\oplus (-y)\mbox{ and }-(-x)=x\]
for any \(x,y\in X\). Then, we have \(\Omega\oplus\Omega\subseteq\Omega\) and \(\omega =-\omega\) for any \(\omega\in\Omega\).

Proof. For \(\omega =x\ominus x\in\Omega\), we have
\begin{align*}-\omega & =-(x\ominus x)=-(x\oplus (-x))=(-x)\oplus (-(-x))=(-x)\oplus x\\& =x\oplus (-x)\mbox{ (by the commutative law for vector addition)}\\& =x\ominus x=\omega .\end{align*}
For \(\omega_{1}=x\ominus x\in\Omega\) and \(\omega_{2}=y\ominus y\in\Omega\), we have
\begin{align*}\omega_{1}\oplus\omega_{2} & =(x\ominus x)\oplus (y\ominus y)\\& =(x\oplus y)\ominus x\ominus y\mbox{ (by the commutative and associative laws for vector addition)}\\& =(x\oplus y)\oplus[(-x)\oplus (-y)]\mbox{ (by the associative law for vector addition)}\\& =(x\oplus y)\oplus[-(x\oplus y)]\\& =(x\oplus y)\ominus (x\oplus y)\in\Omega\mbox{ (since \(x\oplus y\in X\)}),\end{align*}
which shows \(\Omega\oplus\Omega\subseteq\Omega\). This completes the proof. \(\blacksquare\)
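In the interval space \({\cal I}\), the two assumed identities \(-(x\oplus y)=(-x)\oplus (-y)\) and \(-(-x)=x\) do hold, so Proposition \ref{null171} applies there; its conclusions can also be checked directly (a sketch with our own helper names):

```python
def i_add(x, y):
    # [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2]
    return (x[0] + y[0], x[1] + y[1])

def i_neg(x):
    # -[a, b] = [-b, -a]
    return (-x[1], -x[0])

def i_sub_self(x):
    # x ⊖ x = [a - b, b - a]
    return (x[0] - x[1], x[1] - x[0])

w1 = i_sub_self((1.0, 4.0))   # (-3.0, 3.0)
w2 = i_sub_self((-2.0, 3.0))  # (-5.0, 5.0)
assert i_neg(w1) == w1        # ω = -ω
# ω1 ⊕ ω2 = (-8, 8), which equals z ⊖ z for any interval z of width 8,
# so the sum stays inside the null set: Ω ⊕ Ω ⊆ Ω here
assert i_add(w1, w2) == i_sub_self((0.0, 8.0))
```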

We remark that the assumptions in Proposition \ref{null171} hold true when \(X\) happens to be a (conventional) vector space. However, in a near vector space, the laws do not guarantee those assumptions. Let \(X\) be a near vector space. We say that \(\theta\) is a zero element of \(X\) when
\[x=x\oplus\theta =\theta\oplus x\]
for any \(x\in X\).

\begin{equation}{\label{null45}}\tag{5}\mbox{}\end{equation}

Proposition \ref{null45}. Let \(X\) be a near vector space with the zero element \(\theta\). Then, the following statements hold true.

(i) We have \(-\theta\in\Omega\).

(ii) Suppose that \(\theta =-(-\theta )\), and that \(\Omega\) satisfies the neutral condition. Then \(\theta\in\Omega\). If we further assume \(-(x\oplus y)=(-x)\oplus (-y)\) for any \(x,y\in X\), then \(\theta =-\theta\).

Proof. We have
\[-\theta =\theta\oplus (-\theta )=\theta\ominus \theta\in\Omega ,\]
which proves part (i). To prove part (ii), part (i) says \(-\theta\in\Omega\); since \(\Omega\) satisfies the neutral condition, we obtain \(\theta =-(-\theta )\in\Omega\). Finally, using Proposition \ref{null171}, we obtain \(\theta =-\theta\). This completes the proof. \(\blacksquare\)

Definition. Let \(X\) be a near vector space with the null set \(\Omega\). Given any \(x,y\in X\), we say that \(x\) and \(y\) are almost identical when any one of the following conditions is satisfied:

  • \(x=y\);
  • there exists \(\omega\in\Omega\) satisfying \(x=y\oplus\omega\) or \(x\oplus\omega =y\);
  • there exist \(\omega_{1},\omega_{2}\in\Omega\) satisfying \(x\oplus\omega_{1}=y\oplus\omega_{2}\).

In this case, we write \(x\stackrel{\Omega}{=}y\). \(\sharp\)

Suppose that the null set \(\Omega\) contains the zero element \(\theta\). Then, we can simply say that \(x\stackrel{\Omega}{=}y\) if and only if there exist \(\omega_{1},\omega_{2}\in\Omega\) satisfying \(x\oplus\omega_{1}=y\oplus\omega_{2}\) (i.e. only the third condition is satisfied), since the first and second conditions can be rewritten as the third condition by adding the zero element \(\theta\). We also remark that, when discussing properties based on \(x\stackrel{\Omega}{=}y\), it suffices to consider the third condition \(x\oplus\omega_{1}=y\oplus\omega_{2}\), even though \(\Omega\) may not contain the zero element \(\theta\), because the same arguments remain applicable for the first and second conditions.
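As a concrete instance, in the interval space \({\cal I}\) every null element is a symmetric interval \([-k,k]\) (since \(x\ominus x=[a-b,b-a]\)), so \(x\oplus\omega_{1}=y\oplus\omega_{2}\) forces \(x_{1}+x_{2}=y_{1}+y_{2}\); conversely, one can check that equal sums admit such witnesses. Hence two intervals are almost identical exactly when they share the same midpoint. A sketch of this characterization (our encoding, not the paper's):

```python
def almost_identical(x, y):
    # In the interval space, x ≐ y reduces to x1 + x2 == y1 + y2
    # (equal midpoints), since the null elements are exactly the
    # symmetric intervals [-k, k].
    return x[0] + x[1] == y[0] + y[1]

assert almost_identical((1.0, 3.0), (0.0, 4.0))      # midpoint 2 in both cases
assert not almost_identical((1.0, 3.0), (1.0, 4.0))
# explicit witness: [1,3] ⊕ [-1,1] = [0,4] = [0,4] ⊕ [0,0]
assert (1.0 - 1.0, 3.0 + 1.0) == (0.0, 4.0)
```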

\begin{equation}{\label{ch1p*106}}\tag{6}\mbox{}\end{equation}

Proposition \ref{ch1p*106}. Let \(X\) be a near vector space with the null set \(\Omega\). Then, we have the following properties.

(i) If \(x\ominus y\in\Omega\), then \(x\stackrel{\Omega}{=}y\).

(ii) Assume \(\Omega\oplus\Omega\subseteq\Omega\). If \(x\stackrel{\Omega}{=}y\), then there exists \(\omega\in\Omega\) satisfying \(x\ominus y\oplus\omega\in\Omega\).

Proof. To prove part (i), we have \(x\oplus (-y)=\omega_{1}\) for some \(\omega_{1}\in\Omega\), which implies
\[x\oplus (-y)\oplus y=\omega_{1}\oplus y\]
by adding \(y\) on both sides. This shows \(x\oplus\omega_{2}=\omega_{1}\oplus y\), where \(\omega_{2}=y\ominus y\in\Omega\) by the commutative law for vector addition, i.e. \(x\stackrel{\Omega}{=}y\).

To prove part (ii), since \(x\stackrel{\Omega}{=}y\), we have \(x\oplus\omega_{2}=\omega_{1}\oplus y\) for some \(\omega_{1},\omega_{2}\in\Omega\). By adding \(-y\) on both sides, we obtain
\[x\ominus y\oplus\omega_{2}=\omega_{1}\oplus\omega_{3}\in\Omega,\]
where \(\omega_{3}=y\ominus y\in\Omega\). This completes the proof. \(\blacksquare\)

Definition. Let \(V\) be a (conventional) vector space, and let \(X\) be a near vector space. We consider a function \(L:X\rightarrow V\).

  • We say that \(L\) is additive when \(L(x\oplus y)=L(x)+L(y)\) for any \(x,y\in X\).
  • We say that \(L\) is homogeneous when \(L(\lambda x)=\lambda L(x)\) for any \(\lambda\in\mathbb{R}\) and \(x\in X\).
  • We say that \(L\) is positively homogeneous when \(L(\lambda x)=\lambda L(x)\) for any \(\lambda\geq 0\) and \(x\in X\).
  • We say that \(L\) is linear when it is additive and homogeneous. \(\sharp\)

Example. Continued from Example \ref{lsvex1}, we define a function

\[L:{\cal I}\rightarrow\mathbb{R}\mbox{ by }L([x_{1},x_{2}])=x_{1}+x_{2}.\]
We see that \(L\) is linear, since
\[L([x_{1},x_{2}]\oplus [y_{1},y_{2}])=L([x_{1}+y_{1},x_{2}+y_{2}])=
x_{1}+x_{2}+y_{1}+y_{2}=L([x_{1},x_{2}])+L([y_{1},y_{2}])\]

and
\[L(\alpha [x_{1},x_{2}])=\left\{\begin{array}{ll}L([\alpha x_{1},\alpha x_{2}]) & \mbox{if \(\alpha\geq 0\)}\\L([\alpha x_{2},\alpha x_{1}]) & \mbox{if \(\alpha <0\)}\end{array}\right\}=\alpha (x_{1}+x_{2})=\alpha L([x_{1},x_{2}]).\]
We also define a function
\[L:{\cal I}\rightarrow\mathbb{R}^{2}\mbox{ by }L([x_{1},x_{2}])=(x_{1},x_{2}).\]
Then, we can show that \(L\) is additive and positively homogeneous. \(\sharp\)
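The two functions of this example can be exercised numerically. The tuple encoding and the names `L1`, `L2` below are ours for illustration:

```python
def i_add(x, y):
    # [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2]
    return (x[0] + y[0], x[1] + y[1])

def i_scale(k, x):
    # k[a, b] = [ka, kb] if k >= 0, and [kb, ka] if k < 0
    return (k * x[0], k * x[1]) if k >= 0 else (k * x[1], k * x[0])

def L1(x):
    # L([x1, x2]) = x1 + x2: additive and homogeneous, hence linear
    return x[0] + x[1]

def L2(x):
    # L([x1, x2]) = (x1, x2): additive but only positively homogeneous
    return (x[0], x[1])

x, y = (1.0, 4.0), (-2.0, 3.0)
assert L1(i_add(x, y)) == L1(x) + L1(y)
assert L1(i_scale(-3.0, x)) == -3.0 * L1(x)  # holds even for negative scalars
assert L2(i_scale(2.0, x)) == (2.0, 8.0)
assert L2(i_scale(-1.0, x)) != (-1.0, -4.0)  # homogeneity fails for k < 0
```

The last assertion shows why the second function is only positively homogeneous: a negative scalar flips the endpoints of the interval before \(L\) is applied.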

We denote by \(\theta_{V}\) the zero element of a (conventional) vector space \(V\).

\begin{equation}{\label{lsvp3}}\tag{7}\mbox{}\end{equation}

Proposition \ref{lsvp3}. Let \(V\) be a (conventional) vector space, and let \(X\) be a near vector space with the null set \(\Omega\). Consider the function \(L:X\rightarrow V\). Then, we have the following properties.

(i) Assume \(-\omega =\omega\) and \(-L(\omega )=L(-\omega )\) for any \(\omega\in\Omega\). Then, we have \(L(\omega )=\theta_{V}\) for any \(\omega\in\Omega\).

(ii) Assume \(L(\omega )=\theta_{V}\) for any \(\omega\in\Omega\). If \(L\) is additive, then \(L(x\ominus y)=L(x)-L(y)\).

(iii) Suppose that \(L\) is additive and the Hukuhara difference \(x\ominus_{H}y\) exists for \(x,y\in X\). Then, we have \(L(x\ominus_{H}y)=L(x)-L(y)\).

Proof. To prove part (i), we have
\[-L(\omega )=L(-\omega )=L(\omega ),\]
which shows \(L(\omega )=\theta_{V}\), since \(V\) is a vector space.

To prove part (ii), let \(z=x\ominus y\). By adding \(y\) on both sides, we have \(z\oplus y=x\oplus\omega\), where \(\omega=y\ominus y\in\Omega\). Using the additivity, we obtain
\[L(z)+L(y)=L(z\oplus y)=L(x\oplus\omega)=L(x)+L(\omega)=L(x),\]
which implies the equality \(L(x\ominus y)=L(x)-L(y)\).

To prove part (iii), let \(z=x\ominus_{H}y\), i.e. \(x=z\oplus y\). Then, we have \(L(x)=L(z)+L(y)\), i.e. \(L(z)=L(x)-L(y)\). This completes the proof. \(\blacksquare\)

Remark. Part (i) of Proposition \ref{lsvp3} needs the assumption of \(-\omega =\omega\) for \(\omega\in\Omega\). Proposition \ref{null171} shows \(-\omega =\omega\) when the following equalities
\[-(x\oplus y)=(-x)\oplus (-y)\mbox{ and }-(-x)=x\]
are satisfied for any \(x,y\in X\). \(\sharp\)

Although the near vector space is not a (conventional) vector space, we can also consider the concept of convexity based on the vector addition and scalar multiplication.

Definition. Let \(X\) be a near vector space, and let \(C\) be a subset of \(X\). We say that \(C\) is convex when \(\lambda x\oplus (1-\lambda )y\in C\) for \(x,y\in C\) and \(\lambda\in [0,1]\). We say that \(C\) is a cone when \(\lambda x\in C\) for \(x\in C\) and \(\lambda >0\). A cone \(C\) is said to be a convex cone when it is also convex. \(\sharp\)

The following result is obvious.

\begin{equation}{\label{lsvr1}}\tag{8}\mbox{}\end{equation}

Proposition \ref{lsvr1}. Let \(V\) be a vector space, and let \(X\) be a near vector space. We consider a function \(L:X\rightarrow V\). Let \(C\) be a convex cone in \(X\). If \(L\) is additive and positively homogeneous, then the set \(L(C)=\{L(x):x\in C\}\) is a convex cone in the vector space \(V\). \(\sharp\)

3 Order-Preserving Properties.

We shall consider many kinds of partial orderings, and present the order-preserving properties under a transformation. First of all, we introduce the pointed-like convex cone. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a subset of \(X\). We define
\begin{equation}{\label{eq3}}\tag{9}
C^{-}=\left\{c^{-}\in X:\mbox{there exist \(c\in C\) and \(\omega\in\Omega\) satisfying \(c\oplus c^{-}\stackrel{\Omega}{=}\omega\)}\right\}.
\end{equation}

Let us recall
\[-C=\{-c:c\in C\}.\]
It is easy to see that \(-C\subseteq C^{-}\), since \((-c)\oplus c=c\ominus c\in\Omega\) by the commutative law for vector addition. If \(X\) happens to be a (conventional) vector space, then we have \(-C=C^{-}\).

\begin{equation}{\label{p20}}\tag{10}\mbox{}\end{equation}

Example \ref{p20}. Let \(\eta :X\rightarrow\mathbb{R}\) be an additive and homogeneous real-valued function defined on a near vector space \(X\) with the null set \(\Omega\) satisfying
\begin{equation}{\label{eq1}}\tag{11}
\ker\eta=\left\{x\in X:\eta(x)=0\right\}=\Omega.
\end{equation}

Let
\[C=\left\{c\in X:\eta(c)\geq 0\right\}.\]
It is easy to see that \(C\) is a convex cone in \(X\) satisfying \(\Omega\subseteq C\). Given any \(c^{-}\in C^{-}\), the definition says \(c\oplus c^{-}\stackrel{\Omega}{=}\omega\) for some \(c\in C\) and \(\omega\in\Omega\). It also means that there exist \(\omega_{1},\omega_{2}\in\Omega\) satisfying
\[c\oplus c^{-}\oplus\omega_{1}=\omega_{2}\oplus\omega.\]
Therefore, using (\ref{eq1}), we obtain
\[\eta(c)+\eta\left (c^{-}\right )=\eta(c)+\eta\left (c^{-}\right )+\eta\left (\omega_{1}\right )
=\eta\left (c\oplus c^{-}\oplus\omega_{1}\right )=\eta\left (\omega_{2}\oplus\omega\right )=\eta\left (\omega_{2}\right )+
\eta\left (\omega\right )=0,\]

which implies
\[\eta\left (c^{-}\right )=-\eta(c)\leq 0.\]
This shows the following inclusion
\[C^{-}\subseteq\left\{c^{-}\in X:\eta\left (c^{-}\right )\leq 0\right\}.\]

Next, we want to prove another direction of inclusion. Given any \(c^{-}\in X\) with \(\eta(c^{-})\leq 0\), we have
\[\eta(-c^{-})=-\eta(c^{-})\geq 0,\]
which says \(-c^{-}\in C\). Let \(\hat{c}=-c^{-}\in C\). Then, we have
\[\eta\left (\hat{c}\oplus c^{-}\right )=\eta\left (\hat{c}\right )
+\eta\left (c^{-}\right )=-\eta\left (c^{-}\right )+\eta\left (c^{-}\right )=0,\]

which says \(\hat{c}\oplus c^{-}\in\Omega\) by (\ref{eq1}), i.e. \(\hat{c}\oplus c^{-}=\omega\) for some \(\omega\in\Omega\). Therefore, we obtain \(\hat{c}\oplus c^{-}\stackrel{\Omega}{=}\omega\). Since \(\hat{c}\in C\), it follows \(c^{-}\in C^{-}\). Therefore, we obtain the
following equality
\[C^{-}=\left\{c^{-}\in X:\eta\left (c^{-}\right )\leq 0\right\}=\left\{c\in X:\eta\left (c\right )\leq 0\right\}. \sharp\]
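For the interval space \({\cal I}\), the choice \(\eta ([x_{1},x_{2}])=x_{1}+x_{2}\) is additive and homogeneous with \(\ker\eta\) equal to the null set (the symmetric intervals), so this example applies concretely. A sketch of the resulting membership tests (the function names are ours):

```python
def eta(x):
    # η([x1, x2]) = x1 + x2; its kernel is the set of symmetric intervals,
    # which is exactly the null set Ω of the interval space
    return x[0] + x[1]

def in_C(x):
    # C = {c : η(c) >= 0}
    return eta(x) >= 0

def in_C_minus(x):
    # C^- = {c : η(c) <= 0}, by the equality derived in the example
    return eta(x) <= 0

assert in_C((-1.0, 3.0)) and not in_C_minus((-1.0, 3.0))
assert in_C_minus((-3.0, 1.0))
# intervals in C ∩ C^- have η = 0, i.e. they lie in Ω (pointed-like behavior)
assert in_C((-2.0, 2.0)) and in_C_minus((-2.0, 2.0)) and eta((-2.0, 2.0)) == 0
```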

The following properties will be used in the subsequent study.

Proposition. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a subset of \(X\).

(i) Assume \(\Omega\oplus\Omega\subseteq\Omega\). Then, we have \(C^{-}\oplus\Omega\subseteq C^{-}\).

(ii) Suppose that \(\Omega\) contains the unique zero element \(\theta\). Then, we have \(C^{-}\subseteq C^{-}\oplus\Omega\).

Proof. To prove part (i), given any \(c^{-}\in C^{-}\), the definition says \(c\oplus c^{-}\stackrel{\Omega}{=}\omega_{1}\) for some \(c\in C\) and \(\omega_{1}\in\Omega\), which also says
\[c^{-}\oplus c\oplus\omega_{2}=\omega_{1}\oplus\omega_{3}\]
for some \(\omega_{2},\omega_{3}\in\Omega\). Given any \(\omega\in\Omega\), we have
\[(c^{-}\oplus\omega )\oplus c\oplus\omega_{2}=\omega\oplus\omega_{1}\oplus\omega_{3}\]
by adding \(\omega\) on both sides, which says \((c^{-}\oplus\omega )\oplus c\stackrel{\Omega}{=}\omega\) since \(\Omega\oplus\Omega\subseteq\Omega\). Therefore, we obtain \(c^{-}\oplus\omega\in C^{-}\) for any \(\omega\in\Omega\), which shows the inclusion \(C^{-}\oplus\Omega\subseteq C^{-}\).

To prove part (ii), given any \(c^{-}\in C^{-}\), we have
\[c^{-}=c^{-}\oplus\theta\in C^{-}\oplus\Omega,\]
which shows the inclusion \(C^{-}\subseteq C^{-}\oplus\Omega\). This completes the proof. \(\blacksquare\)

Suppose that \(X\) happens to be a (conventional) vector space; that is, \(\Omega =\{\theta\}\), where \(\theta\) is the zero element of \(X\). Then, we have \(-C=C^{-}\). In this case, a convex cone \(C\) is pointed-like if and only if
\[C\cap (-C)=C\cap C^{-}=\{\theta\}.\]
Since the null set \(\Omega\) in a near vector space plays the role of the “zero element”, we propose analogous concepts as follows.

\begin{equation}{\label{lsvd29}}\tag{12}\mbox{}\end{equation}

Definition \ref{lsvd29}. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a convex cone in \(X\).

  • We say that \(C\) is pointed-like when \(x\in C\cap C^{-}\) implies \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\).
  • We say that \(C\) is \(\Omega\)-pointed-like when \(x\in (C\oplus\Omega )\cap C^{-}\) implies \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\).  \(\sharp\)

\begin{equation}{\label{ex34}}\tag{13}\mbox{}\end{equation}

Example \ref{ex34}. Let \({\cal I}\) be the collection of all bounded closed intervals in \(\mathbb{R}\). It is easy to see that the set
\[C=\left\{\left [c_{1},c_{2}\right ]\in {\cal I}:c_{1}+c_{2}\geq 0\right\}\]
is a pointed-like convex cone in \({\cal I}\) satisfying \(\Omega\subseteq C\). \(\sharp\)

\begin{equation}{\label{p21}}\tag{14}\mbox{}\end{equation}

Example \ref{p21}. Continued from Example \ref{p20}, we want to show that \(C\) is \(\Omega\)-pointed-like. Given any \(x\in (C\oplus\Omega )\cap C^{-}\), we have \(x=c\oplus\omega\) and \(\eta(x)\leq 0\) for some \(c\in C\) and \(\omega\in\Omega\). Since \(\eta(c)\geq 0\), we have
\[\eta(x)=\eta(c\oplus\omega)=\eta(c)+\eta(\omega)=\eta(c)\geq 0.\]
Therefore, we must have \(\eta(x)=0\), which says \(x\in\ker\eta\), i.e. \(x\in\Omega\). Therefore, we have \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\) and conclude that \(C\) is \(\Omega\)-pointed-like. We can similarly show that \(C\) is also pointed-like. \(\sharp\)

In order to consider minimal elements in a near vector space and minimizers of vector optimization problems, we need to consider partial orderings on the near vector space.

Definition. Let \(X\) be a near vector space, and let \(\precsim\) be a binary relation on \(X\).

  • We say that \(\precsim\) is reflexive when \(x\precsim x\) for all \(x\in X\).
  • We say that \(\precsim\) is transitive when \(x\precsim y\) and \(y\precsim z\) imply \(x\precsim z\) for any \(x,y,z\in X\).
  • We say that \(\precsim\) is antisymmetric when \(x\precsim y\) and \(y\precsim x\) imply \(x=y\).
  • We say that \(\precsim\) is compatible with vector addition when
    \[x\precsim y\mbox{ and }a\precsim b\mbox{ imply }x\oplus a\precsim y\oplus b\]
    for any \(x,y,a,b\in X\).
  • We say that \(\precsim\) is compatible with scalar multiplication when
    \[x\precsim y\mbox{ and }\lambda>0\mbox{ imply }\lambda x\precsim\lambda y\]
    for any \(x,y\in X\).

Each binary relation \(\precsim\) on \(X\) is called a partial ordering on \(X\) when it is reflexive and transitive. \(\sharp\)

Proposition. Let \(X\) be a near vector space such that the null set \(\Omega\) is a convex cone. Suppose that the binary relation \(\precsim\) on \(X\) is compatible with the vector addition and scalar multiplication. We consider the following set
\[C=\{x:x\succsim\omega\mbox{ for some }\omega\in\Omega\}.\]
Then, we have the following properties.

(i) The set \(C\) is a convex cone.

(ii) Assume that \(\Omega\oplus\Omega\subseteq\Omega\), and that
\begin{equation}{\label{eq2}}\tag{15}
x\oplus\omega_{3}\precsim\omega_{1}\mbox{ and }\omega_{2}\precsim x\oplus\omega_{3}
\mbox{ for some }\omega_{1},\omega_{2},\omega_{3}\in\Omega\mbox{ imply }
x\stackrel{\Omega}{=}\omega\mbox{ for some }\omega\in\Omega.
\end{equation}

Then \(C\) is both pointed-like and \(\Omega\)-pointed-like.

Proof. To prove part (i), given any \(x\in C\), i.e. \(x\succsim\omega\) for some \(\omega\in\Omega\), we have \(\lambda x\succsim\lambda\omega\) for \(\lambda >0\) by the compatibility with scalar multiplication. Since \(\Omega\) is a cone, i.e. \(\lambda\omega\in\Omega\), it follows \(\lambda x\in C\). This shows that \(C\) is a cone. Given any \(x,y\in C\), i.e. \(x\succsim\omega_{1}\) and \(y\succsim\omega_{2}\) for some \(\omega_{1},\omega_{2}\in\Omega\), using the compatibility with the vector addition and scalar multiplication, since \(\Omega\) is convex, we have
\[\lambda x\oplus (1-\lambda )y\succsim\lambda\omega_{1}\oplus (1-\lambda )\omega_{2}\in\Omega.\]
This shows that \(C\) is a convex cone.

To prove part (ii), given any \(x\in (C\oplus\Omega )\cap C^{-}\), we have \(x\in C^{-}\) and \(x=c\oplus\omega_{1}\) for some \(c\in C\) and \(\omega_{1}\in\Omega\). Therefore, we have \(c\succsim\omega_{2}\) for some \(\omega_{2}\in\Omega\), which implies \(c\oplus\omega_{1}\succsim\omega_{1}\oplus\omega_{2}\) by adding \(\omega_{1}\) on both sides. This shows
\begin{equation}{\label{ch1eq225}}\tag{16}
x\succsim\omega_{1}\oplus\omega_{2}.
\end{equation}

On the other hand, since \(x\in C^{-}\), the definition says \(x\oplus y\stackrel{\Omega}{=}\omega_{3}\) for some \(y\in C\) and \(\omega_{3}\in\Omega\). Therefore, there exist \(\omega_{4},\omega_{5}\in\Omega\) satisfying
\[x\oplus y\oplus\omega_{4}=\omega_{3}\oplus\omega_{5}.\]
Since \(y\in C\), we also have \(y\succsim\omega_{6}\) for some \(\omega_{6}\in\Omega\), which implies
\[x\oplus y\oplus\omega_{4}\succsim x\oplus\omega_{4}\oplus\omega_{6}\]
by adding \(x\oplus\omega_{4}\) on both sides, i.e.
\begin{equation}{\label{ch1eq226}}\tag{17}
\omega_{3}\oplus\omega_{5}\succsim x\oplus\omega_{4}\oplus\omega_{6}.
\end{equation}

By adding \(\omega_{4}\oplus\omega_{6}\) on both sides of (\ref{ch1eq225}), we can obtain
\begin{equation}{\label{ch1eq227}}\tag{18}
x\oplus\omega_{4}\oplus\omega_{6}\succsim\omega_{1}\oplus\omega_{2}\oplus\omega_{4}
\oplus\omega_{6}.
\end{equation}

Since \(\Omega\oplus\Omega\subseteq\Omega\), using assumption (\ref{eq2}), we see that (\ref{ch1eq226}) and (\ref{ch1eq227}) imply \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\). This shows that \(C\) is \(\Omega\)-pointed-like. We can similarly show that \(C\) is pointed-like. This completes the proof. \(\blacksquare\)

Suppose that \(X\) happens to be a (conventional) vector space, and that \(X\) is endowed with a binary relation \(\precsim\). Recall that the binary relation \(\precsim\) on \(X\) is antisymmetric when
\[x\precsim y\mbox{ and }y\precsim x\mbox{ imply }x=y.\]
This convenient usage implicitly relies on the zero element of \(X\). Therefore, in a near vector space, we can propose a similar concept based on the null set \(\Omega\) as follows.

Definition. Let \(X\) be a near vector space with the null set \(\Omega\). We say that the binary relation \(\precsim\) on \(X\) is antisymmetric-like when
\[x\precsim y\mbox{ and }y\precsim x\mbox{ imply }x\stackrel{\Omega}{=}y. \sharp\]

Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). For further study, some assumptions for \(C\) are needed and given below:

(a) \(\lambda (x\oplus y)=\lambda x\oplus\lambda y\) for \(\lambda >0\) and \(x,y\in C\);

(b) \(\lambda\left (\frac{1}{\lambda}x\right )=x\) for \(\lambda >0\) and \(x\in C\).

In a (conventional) vector space, the above assumptions (a) and (b) are satisfied automatically. However, the laws imposed upon a near vector space cannot guarantee these assumptions. It is easy to see that the assumptions (a) and (b) are automatically satisfied when we take \(X={\cal I}\), which is the collection of all bounded closed intervals in \(\mathbb{R}\).

\begin{equation}{\label{lsvp1}}\tag{19}\mbox{}\end{equation}

Lemma \ref{lsvp1}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). Suppose that the assumptions (a) and (b) are satisfied. Then, the convex cone \(C\) is closed under the vector addition. More precisely, we have \(x\oplus y\in C\) for any \(x,y\in C\).

Proof. Since \(C\) is a cone, we have \(\frac{1}{2}x,\frac{1}{2}y\in C\). Also, since \(C\) is convex, we have
\[\frac{1}{2}(x\oplus y)=\frac{1}{2}x\oplus\frac{1}{2}y\in C\]
by assumption (a). Therefore, we obtain
\[x\oplus y=2\cdot\frac{1}{2}(x\oplus y)\in C\]
by assumption (b), since \(C\) is a cone. This completes the proof. \(\blacksquare\)
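The two steps of the above proof can be traced numerically in the interval space \({\cal I}\) of the running example. The following Python sketch is only an illustration under the assumptions that intervals are encoded as pairs \((a_{1},a_{2})\) with \(a_{1}\leq a_{2}\) and that \(C\) is the cone \(\{[c_{1},c_{2}]:c_{1}+c_{2}\geq 0\}\) from Example \ref{ex34}; the function names are hypothetical.

```python
# Sanity check of the Lemma in the interval space I of bounded closed
# intervals.  Intervals are modeled as pairs (a1, a2) with a1 <= a2, and
# C = {[c1, c2] : c1 + c2 >= 0} is the cone of Example `ex34`.

def oplus(x, y):
    """Vector addition in I: [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2]."""
    return (x[0] + y[0], x[1] + y[1])

def scalar(lam, x):
    """Scalar multiplication by lam > 0."""
    return (lam * x[0], lam * x[1])

def in_C(x):
    """Membership in the cone C = {[c1, c2] : c1 + c2 >= 0}."""
    return x[0] + x[1] >= 0

# x, y in C  =>  (1/2)x + (1/2)y in C (convexity step),
# hence x + y = 2 * ((1/2)(x + y)) in C (cone step).
x, y = (-1.0, 3.0), (0.5, 1.5)
assert in_C(x) and in_C(y)
half_sum = oplus(scalar(0.5, x), scalar(0.5, y))
assert in_C(half_sum)                 # convexity step
assert in_C(scalar(2.0, half_sum))    # cone step: x + y in C
assert oplus(x, y) == scalar(2.0, half_sum)
```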

In this paper, we are going to consider two orderings using Hukuhara difference and subtraction, which are separately presented below.

3.1 Partial Ordering Using Hukuhara Difference.

Let \(V\) be a (conventional) vector space, and let \(C\) be a convex cone in \(V\). Given any \(x,y\in V\), recall that the binary relation \(x\preceq y\) defined by \(y-x+\theta_{V}\in C\) is a partial ordering, where \(\theta_{V}\) is a zero element of \(V\). Since the null set \(\Omega\) in near vector space \(X\) can play the role of “zero element”, we propose an ordering below.

\begin{equation}{\label{lsvd11}}\tag{20}\mbox{}\end{equation}

Definition \ref{lsvd11}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). Given any \(x,y\in X\), we define \(y\preceq_{H}x\) when the Hukuhara difference \(x\ominus_{H}y\) exists and \((x\ominus_{H}y)\oplus\omega\in C\) for some \(\omega\in\Omega\). \(\sharp\)

\begin{equation}{\label{ex10}}\tag{21}\mbox{}\end{equation}

Example \ref{ex10}. Continued from Example \ref{lsvex1}, given \(I_{x}=[x_{1},x_{2}]\) and \(I_{y}=[y_{1},y_{2}]\) in \({\cal I}\), the Hukuhara difference \(I_{x}\ominus_{H}I_{y}\) exists when \(x_{1}-x_{2}\leq y_{1}-y_{2}\). In this case, we have
\[I_{x}\ominus_{H}I_{y}=[x_{1}-y_{1},x_{2}-y_{2}].\]
Example \ref{ex34} has shown that the following set
\[C=\left\{\left [c_{1},c_{2}\right ]\in {\cal I}:c_{1}+c_{2}\geq 0\right\}\]
is a pointed-like convex cone. Since \([-k,k]\in\Omega\) for any \(k\geq 0\), we see that
\[I_{y}\preceq_{H}I_{x}\mbox{ if and only if }(I_{x}\ominus_{H}I_{y})\oplus [-k,k]\in C\mbox{ for some }k\geq 0.\]
Since
\[(I_{x}\ominus_{H}I_{y})\oplus [-k,k]=[x_{1}-y_{1},x_{2}-y_{2}]\oplus [-k,k]=[x_{1}-y_{1}-k,x_{2}-y_{2}+k],\]
the membership in \(C\) means
\[(x_{1}-y_{1}-k)+(x_{2}-y_{2}+k)\geq 0,\mbox{ i.e. }x_{1}+x_{2}\geq y_{1}+y_{2},\]
which is independent of \(k\).
Therefore, we have
\[I_{y}\preceq_{H}I_{x}\mbox{ if and only if }x_{1}+x_{2}\geq y_{1}+y_{2}\mbox{ and }x_{1}-x_{2}\leq y_{1}-y_{2}. \sharp\]
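The characterization obtained in Example \ref{ex10} can be cross-checked against a direct computation of the Hukuhara difference. The Python sketch below is an illustration only, assuming intervals encoded as pairs \((a_{1},a_{2})\) with \(a_{1}\leq a_{2}\); the function names are hypothetical.

```python
# Example `ex10` characterizes I_y <=_H I_x by two scalar inequalities.
# This sketch computes the Hukuhara difference directly and compares it
# with that characterization, using C = {[c1, c2] : c1 + c2 >= 0}.

def hukuhara_diff(ix, iy):
    """I_x -_H I_y = [x1 - y1, x2 - y2]; exists iff x1 - x2 <= y1 - y2,
    i.e. the width of I_x is at least the width of I_y."""
    if ix[0] - ix[1] > iy[0] - iy[1]:
        return None
    return (ix[0] - iy[0], ix[1] - iy[1])

def preceq_H(iy, ix):
    """I_y <=_H I_x: the difference exists and (I_x -_H I_y) + [-k, k]
    lies in C for some k >= 0; since the endpoint sum is independent of
    k, this reduces to checking the difference itself."""
    d = hukuhara_diff(ix, iy)
    return d is not None and d[0] + d[1] >= 0

def characterization(iy, ix):
    """The closed form derived in Example `ex10`."""
    return ix[0] + ix[1] >= iy[0] + iy[1] and ix[0] - ix[1] <= iy[0] - iy[1]

# Both predicates agree on a grid of test intervals.
samples = [(a, b) for a in (-2, -1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]
assert all(preceq_H(iy, ix) == characterization(iy, ix)
           for ix in samples for iy in samples)
```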

Under the above assumptions, we present some useful results.

\begin{equation}{\label{lsvp333}}\tag{22}\mbox{}\end{equation}

Proposition \ref{lsvp333}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). We have the following properties.

(i) Suppose that \(X\) has a zero element \(\theta\), and that \(C\) contains \(\theta\). Then, the binary relation \(\preceq_{H}\) is reflexive.

(ii) Suppose that \(\Omega\oplus\Omega\subseteq\Omega\), and that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preceq_{H}\) is transitive.

Proof. To prove part (i), since \(x=x\oplus\theta\), we have
\[x\ominus_{H}x=\theta\in C\]
by the assumption and the Definition of \(\ominus_{H}\). This shows \(x\preceq_{H}x\).

To prove part (ii), assume \(x\preceq_{H}y\preceq_{H}z\). We want to show \(x\preceq_{H}z\). The Definition says that the Hukuhara differences
\[a=y\ominus_{H}x\mbox{ and }b=z\ominus_{H}y\]
exist, and that
\[a\oplus\omega_{1}\in C\mbox{ and }b\oplus\omega_{2}\in C\]
for some \(\omega_{1},\omega_{2}\in\Omega\). Therefore, we have \(y=a\oplus x\) and \(z=b\oplus y\). Let
\[c=a\oplus\omega_{1}\oplus b\oplus\omega_{2}.\]
Using Lemma \ref{lsvp1}, we have \(c\in C\). Therefore, we obtain
\[z=b\oplus y=b\oplus (a\oplus x)=(b\oplus a)\oplus x,\]
which says that the Hukuhara difference \(z\ominus_{H}x=b\oplus a\) exists. We also have
\[(z\ominus_{H}x)\oplus\omega_{1}\oplus\omega_{2}=a\oplus\omega_{1}\oplus b\oplus\omega_{2}=c\in C.\]
Since \(\omega_{1}\oplus\omega_{2}\in\Omega\oplus\Omega\subseteq\Omega\), it follows \(x\preceq_{H}z\) by definition. This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lsvp333*}}\tag{23}\mbox{}\end{equation}

Proposition \ref{lsvp333*}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). We have the following properties.

(i) Suppose that \(\lambda\Omega\subseteq\Omega\) for any \(\lambda >0\), and that the assumption (a) is satisfied. Then, the binary relation \(\preceq_{H}\) is compatible with scalar multiplication.

(ii) Suppose that \(\Omega\oplus\Omega\subseteq\Omega\), and that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preceq_{H}\) is compatible with vector addition.

Proof. To prove part (i), assume \(x\preceq_{H}y\) and \(\lambda >0\). The Definition says that the Hukuhara difference \(z=y\ominus_{H}x\) exists and \(z\oplus\omega\in C\) for some \(\omega\in\Omega\). Therefore, we have \(y=x\oplus z\). Using assumption (a), we have \(\lambda y=\lambda x\oplus\lambda z\), i.e. the Hukuhara difference \(\lambda z=\lambda y\ominus_{H}\lambda x\) exists. Since \(C\) is a cone, using assumption (a), we have
\[\lambda z\oplus\lambda\omega =\lambda(z\oplus\omega)\in C;\]
that is,
\[\left (\lambda y\ominus_{H}\lambda x\right )\oplus\lambda\omega=\lambda z\oplus\lambda\omega\in C,\]
which says \(\lambda x\preceq_{H}\lambda y\), since \(\lambda\omega\in\lambda\Omega\subseteq\Omega\) for any \(\lambda >0\).

To prove part (ii), assume \(x\preceq_{H}y\) and \(a\preceq_{H}b\). The Definition says that the Hukuhara differences
\[p=y\ominus_{H}x\mbox{ and }q=b\ominus_{H}a\]
exist, and that
\[p\oplus\omega_{1}\in C\mbox{ and }q\oplus\omega_{2}\in C\]
for some \(\omega_{1},\omega_{2}\in\Omega\). Therefore, we have \(y=p\oplus x\) and \(b=q\oplus a\). Let \(r=p\oplus\omega_{1}\oplus q\oplus\omega_{2}\). Using Lemma \ref{lsvp1}, we have \(r\in C\). Therefore, we obtain
\[y\oplus b=(p\oplus x)\oplus (q\oplus a)=(x\oplus a)\oplus (p\oplus q),\]
which says that the Hukuhara difference
\[(y\oplus b)\ominus_{H} (x\oplus a)=p\oplus q\]
exists. We also have
\[\left [(y\oplus b)\ominus_{H} (x\oplus a)\right ]\oplus\omega_{1}\oplus\omega_{2}=
p\oplus\omega_{1}\oplus q\oplus\omega_{2}=r\in C.\]

Since \(\omega_{1}\oplus\omega_{2}\in\Omega\oplus\Omega\subseteq\Omega\), it follows \(x\oplus a\preceq_{H}y\oplus b\) by definition. This completes the proof. \(\blacksquare\)

Example. Continued from Example \ref{lsvex1}, since the null set \(\Omega\) is given by
\[\Omega=\left\{[-k,k]:k\geq 0\right\},\]
it is easy to see that all of the assumptions in Propositions \ref{lsvp333} and \ref{lsvp333*} are satisfied. It follows that the binary relation \(\preceq_{H}\) is a partial ordering that is also compatible with vector addition and scalar multiplication. \(\sharp\)
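The partial-ordering properties claimed in the above Example admit a brute-force spot check. The following Python sketch is only an illustration, assuming intervals encoded as pairs \((a_{1},a_{2})\) and the closed form of \(\preceq_{H}\) obtained in Example \ref{ex10}; the small test grid is an arbitrary choice.

```python
# Brute-force check of reflexivity, transitivity, and compatibility with
# vector addition for <=_H on the interval space I, using the closed form
# I_y <=_H I_x  iff  x1 + x2 >= y1 + y2 and x1 - x2 <= y1 - y2.

def leq_H(iy, ix):
    return ix[0] + ix[1] >= iy[0] + iy[1] and ix[0] - ix[1] <= iy[0] - iy[1]

def oplus(x, y):
    return (x[0] + y[0], x[1] + y[1])

grid = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]

# Reflexivity and transitivity.
assert all(leq_H(x, x) for x in grid)
assert all(leq_H(x, z)
           for x in grid for y in grid for z in grid
           if leq_H(x, y) and leq_H(y, z))

# Compatibility with vector addition:
# x <=_H y and a <=_H b imply x + a <=_H y + b.
assert all(leq_H(oplus(x, a), oplus(y, b))
           for x in grid for y in grid if leq_H(x, y)
           for a in grid for b in grid if leq_H(a, b))
```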

\begin{equation}{\label{p230}}\tag{24}\mbox{}\end{equation}

Proposition \ref{p230}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\). Suppose that \(C\) is pointed-like. Then, we have the following properties.

(i) Suppose that \(X\) owns a unique zero element \(\theta\) satisfying \(\theta\in\Omega\). Then, the binary relation \(\preceq_{H}\) is antisymmetric-like.

(ii) \(x\preceq_{H}\omega_{1}\) and \(\omega_{2}\preceq_{H}x\) for some \(\omega_{1},\omega_{2}\in\Omega\) imply \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\).

Proof. To prove part (i), assume \(x\preceq_{H}y\) and \(y\preceq_{H} x\). The Definition says that the Hukuhara differences
\[a=y\ominus_{H}x\mbox{ and }b=x\ominus_{H}y\]
exist, and that
\[\left (x\ominus_{H}y\right )\oplus\omega_{1}\in C\mbox{ and }\left (y\ominus_{H}x\right )\oplus\omega_{2}\in C\]
for some \(\omega_{1},\omega_{2}\in\Omega\). Therefore, we have \(y=x\oplus a\) and \(x=y\oplus b\). Using the associative law, we obtain
\[x=y\oplus b=(x\oplus a)\oplus b=x\oplus (a\oplus b).\]
This shows \(a\oplus b=\theta\), since \(\theta\) is the unique zero element satisfying \(x=\theta\oplus x=x\oplus\theta\). Since \(\theta\in\Omega\) and \(\Omega\oplus\Omega\subseteq\Omega\), we also obtain
\[[(x\ominus_{H}y)\oplus\omega_{1}]\oplus [(y\ominus_{H}x)\oplus\omega_{2}]
=a\oplus b\oplus\omega_{1}\oplus\omega_{2}=\theta\oplus\omega_{1}\oplus\omega_{2}\in\Omega.\]

This shows \((y\ominus_{H}x)\oplus\omega_{2}\in C^{-}\), since \((x\ominus_{H}y)\oplus\omega_{1}\in C\). In other words, we have
\[\left (y\ominus_{H}x\right )\oplus\omega_{2}\in C^{-}\cap C.\]
Since \(C\) is pointed-like, we have \((y\ominus_{H}x)\oplus\omega_{2}\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\), i.e. \(a\oplus\omega_{2}\stackrel{\Omega}{=}\omega\), which says \(a\oplus\omega_{2}\oplus\omega_{3}=\omega_{4}\oplus\omega\) for some \(\omega_{3},\omega_{4}\in\Omega\). Since \(y=x\oplus a\), we have
\[y\oplus\omega_{2}\oplus\omega_{3}=x\oplus a\oplus\omega_{2}\oplus\omega_{3}=x\oplus\omega_{4}\oplus\omega.\]
This shows \(x\stackrel{\Omega}{=}y\), since \(\Omega\oplus\Omega\subseteq\Omega\).

To prove part (ii), assume \(x\preceq_{H}\omega_{1}\) and \(\omega_{2}\preceq_{H}x\) for some \(\omega_{1},\omega_{2}\in\Omega\).
The Definition says that the Hukuhara differences
\[y=\omega_{1}\ominus_{H}x\mbox{ and }z=x\ominus_{H}\omega_{2}\]
exist, and that
\[y\oplus\omega_{3}\in C\mbox{ and }z\oplus\omega_{4}\in C\]
for some \(\omega_{3},\omega_{4}\in\Omega\). Therefore, we have \(\omega_{1}=x\oplus y\) and \(x=\omega_{2}\oplus z\), which imply
\[\omega_{1}=x\oplus y=\omega_{2}\oplus z\oplus y.\]
Since \(\Omega\oplus\Omega\subseteq\Omega\), we obtain
\[[(\omega_{1}\ominus_{H}x)\oplus\omega_{3}]\oplus [(x\ominus_{H}\omega_{2})
\oplus\omega_{4}]\oplus\omega_{2}=y\oplus\omega_{3}\oplus z\oplus\omega_{4}\oplus
\omega_{2}=\omega_{3}\oplus\omega_{4}\oplus\omega_{1}\in\Omega.\]

It shows \((\omega_{1}\ominus_{H}x)\oplus\omega_{3}\in C^{-}\), since \((x\ominus_{H}\omega_{2})\oplus\omega_{4}\in C\). In other words, we have
\[\left (\omega_{1}\ominus_{H}x\right )\oplus\omega_{3}\in C^{-}\cap C.\]
Since \(C\) is pointed-like, we have \((\omega_{1}\ominus_{H}x)\oplus\omega_{3}\stackrel{\Omega}{=}\omega\) for some
\(\omega\in\Omega\), i.e. \(y\oplus\omega_{3}\stackrel{\Omega}{=}\omega\), which says
\[y\oplus\omega_{5}\oplus\omega_{3}=\omega\oplus\omega_{6}\]
for some \(\omega_{5},\omega_{6}\in\Omega\). Since \(\omega_{1}=x\oplus y\), we obtain
\[\omega_{1}\oplus\omega_{5}\oplus\omega_{3}=x\oplus y\oplus\omega_{5}\oplus\omega_{3}=x\oplus\omega\oplus\omega_{6},\]
which says \(x\stackrel{\Omega}{=}\omega_{1}\), since \(\Omega\oplus\Omega\subseteq\Omega\). This completes the proof. \(\blacksquare\)

Example. Continued from Example \ref{lsvex1}, since \(C\) is a pointed-like convex cone and \({\cal I}\) owns a unique zero element \([0,0]\) satisfying \([0,0]\in\Omega\), Proposition \ref{p230} says that the partial ordering \(\preceq_{H}\) is antisymmetric-like and satisfies the following property: given any \([x_{1},x_{2}]\in {\cal I}\),
\[[x_{1},x_{2}]\preceq_{H}[-k_{1},k_{1}]\mbox{ and }[-k_{2},k_{2}]\preceq_{H}[x_{1},x_{2}]\]
for some nonnegative real numbers \(k_{1}\) and \(k_{2}\) imply \([x_{1},x_{2}]\stackrel{\Omega}{=}[-k,k]\) for some nonnegative real number \(k\). \(\sharp\)
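The property stated in the above Example can be observed numerically: on a small grid of intervals, every interval sandwiched between two members of \(\Omega\) turns out to lie in \(\Omega\) itself, which in particular gives \([x_{1},x_{2}]\stackrel{\Omega}{=}[-k,k]\). The Python sketch below is an illustration only, assuming pair-encoded intervals, the closed form of \(\preceq_{H}\) from Example \ref{ex10}, and a small grid of my own choosing.

```python
# On a grid of intervals, [x1, x2] <=_H [-k1, k1] and [-k2, k2] <=_H [x1, x2]
# force x1 + x2 = 0, i.e. the interval is symmetric around 0 and hence a
# member of the null set Omega = {[-k, k] : k >= 0}.

def leq_H(iy, ix):
    """I_y <=_H I_x in closed form (Example `ex10`)."""
    return ix[0] + ix[1] >= iy[0] + iy[1] and ix[0] - ix[1] <= iy[0] - iy[1]

def in_Omega(x):
    """Membership in the null set Omega = {[-k, k] : k >= 0}."""
    return x[0] == -x[1] and x[1] >= 0

grid = [(a, b) for a in (-2, -1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]
omegas = [(-k, k) for k in (0, 1, 2)]

for x in grid:
    sandwiched = (any(leq_H(x, w1) for w1 in omegas)
                  and any(leq_H(w2, x) for w2 in omegas))
    if sandwiched:
        assert in_Omega(x)   # stronger than mere Omega-equality
```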

Let \(L:X\rightarrow V\) be a function from the near vector space \(X\) into the (conventional) vector space \(V\), and let \(C\) be a convex cone in \(X\). By Proposition \ref{lsvr1}, if \(L\) is additive and positively homogeneous, then \(L(C)\) is a convex cone in the vector space \(V\). Therefore, we can define a binary relation \(\preccurlyeq_{H}\) on \(L(X)\subseteq V\) by
\begin{equation}{\label{lsveq2}}\tag{25}
L(y)\preccurlyeq_{H}L(x)\mbox{ if and only if }L(x)-L(y)\in L(C)\mbox{ and the Hukuhara difference }x\ominus_{H}y\mbox{ exists}.
\end{equation}

\begin{equation}{\label{ex50}}\tag{26}\mbox{}\end{equation}

Example \ref{ex50}. Continued from Example \ref{lsvex1}, we consider the function \(L:{\cal I}\rightarrow\mathbb{R}^{2}\) defined by
\[L\left (\left [a_{1},a_{2}\right ]\right )=\left (-a_{1}-a_{2},a_{1}+a_{2}\right )\in\mathbb{R}^{2}.\]
It is easy to see that \(L\) is additive and positively homogeneous. We consider the pointed-like convex cone \(C\) in Example \ref{ex34}. Then, we have
\[L(C)=\left\{\left (-c_{1}-c_{2},c_{1}+c_{2}\right )\in\mathbb{R}^{2}:c_{1}+c_{2}\geq 0\right\}.\]
Given \(I_{x}=[x_{1},x_{2}]\) and \(I_{y}=[y_{1},y_{2}]\) in \({\cal I}\), Example \ref{lsvex1} says that the Hukuhara difference \(I_{x}\ominus_{H}I_{y}\) exists when \(x_{1}-x_{2}\leq y_{1}-y_{2}\). Now, we have
\begin{align*}L(I_{x})-L(I_{y}) & =\left (-x_{1}-x_{2},x_{1}+x_{2}\right )-\left (-y_{1}-y_{2},y_{1}+y_{2}\right )\\& =\left (-x_{1}-x_{2}+y_{1}+y_{2},x_{1}+x_{2}-y_{1}-y_{2}\right ).\end{align*}
It says that \(L(I_{x})-L(I_{y})\in L(C)\) if and only if \(x_{1}+x_{2}\geq y_{1}+y_{2}\). Therefore, we obtain that
\[L(I_{y})\preccurlyeq_{H}L(I_{x})\mbox{ if and only if }x_{1}+x_{2}\geq y_{1}+y_{2}
\mbox{ and }x_{1}-x_{2}\leq y_{1}-y_{2}. \sharp\]
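Example \ref{ex50} shows that \(\preccurlyeq_{H}\) on the image \(L({\cal I})\) agrees with the interval-level characterization of Example \ref{ex10}. The Python sketch below is only an illustration of this agreement, assuming pair-encoded intervals; the observation \(L(C)=\{(-s,s):s\geq 0\}\) in the code follows directly from the displayed formula for \(L(C)\), and the function names are hypothetical.

```python
# Example `ex50` maps intervals into R^2 via L([a1, a2]) = (-a1 - a2, a1 + a2)
# and orders the images through the image cone L(C).  This checks that the
# resulting relation coincides with the interval-level closed form.

def L(x):
    return (-x[0] - x[1], x[0] + x[1])

def in_LC(v):
    """L(C) = {(-s, s) : s >= 0} for C = {[c1, c2] : c1 + c2 >= 0}."""
    return v[0] == -v[1] and v[1] >= 0

def image_leq(iy, ix):
    """L(I_y) <=_H L(I_x): difference of images in L(C), and the Hukuhara
    difference I_x -_H I_y exists (width of I_x >= width of I_y)."""
    diff = tuple(a - b for a, b in zip(L(ix), L(iy)))
    return in_LC(diff) and ix[0] - ix[1] <= iy[0] - iy[1]

def closed_form(iy, ix):
    """Interval-level characterization from Example `ex10`."""
    return ix[0] + ix[1] >= iy[0] + iy[1] and ix[0] - ix[1] <= iy[0] - iy[1]

grid = [(a, b) for a in (-2, -1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]
assert all(image_leq(iy, ix) == closed_form(iy, ix)
           for ix in grid for iy in grid)
```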

The following results will be used for further study.

\begin{equation}{\label{lsvp334}}\tag{27}\mbox{}\end{equation}

Proposition \ref{lsvp334}. Consider the function \(L:X\rightarrow V\) from a near vector space \(X\) into a (conventional) vector space \(V\). Let \(C\) be a convex cone in \(X\). Suppose that \(L\) is additive and positively homogeneous. Then, we have the following properties.

(i) Suppose that \(X\) has a unique zero element \(\theta\), and that \(C\) contains \(\theta\). Then, the binary relation \(\preccurlyeq_{H}\) defined in \((\ref{lsveq2})\) is reflexive in \(L(X)\).

(ii) Suppose that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preccurlyeq_{H}\) defined in \((\ref{lsveq2})\) is transitive in \(L(X)\).

Proof. We first have that \(L(C)\) is a convex cone in \(V\) by Proposition \ref{lsvr1}. To prove part (i), using the additivity, we have
\[L\left (\theta\right )=L\left (\theta\oplus\theta\right )=L\left (\theta\right )+L\left (\theta\right ),\]
which implies that \(L(\theta )\) is the unique zero element \(\theta_{V}\) of the vector space \(V\). Since \(\theta\in C\), we also have \(\theta_{V}=L(\theta)\in L(C)\). Therefore, we obtain
\[L(x)-L(x)=\theta_{V}\in L(C)\]
for any \(x\in X\). From the proof of part (i) of Proposition \ref{lsvp333}, we see that the Hukuhara difference \(x\ominus_{H}x\) exists. Therefore, we obtain \(L(x)\preccurlyeq_{H}L(x)\).

To prove part (ii), assume \(L(x)\preccurlyeq_{H}L(y)\preccurlyeq_{H}L(z)\). Then, the Hukuhara differences \(y\ominus_{H}x\) and \(z\ominus_{H}y\) exist. From the proof of part (ii) of Proposition \ref{lsvp333}, we see that the Hukuhara difference \(z\ominus_{H}x\) exists. We also have
\[L(y)-L(x)\in L(C)\mbox{ and }L(z)-L(y)\in L(C).\]
Since \(V\) is a vector space and \(L(C)\) is a convex cone in \(V\) by Proposition \ref{lsvr1}, it follows that \(L(z)-L(x)\in L(C)\), which says \(L(x)\preccurlyeq_{H}L(z)\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lsvp334*}}\tag{28}\mbox{}\end{equation}

Proposition \ref{lsvp334*}. Consider the function \(L:X\rightarrow V\) from a near vector space \(X\) into a (conventional) vector space \(V\). Let \(C\) be a convex cone in \(X\). Suppose that \(L\) is additive and positively homogeneous. Then, we have the following properties.

(i) Suppose that the assumption (a) is satisfied. Then, the binary relation \(\preccurlyeq_{H}\) is compatible with scalar multiplication in \(L(X)\).

(ii) Suppose that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preccurlyeq_{H}\) is compatible with vector addition in \(L(X)\).

Proof. We first have that \(L(C)\) is a convex cone in \(V\) by Proposition \ref{lsvr1}. To prove part (i), assume \(L(x)\preccurlyeq_{H}L(y)\) and \(\lambda >0\). The Definition says that \(L(y)-L(x)\in L(C)\), and that the Hukuhara difference \(y\ominus_{H}x\) exists. From the proof of part (i) of Proposition \ref{lsvp333*}, we see that the Hukuhara difference \(\lambda y\ominus_{H}\lambda x\) exists. Since \(V\) is a vector space and \(L(C)\) is a convex cone in \(V\), it follows
\[\lambda L(y)-\lambda L(x)\in L(C),\mbox{ i.e. }L(\lambda y)-L(\lambda x)\in L(C)\]
by the positive homogeneity, which says
\[\lambda L(x)=L(\lambda x)\preccurlyeq_{H}L(\lambda y)=\lambda L(y).\]

To prove part (ii), assume \(L(x)\preccurlyeq_{H}L(y)\) and \(L(a)\preccurlyeq_{H}L(b)\). The Definition says that the Hukuhara differences \(y\ominus_{H}x\) and \(b\ominus_{H}a\) exist. From the proof of part (ii) of Proposition \ref{lsvp333*}, we see that the Hukuhara difference \((y\oplus b)\ominus_{H}(x\oplus a)\) exists. Now, we also have
\[L(y)-L(x)\in L(C)\mbox{ and }L(b)-L(a)\in L(C).\]
Since \(V\) is a vector space and \(L(C)\) is a convex cone in \(V\), we obtain
\[L(y)+L(b)-L(x)-L(a)\in L(C).\]
By the additivity, we also obtain
\[L(y\oplus b)-L(x\oplus a)\in L(C),\]
which says
\[L(x)+L(a)=L(x\oplus a)\preccurlyeq_{H}L(y\oplus b)=L(y)+L(b).\]
This completes the proof. \(\blacksquare\)

Example. Continued from Example \ref{ex50}, it is easy to see that all of the assumptions in Propositions \ref{lsvp334} and \ref{lsvp334*} are satisfied. It follows that the binary relation \(\preccurlyeq_{H}\) is a partial ordering that is also compatible with vector addition and scalar multiplication. \(\sharp\)

Let \(L:X\rightarrow V\) be a function from the near vector space \(X\) into the (conventional) vector space \(V\). The {\em kernel} of \(L\) is defined by
\[\ker L=\{x:L(x)=\theta_{V}\},\]
where \(\theta_{V}\) is the zero element of \(V\). Then, we are in a position to present the order-preserving property.

\begin{equation}{\label{lsvp5}}\tag{29}\mbox{}\end{equation}

Proposition \ref{lsvp5} (Order-Preserving Property). Let the function \(L:X\rightarrow V\) be additive and positively homogeneous from a near vector space \(X\) into a (conventional) vector space \(V\), and let \(C\) be a convex cone in \(X\). Assume \(\Omega\subseteq\ker L\). Then, we have the following properties.

(i) \(x\preceq_{H}y\) implies \(L(x)\preccurlyeq_{H}L(y)\).

(ii) Suppose that the assumptions (a) and (b) are satisfied. If \(\ker L\subseteq C\), then \(L(x)\preccurlyeq_{H}L(y)\) implies \(x\preceq_{H}y\).

Proof. Since \(\Omega\subseteq\ker L\), it means \(L(\omega)=\theta_{V}\) for all \(\omega\in\Omega\). To prove part (i), the Definition of \(x\preceq_{H}y\) says that the Hukuhara difference \(y\ominus_{H}x\) exists and \((y\ominus_{H}x)\oplus\omega\in C\) for some \(\omega\in\Omega\), which implies \(L(y\ominus_{H}x)\in L(C)\) since \(L(\omega)=\theta_{V}\). Using part (iii) of Proposition \ref{lsvp3}, we have \(L(y)-L(x)\in L(C)\), which says \(L(x)\preccurlyeq_{H}L(y)\).

To prove part (ii), the Definition of \(L(x)\preccurlyeq_{H}L(y)\) says that the Hukuhara difference \(y\ominus_{H}x\) exists and \(L(y)-L(x)=L(c)\) for some \(c\in C\). Using part (iii) of Proposition \ref{lsvp3}, we have
\[L(y\ominus_{H}x)=L(c),\mbox{ i.e. }L(y\ominus_{H}x)-L(c)=\theta_{V}.\]
Using part (ii) of Proposition \ref{lsvp3}, we obtain \((y\ominus_{H}x)\ominus c\in\ker L\), which also says \((y\ominus_{H}x)\ominus c=k\) for some \(k\in\ker L\). By adding \(c\) on both sides, we obtain \((y\ominus_{H}x)\oplus\omega=c\oplus k\), where \(\omega=c\ominus c\in\Omega\). Since \(\ker L\subseteq C\), using Lemma \ref{lsvp1}, it follows \((y\ominus_{H}x)\oplus\omega\in C\), which also says \(x\preceq_{H}y\). This completes the proof. \(\blacksquare\)

Example. Continued from Example \ref{ex50}, given any \(\omega =[-k,k]\in\Omega\), we have \(L(\omega )=(0,0)\), which is the zero element of \(\mathbb{R}^{2}\). We also see that
\[L([a_{1},a_{2}])=(0,0)\mbox{ implies }a_{1}+a_{2}=0,\mbox{ i.e. }a_{1}=-a_{2}\mbox{ with }a_{2}\geq 0,\]
which shows \([a_{1},a_{2}]\in\Omega\). Therefore, we obtain \(\ker L=\Omega\subseteq C\). Proposition \ref{lsvp5} says that
\[x\preceq_{H}y\mbox{ if and only if }L(x)\preccurlyeq_{H}L(y),\]
which shows the order-preserving property. \(\sharp\)
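The computation \(\ker L=\Omega\) in the above Example can also be verified numerically. The Python sketch below is an illustration only, assuming pair-encoded intervals on a small grid of my own choosing.

```python
# The Example computes ker L = Omega for L([a1, a2]) = (-a1 - a2, a1 + a2),
# so the order-preserving property of Proposition `lsvp5` applies.
# Check ker L = Omega on a grid: L(x) = (0, 0) exactly when x is an
# interval symmetric about 0.

def L(x):
    return (-x[0] - x[1], x[0] + x[1])

def in_Omega(x):
    """Omega = {[-k, k] : k >= 0}."""
    return x[0] == -x[1] and x[1] >= 0

grid = [(a, b) for a in (-2, -1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]
assert all((L(x) == (0, 0)) == in_Omega(x) for x in grid)
```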

3.2 Partial Ordering Using Subtraction.

Next, we are going to propose another binary relation using subtraction.

Definition. Let \(X\) be a near vector space, and let \(F\) be a subset of \(X\). Given any \(x\in X\), we write \(x\in^{\Omega}F\) when \(x\stackrel{\Omega}{=}a\) for some \(a\in F\). \(\sharp\)

It is clear that \(x\in F\) implies \(x\in^{\Omega}F\).

\begin{equation}{\label{lsvd12}}\tag{30}\mbox{}\end{equation}

Definition \ref{lsvd12}. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a convex cone in \(X\). For \(x,y\in X\), we define \(x\preceq y\) when \(y\ominus x\in^{\Omega}C\). \(\sharp\)
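To make Definition \ref{lsvd12} concrete, it can be instantiated in the interval space \({\cal I}\) with the cone \(C\) of Example \ref{ex34}. The Python sketch below searches small grids of \(c\in C\) and \(\omega\in\Omega\) for witnesses of \(y\ominus x\in^{\Omega}C\); on the test grid the search agrees with the closed form \(x\preceq y\) if and only if \(x_{1}+x_{2}\leq y_{1}+y_{2}\). This characterization is not stated in the text and is offered only as an illustration under these assumptions (pair-encoded intervals, bounded search grids).

```python
# Definition `lsvd12` orders X through the subtraction y - x and the
# relation "in^Omega C".  Instantiation in I with C = {[c1,c2] : c1+c2 >= 0}.

def neg(x):          # -[x1, x2] = [-x2, -x1]
    return (-x[1], -x[0])

def oplus(x, y):
    return (x[0] + y[0], x[1] + y[1])

def ominus(x, y):    # x - y = x + (-y)
    return oplus(x, neg(y))

omegas = [(-k, k) for k in range(0, 7)]
cones = [(c1, c2) for c1 in range(-6, 7) for c2 in range(-6, 7)
         if c1 <= c2 and c1 + c2 >= 0]

def preceq(x, y):
    """x <= y when (y - x) + w1 = c + w2 for some c in C, w1, w2 in Omega."""
    d = ominus(y, x)
    return any(oplus(d, w1) == oplus(c, w2)
               for c in cones for w1 in omegas for w2 in omegas)

# On this grid, the searched relation matches a midpoint-sum comparison.
grid = [(a, b) for a in (-2, -1, 0, 1) for b in (-1, 0, 1, 2) if a <= b]
assert all(preceq(x, y) == (x[0] + x[1] <= y[0] + y[1])
           for x in grid for y in grid)
```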

We provide some results, which will be used for the later discussion.

\begin{equation}{\label{ch1p208}}\tag{31}\mbox{}\end{equation}

Proposition \ref{ch1p208}. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a convex cone in \(X\). We have the following properties.

(i) Assume \(\Omega\subseteq C\). Then, the binary relation \(\preceq\) is reflexive.

(ii) Suppose that \(\Omega\oplus\Omega\subseteq\Omega\), and that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preceq\) is transitive.

Proof. To prove part (i), we have \(x\ominus x\in\Omega\subseteq C\) for any \(x\in X\) by the assumption, i.e. \(x\ominus x\in^{\Omega}C\), which shows \(x\preceq x\) by definition.

To prove part (ii), assume \(x\preceq y\preceq z\). We want to show \(x\preceq z\). We first have
\[y\ominus x\in^{\Omega}C\mbox{ and }z\ominus y\in^{\Omega}C,\]
which says
\[y\ominus x\stackrel{\Omega}{=}c_{1}\mbox{ and }z\ominus y\stackrel{\Omega}{=}c_{2}\]
for some \(c_{1},c_{2}\in C\). Therefore, we obtain
\[y\ominus x\oplus\omega_{1}=c_{1}\oplus\omega_{2}\mbox{ and }
z\ominus y\oplus\omega_{3}=c_{2}\oplus\omega_{4}\]

for some \(\omega_{i}\in\Omega\), \(i=1,\cdots ,4\). By adding \(x\) and \(y\), respectively, to the above two equalities on both sides, we obtain
\begin{equation}{\label{ch1eq22}}\tag{32}
y\oplus\omega_{5}\oplus\omega_{1}=c_{1}\oplus x\oplus\omega_{2}
\end{equation}

and
\begin{equation}{\label{ch1eq23}}\tag{33}
z\oplus\omega_{6}\oplus\omega_{3}=c_{2}\oplus y\oplus\omega_{4},
\end{equation}

where \(x\ominus x=\omega_{5}\) and \(y\ominus y=\omega_{6}\) are in \(\Omega\). By adding \(\omega_{1}\oplus\omega_{5}\) to (\ref{ch1eq23}) on both sides and using (\ref{ch1eq22}), we obtain
\begin{equation}{\label{ch1eq24}}\tag{34}
z\oplus\omega_{6}\oplus\omega_{3}\oplus\omega_{1}\oplus\omega_{5}=
c_{2}\oplus\omega_{4}\oplus y\oplus\omega_{1}\oplus\omega_{5}=
c_{1}\oplus c_{2}\oplus x\oplus\omega_{4}\oplus\omega_{2}.
\end{equation}

By adding \(-x\) to (\ref{ch1eq24}) on both sides, we obtain
\begin{equation}{\label{ch1eq25}}\tag{35}
(z\ominus x)\oplus (\omega_{6}\oplus\omega_{3}\oplus\omega_{1}\oplus\omega_{5})=
(c_{1}\oplus c_{2})\oplus\omega_{4}\oplus\omega_{2}\oplus\omega_{5}.
\end{equation}

From Lemma \ref{lsvp1}, we have
\[c\equiv c_{1}\oplus c_{2}\in C.\]
Since \(\Omega\oplus\Omega\subseteq\Omega\), the equality (\ref{ch1eq25}) says
\[(z\ominus x)\oplus\omega_{7}=c\oplus\omega_{8}\]
for some \(\omega_{7},\omega_{8}\in\Omega\), which says \(z\ominus x\in^{\Omega}C\), i.e. \(x\preceq z\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{ch1p208*}}\tag{36}\mbox{}\end{equation}

Proposition \ref{ch1p208*}. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a convex cone in \(X\). We have the following properties.

(i) Suppose that \(\lambda\Omega\subseteq\Omega\) for \(\lambda >0\), and that the assumption (a) is satisfied. Then, the binary relation \(\preceq\) is compatible with scalar multiplication.

(ii) Suppose that \(\Omega\oplus\Omega\subseteq\Omega\), and that the assumptions (a) and (b) are satisfied. Then, the binary relation \(\preceq\) is compatible with vector addition.

Proof. To prove part (i), assume \(x\preceq y\) and \(\lambda >0\). We want to show \(\lambda x\preceq \lambda y\). The Definition says \(y\ominus x\in^{\Omega}C\), i.e. \(y\ominus x\stackrel{\Omega}{=}c\) for some \(c\in C\), which also says
\[y\oplus (-x)\oplus\omega_{1}=c\oplus\omega_{2}\]
for some \(\omega_{1},\omega_{2}\in\Omega\). Using assumption (a), we have
\[\lambda y\oplus (-\lambda x)\oplus\lambda\omega_{1}=\lambda c\oplus\lambda\omega_{2}.\]
Since \(\lambda\Omega\subseteq\Omega\) for \(\lambda >0\), we also have \(\lambda y\ominus\lambda x\stackrel{\Omega}{=}\lambda c\). Since \(C\) is a cone, i.e. \(\lambda c\in C\), it follows \(\lambda x\preceq \lambda y\).

To prove part (ii), assume \(x\preceq y\) and \(a\preceq b\). We want to show \(x\oplus a\preceq y\oplus b\). The Definition says \(y\ominus x\in^{\Omega}C\) and \(b\ominus a\in^{\Omega}C\), i.e.
\[y\ominus x\stackrel{\Omega}{=}c_{1}\mbox{ and }b\ominus a\stackrel{\Omega}{=}c_{2}\]
for some \(c_{1},c_{2}\in C\). Therefore, we obtain
\[y\ominus x\oplus\omega_{1}=c_{1}\oplus\omega_{2}\mbox{ and }b\ominus a\oplus\omega_{3}=c_{2}\oplus\omega_{4}\]
for some \(\omega_{i}\in\Omega\), \(i=1,\cdots ,4\). By adding \(x\) and \(a\), respectively, to the above two equalities on both sides, we obtain
\[y\oplus\omega_{5}\oplus\omega_{1}=c_{1}\oplus x\oplus\omega_{2}\mbox{ and }
b\oplus\omega_{6}\oplus\omega_{3}=c_{2}\oplus a\oplus\omega_{4}\]

where \(x\ominus x=\omega_{5}\) and \(a\ominus a=\omega_{6}\) are in \(\Omega\). Furthermore, adding these two equalities side by side, we also obtain
\[(y\oplus b)\oplus (\omega_{5}\oplus\omega_{1}\oplus\omega_{6}\oplus\omega_{3})
=(x\oplus a)\oplus (c_{1}\oplus c_{2})\oplus (\omega_{2}\oplus\omega_{4}).\]

By adding \(-(x\oplus a)\) on both sides, we finally obtain
\[(y\oplus b)\ominus (x\oplus a)\oplus (\omega_{5}\oplus\omega_{1}\oplus
\omega_{6}\oplus\omega_{3})=(c_{1}\oplus c_{2})\oplus (\omega_{7}\oplus\omega_{2}\oplus\omega_{4}),\]

where
\[(x\oplus a)\ominus (x\oplus a)=\omega_{7}\in\Omega.\]
Using Lemma \ref{lsvp1}, we have \(c\equiv c_{1}\oplus c_{2}\in C\). Since \(\Omega\oplus\Omega\subseteq\Omega\), we obtain
\[(y\oplus b)\ominus (x\oplus a)\oplus\omega_{8}=c\oplus\omega_{9}\]
for some \(\omega_{8},\omega_{9}\in\Omega\), which says
\[(y\oplus b)\ominus (x\oplus a)\stackrel{\Omega}{=}c,\mbox{ i.e. } (y\oplus b)\ominus (x\oplus a)\in^{\Omega}C.\]
Therefore, we obtain \(x\oplus a\preceq y\oplus b\). This completes the proof. \(\blacksquare\)

Proposition. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\). Suppose that \(C\) is \(\Omega\)-pointed-like. Then, we have the following properties.

(i) The binary relation \(\preceq\) is antisymmetric-like.

(ii) Suppose that \(\Omega\) satisfies the neutral condition. Then \(x\preceq\omega_{1}\) and \(\omega_{2}\preceq x\) for some \(\omega_{1},\omega_{2}\in\Omega\) imply \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\).

Proof. To prove part (i), assume \(x\preceq y\) and \(y\preceq x\). The Definition says
\[y\ominus x\in^{\Omega}C\mbox{ and }x\ominus y\in^{\Omega}C,\]
which also says
\[y\ominus x\stackrel{\Omega}{=}c_{1}\mbox{ and }x\ominus y\stackrel{\Omega}{=}c_{2}\]
for some \(c_{1},c_{2}\in C\). Therefore, we obtain
\[y\ominus x\oplus\omega_{1}=c_{1}\oplus\omega_{2}\mbox{ and }
x\ominus y\oplus\omega_{3}=c_{2}\oplus\omega_{4}\in C\oplus\Omega\]

for some \(\omega_{i}\in\Omega\), \(i=1,\cdots ,4\). Since \(\Omega\oplus\Omega\subseteq\Omega\), we also obtain
\[\left (x\ominus y\oplus\omega_{3}\right )\oplus c_{1}\oplus\omega_{2}=
\left (x\ominus y\oplus\omega_{3}\right )\oplus\left (y\ominus x\oplus\omega_{1}\right )
=\left (x\ominus x\right )\oplus\left (y\ominus y\right )\oplus\omega_{1}\oplus\omega_{3}\in\Omega .\]

This shows \(x\ominus y\oplus\omega_{3}\in C^{-}\), since \(c_{1}\in C\). It follows
\[x\ominus y\oplus\omega_{3}\in (C\oplus\Omega )\cap C^{-},\]
which says \(x\ominus y\oplus\omega_{3}\stackrel{\Omega}{=}\omega_{5}\) for some \(\omega_{5}\in\Omega\), since \(C\) is \(\Omega\)-pointed-like. Therefore, we obtain
\[x\ominus y\oplus\omega_{3}\oplus\omega_{6}=\omega_{5}\oplus\omega_{7}\]
for some \(\omega_{6},\omega_{7}\in\Omega\). By adding \(y\) on both sides, we obtain
\[x\oplus\omega_{8}\oplus\omega_{3}\oplus\omega_{6}=y\oplus\omega_{5}\oplus\omega_{7},\]
where \(y\ominus y=\omega_{8}\in\Omega\). This shows \(x\stackrel{\Omega}{=} y\), since \(\Omega\oplus\Omega\subseteq\Omega\).

To prove part (ii), assume \(x\preceq\omega_{1}\) and \(\omega_{2}\preceq x\) for some \(\omega_{1},\omega_{2}\in\Omega\). The Definition says
\[\omega_{1}\ominus x\oplus\omega_{3}=c_{1}\oplus\omega_{4}\mbox{ and }
x\ominus \omega_{2}\oplus\omega_{5}=c_{2}\oplus\omega_{6}\in C\oplus\Omega\]

for some \(\omega_{i}\in\Omega\), \(i=3,\cdots ,6\), and \(c_{1},c_{2}\in C\). Therefore, we obtain
\[\left (x\ominus \omega_{2}\oplus\omega_{5}\right )\oplus\left (c_{1}\oplus\omega_{4}\right )
=\left (x\ominus \omega_{2}\oplus\omega_{5}\right )\oplus\left (\omega_{1}\ominus x\oplus\omega_{3}\right )
=\omega_{1}\oplus\omega_{3}\ominus\omega_{2}\oplus\omega_{5}\oplus\omega_{7},\]

where \(\omega_{7}=x\ominus x\). This shows \(x\ominus \omega_{2}\oplus\omega_{5}\in C^{-}\), since \(\Omega\oplus\Omega\subseteq\Omega\) and \(-\omega_{2}\in\Omega\) using the neutral condition. Therefore, we obtain
\[x\ominus\omega_{2}\oplus\omega_{5}\in (C\oplus\Omega )\cap C^{-},\]
which says \(x\ominus \omega_{2}\oplus\omega_{5}\stackrel{\Omega}{=}\omega_{8}\) for some \(\omega_{8}\in\Omega\), since \(C\) is \(\Omega\)-pointed-like. By adding \(\omega_{2}\) on both sides and using the neutral condition together with \(\Omega\oplus\Omega\subseteq\Omega\), we obtain \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\). This completes the proof. \(\blacksquare\)

Let \(L:X\rightarrow V\) be a function from the near vector space \(X\) into the (conventional) vector space \(V\), and let \(C\) be a convex cone in \(X\). If \(L\) is additive and positively homogeneous, then \(L(C)\) is a convex cone in the vector space \(V\). Therefore, we can define another binary relation \(\preccurlyeq\) on \(L(X)\subseteq V\) by
\begin{equation}{\label{lsveq52}}\tag{37}L(x)\preccurlyeq L(y)\mbox{ if and only if }L(y)-L(x)\in L(C).\end{equation}
By referring to (\ref{lsveq2}), it is obvious that \(L(x)\preccurlyeq_{H}L(y)\) implies \(L(x)\preccurlyeq L(y)\).

\begin{equation}{\label{lsvp*334}}\tag{38}\mbox{}\end{equation}

Proposition \ref{lsvp*334}. Let the function \(L:X\rightarrow V\) be additive and positively homogeneous from a near vector space \(X\) into a (conventional) vector space \(V\), and let \(C\) be a convex cone in \(X\). Then, the binary relation \(\preccurlyeq\) is transitive and compatible with the vector addition and scalar multiplication on \(L(X)\). If we further assume that \(X\) has a unique zero element \(\theta\), and that \(C\) contains \(\theta\), then the binary relation \(\preccurlyeq\) is reflexive.

Proof. Since \(L(C)\) is a convex cone in the vector space \(V\) by Proposition \ref{lsvr1}, the conventional argument applies to prove the desired results. \(\blacksquare\)

\begin{equation}{\label{lsvp*5}}\tag{39}\mbox{}\end{equation}

Proposition \ref{lsvp*5} (Order-Preserving Property). Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from a near vector space \(X\) into a (conventional) vector space \(V\), and let \(C\) be a convex cone in \(X\). Assume \(\Omega\subseteq\ker L\). Then, we have the following properties.

(i) \(x\preceq y\) implies \(L(x)\preccurlyeq L(y)\).

(ii) Suppose that the assumptions (a) and (b) are satisfied. If \(\ker L\subseteq C\), then \(L(x)\preccurlyeq L(y)\) implies \(x\preceq y\).

Proof. Since \(\Omega\subseteq\ker L\), it follows that \(L(\omega)=\theta_{V}\) is the zero element of \(V\) for any \(\omega\in\Omega\).
To prove part (i), the definition of \(x\preceq y\) says \(y\ominus x\in^{\Omega}C\), i.e. \(y\ominus x\stackrel{\Omega}{=}c\) for some \(c\in C\). That is,
\[\left (y\ominus x\right )\oplus\omega_{1}=c\oplus\omega_{2}\]
for some \(\omega_{1},\omega_{2}\in\Omega\). By adding \(x\) on both sides, we obtain
\[y\oplus\omega_{3}\oplus\omega_{1}=x\oplus c\oplus\omega_{2},\]
where \(\omega_{3}=x\ominus x\in\Omega\). Since \(L(\omega)=\theta_{V}\), using the additivity, we have
\[L(y)=L(y\oplus\omega_{3}\oplus\omega_{1})=L(x\oplus c\oplus\omega_{2})=L(x)+L(c),\]
which implies
\[L(y)-L(x)=L(c)\in L(C).\]
This shows \(L(x)\preccurlyeq L(y)\).

To prove part (ii), using part (ii) of Proposition \ref{lsvp3}, we have that \(L(x)\preccurlyeq L(y)\) implies
\[L(y\ominus x)=L(y)-L(x)\in L(C).\]
Therefore, there exists \(c\in C\) satisfying
\[L(y\ominus x)=L(c),\mbox{ i.e. }L(y\ominus x)-L(c)=\theta_{V}.\]
Using part (ii) of Proposition \ref{lsvp3} again, we obtain \(L(y\ominus x\ominus c)=\theta_{V}\), i.e. \(y\ominus x\ominus c\in\ker L\), which says \(y\ominus x\ominus c=k\) for some \(k\in\ker L\). By adding \(c\) on both sides, we obtain \((y\ominus x)\oplus\omega =c\oplus k\), where \(\omega =c\ominus c\in\Omega\). Since \(\ker L\subseteq C\), using Lemma \ref{lsvp1}, it follows
\[(y\ominus x)\oplus\omega =c\oplus k\in C,\]
which says \(y\ominus x\stackrel{\Omega}{=}c\oplus k\in C\), i.e. \(y\ominus x\in^{\Omega}C\). Therefore, we obtain \(x\preceq y\). This completes the proof. \(\blacksquare\)
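The order-preserving property of part (i) can be checked numerically on a concrete model. The following sketch (our own illustration, not from the paper) uses the interval space of Section 6.1: intervals with the usual interval arithmetic, the assumed map \(L([a_{1},a_{2}])=a_{1}+a_{2}\) (additive and positively homogeneous with \(\ker L=\Omega=\{[-k,k]:k\geq 0\}\)), and the cone \(C=\{[c_{1},c_{2}]:c_{1}+c_{2}\geq 0\}\).

```python
# Numeric sketch (assumed data) of the order-preserving property on intervals.

def ominus(y, x):
    """y (-) x in interval arithmetic: [y1 - x2, y2 - x1]."""
    return (y[0] - x[1], y[1] - x[0])

def L(iv):
    """An additive, positively homogeneous real-valued map on intervals."""
    return iv[0] + iv[1]

def preceq(x, y):
    """x precedes y when y (-) x lies in C = {[c1, c2] : c1 + c2 >= 0}."""
    d = ominus(y, x)
    return d[0] + d[1] >= 0

x, y = (1.0, 2.0), (2.0, 4.0)
# x precedes y, and accordingly L(x) <= L(y), as part (i) predicts
print(preceq(x, y), L(x), L(y))
```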

4 Vector Optimization Problems.

Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Then \(L(C)\) is a convex cone in \(V\). Given a subset \(F\) of \(X\), we can define the concepts of minimal elements of \(F\) and \(L(F)\) as follows.

  • Based on the binary relation \(\preceq\) in Definition \ref{lsvd12}, an element \(x^{*}\in F\) is called a minimal element of \(F\) when \(x\preceq x^{*}\) for \(x\in F\) implies \(x^{*}\preceq x\). We denote by \(\mbox{MIN}_{C}(F)\) the set of all minimal elements of \(F\) based on \(\preceq\).
  • Based on the binary relation \(\preceq_{H}\) in Definition \ref{lsvd11}, an element \(x^{*}\in F\) is called an H-minimal element of \(F\) when \(x\preceq_{H}x^{*}\) for \(x\in F\) implies \(x^{*}\preceq_{H}x\). We denote by \(H\mbox{-MIN}_{C}(F)\) the set of all H-minimal elements of \(F\) based on \(\preceq_{H}\).
  • Based on the binary relation \(\preccurlyeq\) in (\ref{lsveq52}), an element \(y^{*}\in L(F)\) is called a minimal element of \(L(F)\) when \(y\preccurlyeq y^{*}\) for \(y\in L(F)\) implies \(y^{*}\preccurlyeq y\). We denote by \(\mbox{MIN}_{L(C)}(L(F))\) the set of all minimal elements of \(L(F)\) based on \(\preccurlyeq\).
  • Based on the binary relation \(\preccurlyeq_{H}\) in (\ref{lsveq2}), an element \(y^{*}\in L(F)\) is called an H-minimal element of \(L(F)\) when \(y\preccurlyeq_{H}y^{*}\) for \(y\in L(F)\) implies \(y^{*}\preccurlyeq_{H}y\). We denote by \(H\mbox{-MIN}_{L(C)}(L(F))\) the set of all H-minimal elements of \(L(F)\) based on \(\preccurlyeq_{H}\).
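The defining condition of a minimal element can be tested mechanically on a finite set. The following sketch (toy data in \(\mathbb{R}^{2}\), assumed for illustration; the helper names are ours) enumerates the minimal elements when the ordering is induced by the cone \(\mathbb{R}^{2}_{+}\).

```python
# Enumerate minimal elements of a finite set F ordered by the cone R^2_+.

def leq(x, y):
    """x precedes y when y - x lies in the cone R^2_+ (componentwise)."""
    return all(yi - xi >= 0 for xi, yi in zip(x, y))

def minimal_elements(F):
    """x* is minimal when leq(x, x*) for x in F implies leq(x*, x)."""
    return [xs for xs in F if all(leq(xs, x) for x in F if leq(x, xs))]

F = [(1, 3), (2, 2), (3, 1), (3, 3)]
# (3, 3) is dominated by (1, 3); the other three points are pairwise incomparable
print(minimal_elements(F))
```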

\begin{equation}{\label{p3}}\tag{40}\mbox{}\end{equation}

Remark \ref{p3}. Suppose that \(\preceq\) is antisymmetric. Then, it is clear that \(x^{*}\in F\) is a minimal element of \(F\) if and only if \(x\preceq x^{*}\) for \(x\in F\) implies \(x=x^{*}\). The other three cases can be treated similarly. \(\sharp\)

\begin{equation}{\label{p8}}\tag{41}\mbox{}\end{equation}

Proposition \ref{p8}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega\subseteq\ker L\subseteq C\). Let \(F\) be a subset of \(X\) and \(x^{*}\in F\). Then \(x^{*}\in H\mbox{-MIN}_{C}(F)\) if and only if \(L(x^{*})\in H\mbox{-MIN}_{L(C)}(L(F))\).

Proof. Given any \(x^{*}\in H\mbox{-MIN}_{C}(F)\), assume that there exists \(y\in L(F)\) satisfying \(y\preccurlyeq_{H}L(x^{*})\), where \(y=L(x)\) for some \(x\in F\); that is, we have \(L(x)\preccurlyeq_{H}L(x^{*})\). We want to show \(L(x^{*})\preccurlyeq_{H}L(x)=y\). Using part (ii) of Proposition \ref{lsvp5}, we have \(x\preceq_{H}x^{*}\), which also says \(x^{*}\preceq_{H}x\) by the definition of H-minimal element. Using part (i) of Proposition \ref{lsvp5}, it follows that \(L(x^{*})\preccurlyeq_{H}L(x)=y\). This shows that \(L(x^{*})\) is an H-minimal element of \(L(F)\) with respect to the convex cone \(L(C)\).

To prove the converse, given any \(L(x^{*})\in H\mbox{-MIN}_{L(C)}(L(F))\), assume that there exists \(x\in F\) satisfying \(x\preceq_{H}x^{*}\). We want to show \(x^{*}\preceq_{H}x\). Using part (i) of Proposition \ref{lsvp5}, we have \(L(x)\preccurlyeq_{H}L(x^{*})\). Since \(L(x^{*})\in H\mbox{-MIN}_{L(C)}(L(F))\), it follows that \(L(x^{*})\preccurlyeq_{H}L(x)\). Using part (ii) of Proposition \ref{lsvp5}, we also have \(x^{*}\preceq_{H}x\), which shows that \(x^{*}\) is an H-minimal element of \(F\). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lsvp6}}\tag{42}\mbox{}\end{equation}

Proposition \ref{lsvp6}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega\subseteq\ker L\subseteq C\). Let \(F\) be a subset of \(X\) and \(x^{*}\in F\). Then \(x^{*}\in\mbox{MIN}_{C}(F)\) if and only if \(L(x^{*})\in\mbox{MIN}_{L(C)}(L(F))\).

Proof. Applying Proposition \ref{lsvp*5} to the proof of Proposition \ref{p8}, we can similarly obtain the desired result. \(\blacksquare\)

Let \(U\) be a (conventional) vector space, and let \(X\) be a near vector space. The function \(f:U\rightarrow X\) defined on \(U\) is called an \(X\)-valued function. Now, we consider the following optimization problem
\[\begin{array}{lll}
\mbox{(XOP)} & \min & f(u)\\
& \mbox{subject to} & u\in G,
\end{array}\]

where \(G\) is a subset of \(U\). For example, the near vector space \(X\) can be taken to be the spaces in Examples \ref{lsv100} and \ref{null29}. Let
\[F\equiv f(G)=\{f(u):u\in G\}\subseteq X\]
be the set of all objective values of problem (XOP), and let \(C\) be a convex cone in \(X\). The solution concepts of problem (XOP) are based on the partial orderings \(\preceq_{H}\) and \(\preceq\).

  • We say that \(u^{*}\) is a minimizer of problem (XOP) when \(f(u^{*})\in \mbox{MIN}_{C}(F)\).
  • We say that \(u^{*}\) is an H-minimizer of problem (XOP) when \(f(u^{*})\in H\mbox{-MIN}_{C}(F)\).

In order to solve problem (XOP), we are going to introduce an auxiliary optimization problem that is solvable by well-known techniques.

Let \(L:X\rightarrow V\) be a function from a near vector space \(X\) into a (conventional) vector space \(V\). We can consider the composition function \(L\circ f:U\rightarrow V\) of functions \(L\) and \(f\). Now, we consider the following vector optimization problem
\[\begin{array}{lll}
\mbox{(VOP)} & \min & (L\circ f)(u)\\
& \mbox{subject to} & u\in G.
\end{array}\]

It is clear that the set of all objective values of problem (VOP) is \(L(F)\). The solution concepts of problem (VOP) are based on the partial orderings \(\preccurlyeq_{H}\) and \(\preccurlyeq\).

  • We say that \(u^{*}\) is a minimizer of problem (VOP) when \((L\circ f)(u^{*})\in \mbox{MIN}_{L(C)}(L(F))\).
  • We say that \(u^{*}\) is an H-minimizer of problem (VOP) when \((L\circ f)(u^{*})\in H\mbox{-MIN}_{L(C)}(L(F))\).

\begin{equation}{\label{lsvp10}}\tag{43}\mbox{}\end{equation}

Proposition \ref{lsvp10}. Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega\subseteq\ker L\subseteq C\). Then, we have the following properties.

(i) \(u^{*}\) is an H-minimizer of problem (XOP) if and only if \(u^{*}\) is an H-minimizer of problem (VOP).

(ii) \(u^{*}\) is a minimizer of problem (XOP) if and only if \(u^{*}\) is a minimizer of problem (VOP).

Proof. Since the feasible sets of problems (XOP) and (VOP) are identical, the results follow immediately from Propositions \ref{p8} and \ref{lsvp6} by taking \(x^{*}=f(u^{*})\). \(\blacksquare\)

By Proposition \ref{lsvp10}, in order to solve the original problem (XOP), it suffices to solve problem (VOP), where the domain \(U\) and range \(V\) of the objective function \(L\circ f\) are (conventional) vector spaces. Therefore, we can apply well-known techniques in vector optimization to solve problem (VOP). For example, the scalarization technique will be invoked in this paper to solve problem (VOP).

5 Scalarization.

In order to present the scalarization, we first provide some results, which will be used for the later discussion.

\begin{equation}{\label{lsvp7}}\tag{44}\mbox{}\end{equation}

Proposition \ref{lsvp7}. Let \(X\) be a near vector space with the null set \(\Omega\), and let \(C\) be a convex cone and be pointed-like in \(X\). Consider the function \(L:X\rightarrow V\) from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega\oplus\Omega\subseteq\Omega\). If \(L\) is additive and positively homogeneous satisfying \(\ker L=\Omega\), then \(L(C)\) is a pointed convex cone in \(V\).

Proof. Since \(C\) is a convex cone, we see that \(L(C)\) is a convex cone in \(V\) from Proposition \ref{lsvr1}. It remains to show \(L(C)\cap -L(C)=\{\theta_{V}\}\), where \(\theta_{V}\) is the zero element of the vector space \(V\). Given \(\eta\in L(C)\cap -L(C)\), we have \(\eta =L(x)=-L(y)\) for some \(x,y\in C\). This means
\[L(x\oplus y)=L(x)+L(y)=\theta_{V},\]
which implies \(x\oplus y\in\Omega\) by the assumption of \(\ker L=\Omega\). By referring to (\ref{eq3}), it also says \(x\in C^{-}\), since \(y\in C\). In other words, we have \(x\in C\cap C^{-}\). Since \(C\) is pointed-like, we also have \(x\stackrel{\Omega}{=}\omega\) for some \(\omega\in\Omega\). Therefore, we obtain \(x\oplus\omega_{1}=\omega\oplus\omega_{2}\) for some \(\omega_{1},\omega_{2}\in\Omega\). Since \(\Omega\oplus\Omega\subseteq\Omega\), it follows \(x\oplus\omega_{1}\in\Omega\). Using the assumption of \(\ker L=\Omega\), we obtain
\[\eta =L(x)=L(x)+\theta_{V}=L(x)+L(\omega _{1})=L(x\oplus\omega_{1})=\theta_{V}.\]
This completes the proof. \(\blacksquare\)

Let \(X\) be a near vector space, and let \(C\) be a convex cone in \(X\). Let the function \(L:X\rightarrow V\) be additive and positively homogeneous from \(X\) into a (conventional) vector space \(V\). Then, we see that \(\bar C\equiv L(C)\) is a convex cone in \(V\) by Proposition \ref{lsvr1}. We write \(V'\) to denote the collection of all linear functionals from \(V\) to \(\mathbb{R}\). The dual cone of \(\bar C\) is defined by
\[\bar C_{V'}=\left\{\phi\in V':\phi (\bar{c})\geq 0\mbox{ for all } \bar{c}\in\bar C\right\}.\]
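For concreteness, we record a simple finite-dimensional example of the dual cone (an illustration we add here; it is not taken from the paper).

```latex
\text{Take } V=\mathbb{R}^{m} \text{ and } \bar C=\mathbb{R}^{m}_{+}.
\text{ Every } \phi\in V' \text{ has the form }
\phi (y)=\zeta_{1}y_{1}+\cdots +\zeta_{m}y_{m}, \text{ and }
\phi\in\bar C_{V'} \text{ if and only if } \zeta_{i}\geq 0 \text{ for } i=1,\cdots ,m,
\text{ since } \phi (e_{i})=\zeta_{i} \text{ for the standard basis vectors } e_{i}\in\bar C .
```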
Now, we are in a position to present the scalarization.

\begin{equation}{\label{lsvt30}}\tag{45}\mbox{}\end{equation}

Theorem \ref{lsvt30}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone and be pointed-like in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega=\ker L\subseteq C\). Suppose that there exists a linear functional \(\phi\in \bar C_{V'}\) and an element \(u^{*}\in G\) satisfying
\begin{equation}{\label{lsv301}}\tag{46}
\phi ((L\circ f)(u^{*}))<\phi ((L\circ f)(u))\mbox{ for all }u\in G\setminus\{u^{*}\}.
\end{equation}

Then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Using (\ref{lsveq52}) and Proposition \ref{lsvp7}, we see that the binary relation \(\preccurlyeq\) is antisymmetric. Suppose that \(u^{*}\) is not a minimizer of problem (VOP). We are going to derive a contradiction. By definition, \((L\circ f)(u^{*})\not\in\mbox{MIN}_{L(C)}(L(F))\), which says that \((L\circ f)(u^{*})\) is not a minimal element of \(L(F)\) with respect to the binary relation \(\preccurlyeq\). Since \(\preccurlyeq\) is antisymmetric, using the definition of minimal element of \(L(F)\) and Remark \ref{p3}, there exists \(u\in G\) satisfying
\[(L\circ f)(u)\neq (L\circ f)(u^{*})\mbox{ and }(L\circ f)(u)\preccurlyeq (L\circ f)(u^{*}).\]
We also see \(u\neq u^{*}\). According to (\ref{lsveq52}), we have
\[(L\circ f)(u^{*})-(L\circ f)(u)\in L(C)=\bar C.\]
Since \(\phi\in \bar C_{V'}\), we obtain
\[\phi\left ((L\circ f)(u^{*})\right )-\phi\left ((L\circ f)(u)\right )=\phi\left ((L\circ f)(u^{*})-(L\circ f)(u)\right )\geq 0,\]
which contradicts (\ref{lsv301}). This contradiction says that \(u^{*}\) is a minimizer of problem (VOP). Using part (ii) of Proposition \ref{lsvp10}, it follows that \(u^{*}\) is a minimizer of problem (XOP). Using part (i) of Proposition \ref{lsvp10} and considering the binary relation \(\preccurlyeq_{H}\) given in (\ref{lsveq2}), the above arguments are still valid to show that \(u^{*}\) is an H-minimizer of problem (XOP). This completes the proof. \(\blacksquare\)

Next, we present further results under different assumptions.

\begin{equation}{\label{lsvt2*30}}\tag{47}\mbox{}\end{equation}

Theorem \ref{lsvt2*30}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied.
Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Suppose that the following conditions are satisfied.

  • \(X\) has a unique zero element \(\theta\) and \(\theta\in C\).
  •  The inclusions \(\Omega\subseteq\ker L\subseteq C\) are satisfied.

If there exists a linear functional \(\phi\in \bar C_{V'}\) and an element \(u^{*}\in G\) satisfying
\[\phi ((L\circ f)(u^{*}))<\phi ((L\circ f)(u))\mbox{ for all }u\in G\setminus\{u^{*}\},\]
then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Suppose that \(u^{*}\) is not an H-minimizer of problem (VOP). We are going to derive a contradiction. By definition, \((L\circ f)(u^{*})\not\in H\mbox{-MIN}_{L(C)}(L(F))\), which says that \((L\circ f)(u^{*})\) is not an H-minimal element of \(L(F)\) with respect to the binary relation \(\preccurlyeq_{H}\). Using the definition of H-minimal element of \(L(F)\), there exists \(u\in G\) satisfying
\[(L\circ f)(u)\preccurlyeq_{H} (L\circ f)(u^{*})\mbox{ and }(L\circ f)(u^{*})\not\preccurlyeq_{H} (L\circ f)(u).\]
Since \((L\circ f)(u^{*})\preccurlyeq_{H} (L\circ f)(u^{*})\) by part (i) of Proposition \ref{lsvp334}, it follows \(u\neq u^{*}\). According to (\ref{lsveq2}), we also have
\[(L\circ f)(u^{*})-(L\circ f)(u)\in L(C)=\bar C.\]
Since \(\phi\in \bar C_{V'}\), we obtain
\[\phi\left ((L\circ f)(u^{*})\right )-\phi\left ((L\circ f)(u)\right )=\phi\left ((L\circ f)(u^{*})-(L\circ f)(u)\right )\geq 0,\]
which contradicts the assumed strict inequality, since \(u\neq u^{*}\). This contradiction says that \(u^{*}\) is an H-minimizer of problem (VOP). Using part (i) of Proposition \ref{lsvp10}, it follows that \(u^{*}\) is an H-minimizer of problem (XOP). Using Proposition \ref{lsvp*334} and part (ii) of Proposition \ref{lsvp10} by considering the binary relation \(\preccurlyeq\), the above arguments are still valid to show that \(u^{*}\) is a minimizer of problem (XOP). This completes the proof. \(\blacksquare\)

Given \(\phi\in\bar C_{V'}\), we consider the following real-valued (scalar) optimization problem
\[\begin{array}{lll}
\mbox{(SOP)} & \min & \phi((L\circ f)(u))\\
& \mbox{subject to} & u\in G.
\end{array}\]

Then, we have the following results.

\begin{equation}{\label{p7}}\tag{48}\mbox{}\end{equation}

Theorem \ref{p7}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Then, we have the following properties.

(i) Suppose that \(C\) is pointed-like in \(X\), and that \(\Omega=\ker L\subseteq C\). If \(u^{*}\) is a unique optimal solution of scalar optimization problem (SOP), then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

(ii) Suppose that \(X\) has a unique zero element \(\theta\) with \(\theta\in C\), and that \(\Omega\subseteq\ker L\subseteq C\). If \(u^{*}\) is a unique optimal solution of scalar optimization problem (SOP), then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Part (i) follows from Theorem \ref{lsvt30}, and part (ii) follows from Theorem \ref{lsvt2*30}. This completes the proof. \(\blacksquare\)
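The content of Theorem \ref{p7} can be illustrated numerically. In the following sketch (toy data assumed: \(V=\mathbb{R}^{2}\), \(L(C)=\mathbb{R}^{2}_{+}\), and a set `F_img` standing in for \(L(F)\); the helper names are ours), the unique optimum of the scalarized problem is indeed a minimal element of \(L(F)\).

```python
# Scalarization sketch: a dual-cone functional's unique minimum is minimal.

F_img = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.5), (3.0, 3.0)]  # stands in for L(F)

def phi(y):
    """phi(y) = 2*y1 + y2; nonnegative coefficients, so phi is in the dual cone of R^2_+."""
    return 2.0 * y[0] + y[1]

def leq(a, b):
    """the ordering on V induced by the cone R^2_+"""
    return b[0] - a[0] >= 0 and b[1] - a[1] >= 0

y_star = min(F_img, key=phi)  # the scalar optimum is unique for this data
# y_star is a minimal element: anything preceding it in the order must be preceded by it
assert all(leq(y_star, y) for y in F_img if leq(y, y_star))
print(y_star)
```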

Now, we consider the quasi-interior of the dual cone of \(\bar C\) defined by
\[\bar C^{\circ}_{V'}=\left\{\phi\in V':\phi (\bar{c})>0\mbox{ for all }\bar{c}\in\bar C\setminus\{\theta_{V}\}\right\}.\]

Then, we have different types of scalarization.

\begin{equation}{\label{lsvt31}}\tag{49}\mbox{}\end{equation}

Theorem \ref{lsvt31}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone and be pointed-like in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Assume \(\Omega=\ker L\subseteq C\). Suppose that there exists a linear functional \(\phi^{\circ}\in \bar C_{V'}^{\circ}\) and an element \(u^{*}\in G\) satisfying
\begin{equation}{\label{lsv*301}}\tag{50}
\phi^{\circ} ((L\circ f)(u^{*}))\leq\phi^{\circ} ((L\circ f)(u))\mbox{ for all }u\in G.
\end{equation}

Then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Suppose that \(u^{*}\) is not a minimizer of problem (VOP). Using the argument in the proof of Theorem \ref{lsvt30}, there exists \(u\in G\) satisfying
\[(L\circ f)(u)\neq (L\circ f)(u^{*})\mbox{ and }(L\circ f)(u)\preccurlyeq (L\circ f)(u^{*}).\]
According to (\ref{lsveq52}), we also have
\[\theta_{V}\neq (L\circ f)(u^{*})-(L\circ f)(u)\in L(C)=\bar C.\]
Since \(\phi^{\circ}\in\bar C^{\circ}_{V'}\), we obtain
\[\phi^{\circ}\left ((L\circ f)(u^{*})\right )-\phi^{\circ}\left ((L\circ f)(u)\right )
=\phi^{\circ}\left ((L\circ f)(u^{*})-(L\circ f)(u)\right )>0,\]

which contradicts (\ref{lsv*301}). This contradiction says that \(u^{*}\) is a minimizer of problem (VOP). Using part (ii) of Proposition \ref{lsvp10}, we also see that \(u^{*}\) is a minimizer of problem (XOP). Using part (i) of Proposition \ref{lsvp10} and considering the binary relation \(\preccurlyeq_{H}\), the above arguments are still valid to show that \(u^{*}\) is an H-minimizer of problem (XOP). This completes the proof. \(\blacksquare\)

\begin{equation}{\label{lsvt2*31}}\tag{51}\mbox{}\end{equation}

Theorem \ref{lsvt2*31}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Suppose that the following conditions are satisfied.

  • \(X\) has a unique zero element \(\theta\) and \(\theta\in C\).
  • The inclusions \(\Omega\subseteq\ker L\subseteq C\) are satisfied.

If there exists a linear functional \(\phi^{\circ}\in \bar C^{\circ}_{V'}\) and an element \(u^{*}\in G\) satisfying
\[\phi^{\circ} ((L\circ f)(u^{*}))\leq\phi^{\circ} ((L\circ f)(u))\mbox{ for all }u\in G,\]
then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Suppose that \(u^{*}\) is not an H-minimizer of problem (VOP). Using the argument in the proof of Theorem \ref{lsvt2*30}, there exists \(u\in G\) satisfying
\[(L\circ f)(u)\preccurlyeq_{H} (L\circ f)(u^{*})\mbox{ and }(L\circ f)(u^{*})\not\preccurlyeq_{H} (L\circ f)(u).\]
If \((L\circ f)(u^{*})=(L\circ f)(u)\), then it contradicts \((L\circ f)(u^{*})\not\preccurlyeq_{H} (L\circ f)(u)\) by part (i) of Proposition \ref{lsvp334}; hence \((L\circ f)(u^{*})\neq (L\circ f)(u)\). According to (\ref{lsveq2}), we have
\[\theta_{V}\neq (L\circ f)(u^{*})-(L\circ f)(u)\in L(C)=\bar C.\]
Since \(\phi^{\circ}\in\bar C^{\circ}_{V'}\), it follows
\[\phi^{\circ}\left ((L\circ f)(u^{*})\right )-\phi^{\circ}\left ((L\circ f)(u)\right )
=\phi^{\circ}\left ((L\circ f)(u^{*})-(L\circ f)(u)\right )>0.\]

This contradiction says that \(u^{*}\) is an H-minimizer of problem (VOP). Using part (i) of Proposition \ref{lsvp10}, we also see that \(u^{*}\) is an H-minimizer of problem (XOP). Using Proposition \ref{lsvp*334} and part (ii) of Proposition \ref{lsvp10} by considering the binary relation \(\preccurlyeq\), the above arguments are still valid to show that \(u^{*}\) is a minimizer of problem (XOP). This completes the proof. \(\blacksquare\)

Given \(\phi^{\circ}\in\bar C_{V'}^{\circ}\), we consider the following real-valued (scalar) optimization problem
\[\begin{array}{lll}
(\mbox{SOP}^{\circ}) & \min & \phi^{\circ}((L\circ f)(u))\\
& \mbox{subject to} & u\in G.
\end{array}\]

Then, we have the following results.

\begin{equation}{\label{p*7}}\tag{52}\mbox{}\end{equation}

Theorem \ref{p*7}. Let \(X\) be a near vector space with the null set \(\Omega\) satisfying \(\Omega\oplus\Omega\subseteq\Omega\), and let \(C\) be a convex cone in \(X\) such that the assumptions (a) and (b) are satisfied. Let \(L:X\rightarrow V\) be an additive and positively homogeneous function from \(X\) into a (conventional) vector space \(V\). Then, we have the following properties.

(i) Suppose that \(C\) is pointed-like in \(X\), and that \(\Omega=\ker L\subseteq C\). If \(u^{*}\) is an optimal solution of scalar optimization problem \((\mbox{SOP}^{\circ})\), then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

(ii) Suppose that \(X\) has a unique zero element \(\theta\) with \(\theta\in C\), and that \(\Omega\subseteq\ker L\subseteq C\). If \(u^{*}\) is an optimal solution of scalar optimization problem \((\mbox{SOP}^{\circ})\), then \(u^{*}\) is both a minimizer and H-minimizer of problem (XOP).

Proof. Part (i) follows from Theorem \ref{lsvt31}, and part (ii) follows from Theorem \ref{lsvt2*31}. This completes the proof. \(\blacksquare\)

6 Linear Optimization Problems.

Let \(\eta :X\rightarrow\mathbb{R}\) be an additive and homogeneous real-valued function defined on the near vector space \(X\) satisfying \(\eta(\omega)=0\) for any \(\omega\in\Omega\). Let
\[C=\left\{c\in X:\eta(c)\geq 0\right\}.\]
It is clear that \(C\) is a convex cone in \(X\) satisfying \(\Omega\subseteq C\). Given any fixed \(x_{i}\in X\) for \(i=1,\cdots ,n\), we consider the following generalized linear programming problem
\[\begin{array}{lll}
\mbox{(GLP)} & \min & f(u_{1},\cdots ,u_{n})=x_{1}u_{1}\oplus x_{2}u_{2}
\oplus\cdots\oplus x_{n}u_{n}\\
& \mbox{subject to} & {\bf u}=(u_{1},\cdots ,u_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }
{\bf u}\in\mathbb{R}_{+}^{n},
\end{array}\]

where \(G\) is a feasible set determined by real-valued linear constraints, or by \(X\)-valued linear constraints that are ordered by a partial ordering on \(X\). The objective function \(f\) is an \(X\)-valued function whose coefficients \(x_{i}\) are values in \(X\).

Given any fixed \(\xi_{i}\in\mathbb{R}\) with \(\xi_{i}\neq 0\) for \(i=1,\cdots ,m\), let \(L:X\rightarrow\mathbb{R}^{m}\) be defined by
\[L\left (x\right )=\left (\xi_{1}\eta(x),\xi_{2}\eta(x),\cdots ,\xi_{m}\eta(x)\right ).\]
Since \(\eta\) is additive and positively homogeneous, it is clear that \(L\) is additive and positively homogeneous. We also have
\[\bar{C}\equiv L(C)=\left\{\left (\xi_{1}\eta(c),\cdots ,\xi_{m}\eta(c)\right )\in\mathbb{R}^{m}:\eta(c)\geq 0\right\}.\]
We present some useful properties.

  • Given any \(\omega\in\Omega\), we have \(L(\omega )=(0,\cdots ,0)\) which is the zero element of \(\mathbb{R}^{m}\). This shows \(\Omega\subseteq\ker L\).
  • Given any \(x\in\ker L\), i.e. \(L(x)={\bf 0}\), we obtain \(\eta(x)=0\), i.e. \(x\in C\). This shows \(\ker L\subseteq C\).

Therefore, we obtain
\begin{equation}{\label{eq111}}\tag{53}
\Omega\subseteq\ker L\subseteq C.
\end{equation}

We consider the composition function \(L\circ f\) given by
\[(L\circ f)\left (u_{1},\cdots ,u_{n}\right )
=\left (\xi_{1}\cdot (\eta\circ f)({\bf u}),\cdots ,\xi_{m}\cdot (\eta\circ f)({\bf u})\right )\in\mathbb{R}^{m},\]

and take the linear functional \(\phi^{\circ}:\mathbb{R}^{m}\rightarrow\mathbb{R}\) given by
\[\phi^{\circ} (u_{1},\cdots ,u_{m})=\zeta_{1}u_{1}+\zeta_{2}u_{2}+\cdots +\zeta_{m}u_{m}+k,\]
where \(k>0\) is a constant and \(\zeta_{i}\neq 0\) for \(i=1,\cdots ,m\) are also constants. Then, we have
\begin{align*}\left (\phi^{\circ}\circ L\right )\left (x\right ) & =\phi^{\circ}\left (\xi_{1}\eta(x),\xi_{2}\eta(x),\cdots ,\xi_{m}\eta(x)\right )\\ & =\xi_{1}\zeta_{1}\eta(x)+\cdots+\xi_{m}\zeta_{m}\eta(x)+k\equiv\gamma\eta(x)+k,\end{align*}
where
\[\gamma\equiv\xi_{1}\zeta_{1}+\cdots+\xi_{m}\zeta_{m}\in\mathbb{R}.\]
Here we take \(\zeta_{i}\) for \(i=1,\cdots ,m\) satisfying \(\gamma >0\). Then, we have \(\phi^{\circ} (\bar{c})>0\) for all \(\bar{c}\in\bar{C}\), which also says \(\phi^{\circ}\in\bar{C}_{V'}^{\circ}\).

Now, we consider the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}^{\circ}) & \min & \phi^{\circ}\left ((L\circ f)(u_{1},\cdots ,u_{n})\right )\\
& \mbox{subject to} & {\bf u}=(u_{1},\cdots ,u_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf u}\in\mathbb{R}_{+}^{n},
\end{array}\]

where the real-valued linear objective function is given by
\begin{align*}\phi^{\circ}\left ((L\circ f)({\bf u})\right ) & =\phi^{\circ}\left (\xi_{1}\cdot (\eta\circ f)({\bf u}),\cdots ,\xi_{m}\cdot (\eta\circ f)({\bf u})\right )\\ & =\xi_{1}\zeta_{1}(\eta\circ f)({\bf u})+\cdots+\xi_{m}\zeta_{m}(\eta\circ f)({\bf u})+k=\gamma(\eta\circ f)({\bf u})+k. \end{align*}
We are going to apply Theorem \ref{p*7} by considering two different assumptions.

(A1) Suppose that \(X\) has a unique zero element \(\theta\) with \(\theta\in C\). Using (\ref{eq111}) and part (ii) of Theorem \ref{p*7}, if \({\bf u}^{*}=(u_{1}^{*},\cdots ,u_{n}^{*})\) is an optimal solution of linear programming problem \((\mbox{LP}^{\circ})\), then \({\bf u}^{*}\) is both a minimizer and H-minimizer of problem (GLP).

(A2) Assume \(\ker\eta=\Omega\). Example \ref{p21} says that \(C\) is both a pointed-like and \(\Omega\)-pointed-like convex cone in \(X\). Given any \(x\in\ker L\), i.e. \(L(x)={\bf 0}\), we obtain \(\eta(x)=0\), i.e. \(x\in\ker\eta=\Omega\), which shows \(\ker L\subseteq\Omega\), and hence \(\ker L=\Omega\) by (\ref{eq111}). Using part (i) of Theorem \ref{p*7}, if \({\bf u}^{*}=(u_{1}^{*},\cdots ,u_{n}^{*})\) is an optimal solution of linear programming problem \((\mbox{LP}^{\circ})\), then \({\bf u}^{*}\) is both a minimizer and H-minimizer of problem (GLP).

It is clear that solving the linear programming problem \((\mbox{LP}^{\circ})\) is equivalent to solving the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}) & \min & (\eta\circ f)(u_{1},\cdots ,u_{n})\\
& \mbox{subject to} & {\bf u}=(u_{1},\cdots ,u_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf u}\in\mathbb{R}_{+}^{n},
\end{array}\]

where the positive constants \(\gamma\) and \(k\) do not affect the optimal solutions and can be ignored. Also, the problem (LP) can be written as follows
\[\begin{array}{lll}
(\mbox{LP}) & \min & \eta(x_{1})u_{1}+\cdots +\eta(x_{n})u_{n}\\
& \mbox{subject to} & {\bf u}=(u_{1},\cdots ,u_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf u}\in\mathbb{R}_{+}^{n}.
\end{array}\]
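The equivalence between \((\mbox{LP}^{\circ})\) and (LP) amounts to the fact that the objective of \((\mbox{LP}^{\circ})\) is \(\gamma(\eta\circ f)({\bf u})+k\) with \(\gamma >0\), a strictly increasing transformation of the objective of (LP). A numeric sanity check (all data below is assumed for illustration):

```python
# Toy check that (LP°) and (LP) share the same optimal solutions.

eta_x = [2.0, -1.0, 0.5]                            # hypothetical values eta(x_i)
G = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]    # a toy finite feasible set
gamma, k = 3.0, 7.0                                 # gamma > 0 and k > 0

def lp_obj(u):
    """objective of (LP): (eta∘f)(u) = sum_i eta(x_i) * u_i"""
    return sum(e * ui for e, ui in zip(eta_x, u))

def lp_circ_obj(u):
    """objective of (LP°): gamma * (eta∘f)(u) + k"""
    return gamma * lp_obj(u) + k

# the strictly increasing transformation preserves the argmin
assert min(G, key=lp_obj) == min(G, key=lp_circ_obj)
print(min(G, key=lp_obj))
```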

Therefore, in order to solve the generalized linear programming problem (GLP) by obtaining the minimizer and H-minimizer, we can solve the conventional linear programming problem (LP) shown above. In the sequel, we present two practical problems.

6.1 Interval-Valued Linear Programming Problems.

Let \({\cal I}\) be the collection of all bounded closed intervals in \(\mathbb{R}\). We see that \({\cal I}\) is a near vector space rather than a (conventional) vector space. The null set \(\Omega\) of \({\cal I}\) is given by
\[\Omega=\left\{[-k,k]:k\geq 0\right\}.\]
It is clear that the following set
\[C=\left\{\left [c_{1},c_{2}\right ]\in {\cal I}:c_{1}+c_{2}\geq 0\right\}\]
is a convex cone and is pointed-like in \({\cal I}\) satisfying \(\Omega\subseteq C\).

Let \(I_{i}=[a_{1}^{(i)},a_{2}^{(i)}]\) be bounded closed intervals in \(\mathbb{R}\) for \(i=1,\cdots ,n\). We consider the following interval-valued linear programming problem
\[\begin{array}{lll}
\mbox{(ILP)} & \min & f(x_{1},\cdots ,x_{n})=x_{1}I_{1}\oplus x_{2}I_{2}\oplus\cdots\oplus x_{n}I_{n}\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n},
\end{array}\]

where \(G\) is a feasible set consisting of linear constraints. The problem (ILP) corresponds to the problem (XOP).

We consider the function \(L:{\cal I}\rightarrow\mathbb{R}^{2}\) defined by
\[L\left (\left [a_{1},a_{2}\right ]\right )=\left (-a_{1}-a_{2},a_{1}+a_{2}\right ).\]
We see that \(L\) is additive and positively homogeneous. Now, we have
\[\bar{C}\equiv L(C)=\left\{\left (-c_{1}-c_{2},c_{1}+c_{2}\right )\in\mathbb{R}^{2}:c_{1}+c_{2}\geq 0\right\}.\]
Given any \(\omega =[-k,k]\in\Omega\), we have \(L(\omega )=(0,0)\), which is the zero element of \(\mathbb{R}^{2}\). We also see that
\[L([a_{1},a_{2}])=(0,0)\mbox{ implies }a_{2}=-a_{1},\]
which shows \([a_{1},a_{2}]=[a_{1},-a_{1}]\in\Omega\), since \(a_{1}\leq a_{2}\) forces \(-a_{1}\geq 0\). Therefore, we obtain \(\ker L=\Omega\).

The composition function \(L\circ f\) is given by
\[(L\circ f)\left (x_{1},\cdots ,x_{n}\right )=\left (g_{1}\left (x_{1},\cdots ,x_{n}\right ),
g_{2}\left (x_{1},\cdots ,x_{n}\right )\right )\in\mathbb{R}^{2},\]

where
\[g_{1}(x_{1},\cdots ,x_{n})=-\left (a_{1}^{(1)}+a_{2}^{(1)}\right )x_{1}-\cdots -\left (a_{1}^{(n)}+a_{2}^{(n)}\right )x_{n}\]
and
\[g_{2}(x_{1},\cdots ,x_{n})=\left (a_{1}^{(1)}+a_{2}^{(1)}\right )x_{1}+\cdots +
\left (a_{1}^{(n)}+a_{2}^{(n)}\right )x_{n}=-g_{1}(x_{1},\cdots ,x_{n}).\]

Using this composition function \(L\circ f\), we can set up the problem (VOP), which we then solve by the scalarization technique as follows.

We take the linear functional \(\phi^{\circ}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) given by
\[\phi^{\circ} (x,y)=\frac{1}{2}x+\frac{3}{2}y+k,\]
where \(k>0\) is a constant. We can show that \(\phi^{\circ}(\bar{c})>0\) for all \(\bar{c}\in\bar{C}\) and that \(\phi^{\circ}\in\bar{C}_{V'}^{\circ}\), where \(V=\mathbb{R}^{2}\). We also have
\begin{align*}\left (\phi^{\circ}\circ L\right )\left (\left [a_{1},a_{2}\right ]\right )& =\phi^{\circ}\left (-a_{1}-a_{2},a_{1}+a_{2}\right )\\& =\frac{1}{2}\left (-a_{1}-a_{2}\right )+\frac{3}{2}\left (a_{1}+a_{2}\right )+k=a_{1}+a_{2}+k.\end{align*}
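A quick spot-check of this identity, with arbitrary interval data and an arbitrary value of \(k\) chosen only for illustration:

```python
# Spot-check, on arbitrary data, that (phi o L)([a1, a2]) = a1 + a2 + k.
k = 0.5
phi = lambda x, y: 0.5 * x + 1.5 * y + k      # phi(x, y) = x/2 + 3y/2 + k
L = lambda a1, a2: (-a1 - a2, a1 + a2)        # L([a1, a2])

for a1, a2 in [(1.0, 3.0), (-2.0, 5.0), (0.0, 0.0)]:
    assert phi(*L(a1, a2)) == a1 + a2 + k
print("identity holds on the sample data")
```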

Now, we consider the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}^{\circ}) & \min & \phi^{\circ}\left (\left (L\circ f)(x_{1},\cdots ,x_{n}\right )\right )\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n}.
\end{array}\]

The objective function is given by
\begin{align*} & \phi^{\circ}\left (\left (L\circ f)(x_{1},\cdots ,x_{n}\right )\right )=\phi^{\circ}\left (g_{1}\left (x_{1},\cdots ,x_{n}\right ),g_{2}\left (x_{1},\cdots ,x_{n}\right )\right )\\& \quad =\frac{1}{2}\cdot g_{1}(x_{1},\cdots ,x_{n})+\frac{3}{2}\cdot g_{2}(x_{1},\cdots ,x_{n})+k=g_{2}(x_{1},\cdots ,x_{n})+k\\& \quad =\left (a_{1}^{(1)}+a_{2}^{(1)}\right )x_{1}+\cdots +\left (a_{1}^{(n)}+a_{2}^{(n)}\right )x_{n}+k.\end{align*}
Part (i) of Theorem \ref{p*7} says that if \({\bf x}^{*}=(x_{1}^{*},\cdots ,x_{n}^{*})\) is an optimal solution of the linear programming problem \((\mbox{LP}^{\circ})\), then \({\bf x}^{*}\) is both a minimizer and an H-minimizer of problem (ILP).

We also see that solving the linear programming problem \((\mbox{LP}^{\circ})\) is equivalent to solving the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}) & \min & \left (a_{1}^{(1)}+a_{2}^{(1)}\right )x_{1}+\cdots+\left (a_{1}^{(n)}+a_{2}^{(n)}\right )x_{n}\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n},
\end{array}\]

where the positive constant \(k\) has been dropped, since adding a constant to the objective does not change the optimal solutions. In other words, we can simply solve the conventional linear programming problem (LP) to obtain a minimizer and an H-minimizer of the interval-valued linear programming problem (ILP).
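To make the reduction concrete, the following sketch builds and solves a toy instance of (LP). The intervals and the feasible region are invented for illustration, and since a linear objective attains its minimum at a vertex, enumerating the vertices stands in for an LP solver:

```python
# A hand-checkable instance of (LP) from Section 6.1; the interval data and
# the feasible set are invented for illustration.
I = [(1.0, 3.0), (-1.0, 4.0)]              # I1 = [1, 3], I2 = [-1, 4]
c = [a1 + a2 for (a1, a2) in I]            # (LP) coefficients a1 + a2 per interval

# Feasible region: x >= 0, x1 + x2 >= 1, x1 <= 1, x2 <= 1.  The linear
# objective attains its minimum at a vertex, so enumerating the three
# vertices replaces a real LP solver on this toy instance.
vertices = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
x_star = min(vertices, key=lambda x: c[0] * x[0] + c[1] * x[1])
print(c)        # [4.0, 3.0]
print(x_star)   # (0.0, 1.0)
```

The vertex returned here is then a minimizer and an H-minimizer of the corresponding (ILP) instance.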

6.2 Linear Set Optimization Problems.

Let \((X,\parallel\cdot\parallel)\) be a normed space, and let \({\cal K}_{cc}(X)\) be the collection of all compact and convex subsets of \(X\). Given any \(A,B\in {\cal K}_{cc}(X)\), the set addition is defined by
\[A\oplus B=\left\{a+b:a\in A\mbox{ and }b\in B\right\}\]
and the scalar multiplication in \({\cal K}_{cc}(X)\) is defined by
\[\lambda A=\left\{\lambda a:a\in A\right\},\]
where \(\lambda\) is a constant in \(\mathbb{R}\). It is clear that \({\cal K}_{cc}(X)\) cannot form a (conventional) vector space under the above set addition and scalar multiplication. However, it can be shown to be a near vector space. The null set \(\Omega\) of \({\cal K}_{cc}(X)\) is given by
\[\Omega=\left\{K\ominus K:K\in {\cal K}_{cc}(X)\right\},\]
where
\[K\ominus K=\left\{a-b:a,b\in K\right\}.\]

Let \(\lambda\) be a continuous linear functional on \(X\). We can show that the following family
\[{\cal C}=\left\{C\in {\cal K}_{cc}(X):\sup_{\alpha\in C}\lambda(\alpha)+\inf_{\alpha\in C}\lambda(\alpha)\geq 0\right\}\]
is a convex cone in \({\cal K}_{cc}(X)\) with respect to the above set addition and scalar multiplication. We also have the inclusion \(\Omega\subseteq{\cal C}\).

Given nonempty subsets \(K_{i}\) of normed space \((X,\parallel\cdot\parallel)\) for \(i=1,\cdots ,n\), we can consider the following linear set optimization problem
\[\begin{array}{lll}
\mbox{(LSOP)} & \min & f(x_{1},\cdots ,x_{n})=x_{1}K_{1}\oplus x_{2}K_{2}\oplus\cdots\oplus x_{n}K_{n}\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n},
\end{array}\]

where \(G\) is a feasible set determined by real-valued linear constraints. The problem (LSOP) corresponds to the problem (XOP).

We consider a function \(L:{\cal K}_{cc}(X)\rightarrow\mathbb{R}^{2}\) defined by
\[L(K)=\left (-\sup_{\alpha\in K}\lambda(\alpha)-\inf_{\alpha\in K}\lambda(\alpha),\sup_{\alpha\in K}\lambda(\alpha)
+\inf_{\alpha\in K}\lambda(\alpha)\right ).\]

Using the linearity of \(\lambda\), we can show that \(L\) is additive and positively homogeneous. We also have
\begin{align*}\bar{{\cal C}} & \equiv L({\cal C})\\& =\left\{\left (-\sup_{\alpha\in C}\lambda(\alpha)-\inf_{\alpha\in C}\lambda(\alpha),\sup_{\alpha\in C}\lambda(\alpha)+\inf_{\alpha\in C}\lambda(\alpha)\right )\in\mathbb{R}^{2}:\sup_{\alpha\in C}\lambda(\alpha)+\inf_{\alpha\in C}\lambda(\alpha)\geq 0\right\}.\end{align*}

Using the linearity of \(\lambda\), we have
\[\sup_{\alpha\in A\oplus B}\lambda(\alpha)=\sup_{\alpha\in A}\lambda(\alpha)+\sup_{\beta\in B}\lambda(\beta)\mbox{ and }
\inf_{\alpha\in A\oplus B}\lambda(\alpha)=\inf_{\alpha\in A}\lambda(\alpha)+\inf_{\beta\in B}\lambda(\beta)\]

and
\[\sup_{\alpha\in K\ominus K}\lambda(\alpha)=\sup_{\alpha\in K}\lambda(\alpha)-\inf_{\alpha\in K}\lambda(\alpha)\mbox{ and }
\inf_{\alpha\in K\ominus K}\lambda(\alpha)=\inf_{\alpha\in K}\lambda(\alpha)
-\sup_{\alpha\in K}\lambda(\alpha).\]
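These two identities can be checked numerically for polytopes, where the sup and inf of a linear functional are attained at vertices. The sets and the functional below are our own illustrative choices:

```python
from itertools import product

# lam(x, y) = 2x - y, a linear functional on R^2; A and B are illustrative
# polytopes given by vertex lists.  For a linear functional, sup/inf over a
# polytope are attained at vertices, so max/min over vertex lists suffice.
lam = lambda p: 2 * p[0] - p[1]

def minkowski_sum(A, B):
    # Every vertex of the set sum A + B is a sum of a vertex of A and one of B.
    return [(a[0] + b[0], a[1] + b[1]) for a, b in product(A, B)]

def minus(K):
    return [(-p[0], -p[1]) for p in K]

A = [(0, 0), (1, 0), (0, 1)]              # a triangle
B = [(2, 2), (3, 2), (3, 4), (2, 4)]      # a rectangle

# sup over A + B equals the sum of the sups (and likewise for inf):
assert max(map(lam, minkowski_sum(A, B))) == max(map(lam, A)) + max(map(lam, B))
assert min(map(lam, minkowski_sum(A, B))) == min(map(lam, A)) + min(map(lam, B))
# sup over K - K equals (sup over K) - (inf over K):
assert max(map(lam, minkowski_sum(B, minus(B)))) == max(map(lam, B)) - min(map(lam, B))
print("identities verified")
```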

Given any \(\omega\in\Omega\), we have \(\omega=K\ominus K\) for some \(K\in{\cal K}_{cc}(X)\) and
\[L(\omega)=\left (-\sup_{\alpha\in K\ominus K}\lambda(\alpha)-
\inf_{\alpha\in K\ominus K}\lambda(\alpha),\sup_{\alpha\in K\ominus K}\lambda(\alpha)
+\inf_{\alpha\in K\ominus K}\lambda(\alpha)\right )=(0,0),\]

which is the zero element of \(\mathbb{R}^{2}\). This shows the inclusion \(\Omega\subseteq\ker L\). On the other hand, given \(K\in\ker L\), i.e. \(L(K)=(0,0)\), we have
\[\sup_{\alpha\in K}\lambda(\alpha)+\inf_{\alpha\in K}\lambda(\alpha)=0,\]
which says \(K\in{\cal C}\). Therefore, we obtain the inclusion \(\ker L\subseteq{\cal C}\); that is to say, we have
\[\Omega\subseteq\ker L\subseteq{\cal C}.\]

We write \(f({\bf x})\equiv f(x_{1},\cdots ,x_{n})\), which is a set in \({\cal K}_{cc}(X)\) for any fixed \({\bf x}\). Then, we have
\[(L\circ f)\left (x_{1},\cdots ,x_{n}\right )=\left (-\sup_{\alpha\in f({\bf x})}\lambda(\alpha)-
\inf_{\alpha\in f({\bf x})}\lambda(\alpha),\sup_{\alpha\in f({\bf x})}\lambda(\alpha)+\inf_{\alpha\in f({\bf x})}\lambda(\alpha)\right ).\]

Since \(x_{i}\geq 0\) for all \(i=1,\cdots ,n\), we can obtain
\[\sup_{\alpha\in f({\bf x})}\lambda(\alpha)=\left [\sup_{\alpha\in K_{1}}\lambda(\alpha)
\right ]x_{1}+\cdots +\left [\sup_{\alpha\in K_{n}}\lambda(\alpha)\right ]x_{n}\]

and
\[\inf_{\alpha\in f({\bf x})}\lambda(\alpha)=\left [\inf_{\alpha\in K_{1}}\lambda(\alpha)
\right ]x_{1}+\cdots +\left [\inf_{\alpha\in K_{n}}\lambda(\alpha)\right ]x_{n}.\]

The composition function \(L\circ f\) is given by
\[(L\circ f)\left (x_{1},\cdots ,x_{n}\right )=\left (g_{1}\left (x_{1},\cdots ,x_{n}\right ),
g_{2}\left (x_{1},\cdots ,x_{n}\right )\right )\in\mathbb{R}^{2},\]

where
\[g_{1}(x_{1},\cdots ,x_{n})=-\sup_{\alpha\in f({\bf x})}\lambda(\alpha)-
\inf_{\alpha\in f({\bf x})}\lambda(\alpha)\]

and
\[g_{2}(x_{1},\cdots ,x_{n})=\sup_{\alpha\in f({\bf x})}\lambda(\alpha)+
\inf_{\alpha\in f({\bf x})}\lambda(\alpha)=-g_{1}(x_{1},\cdots ,x_{n}).\]

Using this composition function \(L\circ f\), we can set up the problem (VOP), which we then solve by the scalarization technique as follows.

We take the linear functional \(\phi^{\circ}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) given by
\[\phi^{\circ} (x,y)=\frac{1}{2}x+\frac{3}{2}y+k,\]
where \(k>0\) is a constant. Since
\[\phi^{\circ}\left (-\sup_{\alpha\in C}\lambda(\alpha)-\inf_{\alpha\in C}\lambda(\alpha),
\sup_{\alpha\in C}\lambda(\alpha)+\inf_{\alpha\in C}\lambda(\alpha)\right )
=\sup_{\alpha\in C}\lambda(\alpha)+\inf_{\alpha\in C}\lambda(\alpha)+k>0,\]

we see \(\phi^{\circ} (\bar{c})>0\) for all \(\bar{c}\in\bar{{\cal C}}\) and \(\phi^{\circ}\in\bar{{\cal C}}_{V'}^{\circ}\), where \(V=\mathbb{R}^{2}\). Given any \(K\in{\cal K}_{cc}(X)\), we also have
\begin{align*}\left (\phi^{\circ}\circ L\right )\left (K\right )& = \phi^{\circ}\left (-\sup_{\alpha\in K}\lambda(\alpha)-\inf_{\alpha\in K}\lambda(\alpha),\sup_{\alpha\in K}\lambda(\alpha)+\inf_{\alpha\in K}\lambda(\alpha)\right )\\ & =\frac{1}{2}\left (-\sup_{\alpha\in K}\lambda(\alpha)-\inf_{\alpha\in K}\lambda(\alpha)\right )+\frac{3}{2}\left (\sup_{\alpha\in K}\lambda(\alpha)+\inf_{\alpha\in K}\lambda(\alpha)\right )+k\\ & =\sup_{\alpha\in K}\lambda(\alpha)+\inf_{\alpha\in K}\lambda(\alpha)+k.\end{align*}

Now, we consider the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}^{\circ}) & \min & \phi^{\circ}\left (\left (L\circ f)(x_{1},\cdots ,x_{n}\right )\right )\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n}.
\end{array}\]

The objective function is given by
\begin{align*}& \phi^{\circ}\left (\left (L\circ f)(x_{1},\cdots ,x_{n}\right )\right )\\ & \quad =\phi^{\circ}\left (g_{1}\left (x_{1},\cdots ,x_{n}\right ),g_{2}\left (x_{1},\cdots ,x_{n}\right )\right )\\ & \quad =\frac{1}{2}\cdot g_{1}(x_{1},\cdots ,x_{n})+\frac{3}{2}\cdot g_{2}(x_{1},\cdots ,x_{n})+k=g_{2}(x_{1},\cdots ,x_{n})+k\\ & \quad =\left [\sup_{\alpha\in K_{1}}\lambda(\alpha)+\inf_{\alpha\in K_{1}}\lambda(\alpha) \right ]x_{1}+\cdots +\left [\sup_{\alpha\in K_{n}}\lambda(\alpha)+\inf_{\alpha\in K_{n}}\lambda(\alpha)\right ]x_{n}+k.\end{align*}
Since \({\cal K}_{cc}(X)\) has a unique zero element \(\{\theta_{X}\}\) with \(\{\theta_{X}\}\in{\cal C}\), part (ii) of Theorem \ref{p*7} says that if \({\bf x}^{*}=(x_{1}^{*},\cdots ,x_{n}^{*})\) is an optimal solution of the linear programming problem \((\mbox{LP}^{\circ})\), then \({\bf x}^{*}\) is both a minimizer and an H-minimizer of problem (LSOP).

We see that solving the linear programming problem \((\mbox{LP}^{\circ})\) is equivalent to solving the following linear programming problem
\[\begin{array}{lll}
(\mbox{LP}) & \min & {\displaystyle \left [\sup_{\alpha\in K_{1}}\lambda(\alpha)
+\inf_{\alpha\in K_{1}}\lambda(\alpha)\right ]x_{1}+\cdots+\left [\sup_{\alpha\in K_{n}}\lambda(\alpha)
+\inf_{\alpha\in K_{n}}\lambda(\alpha)\right ]x_{n}}\\
& \mbox{subject to} & {\bf x}=(x_{1},\cdots ,x_{n})\in G\subset\mathbb{R}^{n}\mbox{ and }{\bf x}\in\mathbb{R}_{+}^{n},
\end{array}\]

where the positive constant \(k\) has been dropped, since adding a constant to the objective does not change the optimal solutions. In other words, we can simply solve the conventional linear programming problem (LP) to obtain a minimizer and an H-minimizer of the linear set optimization problem (LSOP).
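As in the interval case, a toy instance makes the reduction concrete. The polytopes \(K_{i}\), the functional \(\lambda\), and the feasible box below are invented for illustration; the sup and inf over each polytope are taken at its vertices:

```python
# Toy (LSOP) data, invented for illustration: lam(x, y) = x + y and two
# polytopes K1, K2 in R^2 given by vertex lists.
lam = lambda p: p[0] + p[1]

K1 = [(0, 0), (2, 0), (0, 2)]      # sup lam = 2, inf lam = 0  -> coefficient 2
K2 = [(-3, 0), (-1, 0), (-2, 2)]   # sup lam = 0, inf lam = -3 -> coefficient -3

# Coefficients of (LP): sup lam + inf lam over each K_i (attained at vertices).
coef = [max(map(lam, K)) + min(map(lam, K)) for K in (K1, K2)]
print(coef)    # [2, -3]

# Feasible box 0 <= x1, x2 <= 1; enumerate its vertices in place of a solver.
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
x_star = min(vertices, key=lambda x: coef[0] * x[0] + coef[1] * x[1])
print(x_star)  # (0, 1)
```

The vertex returned here is then a minimizer and an H-minimizer of this (LSOP) instance.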

7 Conclusions.

An extended formulation of the vector optimization problem based on a near vector space has been proposed in this paper, where the concept of near vector space generalizes the (conventional) vector space. A near vector space still carries the vector structure, so the concept of convex cone remains available. In particular, a pointed-like convex cone can be defined that induces an antisymmetric-like partial ordering via the null set of the near vector space.

Two partial orderings, based on the Hukuhara difference and on the vector subtraction, have been proposed to define the concepts of H-minimal element and minimal element, respectively. After formulating the vector optimization problem (XOP), the concepts of H-minimizer and minimizer of problem (XOP) have also been defined using the H-minimal and minimal elements of the set of all objective values, respectively.

In order to solve the vector optimization problem (XOP), using the order-preserving properties presented in Propositions \ref{lsvp5} and \ref{lsvp*5}, a scalarization technique has been designed to obtain the H-minimizer and minimizer of problem (XOP) by referring to Theorems \ref{p7} and \ref{p*7}. The difference between these theorems is that, in Theorem \ref{p7}, \(u^{*}\) must be assumed to be the unique optimal solution of the scalar optimization problem (SOP), whereas uniqueness is not needed in Theorem \ref{p*7} under different assumptions.

The scalarization technique adopted in this paper is similar to the scalarization of the finite-dimensional multiobjective optimization problem. Suppose that \(f_{1},\cdots,f_{m}\) are the objective functions of such a multiobjective optimization problem. Then its corresponding scalarized problem minimizes the weighted sum of these objective functions given by
\[w_{1}f_{1}+\cdots +w_{m}f_{m}.\]
Under suitable conditions, an optimal solution of this scalarized problem is also a Pareto optimal solution of the original multiobjective optimization problem. In this case, different weighting coefficients \(w_{1},\cdots,w_{m}\) generate different Pareto optimal solutions; in other words, there can be infinitely many Pareto optimal solutions, and existing algorithms can be applied to search for the “best” one. In this paper, the scalarized problem (SOP) clearly depends on the linear functional \(\phi\). Therefore, the H-minimizer and minimizer of problem (XOP) obtained from Theorems \ref{p7} and \ref{p*7} also depend on \(\phi\); different linear functionals \(\phi\) generate different H-minimizers and minimizers of problem (XOP). Future research may aim to design an efficient algorithm to find the “best” H-minimizer and minimizer of problem (XOP).
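A minimal illustration of this weighted-sum behaviour, on a toy biobjective problem of our own: minimizing \((f_{1},f_{2})=(x,1-x)\) over \([0,1]\), every feasible point is Pareto optimal, and the weights decide which one the scalarized problem returns.

```python
# Toy biobjective problem (invented example): min (f1, f2) = (x, 1 - x)
# over x in [0, 1].  The weighted-sum scalarization w1*f1 + w2*f2 is
# minimized over a grid; different weights select different Pareto points.
def solve(w1, w2, grid=101):
    xs = [i / (grid - 1) for i in range(grid)]
    return min(xs, key=lambda x: w1 * x + w2 * (1 - x))

print(solve(2, 1))   # 0.0 (heavier weight on f1)
print(solve(1, 2))   # 1.0 (heavier weight on f2)
```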

On the other hand, studying the optimality conditions of the vector optimization problem (XOP) is another direction for future research. For example, deriving Karush-Kuhn-Tucker optimality conditions for problem (XOP) is an interesting topic.

References.

[1] Alonso, M. and Rodríguez-Marín, L., Set-Relations and Optimality Conditions in Set-Valued Maps, Nonlinear Analysis 63 (2005) 1167-1179.

[2] Alonso, M. and Rodríguez-Marín, L., Optimality Conditions for Set-Valued Maps with Set Optimization, Nonlinear Analysis 70 (2009) 3057-3064.

[3] Assev, S.M., Quasilinear Operators and Their Application in the Theory of Multivalued Mappings, Proceedings of the Steklov Institute of Mathematics 167 (1986) 23-52.

[4] Chalco-Cano, Y., Lodwick, W.A. and Rufián-Lizana, A., Optimality Conditions of Type KKT for Optimization Problem with Interval-Valued Objective Function via Generalized Derivative, Fuzzy Optimization and Decision Making 12 (2013) 305-322.

[5] Chen, G.-Y. and Yang, X.-Q., Characterizations of Variable Domination Structures via Nonlinear Scalarization, Journal of Optimization Theory and Applications 112 (2002) 97-110.

[6] Hernández, E. and Rodríguez-Marín, L., Existence Theorems for Set Optimization Problems, Nonlinear Analysis 67 (2007) 1276-1736.

[7] Hernández, E. and Rodríguez-Marín, L., Nonconvex Scalarization in Set Optimization with Set-Valued Maps, Journal of Mathematical Analysis and Applications 325 (2007) 1-18.

[8] Jahn, J., Scalarization in Vector Optimization, Mathematical Programming 29 (1984) 203-218.

[9] Jahn, J., Vectorization in Set Optimization, Journal of Optimization Theory and Applications 167 (2015) 783-795.

[10] Jahn, J., Directional Derivatives in Set Optimization with the Set Less Order Relation, Taiwanese Journal of Mathematics 19 (2015) 737-757.

[11] Jahn, J., Karush-Kuhn-Tucker Conditions in Set Optimization, Journal of Optimization Theory and Applications 172 (2017) 707-725.

[12] Jahn, J. and Ha, T.X.D., New Order Relations in Set Optimization, Journal of Optimization Theory and Applications 148 (2011) 209-236.

[13] Jayswal, A., Stancu-Minasian, I. and Ahmad, I., On Sufficiency and Duality for a Class of Interval-Valued Programming Problems, Applied Mathematics and Computation 218 (2011) 4119-4127.

[14] Kuroiwa, D., Convexity for Set-Valued Maps, Applied Mathematics Letters 9 (1996) 97-101.

[15] Kuroiwa, D., On Set-Valued Optimization, Nonlinear Analysis 47 (2001) 1395-1400.

[16] Kuroiwa, D., Existence Theorems of Set Optimization with Set-Valued Maps, Journal of Information and Optimization Sciences 24 (2003) 73-84.

[17] Kuroiwa, D., Tanaka, T. and Ha, T.X.D., On Cone Convexity of Set-Valued Maps, Nonlinear Analysis 30 (1997) 1487-1496.

[18] Löhne, A. and Schrage, C., An Algorithm to Solve Polyhedral Convex Set Optimization Problems, Optimization 62 (2013) 131-141.

[19] Long, X.J. and Peng, J.W., Generalized B-Well-Posedness for Set Optimization Problems, Journal of Optimization Theory and Applications 157 (2013) 612-623.

[20] Luc, D.T., Scalarization of Vector Optimization Problems, Journal of Optimization Theory and Applications 55 (1987) 85-102.

[21] Lupulescu, V. and O'Regan, D., A New Derivative Concept for Set-Valued and Fuzzy-Valued Functions: Differential and Integral Calculus in Quasilinear Metric Spaces, Fuzzy Sets and Systems 404 (2021) 75-110.

[22] Miglierina, E. and Molho, E., Scalarization and Stability in Vector Optimization, Journal of Optimization Theory and Applications 114 (2002) 657-670.

[23] Miglierina, E., Molho, E. and Rocca, M., Well-Posedness and Scalarization in Vector Optimization, Journal of Optimization Theory and Applications 126 (2005) 391-409.

[24] Osuna-Gómez, R., Chalco-Cano, Y., Hernández-Jiménez, B. and Ruiz-Garzón, G., Optimality Conditions for Generalized Differentiable Interval-Valued Functions, Information Sciences 321 (2015) 136-146.

[25] Pascoletti, A. and Serafini, P., Scalarizing Vector Optimization Problems, Journal of Optimization Theory and Applications 42 (1984) 499-524.

[26] Rubinov, A.M. and Gasimov, R.N., Scalarization and Nonlinear Scalar Duality for Vector Optimization with Preferences that are not Necessarily a Pre-Order Relation, Journal of Global Optimization 29 (2004) 455-477.

[27] Ruiz, F., Luque, M., Miguel, F. and del Mar Muñoz, M., An Additive Achievement Scalarizing Function for Multiobjective Programming Problems, European Journal of Operational Research 188 (2008) 683-694.

[28] Truong, X.D.H., Cones Admitting Strictly Positive Functionals and Scalarization of Some Vector Optimization Problems, Journal of Optimization Theory and Applications 93 (1997) 355-372.

[29] Wang, S. and Li, Z., Scalarization and Lagrange Duality in Multiobjective Optimization, Optimization 26 (1992) 315-324.

[30] Wu, H.-C., The Karush-Kuhn-Tucker Optimality Conditions in an Optimization Problem with Interval-Valued Objective Function, European Journal of Operational Research 176 (2007) 46-59.

[31] Wu, H.-C., On Interval-Valued Nonlinear Programming Problems, Journal of Mathematical Analysis and Applications 338 (2008) 299-316.

[32] Wu, H.-C., Wolfe Duality for Interval-Valued Optimization, Journal of Optimization Theory and Applications 138 (2008) 497-509.

[33] Wu, H.-C., Duality Theory for Optimization Problems with Interval-Valued Objective Functions, Journal of Optimization Theory and Applications 144 (2010) 615-628.

[34] Wu, H.-C., Solving the Interval-Valued Optimization Problems Based on the Concept of Null Set, Journal of Industrial and Management Optimization 14 (3) (2018) 1157-1178.

[35] Wu, H.-C., Solving Set Optimization Problems Based on the Concept of Null Set, Journal of Mathematical Analysis and Applications 472 (2019) 1741-1761.

[36] Zheng, X.Y., Scalarization of Henig Proper Efficient Points in a Normed Space, Journal of Optimization Theory and Applications 105 (2000) 233-247.

 

 
