Abstract
In this paper, we propose a new infeasible interior-point method (IIPM) for the \(P_{*}(\kappa )\)-linear complementarity problem (\(P_{*}(\kappa )\)-LCP) based on a new class of kernel functions and using full-Newton steps. Each iteration of the algorithm consists of one feasibility step and a few centering steps. The feasibility step is defined by the newly proposed kernel functions, while the centering step is determined by Newton's method. The new class of kernel functions includes the logarithmic kernel function of Roos (SIAM J Optim 16:1110–1136, 2006) and the trigonometric kernel functions of Moslemi and Kheirfam (Optim Lett 13:127–157, 2019), Kheirfam and Haghighi (Commun Comb Optim 3:51–70, 2018), and Fathi-Hafshejani et al. (J Appl Math Comput 48:111–128, 2015) as special cases. We show that the proposed algorithm achieves the best known iteration complexity for \(P_*(\kappa )\)-LCP among such methods, and we present some numerical results.
1 Introduction
In this paper, we consider the following \(P_*(\kappa )\)-LCP: Find vectors \(x,s \in {\textbf{R}}^{n}\) such that
where \(M\in {\textbf{R}}^{n\times n}\) is a \(P_*(\kappa )\)-matrix with \(\kappa \ge 0\), \(~n\ge 2\), \(q\in {\textbf{R}}^{n}\), and xs denotes the componentwise product of x and s.
The notion of \(P_{*}(\kappa )\)-matrix was introduced by Kojima et al. [1]. For \(\kappa \ge 0,\) a matrix \(M\in {\textbf{R}}^{n\times n}\) is called a \(P_*(\kappa )\)-matrix if
where
For a \(P_*(\kappa )\)-matrix M, there is a smallest \({\hat{\kappa }}\ge 0\) for which (2) holds; this value is called the handicap of M [2]. Note that if \(\kappa = 0\), then (1) is a monotone LCP. For \(x,s\in {\textbf{R}}^{n}\) satisfying \(-Mx+s=q\) and \((x,s) \ge 0\), we say that (x, s) is a feasible solution of (1). A feasible solution (x, s) is called a solution of (1) if \(xs=0\).
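For concreteness, the defining inequality (2) is, in the standard form of [1] (the display is not reproduced above, so this is an assumption consistent with the surrounding text), \((1+4\kappa )\sum _{i\in I_{+}(x)}x_{i}(Mx)_{i}+\sum _{i\in I_{-}(x)}x_{i}(Mx)_{i}\ge 0\) for all \(x\in {\textbf{R}}^{n}\), where \(I_{+}(x):=\{i: x_{i}(Mx)_{i}>0\}\) and \(I_{-}(x):=\{i: x_{i}(Mx)_{i}<0\}\). The following Python sketch checks this inequality on randomly sampled vectors; it is only a necessary check, since certifying the inequality for all x (i.e., computing the handicap) is computationally hard [41].

```python
import numpy as np

def find_p_star_violation(M, kappa, n_samples=10000, seed=0):
    """Search for a vector x violating the P_*(kappa) inequality
       (1 + 4*kappa) * sum_{i in I+} x_i (Mx)_i + sum_{i in I-} x_i (Mx)_i >= 0.
    Returns a violating x if one is found among the samples, else None.
    Passing this sampled test does NOT certify that M is a P_*(kappa)-matrix."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    for _ in range(n_samples):
        x = rng.standard_normal(n)
        y = x * (M @ x)                     # componentwise products x_i (Mx)_i
        lhs = (1 + 4 * kappa) * y[y > 0].sum() + y[y < 0].sum()
        if lhs < -1e-12:
            return x                        # witness of violation
    return None

# Example: a positive semidefinite matrix is P_*(0) (the monotone case).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
print(find_p_star_violation(M, kappa=0.0) is None)   # expected: True
```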
After the appearance of Karmarkar's landmark paper [3] in 1984 for linear optimization (LO), interior-point methods (IPMs) have become an active research field and have reshaped both the numerical and theoretical treatment of LO. They have polynomial complexity together with promising performance in practice. Since then, they have been widely extended to solve the linear complementarity problem (LCP), semidefinite optimization (SDO), second-order cone optimization (SOCO), symmetric optimization (SO), etc.
IPMs are classified into two categories: feasible IPMs (FIPMs) and infeasible IPMs (IIPMs). A feasible IPM (FIPM) assumes a starting point that satisfies the equality constraint of (1) and lies strictly inside the region defined by the inequality constraints, and it maintains feasibility during the solution process. FIPMs have been studied extensively for LO [4, 5], LCP [6], \(P_*(\kappa )\)-LCP [7,8,9], the \(P_*(\kappa )\)-nonlinear complementarity problem [10], SDO [11,12,13], SOCO [14], SO [15], etc.
However, in practice, obtaining such a feasible starting point is difficult for most problems. IIPMs start from an arbitrary positive point, and feasibility is attained as optimality is approached. Many IIPMs have been proposed for LO [16,17,18], monotone LCP [19], \(P_*(\kappa )\)-horizontal LCP [20], SDO [21], SOCO [22], SO [23], etc.
LCP has many applications in mathematical programming and equilibrium problems, e.g., linear quadratic programming, finding Nash-equilibrium in bimatrix games, economics with institutional restrictions upon prices, contact problems with friction, optimal stopping in Markov chains, circuit simulation, free boundary problems, and calculating the interval hull of linear systems of interval equations [24, 25].
Kojima et al. [1] first showed the existence of the central path for any \(P_*(\kappa )\)-LCP and generalized the primal-dual interior-point method (IPM) for LO to \(P_*(\kappa )\)-LCP with polynomial iteration complexity \({\mathcal {O}}\big ((1+\kappa )\sqrt{n}L\big )\), where L is the input size of the problem. Darvay et al. [7] introduced a feasible corrector-predictor interior-point algorithm for \(P_*(\kappa )\)-LCP by using an algebraically equivalent transformation of the nonlinear equation of the system and obtained the polynomial complexity \({\mathcal{O}}\left( {\left( {1 + 2\kappa } \right)\sqrt n \log \frac{{9n\mu ^{o} }}{{8\epsilon }}} \right)\). This is still the best known complexity for \(P_*(\kappa )\)-LCP among FIPMs.
Roos [26] proposed a new IIPM for LO based on so-called perturbed subproblems. These methods for LO were generalized to other problem classes by Mansouri et al. [27, 28], Kheirfam [29], and Rigó and Darvay [30]. In this paper, we generalize the full-Newton step IIPM for LO [26] to \(P_*(\kappa )\)-LCP with new search directions and proximity measures. The new method uses only full-Newton steps, so no line search is required. Moreover, we propose a new class of kernel functions and analyze the resulting IIPMs. Kernel functions define the new search directions and are used to measure the distance to the central path. The new class of kernel functions includes the classical logarithmic kernel function [26], trigonometric kernel functions [31,32,33], and some self-regular kernel functions [34] as special cases. Thus our method provides new search directions and generalizes previous results for \(P_*(\kappa )\)-LCP. In addition, the algorithm attains the best known iteration bound \({\mathcal{O}} \left( {n\left( {1 + 2\kappa } \right)^{2} \log \frac{{\max \left\{ {(x^{0} )^{T} s^{0} ,~\left\| {r^{0} } \right\|} \right\}}}{\epsilon }} \right)\), which improves on the \({\mathcal{O}}\left( {n\left( {1 + 2\kappa } \right)^{3} \log \frac{{\max \left\{ {(x^{0} )^{T} s^{0} ,~\left\| {r^{0} } \right\|} \right\}}}{\epsilon }} \right)\) iteration complexity obtained in [32], where \((x^0,s^0)\) is the initial starting vector, \(r^0:=s^0-q-Mx^0\), and \(\epsilon >0\) is the accuracy parameter. We also present numerical results for some examples. As mentioned earlier, the iteration complexity of FIPMs for \(P_*(\kappa )\)-LCP is better than that of IIPMs. However, FIPMs require solving an additional problem to find a feasible starting point, whereas IIPMs do not; in many cases it is therefore advantageous to use IIPMs.
This paper is organized as follows. In Sect. 2, we introduce generic IPMs and IIPMs for \(P_*(\kappa )\)-LCP and some basic lemmas for later use. In Sect. 3, we define a new class of kernel functions with some examples and present the main algorithm. In Sect. 4, we derive some important properties of the new kernel functions, define the feasibility step, and derive the complexity bound. In Sect. 5, we present numerical results for some examples, and we give concluding remarks in Sect. 6.
We use the following notation throughout the paper: For \(x\in {\textbf{R}}^{n}\), \(\Vert x \Vert _1:= \sum _{i=1}^{n} |x_{i} |\), \(\Vert x \Vert _2:= \bigl (\sum _{i=1}^{n} |x_i |^2 \bigr )^{\frac{1}{2}}\), and \(\Vert x \Vert _\infty :=\max _{1\le i \le n} |x_i |\). X and S are the diagonal matrices formed from the vectors x and s, i.e., \(X:=\)diag(x) and \(S:=\)diag(s). \(\frac{x}{s}\) denotes the componentwise division of the vectors x and s, and \(x^Ts\) is the scalar product of x and s. e is the n-dimensional vector of ones, and I is the \(n\times n\) identity matrix. For \(a\in {\textbf{R}}\), \(\Big \lceil a \Big \rceil :=\min \{n\in {\textbf{Z}}~\vert ~n\ge a\}\), where \({\textbf{Z}}\) is the set of integers.
2 Preliminaries
In this section, we introduce generic IPMs and IIPMs for \(P_*(\kappa )\)-LCP and some basic lemmas.
Proposition 1
[1] Let \(M\in {\textbf{R}}^{n\times n}\) be a \(P_*(\kappa )\)-matrix and \(x,s>0\). Then
is a nonsingular matrix for any positive diagonal matrices \(X,S\in {\textbf{R}}^{n\times n}.\)
Corollary 1
Let \(M\in {\textbf{R}}^{n\times n}\) be a \(P_*(\kappa )\)-matrix and \(x,s>0\). Then for all \(a\in {\textbf{R}}^{n}\) the system
has a unique solution \((\Delta x,\Delta s)\).
Without loss of generality, we assume that the \(P_*(\kappa )\)-LCP satisfies the interior-point condition (IPC), i.e., there exists a pair \((x^{0},s^{0})>0\) such that \(s^{0} = Mx^{0}+q\). In the case of IIPMs, we call the pair \((x,s)\ge 0\) an \(\epsilon\)-approximate solution of \(P_*(\kappa )\)-LCP (1) if
For an \(\epsilon\)-approximate solution for (1), we perturb the complementarity condition, i.e., the second equation in (1), as follows:
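In the notation above, this perturbed system is presumably the standard \(\mu\)-centering system (the display is reconstructed from context, consistent with the surrounding text):

\[-Mx+s=q,\qquad xs=\mu e,\qquad x>0,\ s>0,\]

for a parameter \(\mu >0\).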
Since M is a \(P_*(\kappa )\)-matrix and the IPC holds, by Lemma 4.3 in [1], the system (3) has a unique solution for each \(\mu >0\). We denote by \((x(\mu ), s(\mu ))\) the solution of (3) for given \(\mu >0\) and call it the \(\mu\)-center. The solution set \(\left\{ {\left( {x\left( \mu \right),s\left( \mu \right)} \right)~|~\mu > 0} \right\}\) is called the central path of (1). Note that \((x(\mu ), s(\mu ))\) approaches a solution (x, s) of (1) as \(\mu \rightarrow 0\) [1]. Applying Newton's method to the system (3) yields the system:
Since M is a \(P_*(\kappa )\)-matrix, the system (4) uniquely defines \((\Delta x, \Delta s)\) for any \(x>0\) and \(s>0\). We define the following notations:
Then we rewrite the system (4) as follows:
where \({\overline{M}}:=DMD\) and \(D:=\) diag\(\left( {\sqrt {\frac{x}{s}} } \right)\). Since M is a \(P_*(\kappa )\)-matrix, \({\overline{M}}\) is also a \(P_*(\kappa )\)-matrix.
The search directions \(d_{x}\) and \(d_{s}\) are obtained by solving (6). From (5) we obtain \(\Delta x\) and \(\Delta s\). The new iterate is obtained by taking a full-Newton step according to
For the analysis of the algorithm, we define a norm-based proximity measure as follows:
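For illustration, a minimal Python sketch of the centering (full-Newton) step and the proximity measure follows. It assumes that system (4) has the standard form \(-M\Delta x+\Delta s=0\), \(s\Delta x+x\Delta s=\mu e-xs\), and that the norm-based measure (8) is \(\delta (x,s;\mu )=\tfrac{1}{2}\Vert v-v^{-1}\Vert\) with \(v=\sqrt{xs/\mu }\), as in [35, 36]; both are assumptions consistent with the surrounding text, not a reproduction of the paper's displays.

```python
import numpy as np

def centering_step(M, q, x, s, mu):
    """One full-Newton (centering) step: solve
         -M dx + ds = 0,   s*dx + x*ds = mu*e - x*s   (assumed form of (4)),
       and return the full-Newton iterate (x + dx, s + ds) as in (7).
       The argument q is unused here; it is kept for a uniform interface."""
    n = len(x)
    A = np.block([[-M, np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(A, rhs)
    return x + d[:n], s + d[n:]

def proximity(x, s, mu):
    """Norm-based proximity measure, assumed to be
       delta(x, s; mu) = 0.5 * || v - 1/v ||_2   with   v = sqrt(x*s/mu)."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)
```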
We recall some results which are needed for the analysis of the algorithm.
Lemma 1
(Lemma II.60 in [35]) Let v and \(\delta :=\delta (x,s;\mu )\) be defined in (5) and (8), respectively and \(\rho (\delta ):=\delta +\sqrt{1+\delta ^2}\). Then
Lemma 2
(Theorem 3.1 in [36]) Let \(\delta :=\delta (x,s;\mu )<\frac{1}{\sqrt{1+4\kappa }}\). Then the full-Newton step is strictly feasible.
Lemma 3
(Theorem 3.2 in [36]) Let \(\delta :=\delta (x,s;\mu )<\frac{1}{\sqrt{1+4\kappa }}\). Then
where (\(x^{+},s^{+}\)) are defined in (7).
Corollary 2
If \(\delta :=\delta (x,s;\mu )\le \frac{1}{2(1+2\kappa )}\) and \(\kappa \ge 0\), then
which shows the local quadratic convergence of the full-Newton step.
Proof
Since \(\kappa \ge 0\), \(\delta =\delta (x,s;\mu )\le \frac{1}{2(1+2\kappa )}<\frac{1}{\sqrt{2(1+2\kappa )}}<\frac{1}{\sqrt{1+4\kappa }}\). From Lemma 3, we have
Thus, the proof is complete. \(\square\)
Corollary 2 implies that the centering step is quadratically convergent.
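As a small illustration of this quadratic convergence, the snippet below runs a few centering steps (using the centering_step and proximity sketches above) on a \(2\times 2\) monotone instance, i.e. a \(P_*(0)\)-matrix; the printed proximity values decrease roughly quadratically.

```python
import numpy as np

# Usage example: quadratic decay of delta under centering steps.
# (x, s) = (e, e) is feasible by construction; mu = 1.2 puts it off-center.
M = np.array([[2.0, 1.0], [1.0, 2.0]])      # positive semidefinite => P_*(0)
x = np.ones(2)
s = np.ones(2)
q = s - M @ x                               # ensures -M x + s = q
mu = 1.2
for _ in range(4):
    print(proximity(x, s, mu))              # roughly squares at each step
    x, s = centering_step(M, q, x, s, mu)
```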
In the remainder of this section, we introduce IIPMs for \(P_*(\kappa )\)-LCP. For the generic IIPMs, the initial starting vector \((x^0, s^0)\) is defined as follows:
where \(\rho _{p}\) and \(\rho _{d}\) are positive numbers such that
and \((x^{*},s^{*})\) is the solution of (1). We define by \(r^{0}:=s^{0}-q-Mx^{0}\) the residual vector at the initial starting vector. In general, the initial iterate is infeasible, i.e., \(r^0 \ne 0\). Thus we generate a sequence of perturbed problems for which the initial point is feasible. Accordingly, we define the following sequence of perturbed problems, for any \(\nu\) with \(0<\nu \le 1\),
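Consistent with the definition of \(r^{0}\) and the statement below that \((x^{0},s^{0})\) is strictly feasible for \((P_{1})\), the perturbed problem \((P_{\nu })\) is presumably the LCP obtained by shifting the right-hand side by a fraction of the initial residual: find \(x,s\ge 0\) such that

\[-Mx+s=q+\nu r^{0},\qquad xs=0,\]

so that strict feasibility for \((P_{\nu })\) means \(x,s>0\) with \(-Mx+s=q+\nu r^{0}\).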
Then the initial starting vector is strictly feasible for (\(P_{\nu }\)) when \(\nu =1\). This means that if \(\nu = 1\), then (\(P_{\nu }\)) satisfies the IPC.
Lemma 4
(Lemma 3.1 in [37]) Let the problem (1) be feasible and let \(0<\nu \le 1\). Then, the perturbed problem (\(P_{\nu }\)) satisfies the IPC.
Assume that problem (1) is feasible and let \(0<\nu \le 1\). By Lemma 4, (\(P_{\nu }\)) satisfies the IPC and its central path exists. By Lemma 4.3 in [1], for every \(\mu >0\),
has a unique solution \(\left( {x\left( {\mu ,\nu } \right),s\left( {\mu ,\nu } \right)} \right)\).
The IIPMs for \(P_*(\kappa )\)-LCP solve the perturbed problem (\(P_{\nu }\)) through one feasibility step and a few centering steps. The detailed iteration scheme is as follows: Let \(\mu ^0\) be defined in (9) and \(\tau >0\). Suppose that for some \(\mu \in (0,\mu ^{0}]\), (x, s) is strictly feasible for the perturbed problem (\(P_{\nu }\)). This is certainly true at the initial iteration when \(\mu =\mu ^0\). Then we repeatedly perform one feasibility step followed by a few centering steps. In the feasibility step, we generate a point \((x^{f},s^{f})\) that is feasible for \((P_{\nu ^{+}})\), where \(\nu ^{+}:=(1-\theta )\nu\) and \(\mu ^{+}:=(1-\theta )\mu\), and that belongs to the quadratic convergence region with respect to the \(\mu ^{+}\)-center of \((P_{\nu ^{+}})\). After the feasibility step, we perform a few centering steps in order to obtain an iterate \((x^{+},s^{+})\) which satisfies \((x^{+})^{T}s^{+}=n\mu ^+\) and \(\delta (x^{+},s^{+};\mu ^{+})\le \tau\). Note that by (9) with \(\nu ^0=1\), we have
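To make the iteration scheme concrete, the following Python sketch outlines the main loop. It assumes, as in Example 1, the initial point \(x^0=\rho _{p}e\), \(s^0=\rho _{d}e\), \(\mu ^0=\rho _{p}\rho _{d}\), and the parameter choices of Theorem 1; the routines feasibility_step, centering_step, and proximity are placeholders for the pieces sketched elsewhere in this section. This is an illustrative outline, not the verbatim Algorithm 1.

```python
import numpy as np

def iipm_outline(M, q, rho_p, rho_d, kappa,
                 feasibility_step, centering_step, proximity, eps=1e-4):
    """Outline of the main iteration: one feasibility step, then centering
    steps until delta <= tau, while mu, nu and the residual shrink by 1-theta."""
    n = len(q)
    x = rho_p * np.ones(n)                 # assumed initial point x0 = rho_p * e
    s = rho_d * np.ones(n)                 # assumed initial point s0 = rho_d * e
    mu, nu = rho_p * rho_d, 1.0            # mu0 = rho_p * rho_d, nu0 = 1
    r0 = s - q - M @ x                     # initial residual r0 = s0 - q - M x0
    tau = 1.0 / (16 * (1 + 2 * kappa))     # parameter choices of Theorem 1
    theta = 1.0 / (106 * n * (1 + 2 * kappa) ** 2)
    while max(x @ s, np.linalg.norm(s - q - M @ x)) > eps:
        # feasibility step: target the perturbed problem with nu+ = (1-theta)*nu
        x, s = feasibility_step(M, q, x, s, mu, theta, nu, r0)
        mu, nu = (1 - theta) * mu, (1 - theta) * nu
        # a few centering steps until the iterate is close to the mu-center
        while proximity(x, s, mu) > tau:
            x, s = centering_step(M, q, x, s, mu)
    return x, s
```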
Now we consider the feasibility step more precisely. Suppose that (x, s) is strictly feasible for (\(P_{\nu }\)). In the feasibility step, we need search directions \(\Delta ^{f}x\) and \(\Delta ^{f}s\) such that
is feasible for (\(P_{\nu }\)). That is,
Since M is a \(P_*(\kappa )\)-matrix, by Corollary 1, the system (14) has a unique solution \((\Delta ^{f} x, \Delta ^{f} s)\). Then we define the scaled search directions
where v is defined as in (5) and \(\Delta ^{f}x\) and \(\Delta ^{f}s\) are defined in (13). The system (14) can be expressed as follows:
Note that the right-hand side of the second equation in (16) coincides with the negative gradient of the logarithmic barrier function \(\Psi _{l}(v)\), i.e.,
where \(\Psi _{l}(v):=\sum _{i=1}^{n}\psi _{l}(v_{i})\) and \(\psi _{l}(t):= \frac{t^{2} - 1}{2}-\text {log}~t\), for \(t>0\).
Solving (16) gives \(d^{f}_{x}\) and \(d^{f}_{s}\). Then \(\Delta ^{f}x\) and \(\Delta ^{f}s\) are computed via (15), and the new iterate is obtained by (13). Thus we obtain a new iterate \((x^{f},s^{f})\) satisfying the first equation of (\(P_{\nu }\)) with \(\nu = \nu ^+\).
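A Python sketch of the feasibility step follows. The unscaled system it solves, \(-M\Delta ^{f}x+\Delta ^{f}s=\theta \nu r^{0}\) and \(s\Delta ^{f}x+x\Delta ^{f}s=-\mu v\nabla \Psi (v)\), is the form suggested by the substitution \(a=-\mu v\nabla \Psi (v)\), \(b=\theta \nu r^{0}\) used later with Lemma 11; the exact displays (13)–(16) are not reproduced above, so this is an assumption consistent with the analysis, shown here for the logarithmic kernel \(\psi _{l}\).

```python
import numpy as np

def grad_Psi_log(v):
    """Gradient of the logarithmic barrier Psi_l(v) = sum((v_i^2 - 1)/2 - log v_i),
       i.e. grad Psi_l(v) = v - 1/v; another (1/t)-bound kernel from Table 1
       would supply its own gradient here."""
    return v - 1.0 / v

def feasibility_step(M, q, x, s, mu, theta, nu, r0, grad_Psi=grad_Psi_log):
    """One feasibility step (assumed form of (13)-(16)): solve
         -M dx + ds = theta*nu*r0,
          s*dx + x*ds = -mu * v * grad_Psi(v),   v = sqrt(x*s/mu),
       and return (x + dx, s + ds).  The argument q is unused; it is kept
       for a uniform interface with the centering step."""
    n = len(x)
    v = np.sqrt(x * s / mu)
    A = np.block([[-M, np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([theta * nu * r0, -mu * v * grad_Psi(v)])
    d = np.linalg.solve(A, rhs)
    return x + d[:n], s + d[n:]
```

For \(\psi _{l}\) one has \(-\mu v\nabla \Psi _{l}(v)=\mu e-xs\), so in that case the second equation has the same right-hand side as in the centering step, and only the residual-reducing first equation differs.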
3 New class of kernel functions
The aim of this paper is to propose new IIPMs based on a new class of kernel functions, defined in Definition 1 below, and to obtain the best known complexity result for (1).
We call \(\psi :(0,\infty )\rightarrow [0,\infty )\) a kernel function if \(\psi\) is twice differentiable and satisfies the following conditions:
Definition 1
We call a function \(\psi :(0,\infty )\rightarrow [0,\infty )\) of the form \(\psi (t):=\frac{t^2-1}{2}-\varphi (t)\) a (1/t)-bound kernel function if \(\psi\) is a kernel function and \(\varphi\) satisfies the following conditions:
Note that the logarithmic kernel function satisfies (18), (19), and (20). Moreover, any function \(\varphi\) satisfying (18) and (19) is bounded above by 1/t for \(0<t\le 1\) and bounded below by 1/t for \(t>1\). This is why we call the function in Definition 1 a (1/t)-bound kernel function.
In this paper, we replace the logarithmic barrier function \(\Psi _{l}\) in (17) by the barrier function \(\Psi\) induced by a (1/t)-bound kernel function, as follows:
where \(\Psi (v):=\sum _{i=1}^{n}\psi (v_{i})\) and \(\psi (t)\) is any (1/t)-bound kernel function, i.e.,
In the following, we suggest some (1/t)-bound kernel functions and derive their important properties.
Table 1 presents some examples of (1/t)-bound kernel functions, e.g., \(\psi _j,~j\in \{1,2,\ldots ,7\}\), and their references. In Table 2, we give the first two derivatives of the kernel functions. In Table 3, we give the first derivative of the barrier terms and the condition (20).
In Table 1, by choosing \(p=1\) for the function (15c) in [32], we obtain \(\psi _4(t)\). For \(\psi _j, 1\le j \le 7\), we make the following observations: First, the class of kernel functions in this paper is different from the class of eligible kernel functions defined in [38] and includes the logarithmic kernel function as a special case. Indeed, \(\psi _j,~j\in \{1, 4, 5, 6, 7\}\), are eligible but \(\psi _j,~j\in \{2, 3\},\) are not eligible. Second, IPMs based on \(\psi _j, 1\le j \le 7\), yield the best known complexity result for the given problems among such methods. Finally, we have tested FIPMs based on \(\psi _j, 1\le j \le 7\), for the \(P_{*}(1/4)\)-LCP in Example 4.12 in [9].
Note that the solution of the problem is \(x^* =(1,0)^T\) and \(s^* =(0,1)^T\). By letting \(q = (0, 3)^T\), \(x^o =(1,1)^T\), \(s^o =(1,1)^T\), \(\epsilon =10^{-4}\), \(\mu ^o =1\), \(\tau =1\), and \(\theta =0.6\), we obtain a similar number of iterations to reach \(\epsilon\)-approximate solutions: 11 iterations for \(\psi _6\), 12 iterations for \(\psi _2\) and \(\psi _3\), and 13 iterations for \(\psi _1\), \(\psi _4\), \(\psi _5\), and \(\psi _7\).
For complexity analysis, we present some lemmas for the (1/t)-bound kernel functions. We use the following lemma to prove Lemma 9.
Lemma 5
For \(\varphi (t):=\varphi _j(t),~j\in \{ 1,2,\ldots ,7 \}\), we have
Proof
For \(0< t\le 1\), by (18), we have
Since \(t\le 1 \le \frac{1}{t}\),
For \(t>1\), by (19), we have
Since \(\frac{1}{t}-t \le t\big ( \frac{1}{t}-t \big )\),
Thus, the proof is complete. \(\square\)
We use the following lemma to prove Lemma 13.
Lemma 6
For \(\varphi (t):=\varphi _j(t),~j\in \{ 1,2,\ldots ,7 \}\), we have
Proof
For \(0< t\le 1\), by (18), we have
From
we have
For \(t>1\), in the same way as for (23) and by (19), we have
Thus, the proof is complete. \(\square\)
4 The complexity analysis
Let (x, s) be such that \(\delta (x,s;\mu )\le \tau\). Recall that at the start of the first iteration this is certainly true, since \(\delta (x^{0},s^{0};\mu ^{0})=0\). After the feasibility step, we have a new iterate \((x^{f},s^{f}) =(x+\Delta ^f x,s+\Delta ^f s)\) that satisfies \((P_{\nu ^+})\). A crucial point in the analysis is to show that \(\delta (x^{f},s^{f};\mu ^{+})\le \frac{1}{2(1+2\kappa )}\) and \((x^{f},s^{f}) > 0\). From [38], we have
where the second and third equalities are from (5) and the last equality is from (22).
The following lemma gives a condition for the new iterate after the feasibility step to be strictly feasible.
Lemma 7
After a feasibility step, the new iterate \(\left( {x^{f} ,s^{f} } \right) = \left( {x + \Delta ^{f} x,s + \Delta ^{f} s} \right)\), which satisfies \((P_{\nu ^+})\), is strictly feasible if and only if
Proof
For a step length \(\alpha\) with \(0\le \alpha \le 1\), we define
Then we have \(x(0)=x\), \(s(0)=s\), \(x(1) = x^{f}\), \(s(1)= s^{f}\) and \(x(0)s(0)=xs>0\). On the other hand, from (22) and \(v\varphi '(v)+ d^{f}_{x}d^{f}_{s} > 0\), we have
Hence, none of the entries of \(x(\alpha )\) and \(s(\alpha )\) vanishes, for \(0<\alpha \le 1\). Since x(0) and s(0) are positive and \(x(\alpha )\) and \(s(\alpha )\) depend linearly on \(\alpha\), this implies that \(x(\alpha )>0\) and \(s(\alpha )>0\) for \(0<\alpha \le 1\). Hence, x(1) and s(1) are positive. Thus, the proof is complete. \(\square\)
In the sequel, we denote
From (25),
In what follows, we denote \(\delta (x^{f},s^{f};\mu ^{+})\) by \(\delta (v^{f})\), where \(v^{f}:=\sqrt{\frac{x^{f}s^{f}}{\mu ^{+}}}\).
Lemma 8
Let \(\omega\) be defined in (25). If \((x^{f},s^{f})\) is strictly feasible, then
where \(v^{f}_{min}:=\smash {\displaystyle \min _{1\le i \le n}} \{ v^{f}_{i} \}\) and \(0<\theta <1\).
Proof
Using (5) and (24) and dividing both sides by \(\mu ^{+} = (1-\theta )\mu\), we have
By using (28), (27), and (20), we have
Thus, the proof is complete. \(\square\)
Lemma 9
Let \(\delta (v)\) and \(\omega\) be defined in (8) and (25), respectively. Then the following inequality holds.
where \(0<\theta <1\) and \(n \ge 2\).
Proof
Using (28), the triangle inequality, (26), Lemma 5, and (8), we have
Thus, the proof is complete. \(\square\)
Lemma 10
Let \(\delta (v)\) and \(\omega\) be defined in (8) and (25), respectively. If \(v\varphi '(v) + d^{f}_{x}d^{f}_{s} > 0\), then
where \(0<\theta <1\) and \(n \ge 2\).
Proof
Using (8), we have
where the last inequality is from Lemma 8 and Lemma 9. Thus, the proof is complete. \(\square\)
In what follows, we want to choose \(0<\theta <1\) as large as possible such that \((x^{f}, s^{f})\) lies in the quadratic convergence neighborhood with respect to the \(\mu ^{+}\)-center of \((P_{\nu ^{+}})\), i.e., \(\delta (v^{f})\le \frac{1}{2(1+2\kappa )}\). By Lemma 10, we have
Define
and choose any \(\omega\) and \(\theta\) satisfying
Then the left-hand side of (29) is monotonically increasing with respect to \(\theta\) and \(\omega\). Since \(\delta \le \tau\), using (30) and (31), we have the upper bound of the left-hand side of (29):
Let \(g(n,\kappa ):=\frac{4+5\sqrt{n}}{\sqrt{(14+48\kappa )(-1+5n+10\kappa n)}}\) . Then \(g(n,\kappa )\) decreases with respect to n, so that
Hence we have (29) whenever (30) and (31) hold.
In the following we find a bound for the \(\omega\) defined in (25).
Lemma 11
(Corollary 2.2 in [39]) Let x, s, a, and b be n-dimensional vectors with \(x>0\) and \(s>0\), and let \({\widehat{M}}\in {\textbf{R}}^{n\times n}\) be a \(P_*(\kappa )\)-matrix. Then the solution \(({\hat{u}},{\hat{v}})\) of the linear system
satisfies the following inequality:
where \(D=X^{-\frac{1}{2}}S^{\frac{1}{2}}\), \({\tilde{a}}=(XS)^{-\frac{1}{2}}a\), \({\tilde{b}}=D^{-1}b\), \({\tilde{c}}={\tilde{a}}+{\tilde{b}}\).
Consider the system (32) with \(({\hat{u}},{\hat{v}})=(\Delta ^{f}x,\Delta ^{f}s)\), \(b=\theta \nu r^{0}\), \(a=-\mu v\nabla \Psi (v)\), and \({\widehat{M}}={\overline{M}}\) defined in (21). Then from (33) we obtain
Using the inequality (31) in [32], we have
In the following, we let \(\delta :=\delta (v)\), where \(\delta (v)\) is defined in (8), \(\rho (\delta ):=\delta +\sqrt{1+\delta ^2}\), and \(v_{min}:=\smash {\displaystyle \min _{1\le i \le n}} \{ v_{i} \}\).
Lemma 12
(Lemma 4.5 in [40]) Let (x, s) be feasible for (\(P_{\nu }\)) and let \((x^0,s^0)\) and \((x^{*},s^{*})\) be as defined in (9) and (10), respectively. Then,
Lemma 13
Let \(\nabla \Psi (v)\) be defined in (21). Then we have
Proof
Using Lemma 6, (8), and Lemma 1, we have
Thus, the proof is complete. \(\square\)
Using (35) and Lemma 13, we have
Therefore, using the bounds in (35) and (36) together with \(D^{-1}\Delta ^{f}s = \sqrt{\mu }d^{f}_{s}\) and \(D\Delta ^{f}x=\sqrt{\mu }d^{f}_{x}\), we obtain an upper bound for (34) as follows:
It is easy to observe that the right-hand side of inequality (37) is monotonically increasing with respect to \(\delta\), and that \(\frac{1}{v_{min}} \le \rho (\delta )\) by Lemma 1. By substituting the bound for \(\delta\), the bound for \(\Vert x\Vert _{1}\) from Lemma 12, and the bound for \(\frac{1}{v_{min}}\) into (37), we have
From (25) and (38), (31) holds if the right-hand side of (38) is less than or equal to \(\frac{1}{4(1+2\kappa )}\). Define
where
If we substitute \(a(\tau )\) and \(b(\tau )\) in the right-hand side of (38), we have
By simple calculation, we derive an upper bound of (40) as follows:
where in the second inequality, from (39), we use \(\rho (\tau )=\tau +\sqrt{1+\tau ^2}\le 1+2\tau\) and \(1+2\kappa =\frac{1}{16\tau }\). For \(\kappa \ge 0\), \(\tau = \frac{1}{16(1+2\kappa )}\in (0,\frac{1}{16}]\). By basic calculation, the last term (41) is less than or equal to \(4\tau\). Hence \(4\omega ^2\le 4\tau =\frac{1}{4(1+2\kappa )}\) which implies that \(\omega \le \frac{1}{4\sqrt{1+2\kappa }}\) satisfying (31).
So far, we have obtained an upper bound on \(\omega\) for which (31) holds, and hence (29) holds; thus the feasibility step is well defined. Moreover, by (29) and Lemma 10, we have \(\delta (v^{f})\le \frac{1}{2(1+2\kappa )}\), which is the condition of Corollary 2.
Then we consider the centering steps to get \((x^+,s^+)\) satisfying \((x^+)^{T}s^+=n\mu ^+\) and \(\delta (x^+,s^+;\mu ^+)\) \(\le \tau .\) By Corollary 2, the required number of centering steps can easily be obtained. Indeed, assume that \(\delta (x^f, s^f;\mu ^+) \le \frac{1}{2(1+2\kappa )}\). Starting at \((x^{f}, s^{f})\), by Corollary 2, after m centering steps we have an iterate \((x^{(m)}, s^{(m)})\) that is still feasible for \((P_{\nu ^{+}})\) and we get
Hence we have \(\delta (x^{+},s^{+};\mu ^{+})\le \tau\) after at most
centering steps.
Note that if an iterate (x, s) satisfies \(\delta (x,s;\mu )\le \tau\), then after the feasibility step, \(\delta (x^{f},s^{f};\mu ^{+})\le \frac{1}{2(1+2\kappa )}\). According to (42), at most
centering steps are needed to get \((x^{+},s^{+})\) such that \(\delta (x^{+},s^{+};\mu ^{+})\le \tau\). In each main iteration both the duality gap and the norm of the residual vector are reduced by the factor \(1-\theta\). Hence, the total number of outer iterations is bounded above by
Thus, from \(\theta =\frac{1}{106n(1+2\kappa )^2}\) and (43), the total number of inner iterations is bounded above by
In the following we state our main result without proof.
Theorem 1
Let \({\hat{\kappa }}\) be the handicap of the \(P_*(\kappa )\)-matrix M. Assume that \(\tau =\frac{1}{16(1+2\kappa )}\), \(\theta = \frac{1}{106n(1+2\kappa )^2}\), and that problem (1) has a solution \((x^{*},s^{*})\) such that \(\Vert x^{*} \Vert _{\infty }\le \rho _{p}\) and \(\Vert s^{*} \Vert _{\infty }\le \rho _{d}\). Then after at most
iterations, the algorithm finds an \(\epsilon\)-approximate solution of (1).
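The iteration bound referred to in Theorem 1 is, as stated in the abstract and in Sect. 1,

\[{\mathcal {O}}\left( n\,(1+2\kappa )^{2}\,\log \frac{\max \{(x^{0})^{T}s^{0},\ \Vert r^{0}\Vert \}}{\epsilon }\right),\]

which is currently the best known iteration bound for full-Newton step IIPMs for \(P_*(\kappa )\)-LCP.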
5 Numerical tests
In this section, we present some preliminary numerical results of Algorithm 1 for some \(P_*(\kappa )\)-LCPs.
In the following example, we compare Algorithm 1 of this paper with the algorithm in [32], using the kernel function \(\psi _3\) in Table 1.
Example 1
We consider the \(P_*(\kappa )\)-LCP, where the matrices are given in [41]:
In [41], the handicap of the matrices \(M_1\) and \(M_2\) is \({\hat{\kappa }}_1 = {\hat{\kappa }}_2 = 6\). Let the initial vectors be \(x=\rho _{p}e,~s=\rho _{d}e\), and \(\mu =\rho _{p}\rho _{d}\). For the convergence of the algorithm, we take the parameters \(\tau = \frac{1}{16(1+2\kappa )}\), \(\theta _1=\frac{1}{106(1+\kappa )^2}\) (for Algorithm 1), \(\theta _2=\frac{1}{33n(1+2\kappa )^3}\) (for the algorithm in [32]), and \(\epsilon = 10^{-2}\).
Note that the solutions of both problems are \(x^*=(1,0,3)\) and \(s^* = (0,5,0)\). For the IIPMs based on \(\psi _j\), \(1\le j\le 7\), no centering step occurs, and the methods need the same number of iterations to reach \(\epsilon\)-approximate solutions for each choice of \(\theta\) and M.
Table 4 shows that Algorithm 1 converges faster than the algorithm in [32] for these examples. Note that the main difference between Algorithm 1 and the algorithm in [32] is the barrier update parameter \(\theta\). For \(\kappa = 6\), 10, and 20, we compare the execution times of the two methods under the same conditions except for the parameter \(\theta\).
6 Conclusions
In this paper, we propose a new full-Newton step IIPM for \(P_*(\kappa )\)-LCP based on a new class of (1/t)-bound kernel functions, which includes the existing kernel functions in [26, 31,32,33,34] as special cases. We prove that convergence is guaranteed for any (1/t)-bound kernel function. We also show that Algorithm 1 has the best known complexity for such a method, and the numerical tests on some examples show that the algorithm performs well. It is worth noting that no centering step was needed in the numerical tests.
For future research, we will study the gap between the complexity theory and the practical behavior of the centering process. Furthermore, we will extend our method to more general problems such as the \(P_*(\kappa )\)-symmetric cone LCP.
References
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A unified approach to interior point algorithms for linear complementarity problems: a summary. Oper. Res. Lett. 10(5), 247–254 (1991)
Väliaho, H.: Determining the handicap of a sufficient matrix. Linear Algebra Appl. 253(1–3), 279–298 (1997)
Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 302–311 (1984)
Fathi-Hafshejani, S., Peyghami, M.R., Fakharzadeh Jahromi, A.: An efficient primal-dual interior point method for linear programming problems based on a new kernel function with a finite exponential-trigonometric barrier term. Optim. Eng. 21, 107–129 (2020)
Benhadid, A., Saoudi, K.: A new parameterized logarithmic kernel function for linear optimization with a double barrier term yielding the best known iteration bound. Commun. Math. 28, 27–41 (2020)
Pirhaji, M., Zangiabadi, M., Mansouri, H.: A path following interior-point method for linear complementarity problems over circular cones. Jpn. J. Ind. Appl. Math. 35, 1103–1121 (2018)
Darvay, Z., Illés, T., Povh, J., Rigó, P.R.: Feasible corrector-predictor interior-point algorithm for \(P_*(\kappa )\)-linear complementarity problems based on a new search direction. SIAM J. Optim. 30(3), 2628–2658 (2020)
Darvay, Z., Illés, T., Rigó, P.R.: Predictor-corrector interior-point algorithm for \(P_*(\kappa )\)-linear complementarity problems based on a new type of algebraic equivalent transformation technique. Eur. J. Oper. Res. 298(1), 25–35 (2022)
Cho, Y.Y., Cho, G.M.: Interior-point methods for \(P_*(\kappa )\)-horizontal linear complementarity problems. J. Nonlinear Convex Anal. 21(1), 127–137 (2020)
Cho, Y.Y., Cho, G.M.: New interior-point methods for \(P_*(\kappa )\)-nonlinear complementarity problems. J. Nonlinear Convex Anal 22(5), 901–917 (2021)
Wang, G.Q., Bai, Y.Q., Gao, X.Y., Wang, D.Z.: Improved complexity analysis of full Nesterov-Todd step interior-point methods for semidefinite optimization. J. Optim. Theory Appl. 165(1), 242–262 (2015)
Lee, Y.H., Jin, J.H., Cho, G.M.: Interior-point algorithm for SDO based on new classes of kernel functions. J. Nonlinear Convex Anal. 17(1), 55–75 (2016)
Cho, G.M.: Large-update primal-dual interior-point algorithm for semidefinite optimization. Pacific J. Optim. 11(1), 29–36 (2015)
Feng, Z.: A new \({\cal{O} }(\sqrt{n}L)\) iteration large-update primal-dual interior-point method for second-order cone programming. Numer. Funct. Anal. Optim. 33(4), 397–414 (2012)
Wang, G.Q., Kong, L.C., Tao, J.Y., Lesaja, G.: Improved complexity analysis of full Nesterov-Todd step feasible interior-point method for symmetric optimization. J. Optim. Theory Appl. 166, 588–604 (2015)
Roos, C.: An improved and simplified full-Newton step \({\cal{O} }(n)\) infeasible interior-point method for linear optimization. SIAM J. Optim. 25(1), 102–114 (2015)
Yang, Y., Yamashita, M.: An arc-search \({\cal{O} }(nL)\) infeasible-interior-point algorithm for linear programming. Optim. Lett. 12, 781–798 (2018)
Kheirfam, B., Nasrollah, A.: A second-order corrector wide neighborhood infeasible interior-point method for linear optimization based on a specific kernel function. Commun. Comb. Optim. 7(1), 29–44 (2022)
Pirhaji, M., Zangiabadi, M., Mansouri, H.: An infeasible interior-point algorithm for monotone linear complementarity problem based on a specific kernel function. J. Appl. Math. Comput. 54, 469–483 (2017)
Asadi, S., Zangiabadi, M., Mansouri, H.: An infeasible interior-point algorithm with full-Newton steps for \(P_*(\kappa )\) horizontal linear complementarity problems based on a kernel function. J. Appl. Math. Comput. 50, 15–37 (2016)
Wang, W., Liu, H., Bi, H.: Simplified full Nesterov-Todd step infeasible interior-point algorithm for semidefinite optimization based on a kernel function. J. Appl. Math. Comput. 59, 445–463 (2019)
Zangiabadi, M., Gu, G., Roos, C.: A full Nesterov-Todd step infeasible interior-point method for second-order cone optimization. J. Optim. Theory Appl. 158, 816–858 (2013)
Lesaja, G., Wang, G.Q., Oganian, A.: A full Nesterov-Todd step infeasible interior-point method for symmetric optimization in the wider neighborhood of the central path. Stat. Optim. Inf. Comput. 9(2), 250–267 (2021)
Schäfer, U.: A linear complementarity problem with a P-matrix. SIAM Rev. 46(2), 189–201 (2004)
Cottle, R., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, Boston (1992)
Roos, C.: A full-Newton step \({\cal{O} }(n)\) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
Mansouri, H., Zangiabadi, M., Pirhaji, M.: A full-Newton step \({\cal{O} }(n)\) infeasible-interior-point algorithm for linear complementarity problems. Nonlinear Anal. Real World Appl. 12(1), 545–561 (2011)
Mansouri, H., Roos, C.: A new full-Newton step \({\cal{O} }(n)\) infeasible interior-point algorithm for semidefinite optimization. Numerical Algorithms 52(2), 225–255 (2009)
Kheirfam, B.: An improved and modified infeasible interior-point method for symmetric optimization. Asian-Eur. J. Math. 9(3), 1650059 (2016)
Rigó, P.R., Darvay, Z.: Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier. Comput. Optim. Appl. 71, 483–508 (2018)
Moslemi, M., Kheirfam, B.: Complexity analysis of infeasible interior-point method for semidefinite optimization based on a new trigonometric kernel function. Optim. Lett. 13, 127–157 (2019)
Kheirfam, B., Haghighi, M.: An infeasible interior-point method for the \(P_*\)-matrix linear complementarity problem based on a trigonometric kernel function with full-Newton step. Commun. Comb. Optim. 3(1), 51–70 (2018)
Fathi-Hafshejani, S., Fatemi, M., Peghami, M.R.: An interior-point method for \(P_*(\kappa )\)-linear complementarity problem based on a trigonometric kernel function. J. Appl. Math. Comput. 48, 111–128 (2015)
Peng, J., Roos, C., Terlaky, T.: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 93, 129–171 (2002)
Roos, C., Terlaky, T., Vial, J.P.: Theory and Algorithms for Linear Optimization: An Interior-Point Approach. John Wiley and Sons, Chichester, UK (1997)
Wang, G.Q., Bai, Y.Q.: A full-Newton step feasible interior-point algorithm for \(P_*(\kappa )\)-linear complementarity problem. J. Global Optim. 59, 81–99 (2014)
Kheirfam, B.: A new complexity analysis for full-Newton step infeasible interior-point algorithm for horizontal linear complementarity problems. J. Optim. Theory Appl. 161, 853–869 (2014)
Bai, Y.Q., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 15(1), 101–128 (2004)
Ji, J., Potra, F.A.: An infeasible-interior-point method for the \(P_*(\kappa )\)-matrix LCP. J. Numerical Anal. Approx. Theory 27(2), 277–295 (1998)
Zhu, D., Zhang, M.: A full-Newton step infeasible interior-point algorithm for \(P_*(\kappa )\) linear complementarity problem. J. Syst. Sci. Complex. 27, 1027–1044 (2014)
de Klerk, E., Nagy, M.E.: On the complexity of computing the handicap of a sufficient matrix. Math. Program. 129, 383–402 (2011)
Acknowledgements
The research of the first author was supported by the National Research Foundation of Korea funded by the Korea Government (MEST) (NRF2021R1A2C100853711). The research of the fourth author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1F1A1A01066581). The authors thank the anonymous referees for their helpful comments and suggestions.