Convexification techniques for linear complementarity constraints

Journal of Global Optimization 80, 249–286 (2021)

Abstract

We develop convexification techniques for mathematical programs with complementarity constraints. Specifically, we adapt the reformulation-linearization technique of Sherali and Adams (SIAM J Discrete Math 3, 411–430, 1990) to problems with linear complementarity constraints and discuss how this procedure reduces to its standard specification for binary mixed-integer programs. Then, we consider specially structured complementarity sets that appear in KKT systems with linear constraints. For sets with a single complementarity constraint, we develop a convexification procedure that generates all nontrivial facet-defining inequalities and has an appealing “cancel-and-relax” interpretation. This procedure is used to describe the convex hull of problems with few side constraints in closed form. As a consequence, we delineate cases when the factorable relaxation techniques yield the convex hull from those for which they do not. We then discuss how these results extend to sets with multiple complementarity constraints. In particular, we show that a suitable sequential application of the cancel-and-relax procedure produces all nontrivial inequalities of their convex hull. We conclude by illustrating, on a set of randomly generated problems, that the relaxations we propose can be significantly stronger than those available in the literature.

Notes

  1. An extended abstract [31] containing the statements of some of the results in Sects. 2 and 3 was presented at IPCO XV.

References

  1. Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis, 3rd edn. Springer, Berlin (2006)

  2. Anitescu, M.: Global convergence of an elastic mode approach for a class of mathematical programs with complementarity constraints. SIAM J. Optim. 16, 120–145 (2005)

  3. Balas, E.: Intersection cuts-a new type of cutting planes for integer programming. Oper. Res. 19, 19–39 (1971)

  4. Balas, E.: Disjunctive programming and a hierarchy of relaxations for discrete optimization problems. SIAM J. Algebraic Discrete Methods 6, 466–486 (1985)

  5. Balas, E.: Disjunctive programming: properties of the convex hull of feasible points. Discrete Appl. Math. 89, 3–44 (1998); the original manuscript was published as a technical report in 1974

  6. Balas, E., Ceria, S., Cornuéjols, G.: A lift-and-project cutting plane algorithm for mixed 0–1 programs. Math. Program. 58, 295–324 (1993)

  7. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 3rd edn. Wiley-Interscience, Hoboken, NJ (2006)

  8. Benson, H., Sen, A., Shanno, D.F., Vanderbei, R.V.D.: Interior-point algorithms, penalty methods and equilibrium problems. Comput. Optim. Appl. 34, 155–182 (2006)

  9. Byrd, R., Nocedal, J., Waltz, R.: An integrated package for nonlinear optimization. In: Pillo, G.D., Roma, M. (eds.) Large-scale nonlinear optimization, pp. 35–60. Springer, Berlin (2006)

  10. Cook, W., Dash, S.: On the matrix-cut rank of polyhedra. Math. Oper. Res. 26, 19–30 (2001)

  11. Davarnia, D., Richard, J.P.P., Tawarmalani, M.: Simultaneous convexification of bilinear functions over polytopes with application to network interdiction. SIAM J. Optim. 27(3), 1801–1833 (2017)

  12. de Farias, I.R., Johnson, E.L., Nemhauser, G.L.: Facets of the complementarity knapsack polytope. Math. Oper. Res. 27, 210–226 (2002)

  13. Dirkse, S.P., Ferris, M.C.: The PATH solver: a non-monotone stabilization scheme for mixed complementarity problems. Optim. Methods Softw. 5, 123–156 (1995)

  14. Ferris, M.C., Dirkse, S.P., Meeraus, A.: Mathematical programs with equilibrium constraints: automatic reformulation and solution via constrained optimization. In Frontiers in Applied General Equilibrium Modeling. pp. 67–93. Cambridge University Press, Cambridge (2005)

  15. Ferris, M.C., Pang, J.S.: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669–713 (1997)

  16. Fletcher, R., Leyffer, S.: Solving mathematical programs with complementarity constraints as nonlinear programs. Optim. Methods Softw. 19, 15–40 (2004)

  17. Fletcher, R., Leyffer, S., Toint, P.: On the global convergence of a filter-SQP algorithm. SIAM J. Optim. 13, 44–59 (2002)

  18. DeMiguel, V., Friedlander, M.P., Nogales, F.J., Scholtes, S.: A two-sided relaxation scheme for mathematical programs with equilibrium constraints. SIAM J. Optim. 16, 587–609 (2005)

  19. Hu, J., Mitchell, J.E., Pang, J.S.: An LPCC approach to nonconvex quadratic programs. Math. Program. 133, 243–277 (2012)

  20. Hu, J., Mitchell, J.E., Pang, J.S., Bennett, K.P., Kunapuli, G.: On the global solution of linear programs with linear complementarity constraints. SIAM J. Optim. 19, 445–471 (2008)

  21. Hu, J., Mitchell, J.E., Pang, J.S., Yu, B.: On linear programs with linear complementarity constraints. J. Global Optim. 53, 29–51 (2012)

  22. Ibaraki, T.: The use of cuts in complementary programming. Oper. Res. 21, 353–359 (1973)

  23. Jeroslow, R.G.: Cutting-planes for complementarity constraints. SIAM J. Control Optim. 16, 56–62 (1978)

  24. Kim, J., Tawarmalani, M., Richard, J.P.P.: On cutting planes for cardinality-constrained linear programs. Math. Program. 178(1), 417–448 (2019)

  25. Kojima, M., Tunçel, L.: Some fundamental properties of successive convex relaxation methods on LCP and related problems. J. Global Optim. 24, 333–348 (2002)

  26. Laurent, M.: A comparison of the Sherali-Adams, Lovász-Schrijver, and Lasserre relaxations for 0–1 programming. Math. Oper. Res. 28, 470–496 (2003)

  27. Leyffer, S., Munson, T.: A globally convergent filter method for MPECs, technical report, Argonne National Laboratory (2009)

  28. Leyffer, S.: MacMPEC. https://wiki.mcs.anl.gov/leyffer/index.php/MacMPEC, accessed: 2016-06-02

  29. Liu, X., Perakis, G., Sun, J.: A robust SQP method for mathematical programs with linear complementarity constraints. Comput. Optim. Appl. 34, 5–33 (2005)

  30. McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Math. Program. 10, 147–175 (1976)

  31. Nguyen, T.T., Tawarmalani, M., Richard, J.P.P.: Convexification techniques for linear complementarity constraints. In Proceedings of the 15th international conference on Integer programming and combinatorial optimization. pp. 336–348. IPCO’11, Springer-Verlag, Berlin, Heidelberg (2011)

  32. Raghunathan, A., Biegler, L.T.: An interior point method for mathematical programs with complementarity constraints (MPCCs). SIAM J. Optim. 15, 720–750 (2005)

  33. Ramarao, B., Shetty, C.M.: Application of disjunctive programming to the linear complementarity problem. Naval Res. Logist. Quart. 31, 589–600 (1984)

  34. Rockafellar, R.T.: Convex Analysis. Princeton University Press, New Jersey (1970)

  35. Rutherford, T.F.: MILES: a mixed inequality and nonlinear equation solver, technical report, Department of Economics, University of Colorado, Boulder (1993)

  36. Sherali, H.D., Adams, W.P.: A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM J. Discrete Math. 3, 411–430 (1990)

  37. Sherali, H.D., Krishnamurthy, R.S., Al-Khayyal, F.A.: Enhanced intersection cutting-plane approach for linear complementarity problems. J. Optim. Theory Appl. 90, 183–201 (1996)

  38. Sherali, H.D., Krishnamurthy, R.S., Al-Khayyal, F.A.: Enumeration approach for linear complementarity problems based on a reformulation-linearization technique. J. Optim. Theory Appl. 99, 481–507 (1998)

  39. Sherali, H.D., Adams, W.P.: A reformulation-linearization technique for solving discrete and continuous nonconvex problems, Nonconvex Optimization and its Applications, vol. 31. Kluwer Academic Publishers, Dordrecht (1999)

  40. Tawarmalani, M.: Inclusion certificates and disjunctive programming, presented in Operations Research Seminar at GSIA, Carnegie Mellon University (2006)

  41. Tawarmalani, M.: Inclusion certificates and simultaneous convexification of functions (2010), working paper, http://www.optimization-online.org/DB_HTML/2010/09/2722.html

  42. Tuy, H.: Convex analysis and global optimization, Nonconvex Optimization and its Applications, vol. 22. Kluwer Academic Publishers, Dordrecht (1998)

  43. Vandenbussche, D., Nemhauser, G.L.: A polyhedral study of nonconvex quadratic programs with box constraints. Math. Program. 102, 531–557 (2005)

Author information

Correspondence to Jean-Philippe P. Richard.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by NSF CMMI Grants 0856605, 0900065, 1234897, and 1235236.

Appendix: Proof of Proposition 17

The fact that all facet-defining inequalities of \({\mathrm{conv}}(C^{1,1})\) are described below is a direct consequence of Theorem 5.

  (i)

    Pick \(k \in I^+ \ne \emptyset \). Consider the points \((x^0;y_1^0)=({0};{0}), \; (x^j;y_1^j)=(\epsilon (|\alpha _j| {e_k}+\alpha _k {e_j}); {0})\) for \(j \in M.\) These \(m+1\) affinely independent points satisfy \(y_1\ge 0\) at equality and belong to \(C^{1,1} \subseteq {\mathrm{conv}}(C^{1,1})\) for a sufficiently small positive \(\epsilon \). It follows that \(y_1 \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\).
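As a quick sanity check of this construction, the sketch below verifies numerically, on one randomly generated instance, that the \(m+1\) points listed above are affinely independent, are tight at \(y_1 \ge 0\), and satisfy \(x \in [0,1]^m\) and \(\sum _{i \in M} \alpha _i x_i \ge 0\). The instance, the tolerance, and the reading of \(I^+\) as \(\{j \in M: \alpha _j > 0\}\) are assumptions of this sketch, not part of the argument above.

```python
import numpy as np

# Sketch of a numerical check of the construction in (i); the instance and the
# reading of I^+ as {j : alpha_j > 0} are illustrative assumptions.
rng = np.random.default_rng(0)
m = 6
alpha = rng.uniform(-1.0, 1.0, size=m)
alpha[0] = abs(alpha[0]) + 0.1          # force I^+ to be nonempty
k = int(np.argmax(alpha))               # a fixed index k in I^+
eps = 1e-2

e = np.eye(m)
points = [np.zeros(m + 1)]              # (x^0; y_1^0) = (0; 0)
for j in range(m):                      # (x^j; y_1^j) = (eps(|a_j| e_k + a_k e_j); 0)
    x = eps * (abs(alpha[j]) * e[k] + alpha[k] * e[j])
    points.append(np.append(x, 0.0))
P = np.array(points)

# Every point is tight at y_1 = 0, lies in [0,1]^m x {0}, and has alpha^T x >= 0.
assert np.all(P[:, -1] == 0.0)
assert np.all((P[:, :m] >= 0.0) & (P[:, :m] <= 1.0))
assert np.all(P[:, :m] @ alpha >= -1e-12)

# Affine independence: the m difference vectors x^j - x^0 span R^m x {0}.
print("affinely independent:", np.linalg.matrix_rank(P[1:] - P[0]) == m)
```

An entirely analogous check applies to the shifted points used in (iv)(a) below.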

  (ii)

    First we consider the reverse implication. If \( (M \setminus \{ i \} ) \cap I^+ \ne \emptyset \), there exists \(k \in I^+\) with \(k \ne i\). Construct the \(m+1\) affinely independent points \((x^0; y_1^0) = ({0}; {0}), \; (x^j; y_1^j) = (\epsilon (|\alpha _j| {e_k} + \alpha _k {e_j}); {0 })\) for \(j \in M \setminus \{ i\}\) and \((x^{m+1}; y_1^{m+1}) = ({0}; {1}).\) These points belong to \(C^{1,1}\) for a sufficiently small positive \(\epsilon \). It follows that \(x_i \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). If \(M \setminus \{ i \} = I^0\), we construct the affinely independent points \((x^0; y_1^0) = ({0}; {0}), \; (x^j; y_1^j) = ({e_j}; {0})\) for \(j \in M \setminus \{ i\} = I^0\) and \((x^{m+1}; y_1^{m+1}) = ({0} ;{1}).\) These \(m+1\) points belong to \(C^{1,1}\), showing that \(x_i \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). Conversely, if \( (M \setminus \{ i \} )\cap I^+ = \emptyset \), it must be that \(I^+ = \{ i\}\) since \(I^+ \ne \emptyset \). Further, since \(M \setminus \{ i \} \ne I^0\), it must be that \(I^- \ne \emptyset \) . In this case \(\sum _{j \in M} \alpha _j x_j \ge 0\) reduces to

    $$\begin{aligned} \alpha _i x_i + \sum _{j \in I^-} \alpha _j x_j \ge 0. \end{aligned}$$
    (A.1)

    Since \(x_i \ge 0\) can be obtained as a conic combination of (A.1) and of \(x_j \ge 0\) for \(j \in I^-\) with nonnegative weights \(\frac{1}{\alpha _i}\) and \(-\frac{\alpha _j}{\alpha _i}\) respectively, it does not define a facet of \({\mathrm{conv}}(C^{1,1})\) as \({\mathrm{conv}}(C^{1,1})\) is full-dimensional.
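For concreteness, this conic combination can be written out as the identity

$$\begin{aligned} \frac{1}{\alpha _i} \Big ( \alpha _i x_i + \sum _{j \in I^-} \alpha _j x_j \Big ) + \sum _{j \in I^-} \Big ( - \frac{\alpha _j}{\alpha _i} \Big ) x_j = x_i, \end{aligned}$$

in which both weights are nonnegative because \(\alpha _i > 0\) and \(\alpha _j < 0\) for \(j \in I^-\), so that \(x_i \ge 0\) is implied by (A.1) and the bounds \(x_j \ge 0\) for \(j \in I^-\).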

  (iii)

    Assume that \(I^- \ne \emptyset \). Select \(l \in I^- \ne \emptyset \), \(k \in I^+ \ne \emptyset \), and consider the points \((x^0;y_1^0)=({0}; {0}), \; (x^j;y_1^j)=(\epsilon (-\alpha _l {e_j}+\alpha _j {e_l}); {0})\) for \(j \in I^+ \cup I^0, \; (x^j;y_1^j)=(\epsilon (-\alpha _j {e_k} +\alpha _k {e_j}); {0})\) for \(j \in I^- \backslash \{l\}\) and \((x^{m+1}; y_1^{m+1}) = ({0}; {1}).\) These \(m+1\) affinely independent points belong to \(C^{1,1}\) for a sufficiently small positive \(\epsilon \) and satisfy \(\sum _{i \in M} \alpha _i x_i \ge 0\) at equality. It follows that \(\sum _{i \in M} \alpha _i x_i \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). If \(I^-= \emptyset \) and \(I^+ = \{ k\}\) then \(\sum _{i \in M} \alpha _i x_i \ge 0\) reduces to \(x_k \ge 0\). Since \(M \setminus \{ k\} = I^0\) in this case, it follows from (ii) that \(x_k \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). We now prove the direct implication by contraposition. If \(I^-= \emptyset \) and \(|I^+| \ge 2\) then \(\sum _{i \in M} \alpha _i x_i \ge 0\) can be obtained as a conic combination with weight \(\alpha _i\) of \(x_i \ge 0\) for \(i \in I^+\) and is therefore not facet-defining for \({\mathrm{conv}}(C^{1,1})\) since \({\mathrm{conv}}(C^{1,1})\) is full-dimensional.
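Spelling out the computation implicit in this construction, the nontrivial points above lie on the hyperplane \(\sum _{i \in M} \alpha _i x_i = 0\) because

$$\begin{aligned} \sum _{i \in M} \alpha _i \big ( \epsilon (-\alpha _l {e_j}+\alpha _j {e_l}) \big )_i&= \epsilon ( -\alpha _l \alpha _j + \alpha _j \alpha _l ) = 0, \quad j \in I^+ \cup I^0, \\ \sum _{i \in M} \alpha _i \big ( \epsilon (-\alpha _j {e_k} +\alpha _k {e_j}) \big )_i&= \epsilon ( -\alpha _j \alpha _k + \alpha _k \alpha _j ) = 0, \quad j \in I^- \setminus \{l\}, \end{aligned}$$

and their coordinates are nonnegative since \(\alpha _l< 0 \le \alpha _j\) in the first family and \(\alpha _j< 0 < \alpha _k\) in the second.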

  (iv)
    (a)

      Assume first that \(i \in I^0\). Pick \(k \in I^+ \ne \emptyset \) and consider the points \((x^0; y_1^0) = ({e_i}; {0}), \; (x^j; y_1^j) = ({e_i} + \epsilon (|\alpha _j| {e_k} + \alpha _k {e_j}); {0})\) for \(j \in M \setminus \{ i\}\) and \((x^{m+1}; y_1^{m+1}) = ({e_i}; {1}).\) These \(m+1\) affinely independent points belong to \(C^{1,1}\) for a sufficiently small positive \(\epsilon \) and satisfy \(x_i \le 1\) at equality. It follows that \(x_i \le 1\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\).

    (b)

      Assume now that \(i\in I^+\). For the reverse implication, the given condition implies that \(I^- \ne \emptyset \). Construct the points \((x^0; y_1^0) = ({e_i}; {0}), \; (x^j; y_1^j) = ({e_i} + \epsilon {e_j} ; {0})\) for \(j \in M \setminus \{i\}\) and \((x^{m+1}; y_1^{m+1}) = \left( {e_i} + \sum _{j \in I^-} \left( \frac{- \alpha _i}{ \sum _{k \in I^-} \alpha _k}\right) {e_j}; {1}\right) .\) The points \((x^j;y_1^j)\) for \(j \in M \setminus \{ i\}\) belong to \(C^{1,1}\) for a sufficiently small positive \(\epsilon \). Further, \((x^{m+1}; y_1^{m+1})\) satisfies \(\sum _{i \in M} \alpha _i x_i \ge 0\) at equality and also satisfies the upper bound constraints on variables \(x_j\) for \(j \in I^-\) since \(\alpha _i + \sum _{j \in I^-} \alpha _j \le 0 \). It therefore belongs to \(C^{1,1}\). Since these points are affinely independent and satisfy \(x_i \le 1\) at equality, we conclude that \(x_i \le 1\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). Conversely, if \(\alpha _i + \sum _{j \in I^-} \alpha _j> 0\) then setting \(x_i =1\) implies that \(\sum _{j \in M} \alpha _j x_j \ge \alpha _i + \sum _{j \in I^-} \alpha _j x_j \ge \alpha _i + \sum _{j \in I^-} \alpha _j > 0\), which in turn requires that \(y_1=0\). We conclude that the face of \({\mathrm{conv}}(C^{1,1})\) induced by \(x_i \le 1\) has dimension less than or equal to \(m-1\).
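Written out, the two properties claimed for \((x^{m+1}; y_1^{m+1})\) are

$$\begin{aligned} \sum _{j \in M} \alpha _j x^{m+1}_j = \alpha _i + \sum _{j \in I^-} \alpha _j \left( \frac{- \alpha _i}{ \sum _{k \in I^-} \alpha _k}\right) = \alpha _i - \alpha _i = 0, \end{aligned}$$

and, since \(\alpha _i > 0\) and \(\sum _{k \in I^-} \alpha _k < 0\), each coordinate \(\frac{-\alpha _i}{\sum _{k \in I^-} \alpha _k}\) lies in \((0,1]\) precisely when \(\alpha _i + \sum _{k \in I^-} \alpha _k \le 0\), which is the assumed condition.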

    (c)

      Assume finally that \(i \in I^-\). We know by assumption that \(I^+ \ne \emptyset \). For the reverse implication, construct the points \((x^0; y_1^0) = \left( {e_i} + \sum _{j \in I^+} \left( \frac{-\alpha _i + \epsilon }{\sum _{k \in I^+} \alpha _k}\right) {e_j}; {0} \right) ,\) \((x^j; y_1^j) = (x^0 + {\tilde{\epsilon }}{e_j}; {0})\) for \(j \in M\setminus \{ i\}\) and \((x^{m+1}; y_1^{m+1}) = \left( {e_i} + \sum _{j \in I^+} \left( \frac{- \alpha _i}{ \sum _{k \in I^+} \alpha _k}\right) {e_j} ; {1}\right) .\) For \(\epsilon \) positive but sufficiently small, the point \((x^0; y_1^0)\) is feasible to \(C^{1,1}\). It satisfies \(\sum _{i \in M} \alpha _i x_i \ge 0\) and the upper bound constraints on \(x_j\) for \(j \in M \setminus \{i\}\) strictly since \(\epsilon >0\) and \(\alpha _i + \sum _{j \in I^+} \alpha _j > 0\). It follows that the points \((x^j; y_1^j)\) for \(j \in M \setminus \{i\}\) belong to \(C^{1,1}\) when \( {\tilde{\epsilon }}\) is sufficiently small. Finally, \((x^{m+1}; y_1^{m+1})\) also belongs to \(C^{1,1}\) as we assumed that \(\alpha _i + \sum _{j \in I^+} \alpha _j > 0 \). Since these \(m+1\) points are affinely independent and satisfy \(x_i \le 1\) at equality, we conclude that \(x_i \le 1\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). Conversely, if \(\alpha _i + \sum _{j \in I^+} \alpha _j \le 0\) then setting \(x_i =1\) implies that \(0 \le \sum _{j \in M} \alpha _j x_j \le \alpha _i + \sum _{j \in I^+} \alpha _j x_j \le \alpha _i + \sum _{j \in I^+} \alpha _j \le 0\), which in turn requires that \(\sum _{j \in M} \alpha _j x_j =0\). We conclude that the face of \({\mathrm{conv}}(C^{1,1})\) induced by \(x_i \le 1\) has dimension less than or equal to \(m-1\).
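As in (i), the claims made for these points can be checked numerically. The sketch below does so for one instance with \(i \in I^-\), \(I^+ \ne \emptyset \) and \(\alpha _i + \sum _{j \in I^+} \alpha _j > 0\); the instance and the values of \(\epsilon \) and \({\tilde{\epsilon }}\) are assumptions chosen for illustration.

```python
import numpy as np

# Sketch of a numerical check of the construction in (iv)(c) on one instance
# with i in I^-, I^+ nonempty and alpha_i + sum_{j in I^+} alpha_j > 0.
alpha = np.array([2.0, 1.0, -1.5, -0.5, 0.0])   # I^+ = {0,1}, I^- = {2,3}, I^0 = {4}
i = 2                                            # alpha_2 + (2 + 1) = 1.5 > 0
Iplus = [0, 1]
m = len(alpha)
eps, eps_tilde = 1e-3, 1e-4                      # eps_tilde much smaller than eps
e = np.eye(m)

s = alpha[Iplus].sum()
x0 = e[i] + sum(((-alpha[i] + eps) / s) * e[j] for j in Iplus)
pts = [np.append(x0, 0.0)]                                   # (x^0; 0)
pts += [np.append(x0 + eps_tilde * e[j], 0.0)                # (x^0 + eps~ e_j; 0)
        for j in range(m) if j != i]
x_last = e[i] + sum((-alpha[i] / s) * e[j] for j in Iplus)   # (x^{m+1}; 1)
pts.append(np.append(x_last, 1.0))
P = np.array(pts)
X = P[:, :m]

assert np.all((X >= -1e-12) & (X <= 1.0 + 1e-12))   # box constraints hold
assert np.all(X @ alpha >= -1e-12)                  # alpha^T x >= 0 at every point
assert abs(alpha @ x_last) < 1e-12                  # last point: alpha^T x = 0 with y_1 = 1
assert np.allclose(X[:, i], 1.0)                    # all points are tight at x_i = 1
print("affinely independent:", np.linalg.matrix_rank(P[1:] - P[0]) == m)
```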

  (v)
    (a)

      We first observe that if \(I^- = \emptyset \) and \(|S^+|=1\), say \( S^+ = \{ k\}\), (3.13) reduces to

      $$\begin{aligned} \alpha _k y_1 + \alpha _k x_k \le \alpha _k. \end{aligned}$$
      (A.2)

      Inequality (A.2) holds at equality for the feasible points \((x^0; y_1^0) = ({0}; {1})\), \((x^j; y_1^j) = ({e_k + e_j};{0})\) for \(j \in M \setminus \{ k\}\) and \((x^{m+1}; y_1^{m+1}) = ({e_k}; {0})\) showing that (A.2) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). Conversely, if \(|S^+| \ge 2\), (3.13) can be obtained as a conic combination of valid inequalities \(\alpha _i y_1+ \alpha _i x_i \le \alpha _i\) for \(i \in S^+\) showing that it is not facet-defining for \({\mathrm{conv}}(C^{1,1})\). If \(|S^+| = 0\), inequality (3.13) is trivial and is clearly not facet-defining for \({\mathrm{conv}}(C^{1,1})\).
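For completeness, tightness of (A.2) at these points amounts to

$$\begin{aligned} \alpha _k \cdot 1 + \alpha _k \cdot 0 = \alpha _k, \qquad \alpha _k \cdot 0 + \alpha _k \cdot 1 = \alpha _k, \qquad \alpha _k \cdot 0 + \alpha _k \cdot 1 = \alpha _k, \end{aligned}$$

at \((0;1)\), \((e_k+e_j;0)\) for \(j \in M \setminus \{ k\}\), and \((e_k;0)\) respectively, and these points satisfy \(\sum _{i \in M} \alpha _i x_i \ge 0\) because \(I^- = \emptyset \) implies \(\alpha _j \ge 0\) for all \(j \in M\).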

    (b)

      Assume now that \(I^- \ne \emptyset \). We prove the reverse implications first. Assume first that \(S^+ = \{ k\}\), \(S^- =\emptyset \) and \(\alpha _k + \sum _{j \in I^-} \alpha _j =0\). Then, (3.13) reduces to \( x_k \le 1\). It follows from (iv)(b) that \(x_k \le 1\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). If \(S^+ = \emptyset \), \(S^- =\{k\}\) and \(\alpha _k + \sum _{j \in I^-\backslash S^-} \alpha _j =0\) then (3.13) reduces to \( x_k \ge 0\). It follows from (ii) that \(x_k \ge 0\) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). Assume next that \(\sum _{i \in S^+} \alpha _i + \sum _{i \in I^- \setminus S^-} \alpha _i> 0\). Clearly \(S^+ \ne \emptyset \). Construct the points \( ({\check{x}}; {\check{y}}_1 ) = (\sum _{j \in S^+} {e_j} ; { 0})\), \( ({\bar{x}}^k; {\bar{y}}_1^k ) = (\sum _{j \in S^+} {e_j} + {e_k} ; { 0})\) for \(k \in I^+ \setminus S^+\), \(({\dot{x}}^{l}; {\dot{y}}_1^{l} ) = (\sum _{j \in S^+} {e_j} + \epsilon {e_l}; {0})\) for \(l \in I^-\setminus S^-\), \( ({\tilde{x}}^k; {\tilde{y}}_1^k ) = (\sum _{j \in S^+} {e_j} + {e_k} ; { 0})\) for \(k \in I^0\), \(({\hat{x}};{\hat{y}}_1) = (\delta \sum _{j \in S^+ } {e_j} + \sum _{j \in I^- \setminus S^-} {e_j} ; {1})\), \((x^{k,l}; y_1^{k,l})= (\delta \sum _{j \in S^+} {e_j} + \sum _{j \in I^- \setminus S^-} {e_j} + \frac{\epsilon }{\alpha _k} {e_k} - \frac{\epsilon }{\alpha _l} {e_l} ;{1})\) for \(k,l \in S^+\), \(({\ddot{x}}^{k,l}; {\ddot{y}}_1^{k,l}) = (\delta \sum _{j \in S^+} {e_j} + \sum _{j \in I^- \setminus S^-} {e_j} + \frac{\epsilon }{\alpha _k} {e_k} - \frac{\epsilon }{\alpha _l} {e_l} ; {1})\) for \(k \in S^+, \; l \in S^-\) where \( \delta = - \frac{\sum _{j \in I^- \setminus S^-} \alpha _j}{\sum _{i \in S^+} \alpha _i} \in [0,1)\). These points belong to \(C^{1,1}\) if \(\epsilon \) is a sufficiently small positive number. Further, these points belong to the face of \({\mathrm{conv}}(C^{1,1})\) induced by (3.13). Consider now any valid inequality \(\sum _{i \in M} \beta _i x_i + \beta _0 y_1 \le \gamma \) for \({\mathrm{conv}}(C^{1,1})\) whose associated face contains the face of \({\mathrm{conv}}(C^{1,1})\) that (3.13) defines. It must therefore be satisfied at equality by the aforementioned points. The coefficients \(\beta _i\) and \(\gamma \) satisfy

      $$\begin{aligned}&\sum _{i \in S^+} \beta _i = \gamma \end{aligned}$$
      (A.3)
      $$\begin{aligned}&\sum _{i \in S^+} \beta _i + \beta _k = \gamma , \quad \forall k \in I^+ \setminus S^+ \end{aligned}$$
      (A.4)
      $$\begin{aligned}&\sum _{i \in S^+} \beta _i + \epsilon \beta _l = \gamma , \quad \forall l \in I^- \setminus S^- \end{aligned}$$
      (A.5)
      $$\begin{aligned}&\sum _{i \in S^+} \beta _i + \beta _k = \gamma , \quad \forall k \in I^0 \end{aligned}$$
      (A.6)
      $$\begin{aligned}&\delta \sum _{i \in S^+} \beta _i + \sum _{j \in I^- \setminus S^-} \beta _j + \beta _0 = \gamma \end{aligned}$$
      (A.7)
      $$\begin{aligned}&\delta \sum _{i \in S^+} \beta _i + \sum _{j \in I^- \setminus S^-}\beta _j +\frac{\epsilon }{\alpha _k} \beta _k - \frac{\epsilon }{\alpha _l} \beta _l + \beta _0 = \gamma , \forall k,l \in S^+ \end{aligned}$$
      (A.8)
      $$\begin{aligned}&\delta \sum _{i \in S^+} \beta _i + \sum _{j \in I^-\setminus S^-} \beta _j + \frac{\epsilon }{\alpha _k} \beta _k - \frac{\epsilon }{\alpha _l} \beta _l + \beta _0 =\gamma , \forall k \in S^+, \forall l \in S^- . \end{aligned}$$
      (A.9)

      We next show that \(\sum _{i \in M} \beta _i x_i + \beta _0 y_1 \le \gamma \) is a scalar multiple of (3.13), proving that (3.13) is facet-defining for \({\mathrm{conv}}(C^{1,1})\). From (A.3) and (A.4), we obtain that \(\beta _k = 0, \, \forall k \in I^+ \setminus S^+.\) Similarly, we obtain that \(\beta _l =0, \, \forall l \in I^- \setminus S^-\) from (A.3) and (A.5), and we obtain that \(\beta _k =0, \, \forall k \in I^0\) from (A.3) and (A.6). It then follows from (A.7) that \(\delta \gamma +\beta _0=\gamma \), i.e., \(\beta _0= \gamma (1- \delta ) = \gamma \Big ( \frac{ \sum _{i \in S^+} \alpha _i + \sum _{j \in I^- \setminus S^-} \alpha _j}{\sum _{i \in S^+} \alpha _i} \Big )\). Note that this conclusion holds both when \(I^- \backslash S^-\) is empty and when it is nonempty.

      From (A.7) and (A.8), we conclude there exists \(\theta \in {\mathbb {R}}\) such that \( \frac{\beta _k}{\alpha _k} =\theta \) for \(k \in S^+\). We then see that \( \frac{\gamma }{\sum _{i \in S^+} \alpha _i} = \frac{ \sum _{i \in S^+}\beta _i}{\sum _{i \in S^+} \alpha _i} = \frac{ \theta \sum _{i \in S^+} \alpha _i}{ \sum _{i \in S^+} \alpha _i} = \theta \) where the first equality holds because of (A.3). Therefore, \(\beta _k = \frac{\gamma \alpha _k}{ \sum _{i \in S^+} \alpha _i}\) for \(k \in S^+ \). Further, from (A.7) and (A.9), we establish that \(\frac{\beta _k}{\alpha _k} = \frac{\beta _l}{\alpha _l} = \theta \) for \(k \in S^+\) and \( l \in S^-\). This implies that \(\beta _l = \frac{\gamma \alpha _l}{\sum _{i \in S^+} \alpha _i }\) for \(l \in S^-\). It follows that (3.13) is facet-defining for \({\mathrm{conv}}(C^{1,1})\).
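The linear algebra above can also be verified computationally: for an instance satisfying the hypotheses, the homogeneous system obtained from (A.3)–(A.9) by treating \(\gamma \) as an additional variable has a one-dimensional solution space spanned by the coefficient vector derived above. The sketch below does this for one such instance; the instance, the choice of \(S^+\) and \(S^-\), the value of \(\epsilon \), and the reading of \(I^+, I^-, I^0\) as the index sets of positive, negative and zero \(\alpha _j\) are assumptions made for illustration.

```python
import itertools
import numpy as np

# Sketch: check on one instance that (A.3)-(A.9) determine (beta, beta_0, gamma)
# up to a scalar and that the claimed coefficient vector solves the system.
alpha = np.array([3.0, 2.0, 1.5, -1.0, -2.0, 0.0])
m = len(alpha)
Iplus = [j for j in range(m) if alpha[j] > 0]      # {0, 1, 2}
Iminus = [j for j in range(m) if alpha[j] < 0]     # {3, 4}
Izero = [j for j in range(m) if alpha[j] == 0]     # {5}
Splus, Sminus = [0, 1], [3]                        # S^+ inside I^+, S^- inside I^-
eps = 1e-2
delta = -sum(alpha[j] for j in Iminus if j not in Sminus) / alpha[Splus].sum()

# Variables ordered as (beta_1, ..., beta_m, beta_0, gamma); each equation is
# stored as a row r with <r, variables> = 0 (gamma moved to the left-hand side).
def new_row():
    r = np.zeros(m + 2)
    r[m + 1] = -1.0
    return r

rows = []
r = new_row(); r[Splus] += 1.0; rows.append(r)                        # (A.3)
for k in (j for j in Iplus if j not in Splus):                        # (A.4)
    r = new_row(); r[Splus] += 1.0; r[k] += 1.0; rows.append(r)
for l in (j for j in Iminus if j not in Sminus):                      # (A.5)
    r = new_row(); r[Splus] += 1.0; r[l] += eps; rows.append(r)
for k in Izero:                                                       # (A.6)
    r = new_row(); r[Splus] += 1.0; r[k] += 1.0; rows.append(r)

def row_A7():
    r = new_row(); r[Splus] += delta
    for j in Iminus:
        if j not in Sminus:
            r[j] += 1.0
    r[m] += 1.0                                                       # beta_0
    return r

rows.append(row_A7())                                                 # (A.7)
for k, l in itertools.permutations(Splus, 2):                         # (A.8)
    r = row_A7(); r[k] += eps / alpha[k]; r[l] -= eps / alpha[l]; rows.append(r)
for k in Splus:                                                       # (A.9)
    for l in Sminus:
        r = row_A7(); r[k] += eps / alpha[k]; r[l] -= eps / alpha[l]; rows.append(r)
A = np.array(rows)

# Derived solution, scaled so that gamma = sum_{i in S^+} alpha_i:
# beta_j = alpha_j on S^+ u S^-, beta_j = 0 elsewhere,
# beta_0 = gamma(1 - delta) = sum_{S^+} alpha + sum_{I^- \ S^-} alpha.
v = np.zeros(m + 2)
v[Splus], v[Sminus] = alpha[Splus], alpha[Sminus]
v[m + 1] = alpha[Splus].sum()
v[m] = alpha[Splus].sum() + sum(alpha[j] for j in Iminus if j not in Sminus)

print("derived vector satisfies (A.3)-(A.9):", np.allclose(A @ v, 0.0))
print("solution space is one-dimensional:", np.linalg.matrix_rank(A) == m + 1)
```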

      Conversely, assume that \(\sum _{i \in S^+} \alpha _i + \sum _{i \in I^- \setminus S^-} \alpha _i \le 0\), and that either \(|S^+|+|S^-| \ne 1\) or \(\sum _{i \in S^+} \alpha _i + \sum _{i \in I^-\setminus S^-} \alpha _i \ne 0\). If \(\sum _{i \in S^+} \alpha _i + \sum _{i \in I^- \setminus S^-} \alpha _i < 0\), then (3.13) is satisfied at equality only when \(y_1=0\), showing that the dimension of the face that it induces is less than or equal to \(m-1\). If \(\sum _{i \in S^+} \alpha _i + \sum _{i \in I^-\setminus S^-} \alpha _i =0\), then (3.13) reduces to

      $$\begin{aligned} \sum _{i \in S^+} \alpha _i x_i+ \sum _{i \in S^-} \alpha _i x_i \le \sum _{i \in S^+} \alpha _i. \end{aligned}$$
      (A.10)

      If \(|S^-| \ne 0\), then (A.10) can only be tight if \(x_i =0\) for \(i \in S^-\), showing that (A.10) is facet-defining for \({\mathrm{conv}}(C^{1,1})\) if and only if \(|S^+|=0\) and \(|S^-|=1\). We now assume that \(|S^-| =0\). If \(|S^+| \ge 2\) then (A.10) is not facet-defining since it can be obtained as a conic combination of inequalities \(x_i \le 1\) with positive weights \(\alpha _i\) for \(i \in S^+\). If \(|S^+| = 0\) as well, then (A.10) is trivial and clearly not facet-defining. \(\square \)

Cite this article

Nguyen, T.T., Richard, JP.P. & Tawarmalani, M. Convexification techniques for linear complementarity constraints. J Glob Optim 80, 249–286 (2021). https://doi.org/10.1007/s10898-020-00979-9
