This paper describes a remarkable new polynomial algorithm for linear programming which has already elicited considerable publicity in the popular press, as well as among the operations research and computer science research communities, because of the striking claims made for its performance. Karmarkar has reported solving several large-scale problems 50 or more times faster than the simplex method, a practically very efficient (although exponential in the worst case) algorithm devised by Dantzig in 1947 [1]. I will comment further on computational issues below.
In its basic form, Karmarkar's algorithm applies to problems of the form: minimize c^T x subject to Ax = 0, Σ_j x_j = 1, x ≥ 0, where it is assumed that A has full row rank, that a strictly positive feasible vector x^0 is known, and that the minimum value is zero. In this case, the algorithm generates a sequence of strictly positive feasible vectors whose objective function values decrease by a constant factor every n iterations, where n is the number of variables.
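To make the iteration concrete, here is a minimal sketch in Python/NumPy of one projective-scaling step on the canonical form above: rescale so the current iterate becomes the centre of the simplex, project the scaled cost onto the feasible directions (a least-squares solve), take a short step from the centre, and map back. The function name, the step length alpha, and the ball-radius step rule are illustrative choices of mine, not details taken from the paper.

    import numpy as np

    def karmarkar_step(A, c, x, alpha=0.25):
        # One projective-scaling step (sketch). Assumes A @ x = 0, x.sum() == 1, x > 0,
        # and that the optimal objective value is zero, as in the canonical form.
        n = len(x)
        D = np.diag(x)                       # rescaling that sends x to the simplex centre e/n
        B = np.vstack([A @ D, np.ones(n)])   # constraints of the transformed problem
        c_hat = D @ c
        # Project the scaled cost onto the null space of B -- the least-squares solve
        # that dominates the cost of each iteration.
        z, *_ = np.linalg.lstsq(B.T, c_hat, rcond=None)
        c_p = c_hat - B.T @ z
        # Step from the centre e/n against the projected cost, staying in the interior;
        # r is the radius of the largest ball inscribed in the simplex.
        r = 1.0 / np.sqrt(n * (n - 1))
        y = np.ones(n) / n - alpha * r * c_p / np.linalg.norm(c_p)
        # Invert the projective transformation to return to the original variables.
        x_new = D @ y
        return x_new / x_new.sum()

The least-squares solve here is the "weighted least squares problem" referred to in the comparison with the ellipsoid method below.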
This provides the polynomial time bound by rounding, as in the analysis by Grötschel, Lovász, and Schrijver [2] of the ellipsoid method. The latter algorithm, originally developed for convex programming by Yudin and Nemirovski in the Soviet Union based on work by Shor, was shown to provide a polynomial algorithm for linear programming by Khachian in 1979 (see, e.g., [3]). There is a fascinating relationship between these two algorithms: while based on very different ideas, both generate a sequence of ellipsoids (in the ellipsoid algorithm containing, and in Karmarkar's method contained in, the feasible region), with each iteration implicitly solving a weighted least squares problem; both are intrinsically infinite iterative methods, though each can be truncated in a polynomial number of steps with rational data; and both use ideas of nonlinear programming more heavily than those of linear programming. The contrast in their significance is also noteworthy. The ellipsoid method appears not to be computationally efficient, but has been used theoretically by Grötschel et al. [2] and others to prove that in a certain sense optimization is equivalent to separation, and thus to establish the computational complexity of several combinatorial optimization problems. On the other hand, Karmarkar's algorithm, with its considerable computational promise, is apparently unable to provide such theoretical results, since it cannot deal with an exponential number of implicit constraints.
The ideas introduced by Karmarkar are also novel. He uses projective transformations to view each iteration as similar to the first (the ellipsoid method uses affine transformations analogously). The performance guarantee is provided by an auxiliary objective function, Karmarkar's potential function, which decreases by a constant at every iteration. The form of this function is reminiscent of logarithmic barrier functions in nonlinear programming.
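The review does not reproduce the potential function itself; its usual statement (an assumption on my part, not quoted from the paper) is f(x) = Σ_j ln(c^T x / x_j) = n ln(c^T x) − Σ_j ln x_j, whose second term is exactly the logarithmic barrier alluded to above. As a one-line sketch, continuing the NumPy setup from the earlier snippet:

    def karmarkar_potential(c, x):
        # f(x) = sum_j ln(c^T x / x_j); Karmarkar shows it drops by a constant per iteration.
        return len(x) * np.log(c @ x) - np.sum(np.log(x))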
If one can solve in polynomial time linear programming problems in the canonical form above, one can similarly solve general problems. However, the transformation described by Karmarkar in Section 5 has a small gap, in that the transformed problem might have an optimal solution with the extra homogenizing variable x_{n+1} = 0. In this case, the projective transformation cannot be inverted, or leads to a point at infinity. This could occur, for instance, if the original problem were: minimize x_1 − x_2 subject to x_1 − x_2 ≥ b_1, −x_1 + x_2 ≥ b_2, x_1, x_2 ≥ 0. The resulting canonical problem has an optimal solution x* = (1/2, 1/2, 0, ..., 0)^T, with corresponding “point at infinity” x_1 = x_2 = ∞, y = u = v = 0, λ = 0. Note that the original problem could have an optimal solution or be infeasible according to whether b_1 + b_2 is nonpositive or positive, respectively. I believe the problem can be fixed by perturbing b and c, but at a cost of an increased complexity bound.
Finally, let me comment briefly on computational experience. A number of researchers have confirmed the very small number of iterations required by Karmarkar's algorithm (growing very slowly with n, if at all; indeed it seems that 30 iterations almost always suffice). However, at present only Karmarkar has been able to perform each iteration (basically a weighted least squares problem) fast enough to provide dramatic speedups compared to the simplex method.