Abstract
In this paper we study the complexity of solving linear programs in finite precision arithmetic. This is the usual setting in scientific computation, since digital computers work in finite precision. We analyze two aspects of complexity: the number of arithmetic operations required to solve the problem approximately, and the working precision required to carry out certain critical computations safely. We show how the "conditioning" of the problem instance affects both the working precision and the computational effort required by a classical logarithmic barrier algorithm to approximate the optimal value of the problem within a given tolerance. Our results show that these complexity measures depend linearly on the logarithm of a certain condition measure. We carry out the analysis by studying how well Newton's method can follow the central trajectory of the feasible set, and by computing error bounds in terms of the condition measure. These results can be interpreted as a theoretical indication of "good" numerical behavior of the logarithmic barrier method, in the sense that a problem instance "twice as hard" as another from the numerical point of view requires at most twice as much precision to solve. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
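To fix ideas, here is a minimal sketch of the classical logarithmic barrier setup the abstract refers to, written in standard notation that need not match the paper's. For a linear program $\min\{c^{T}x : Ax \le b\}$ with constraint rows $a_i^{T}x \le b_i$, $i = 1,\dots,m$, the barrier subproblem with parameter $\mu > 0$ is
$$ \min_{x} \; f_{\mu}(x) \;=\; c^{T}x \;-\; \mu \sum_{i=1}^{m} \log\bigl(b_i - a_i^{T}x\bigr), $$
whose minimizers $x(\mu)$ trace the central trajectory of the feasible region as $\mu \downarrow 0$. A path-following method reduces $\mu$ gradually and, at each value, takes one or more (damped) Newton steps
$$ x^{+} \;=\; x \;-\; \bigl(\nabla^{2} f_{\mu}(x)\bigr)^{-1} \nabla f_{\mu}(x). $$
In this setting, the paper bounds both the number of such iterations needed to approximate the optimal value within a given tolerance and the number of bits of working precision needed to form and solve the Newton systems safely, with both bounds growing only linearly in the logarithm of the condition measure of the instance.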
Additional information
This research has been supported through grants from Fundación Andes, under agreement C12021/7, and FONDECYT (project number 1930948).
Cite this article
Vera, J.R. On the complexity of linear programming under finite precision arithmetic. Mathematical Programming 80, 91–123 (1998). https://doi.org/10.1007/BF01582132