Abstract
Two extreme techniques when choosing a search direction in a linearly constrained optimization calculation are to take account of all the constraints or to use an active set method that satisfies selected constraints as equations, the remaining constraints being ignored. We prefer an intermediate method that treats all inequality constraints with “small” residuals as inequalities with zero right hand sides and that disregards the other inequality conditions. Thus the step along the search direction is not restricted by any constraints with small residuals, which can help efficiency greatly, particularly when some constraints are nearly degenerate. We study the implementation, convergence properties and performance of an algorithm that employs this idea. The implementation considerations include the choice and automatic adjustment of the tolerance that defines the “small” residuals, the calculation of the search directions, and the updating of second derivative approximations. The main convergence theorem imposes no conditions on the constraints except for boundedness of the feasible region. The numerical results indicate that a Fortran implementation of our algorithm is much more reliable than the software that was tested by Hock and Schittkowski (1981). Therefore the algorithm seems to be very suitable for general use, and it is particularly appropriate for semi-infinite programming calculations that have many linear constraints that come from discretizations of continua.
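As a rough illustration of the idea described in the abstract (a sketch, not taken from the paper itself), the following Python fragment assumes the linear inequality constraints are written A x ≤ b and that the search direction d already satisfies aᵢᵀd ≤ 0 for every constraint whose residual is at most the tolerance. Under that assumption those "small residual" constraints never restrict the step length, so only the remaining constraints enter the step bound. All function names and the example data are illustrative.

```python
import numpy as np

def partition_constraints(A, b, x, tol):
    """Split the constraints A @ x <= b into those whose residuals b - A @ x
    are "small" (at most tol) and the rest.  Residuals are assumed
    nonnegative because x is feasible."""
    residuals = b - A @ x
    small = residuals <= tol          # treated as inequalities with zero right hand side
    return small, ~small

def max_step(A, b, x, d, tol):
    """Largest step alpha >= 0 along d that keeps the constraints with
    "large" residuals satisfied.  Constraints with small residuals are
    assumed to satisfy A[i] @ d <= 0, so they never limit the step."""
    small, large = partition_constraints(A, b, x, tol)
    Ad = A[large] @ d
    slack = b[large] - A[large] @ x
    blocking = Ad > 0                 # only rows whose slack shrinks along d can block
    if not np.any(blocking):
        return np.inf
    return np.min(slack[blocking] / Ad[blocking])

# Example: the first constraint is nearly active (residual 1e-9 <= tol), so it is
# treated with zero right hand side and does not limit the step; the bound comes
# from the second constraint only.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 2.0])
x = np.array([1.0 - 1e-9, 0.0])
d = np.array([-1.0, 1.0])             # satisfies A[0] @ d <= 0 for the near-active row
print(max_step(A, b, x, d, tol=1e-6)) # prints 2.0
```

This mirrors why nearly degenerate constraints with small residuals cannot force very short steps: the direction keeps them satisfied by construction, and the step bound is computed only from the well-separated constraints.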
References
J. Bracken and G.P. McCormick, Selected Applications of Nonlinear Programming (Wiley, New York, 1968).
R. Fletcher, Practical Methods of Optimization (Wiley, Chichester, 1988).
P.E. Gill, W. Murray and M.H. Wright, Practical Optimization (Academic Press, New York, 1981).
D. Goldfarb and A. Idnani, “A numerically stable dual method for solving strictly convex quadratic programs,” Mathematical Programming 27 (1983) 1–33.
W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 187 (Springer, Berlin, 1981).
E. Polak, Computational Methods in Optimization: A Unified Approach (Academic Press, New York, 1971).
M.J.D. Powell, “Updating conjugate directions by the BFGS formula,” Mathematical Programming 38 (1987) 29–46.
M.J.D. Powell, “On a matrix factorization for linearly constrained optimization problems,” Report DAMTP/1988/NA9, University of Cambridge (Cambridge, UK, 1988).
M.J.D. Powell, “TOLMIN: a Fortran package for linearly constrained optimization calculations,” Report DAMTP/1989/NA2, University of Cambridge (Cambridge, UK, 1989).
P. Wolfe, “Convergence conditions for ascent methods,” SIAM Review 11 (1969) 226–235.
Cite this article
Powell, M.J.D. A tolerant algorithm for linearly constrained optimization calculations. Mathematical Programming 45, 547–566 (1989). https://doi.org/10.1007/BF01589118