Abstract.
In this paper, we introduce the notion of a self-regular function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy a polynomial \(O\!\left(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\right)\) iteration bound, where \(q\ge 1\) is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the \(O\)-symbol depends on \(q\) and on the growth degree \(p\ge 1\) of the kernel function. When the kernel function is chosen appropriately, the new large-update IPMs have a polynomial \(O\!\left(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\right)\) iteration bound, thus improving the currently best known bound for large-update methods by almost a factor \(\sqrt{n}\). Our unified analysis also provides the best known \(O\!\left(\sqrt{n}\,\log\frac{n}{\varepsilon}\right)\) iteration bound for small-update IPMs. At each iteration, only one linear system needs to be solved. An extension of the above results to semidefinite optimization (SDO) is also presented.
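For orientation, here is a minimal illustrative example; it is not taken from the abstract, and the scaling \(v_i=\sqrt{x_i s_i/\mu}\) used below is the standard primal-dual scaling, assumed here for concreteness. The kernel
\[
  \psi(t)=\frac{t^{2}-1}{2}-\log t,\qquad t>0,
\]
which is closely related to the classical logarithmic barrier, may be viewed as the simplest candidate for a self-regular function, with growth degree \(p=1\) and barrier degree \(q=1\). Indeed, \(\psi(1)=\psi'(1)=0\) and \(\psi''(t)=1+t^{-2}\ge 1\), so \(\psi\) is strongly convex on \((0,\infty)\) and tends to infinity both as \(t\downarrow 0\) and as \(t\to\infty\). The induced proximity function is then the separable sum
\[
  \Psi(v)=\sum_{i=1}^{n}\psi(v_i),
\]
and, in this framework, the corresponding search direction is obtained from a single scaled Newton system driven by \(-\nabla\Psi(v)\).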
Received: March 2000 / Accepted: December 2001 / Published online: April 12, 2002
Cite this article
Peng, J., Roos, C. & Terlaky, T. Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 93, 129–171 (2002). https://doi.org/10.1007/s101070200296