Abstract
The alternating direction method of multipliers (ADMM) is widely used in a variety of areas, and its variants tailored to different application scenarios have been studied extensively in the literature. Among them, the linearized ADMM has received particularly wide attention because of its efficiency and ease of implementation. To theoretically guarantee convergence of the linearized ADMM, the step size for the linearized subproblems, i.e., the reciprocal of the linearization parameter, must be sufficiently small. On the other hand, small step sizes slow down the convergence numerically. It is therefore interesting to determine the optimal (largest) step size that still guarantees convergence of the linearized ADMM; such an analysis is lacking in the literature. In this paper, we provide a rigorous mathematical analysis that identifies this optimal step size and accordingly set up the optimal version of the linearized ADMM in the convex programming context. The global convergence and a worst-case convergence rate measured by the iteration complexity of this optimal version of the linearized ADMM are proved as well.
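As a rough illustration of the setting described above, the following NumPy sketch applies a generic linearized ADMM to a small LASSO-type model. The particular model, the variable ordering, the synthetic data, and the concrete parameter values (beta, mu, the 1.01 factor on r) are assumptions made only for this example; what mirrors the discussion in the paper is the template of replacing the hard x-subproblem by a proximal step whose step size is \(1/(\tau r)\) with \(r > \beta \Vert A^\top A\Vert\), and taking \(\tau\) slightly above the value 0.75 whose optimality is proved here. This is a minimal sketch, not the paper's exact scheme.

```python
# Illustrative sketch (not the paper's exact scheme): linearized ADMM for a
# LASSO-type model  min_{x,y}  mu*||x||_1 + 0.5*||y - c||^2  s.t.  A x - y = 0.
# The x-subproblem is "linearized" by adding 0.5*||x - x^k||_D^2 with
# D = tau*r*I - beta*A^T A, so that the update reduces to soft-thresholding.
# r > beta*||A^T A|| and tau slightly above 0.75 reflect the step-size regime
# discussed in the paper; all remaining details are assumptions of this sketch.
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def linearized_admm_lasso(A, c, mu=0.1, beta=1.0, tau=0.8, iters=500):
    m, n = A.shape
    x, y, lam = np.zeros(n), np.zeros(m), np.zeros(m)
    r = 1.01 * beta * np.linalg.norm(A, 2) ** 2   # r > beta * ||A^T A||

    for _ in range(iters):
        # Linearized x-step: gradient of the smooth augmented-Lagrangian part,
        # followed by a proximal (shrinkage) step with step size 1/(tau*r).
        grad = A.T @ (beta * (A @ x - y) - lam)
        x = soft_threshold(x - grad / (tau * r), mu / (tau * r))

        # Exact y-step (simple quadratic subproblem) and dual update.
        y = (c - lam + beta * (A @ x)) / (1.0 + beta)
        lam = lam - beta * (A @ x - y)
    return x

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[:5] = 1.0
c = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = linearized_admm_lasso(A, c, mu=0.1)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```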
Additional information
This is an improved version of our earlier manuscript “Linearized Alternating Direction Method of Multipliers via Positive-Indefinite Proximal Regularization for Convex Programming”, released on Optimization Online in July 2016 (\(\#\)5569); the present version additionally proves the optimality of 0.75 for the parameter \(\tau\).
B. He: This author was supported by the NSFC Grant 11871029.
F. Ma: This author was supported by the NSFC Grant 11701564.
X. Yuan: This author was supported by the General Research Fund from Hong Kong Research Grants Council: 12300317.
About this article
Cite this article
He, B., Ma, F. & Yuan, X. Optimally linearizing the alternating direction method of multipliers for convex programming. Comput Optim Appl 75, 361–388 (2020). https://doi.org/10.1007/s10589-019-00152-3