Abstract
The augmented Lagrangian method (ALM) is a popular method for solving linearly constrained convex minimization problems, with applications in areas such as compressive sensing and image processing. Recently, accelerated versions of the augmented Lagrangian method (AALM) have been developed, but they assume that the subproblem can be solved exactly. In general, however, the subproblem of the augmented Lagrangian method does not admit a closed-form solution. In this paper, we introduce an inexact version of an accelerated augmented Lagrangian method (I-AALM) with an implementable inexact stopping condition for the subproblem. We also prove that the convergence rate of our method remains the same as that of the accelerated ALM, namely \({{\mathcal {O}}}(\frac{1}{k^2})\), where \(k\) is the iteration number. In a similar manner, we propose an inexact accelerated alternating direction method of multipliers (I-AADMM), an inexact version of an accelerated ADMM. Numerical applications to compressive sensing and image inpainting are presented to validate the effectiveness of the proposed iterative algorithms.
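To fix ideas, here is a minimal sketch of the generic structure such a scheme takes for the equality-constrained problem \(\min _x f(x)\) subject to \(Ax = b\). It is an illustration only, not the algorithm analyzed in the paper: the function name, the gradient-norm test standing in for the paper's implementable stopping condition, the tolerance schedule \(\epsilon _k = \epsilon _0/k^{2.5}\), and the step-size choice are all assumptions made for this sketch.

```python
import numpy as np

def inexact_accelerated_alm(f_grad, A, b, x0, tau=1.0, outer_iters=100,
                            eps0=1e-1, decay=2.5):
    """Sketch of an inexact accelerated ALM for min_x f(x) s.t. Ax = b.

    Hypothetical illustration: the augmented-Lagrangian subproblem
        min_x f(x) + <lam_hat, Ax - b> + (tau/2)||Ax - b||^2
    is solved inexactly by gradient descent, stopped once the gradient
    norm falls below a summable tolerance eps_k = eps0 / k**decay; the
    paper's implementable stopping conditions differ in form.
    """
    m, _ = A.shape
    x = x0.astype(float).copy()
    lam = np.zeros(m)        # multiplier estimate lambda_k
    lam_hat = lam.copy()     # extrapolated multiplier lambda_hat_k
    t = 1.0                  # Nesterov-type parameter t_k
    # crude step size; valid when f has a 1-Lipschitz gradient (assumption)
    lr = 1.0 / (1.0 + tau * np.linalg.norm(A, 2) ** 2)
    for k in range(1, outer_iters + 1):
        eps_k = eps0 / k ** decay
        for _ in range(10000):                     # inexact inner solve
            g = f_grad(x) + A.T @ (lam_hat + tau * (A @ x - b))
            if np.linalg.norm(g) <= eps_k:
                break
            x -= lr * g
        lam_new = lam_hat + tau * (A @ x - b)      # multiplier update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        lam_hat = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)
        lam, t = lam_new, t_new
    return x, lam

# Toy check with f(x) = 0.5 * ||x - c||^2, so f_grad(x) = x - c.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20)); b = rng.standard_normal(5)
c = rng.standard_normal(20)
x, lam = inexact_accelerated_alm(lambda x: x - c, A, b, np.zeros(20))
print(np.linalg.norm(A @ x - b))   # feasibility residual, should be small
```

The extrapolation step on the multiplier mirrors the standard FISTA update; the point of the inexact analysis is that the \({{\mathcal {O}}}(\frac{1}{k^2})\) rate can survive when the inner solves are accurate only up to a suitably decaying tolerance.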
Acknowledgments
Myeongmin Kang was supported by the BK21 PLUS SNU Mathematical Sciences Division. Myungjoo Kang was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2014R1A2A1A10050531) and by MOTIE (10048720). Miyoun Jung was supported by the Hankuk University of Foreign Studies Research Fund.
Appendix
1.1 Proof of Theorem 2
Proof
Let \(h_k = t_k^2(D(\lambda ^*) - D(\lambda _k)) \ge 0\) and \(p_k = \frac{1}{2\tau }\Vert s_k\Vert ^2_2\). By Remark 1 with \(\gamma = \lambda ^*\), we have
From the inexact conditions (32) and (33) of the subproblems in the I-AADMM, we note that
Then, from the inequalities (47)–(48) and by setting \(\epsilon _{-1} = \epsilon _0,\) we obtain
Furthermore, by Lemma 8 and the inequality (48), we have
Combining the inequalities (49) and (50) yields
where \( q_k =\sqrt{2\tau p_1}\,\epsilon _1+ \cdots + \sqrt{2\tau p_k}\,\epsilon _k + \epsilon _{-1}^2 + \cdots + \epsilon _{k-2}^2\). Since \(h_k\ge 0\), the inequality (51) gives
Since \(\frac{1}{2\tau }\Vert \lambda ^* - \hat{\lambda }_1\Vert ^2_2 \ge p_1 -\epsilon _{-1}^2 -\epsilon _1\sqrt{2\tau p_1}\), we obtain the following by the triangle inequality:
This inequality and the triangle inequality lead to
From (52), we have
i.e.,
Consequently, we obtain the following inequalities
where the first inequality follows from (52) and (54), and the last from the triangle inequality.
Summing the inequality (55) over \(j\) from \(2\) to \(k\), we have
where the second inequality follows from (53) and the monotonicity \(\epsilon _j\le \epsilon _{j+1}\). This implies that
Hence, by the arithmetic mean-geometric mean inequality, \(q_k \le 2\tau \bar{\epsilon }_k^2 + 2 \Vert \lambda ^*-\hat{\lambda }_1\Vert _2\,\bar{\epsilon }_k + 2\tilde{\epsilon }_k\).
Since \(h_k \le \frac{1}{2\tau }\Vert \lambda ^*-\hat{\lambda }_1\Vert _2^2 + q_k\), we arrive at the final inequality
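Substituting the bound on \(q_k\) and recalling that \(h_k = t_k^2(D(\lambda ^*) - D(\lambda _k))\), the final estimate reconstructs, under the definitions above, as
\[
D(\lambda ^*) - D(\lambda _k) \;\le\; \frac{1}{t_k^2}\left(\frac{1}{2\tau }\Vert \lambda ^*-\hat{\lambda }_1\Vert _2^2 + 2\tau \bar{\epsilon }_k^2 + 2\Vert \lambda ^*-\hat{\lambda }_1\Vert _2\,\bar{\epsilon }_k + 2\tilde{\epsilon }_k\right),
\]
so that, with \(t_k\) growing like \({{\mathcal {O}}}(k)\) as in standard accelerated schemes, the \({{\mathcal {O}}}(\frac{1}{k^2})\) rate claimed in the abstract follows whenever the accumulated error terms \(\bar{\epsilon }_k\) and \(\tilde{\epsilon }_k\) remain bounded.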
\(\square \)