
Asynchronous parallel primal–dual block coordinate update methods for affinely constrained convex programs

Computational Optimization and Applications

Abstract

The past several years have witnessed a surge of asynchronous (async-) parallel computing methods, driven by the extremely large datasets involved in many modern applications and by the advancement of multi-core machines and computer clusters. In optimization, most existing work on async-parallel methods addresses unconstrained problems or problems with block-separable constraints. In this paper, we propose an async-parallel method based on block coordinate update (BCU) for solving convex problems with a nonseparable linear constraint. Running on a single node, the method becomes a novel randomized primal–dual BCU for multi-block affinely constrained problems. For these problems, cyclic (Gauss–Seidel) primal–dual BCU is not guaranteed to converge to an optimal solution unless additional assumptions, such as strong convexity, are made. In contrast, assuming only convexity and the existence of a primal–dual solution, we show that the objective value sequence generated by the proposed algorithm converges in probability to the optimal value and that the constraint residual converges in probability to zero. In addition, we establish an ergodic O(1/k) convergence result, where k is the number of iterations. Numerical experiments demonstrate the efficiency of the proposed method and its significantly better speed-up over its sync-parallel counterpart.
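
To make the single-node specialization concrete, below is a minimal serial sketch of one randomized primal–dual BCU pass for \(\min_{\mathbf{x}} \sum_i f_i(x_i)\) subject to \(\sum_i A_i x_i = b\), assuming smooth blocks \(f_i\). The function name, the linearized augmented-Lagrangian gradient step, and the damped dual step \(\rho/m\) are illustrative assumptions for exposition, not the exact update rule or step-size conditions analyzed in the paper.

    import numpy as np

    def randomized_pd_bcu(grads, As, b, x0, rho=1.0, eta=0.1, iters=5000, seed=0):
        """grads[i](xi) returns the gradient of the block function f_i at xi."""
        rng = np.random.default_rng(seed)
        x = [np.asarray(xi, dtype=float).copy() for xi in x0]
        y = np.zeros_like(b, dtype=float)              # Lagrange multiplier estimate
        m = len(x)
        r = sum(As[i] @ x[i] for i in range(m)) - b    # running residual A x - b
        for _ in range(iters):
            i = rng.integers(m)                        # sample one block uniformly
            # gradient step on the augmented Lagrangian w.r.t. block i only
            g = grads[i](x[i]) + As[i].T @ (y + rho * r)
            step = -eta * g
            r = r + As[i] @ step                       # maintain the residual incrementally
            x[i] = x[i] + step
            y = y + (rho / m) * r                      # damped dual ascent step
        return x, y

The async-parallel method of the paper runs such block updates concurrently, with each agent reading possibly stale copies of \(\mathbf{x}\) and \(y\); the sketch above corresponds to the zero-delay, single-node case.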


Notes

  1. In [32], a linear convergence result is established under a strong monotonicity assumption, which plays a role similar to strong convexity in optimization (see the definition after these notes).

  2. Each epoch is equivalent to updating \(m\) \({\mathbf {x}}\)-blocks.

  3. The data can be downloaded from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
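
For note 1, the standard definitions being compared there (textbook material, stated for completeness): an operator \(T\) is strongly monotone with modulus \(\mu > 0\) if

\[
\langle T(u) - T(v),\, u - v \rangle \;\ge\; \mu \Vert u - v\Vert^2 \quad \text{for all } u, v,
\]

and a closed convex function \(f\) is strongly convex with modulus \(\mu\) exactly when its subdifferential \(\partial f\) is \(\mu\)-strongly monotone, which is the sense in which the two assumptions parallel each other.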

References

  1. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods, vol. 23. Prentice Hall, Englewood Cliffs (1989)

  2. Bianchi, P., Hachem, W., Iutzeler, F.: A coordinate descent primal–dual algorithm and application to distributed asynchronous optimization. IEEE Trans. Autom. Control 61(10), 2947–2957 (2016)

  3. Cai, X., Han, D., Yuan, X.: On the convergence of the direct extension of ADMM for three-block separable convex minimization models with one strongly convex function. Comput. Optim. Appl. 66(1), 39–73 (2017)

  4. Chang, T.-H., Hong, M., Liao, W.-C., Wang, X.: Asynchronous distributed ADMM for large-scale optimization—part I: algorithm and convergence analysis. IEEE Trans. Signal Process. 64(12), 3118–3130 (2016)

  5. Chang, T.-H., Liao, W.-C., Hong, M., Wang, X.: Asynchronous distributed ADMM for large-scale optimization—part II: linear convergence analysis and numerical performance. IEEE Trans. Signal Process. 64(12), 3131–3144 (2016)

  6. Chazan, D., Miranker, W.: Chaotic relaxation. Linear Algebra Appl. 2(2), 199–222 (1969)

  7. Chen, C., He, B., Ye, Y., Yuan, X.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program. 155(1–2), 57–79 (2016)

  8. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)

  9. Combettes, P.L., Eckstein, J.: Asynchronous block-iterative primal–dual decomposition methods for monotone inclusions. Math. Program. 168(1–2), 645–672 (2018)

  10. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)

  11. Davis, D., Yin, W.: A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 25(4), 829–858 (2017)

  12. Deng, W., Lai, M.-J., Peng, Z., Yin, W.: Parallel multi-block ADMM with \(o(1/k)\) convergence. J. Sci. Comput. 71(2), 712–736 (2017)

  13. Feng, J.-K., Zhang, H.-B., Cheng, C.-Z., Pei, H.-M.: Convergence analysis of L-ADMM for multi-block linear-constrained separable convex minimization problem. J. Oper. Res. Soc. China 3(4), 563–579 (2015)

  14. Gao, X., Xu, Y., Zhang, S.: Randomized primal–dual proximal block coordinate updates. arXiv preprint arXiv:1605.05969 (2016)

  15. Gao, X., Zhang, S.: First-order algorithms for convex optimization with nonseparable objective and coupled constraints. J. Oper. Res. Soc. China 5(2), 131–159 (2017)

  16. Han, D., Yuan, X.: A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 155(1), 227–238 (2012)

  17. He, B., Hou, L., Yuan, X.: On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming. SIAM J. Optim. 25(4), 2274–2312 (2015)

  18. He, B., Tao, M., Yuan, X.: Alternating direction method with Gaussian back substitution for separable convex programming. SIAM J. Optim. 22(2), 313–340 (2012)

  19. He, B., Tao, M., Yuan, X.: Convergence rate analysis for the alternating direction method of multipliers with a substitution procedure for separable convex programming. Math. Oper. Res. 42(3), 662–691 (2017)

  20. Hong, M., Chang, T.-H., Wang, X., Razaviyayn, M., Ma, S., Luo, Z.-Q.: A block successive upper bound minimization method of multipliers for linearly constrained convex optimization. arXiv preprint arXiv:1401.7079 (2014)

  21. Hong, M., Wang, X., Razaviyayn, M., Luo, Z.-Q.: Iteration complexity analysis of block coordinate descent methods. Math. Program. 163(1–2), 85–114 (2017)

  22. James, G.M., Paulson, C., Rusmevichientong, P.: Penalized and Constrained Regression. Technical report (2013)

  23. Li, M., Sun, D., Toh, K.-C.: A convergent 3-block semi-proximal ADMM for convex minimization problems with one strongly convex block. Asia-Pac. J. Oper. Res. 32(04), 1550024 (2015)

  24. Li, X., Sun, D., Toh, K.-C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155(1–2), 333–373 (2016)

  25. Lin, T., Ma, S., Zhang, S.: On the global linear convergence of the ADMM with multiblock variables. SIAM J. Optim. 25(3), 1478–1497 (2015)

  26. Lin, T., Ma, S., Zhang, S.: On the sublinear convergence rate of multi-block ADMM. J. Oper. Res. Soc. China 3(3), 251–274 (2015)

  27. Liu, J., Wright, S.J.: Asynchronous stochastic coordinate descent: parallelism and convergence properties. SIAM J. Optim. 25(1), 351–376 (2015)

  28. Liu, J., Wright, S.J., Ré, C., Bittorf, V., Sridhar, S.: An asynchronous parallel stochastic coordinate descent algorithm. J. Mach. Learn. Res. 16, 285–322 (2015)

  29. Markowitz, H.: Portfolio selection. J. Finance 7(1), 77–91 (1952)

  30. Nesterov, Y.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22(2), 341–362 (2012)

  31. Peng, Z., Wu, T., Xu, Y., Yan, M., Yin, W.: Coordinate friendly structures, algorithms and applications. Ann. Math. Sci. Appl. 1(1), 57–119 (2016)

  32. Peng, Z., Xu, Y., Yan, M., Yin, W.: ARock: an algorithmic framework for asynchronous parallel coordinate updates. SIAM J. Sci. Comput. 38(5), A2851–A2879 (2016)

  33. Peng, Z., Xu, Y., Yan, M., Yin, W.: On the convergence of asynchronous parallel iteration with arbitrary delays. arXiv preprint arXiv:1612.04425 (2016)

  34. Razaviyayn, M., Hong, M., Luo, Z.-Q.: A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM J. Optim. 23(2), 1126–1153 (2013)

  35. Recht, B., Re, C., Wright, S., Niu, F.: Hogwild!: a lock-free approach to parallelizing stochastic gradient descent. In: Advances in Neural Information Processing Systems, pp. 693–701 (2011)

  36. Richtárik, P., Takáč, M.: Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. 144(1–2), 1–38 (2014)

  37. Sun, D., Toh, K.-C., Yang, L.: A convergent 3-block semiproximal alternating direction method of multipliers for conic programming with 4-type constraints. SIAM J. Optim. 25(2), 882–915 (2015)

  38. Sun, R., Luo, Z.-Q., Ye, Y.: On the expected convergence of randomly permuted ADMM. arXiv preprint arXiv:1503.06387 (2015)

  39. Tseng, P.: On the rate of convergence of a partially asynchronous gradient projection algorithm. SIAM J. Optim. 1(4), 603–619 (1991)

  40. Tseng, P.: Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109(3), 475–494 (2001)

  41. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117(1–2), 387–423 (2009)

  42. Wei, E., Ozdaglar, A.: On the \(o(1/k)\) convergence of asynchronous distributed alternating direction method of multipliers. In: IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 551–554 (2013)

  43. White House: Big Data: Seizing Opportunities, Preserving Values (2014)

  44. Xu, Y.: Hybrid Jacobian and Gauss–Seidel proximal block coordinate update methods for linearly constrained convex programming. SIAM J. Optim. 28(1), 646–670 (2018)

  45. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6(3), 1758–1789 (2013)

  46. Xu, Y., Yin, W.: A globally convergent algorithm for nonconvex optimization based on block coordinate update. J. Sci. Comput. 72(2), 700–734 (2017)

  47. Zeng, Z.-Q., Yu, H.-B., Xu, H.-R., Xie, Y.-Q., Gao, J.: Fast training support vector machines using parallel sequential minimal optimization. In: 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE 2008), vol. 1, pp. 997–1001. IEEE (2008)

  48. Zhang, R., Kwok, J.: Asynchronous distributed ADMM for consensus optimization. In: International Conference on Machine Learning, pp. 1701–1709 (2014)

  49. Zhang, Y., Yang, J., Yin, W.: YALL1: your algorithms for \(l_1\). Online at yall1.blogs.rice.edu (2011)

Author information

Corresponding author

Correspondence to Yangyang Xu.

Additional information

This work is partly supported by NSF Grant DMS-1719549.

About this article

Cite this article

Xu, Y. Asynchronous parallel primal–dual block coordinate update methods for affinely constrained convex programs. Comput Optim Appl 72, 87–113 (2019). https://doi.org/10.1007/s10589-018-0037-8
