
Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding

  • Published in: Journal of Optimization Theory and Applications

Abstract

We introduce a first-order method for solving very large convex cone programs. The method uses an operator splitting method, the alternating directions method of multipliers, to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available or a certificate of infeasibility or unboundedness otherwise, is parameter free, and the per-iteration cost of the method is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing projection onto the subspace, scaling the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.
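As a concrete illustration of one of the building blocks the abstract mentions, the following is a minimal sketch (not the paper's implementation) of Euclidean projection onto the second-order cone \(\{(t, x) : \Vert x\Vert _2 \le t\}\), using the standard closed-form expression; the function name `proj_soc` is ours.

```python
import numpy as np

def proj_soc(t, x):
    """Euclidean projection of (t, x) onto the second-order cone
    {(t, x) : ||x||_2 <= t}, via the standard closed-form expression."""
    nx = np.linalg.norm(x)
    if nx <= t:                      # already in the cone: unchanged
        return t, x.copy()
    if nx <= -t:                     # in the polar cone: project to the origin
        return 0.0, np.zeros_like(x)
    alpha = (t + nx) / 2.0           # otherwise project onto the boundary
    return alpha, alpha * x / nx

# a point inside the cone is left unchanged
t, x = proj_soc(2.0, np.array([1.0, 1.0]))
# a point in the polar cone maps to zero
t0, x0 = proj_soc(-5.0, np.array([0.0, 0.0]))
```

Projections of this kind (onto the nonnegative, second-order, semidefinite, exponential, and power cones) are the per-iteration cone steps the solver relies on; each is a cheap closed-form or low-dimensional computation.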



Acknowledgments

This research was supported by DARPA's XDATA program under grant FA8750-12-2-0306. N. Parikh was supported by an NSF Graduate Research Fellowship under grant DGE-0645962. The authors thank Wotao Yin for extensive comments and suggestions on an earlier version of this manuscript, and Lieven Vandenberghe for fruitful discussions early on. We also thank the anonymous reviewers for their constructive feedback.

Conflict of interest

The authors declare that they have no conflict of interest.


Corresponding author

Correspondence to Brendan O’Donoghue.

Appendix: Nonexpansivity

In this appendix, we show that the mapping given by one iteration of algorithm (17) is nonexpansive; i.e., denoting this mapping by \(\phi \), we show that

$$\begin{aligned} \Vert \phi (u, v) - \phi (\hat{u}, \hat{v}) \Vert _2 \le \Vert (u ,v) - (\hat{u}, \hat{v}) \Vert _2, \end{aligned}$$

for any \((u, v)\) and \((\hat{u}, \hat{v})\).

From (17) we can write the mapping as the composition of two operators, \(\phi = P \circ L\), where

$$\begin{aligned} P(x) = (\Pi _\mathcal {C}(x), -\Pi _{-\mathcal {C}^*}(x)), \end{aligned}$$

and

$$\begin{aligned} L(u,v) = (I+Q)^{-1}(u+v) - v. \end{aligned}$$

To show that \(\phi \) is nonexpansive, it suffices to show that both P and L are nonexpansive, since the composition of nonexpansive mappings is nonexpansive.

To show that P is nonexpansive, we proceed as follows:

$$\begin{aligned} \Vert x - \hat{x}\Vert _2^2&= \Vert \Pi _\mathcal {C}(x) + \Pi _{-\mathcal {C}^*}(x) - \Pi _\mathcal {C}(\hat{x}) - \Pi _{-\mathcal {C}^*}(\hat{x}) \Vert _2^2 \\&= \Vert \Pi _\mathcal {C}(x)- \Pi _\mathcal {C}(\hat{x})\Vert _2^2 + \Vert \Pi _\mathcal {-C^*}(x)- \Pi _\mathcal {-C^*}(\hat{x})\Vert _2^2 \\&\qquad -2 \Pi _\mathcal {C}(\hat{x})^\mathrm{T} \Pi _\mathcal {-C^*}(x) - 2 \Pi _\mathcal {C}(x)^\mathrm{T} \Pi _\mathcal {-C^*}(\hat{x}) \\&\ge \Vert \Pi _\mathcal {C}(x)- \Pi _\mathcal {C}(\hat{x})\Vert _2^2 + \Vert \Pi _\mathcal {-C^*}(x)- \Pi _\mathcal {-C^*}(\hat{x})\Vert _2^2 \\&= \Vert (\Pi _\mathcal {C}(x)- \Pi _\mathcal {C}(\hat{x}) ) , - (\Pi _\mathcal {-C^*}(x)- \Pi _\mathcal {-C^*}(\hat{x}) )\Vert _2^2 \\&= \Vert P(x) - P(\hat{x}) \Vert _2^2, \end{aligned}$$

where the first equality is from the Moreau decompositions of x and \(\hat{x}\) with respect to the cone \(\mathcal {C}\), the second follows by expanding the norm squared and the fact that \(\Pi _\mathcal {C}(x) \perp \Pi _\mathcal {-C^*}(x)\) for any x, and the inequality follows from \(\Pi _\mathcal {C}(\hat{x})^\mathrm{T} \Pi _\mathcal {-C^*}(x) \le 0\) by the definition of dual cones.
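The Moreau-decomposition argument above is easy to check numerically. The following sketch takes \(\mathcal {C} = \mathbb {R}^n_+\) (which is self-dual, so \(\Pi _\mathcal {C}(x) = \max (x, 0)\) and \(\Pi _{-\mathcal {C}^*}(x) = \min (x, 0)\)) and verifies both the decomposition \(x = \Pi _\mathcal {C}(x) + \Pi _{-\mathcal {C}^*}(x)\) and the nonexpansivity of P on random points; the function name `P` mirrors the operator in the text but the code is our illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def P(x):
    # For C = R^n_+ (self-dual), Pi_C(x) = max(x, 0) and Pi_{-C*}(x) = min(x, 0),
    # so P(x) = (Pi_C(x), -Pi_{-C*}(x)) stacks the positive and (negated) negative parts.
    return np.concatenate([np.maximum(x, 0), -np.minimum(x, 0)])

x, xh = rng.standard_normal(10), rng.standard_normal(10)

# Moreau decomposition: x = Pi_C(x) + Pi_{-C*}(x)
assert np.allclose(np.maximum(x, 0) + np.minimum(x, 0), x)

# nonexpansivity: ||P(x) - P(xh)||_2 <= ||x - xh||_2
assert np.linalg.norm(P(x) - P(xh)) <= np.linalg.norm(x - xh) + 1e-12
```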

Similarly for L we have

$$\begin{aligned} \Vert L(u,v) - L(\hat{u}, \hat{v}) \Vert _2&= \left\| (I+Q)^{-1}(u - \hat{u} +v - \hat{v}) - v + \hat{v}\right\| _2 \\&= \left\| \begin{bmatrix} (I+Q)^{-1}&-(I-(I+Q)^{-1}) \end{bmatrix} (u - \hat{u}, v - \hat{v}) \right\| _2 \\&\le \Vert (u - \hat{u}, v - \hat{v}) \Vert _2 = \Vert (u,v) - (\hat{u}, \hat{v})\Vert _2, \end{aligned}$$

where the inequality can be seen from the fact that

$$\begin{aligned} \begin{bmatrix} (I+Q)^{-1}&-(I-(I+Q)^{-1}) \end{bmatrix} \begin{bmatrix} (I+Q)^{-1}&-(I-(I+Q)^{-1}) \end{bmatrix}^\mathrm{T} = I \end{aligned}$$

by the skew symmetry of Q, and so \(\left\| \begin{bmatrix} (I+Q)^{-1}&-(I-(I+Q)^{-1}) \end{bmatrix}\right\| _2 = 1\).
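The orthogonality identity used in this last step can also be confirmed numerically. The sketch below (ours, for illustration) builds a random skew-symmetric Q, forms the block operator \(M = \begin{bmatrix} (I+Q)^{-1}&-(I-(I+Q)^{-1}) \end{bmatrix}\), and checks that \(M M^\mathrm{T} = I\), so that \(\Vert M\Vert _2 = 1\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
Q = A - A.T                                # skew-symmetric: Q^T = -Q

Inv = np.linalg.inv(np.eye(n) + Q)         # (I + Q)^{-1}
M = np.hstack([Inv, -(np.eye(n) - Inv)])   # the 1x2 block operator in the text

# M M^T = I, hence the spectral norm of M is exactly 1
assert np.allclose(M @ M.T, np.eye(n))
assert np.isclose(np.linalg.norm(M, 2), 1.0)
```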


About this article


Cite this article

O’Donoghue, B., Chu, E., Parikh, N. et al. Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding. J Optim Theory Appl 169, 1042–1068 (2016). https://doi.org/10.1007/s10957-016-0892-3

