
Preconditioning Newton–Krylov methods in nonconvex large scale optimization

Computational Optimization and Applications

Abstract

We consider an iterative preconditioning technique for nonconvex large scale optimization. First, we address the solution of large scale indefinite linear systems by a Krylov subspace method, and describe the iterative construction of a preconditioner that requires neither matrix products nor matrix storage. The set of directions generated by the Krylov subspace method is used, as a by-product, to provide an approximate inverse preconditioner. Then, we test our preconditioner within truncated Newton schemes for large scale unconstrained optimization, where we also generalize the truncation rule of Nash and Sofer (Oper. Res. Lett. 9:219–221, 1990) to the indefinite case. We use a Krylov subspace method both to approximately solve the Newton equation and to construct the preconditioner used at the current outer iteration. Extensive numerical experiments show that the proposed preconditioning strategy, compared with the unpreconditioned strategy and with PREQN (Morales and Nocedal in SIAM J. Optim. 10:1079–1096, 2000), may reduce the overall number of inner iterations. Finally, we show that our proposal shares some similarities with the limited memory preconditioners of Gratton et al. (SIAM J. Optim. 21:912–935, 2011).
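
For orientation, the following is a minimal sketch of the kind of CG-based truncated Newton iteration the abstract describes, under standard assumptions from the truncated Newton literature (cf. [4, 23]). The names (`truncated_newton`, `cg_truncated`, `m_inv`) are illustrative; the `m_inv` hook only marks where an approximate inverse preconditioner, such as the one proposed in the paper, would be applied, and the negative curvature fallback is a crude placeholder, not the authors' treatment of the indefinite case.

```python
import numpy as np

def cg_truncated(hess_prod, b, tol=1e-2, max_iter=None, m_inv=None):
    """Truncated, optionally preconditioned CG for H x = b.

    hess_prod: callable v -> H v (Hessian-vector product)
    m_inv:     optional callable r -> M^{-1} r (approximate inverse
               preconditioner); None means no preconditioning
    """
    n = b.shape[0]
    max_iter = max_iter if max_iter is not None else n
    x = np.zeros(n)
    r = b.copy()                        # residual b - H x at x = 0
    z = m_inv(r) if m_inv else r.copy()
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for k in range(max_iter):
        hp = hess_prod(p)
        php = p @ hp
        if php <= 0.0:                  # negative curvature detected
            if k == 0:                  # crude fallback: (preconditioned) steepest descent
                x = z.copy()
            break
        alpha = rz / php
        x += alpha * p
        r -= alpha * hp
        if np.linalg.norm(r) <= tol * b_norm:   # truncation test
            break
        z = m_inv(r) if m_inv else r.copy()
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def truncated_newton(f, grad, hess_vec, x0, max_outer=100, grad_tol=1e-6):
    """Outer loop: solve H(x) d = -g(x) inexactly, then Armijo backtracking."""
    x = x0.astype(float).copy()
    for _ in range(max_outer):
        g = grad(x)
        if np.linalg.norm(g) <= grad_tol:
            break
        d = cg_truncated(lambda v: hess_vec(x, v), -g)
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
    return x

# Quick check on a strictly convex quadratic f(x) = 1/2 x'Ax - c'x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = rng.standard_normal((50, 50))
    a = m @ m.T + 50.0 * np.eye(50)     # SPD, so the problem is convex
    c = rng.standard_normal(50)
    x_star = truncated_newton(lambda x: 0.5 * x @ a @ x - c @ x,
                              lambda x: a @ x - c,
                              lambda x, v: a @ v,
                              np.zeros(50))
    print(np.linalg.norm(a @ x_star - c))   # ~grad_tol or below
```

In the paper's scheme, the same Krylov iterations both approximately solve the Newton equation and supply the information from which the preconditioner (the `m_inv` hook above) is assembled.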


Notes

  1. We remark that the numerical results reported in [21] differ considerably from the results obtained using PREQN within our truncated Newton scheme, suggesting that the two optimization frameworks are different and not immediately comparable.
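
For context, PREQN [20, 21] builds its preconditioner from quasi-Newton pairs \((s, y)\) collected during the CG iterations and applies it via the standard L-BFGS two-loop recursion. The sketch below shows only that application step, assuming a given list of pairs with \(y^{\top}s > 0\); PREQN's pair collection and discarding policy, which is the heart of the method, is not reproduced here.

```python
import numpy as np

def apply_lbfgs_inverse(q, pairs):
    """Apply an L-BFGS-style approximate inverse Hessian, defined by a
    list of (s, y) pairs with y's > 0, to a vector q via the standard
    two-loop recursion."""
    q = q.copy()
    alphas, rhos = [], []
    for s, y in reversed(pairs):          # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
        rhos.append(rho)
    if pairs:                             # initial scaling H0 = gamma * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), alpha, rho in zip(pairs, reversed(alphas), reversed(rhos)):
        beta = rho * (y @ q)              # oldest pair first
        q += (alpha - beta) * s
    return q

# Usage inside a preconditioned CG loop: z = apply_lbfgs_inverse(r, pairs)
```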

References

  1. Baglama, J., Calvetti, D., Golub, G.H., Reichel, L.: Adaptively preconditioned GMRES algorithms. SIAM J. Sci. Comput. 20, 243–269 (1998)

  2. Bernstein, D.S.: Matrix Mathematics, 2nd edn. Princeton University Press, Princeton (2009)

  3. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust-Region Methods. MPS–SIAM Series on Optimization. SIAM, Philadelphia (2000)

  4. Dembo, R.S., Steihaug, T.: Truncated-Newton algorithms for large-scale unconstrained optimization. Math. Program. 26, 190–212 (1983)

  5. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  6. Erhel, J., Burrage, K., Pohl, B.: Restarted GMRES preconditioned by deflation. J. Comput. Appl. Math. 69, 303–318 (1996)

  7. Fasano, G.: Planar-conjugate gradient algorithm for large-scale unconstrained optimization, Part 1: Theory. J. Optim. Theory Appl. 125, 523–541 (2005)

  8. Fasano, G.: Planar-conjugate gradient algorithm for large-scale unconstrained optimization, Part 2: Application. J. Optim. Theory Appl. 125, 543–558 (2005)

  9. Fasano, G.: Lanczos-conjugate gradient method and pseudoinverse computation in unconstrained optimization. J. Optim. Theory Appl. 132, 267–285 (2006)

  10. Fasano, G., Roma, M.: Iterative computation of negative curvature directions in large scale optimization. Comput. Optim. Appl. 38, 81–104 (2007)

  11. Gill, P.E., Murray, W., Ponceleon, D.B., Saunders, M.A.: Preconditioners for indefinite systems arising in optimization. SIAM J. Matrix Anal. Appl. 13, 292–311 (1992)

  12. Giraud, L., Gratton, S.: On the sensitivity of some spectral preconditioners. SIAM J. Matrix Anal. Appl. 27, 1089–1105 (2006)

  13. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. The Johns Hopkins University Press, Baltimore (1996)

  14. Gould, N.I.M., Lucidi, S., Roma, M., Toint, Ph.L.: Exploiting negative curvature directions in linesearch methods for unconstrained optimization. Optim. Methods Softw. 14, 75–98 (2000)

  15. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)

  16. Gratton, S., Sartenaer, A., Tshimanga, J.: On a class of limited memory preconditioners for large scale linear systems with multiple right-hand sides. SIAM J. Optim. 21, 912–935 (2011)

  17. Grippo, L., Lampariello, F., Lucidi, S.: A truncated Newton method with nonmonotone linesearch for unconstrained optimization. J. Optim. Theory Appl. 60, 401–419 (1989)

  18. Kelley, C.T.: Iterative Methods for Optimization. SIAM Frontiers in Applied Mathematics. SIAM, Philadelphia (1999)

  19. Loghin, D., Ruiz, D., Touhami, A.: Adaptive preconditioners for nonlinear systems of equations. J. Comput. Appl. Math. 189, 362–374 (2006)

  20. Morales, J.L., Nocedal, J.: Algorithm PREQN: Fortran 77 subroutines for preconditioning the conjugate gradient method. ACM Trans. Math. Softw. 27, 83–91 (2001)

  21. Morales, J.L., Nocedal, J.: Automatic preconditioning by limited memory quasi-Newton updating. SIAM J. Optim. 10, 1079–1096 (2000)

  22. Nash, S.G.: Preconditioning of truncated-Newton methods. SIAM J. Sci. Stat. Comput. 6, 599–616 (1985)

  23. Nash, S.G.: A survey of truncated-Newton methods. J. Comput. Appl. Math. 124, 45–59 (2000)

  24. Nash, S.G., Sofer, A.: Assessing a search direction within a truncated-Newton method. Oper. Res. Lett. 9, 219–221 (1990)

  25. Nocedal, J.: Large scale unconstrained optimization. In: Watson, A., Duff, I. (eds.) The State of the Art in Numerical Analysis, pp. 311–338. Oxford University Press, Oxford (1997)

  26. Roma, M.: A dynamic scaling based preconditioning for truncated Newton methods in large scale unconstrained optimization. Optim. Methods Softw. 20, 693–713 (2005)

  27. Stoer, J.: Solution of large linear systems of equations by conjugate gradient type methods. In: Bachem, A., Grötschel, M., Korte, B. (eds.) Mathematical Programming: The State of the Art, pp. 540–565. Springer, Berlin (1983)

  28. Tshimanga, J., Gratton, S., Weaver, A.T., Sartenaer, A.: Limited-memory preconditioners with applications to incremental four-dimensional variational data assimilation. Q. J. R. Meteorol. Soc. 134, 751–769 (2008)


Acknowledgements

The authors gratefully acknowledge the support of the national research program PRIN 20079PLLN7 “Nonlinear Optimization, Variational Inequalities, and Equilibrium Problems”. The first author also acknowledges the support of the CNR-INSEAN (Italian Ship Model Basin) research program “RITMARE”.

Author information

Correspondence to Massimo Roma.

Appendix

Proof of Proposition 5.2
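
Here \(Q(\cdot)\) denotes the quadratic model of the objective; we recall its standard form in truncated Newton methods (cf. [4, 24]), which we assume agrees with the paper's notation, with \(g\) and \(H\) the gradient and Hessian at the current iterate:

\[
Q(s) \;=\; \frac{1}{2}\, s^{\top} H s \;+\; g^{\top} s .
\]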

From the definition of \(M_{h}^{-1}\), we have

(A.1)

From the latter relation, if \(\tilde{a}_{1}=1\) in (5.4), then the directions \(s_{h}\) and \(s_{2}^{\scriptscriptstyle PR}\) coincide. Now, by definition we have

(A.2)

Moreover, from (A.1)

thus, we have also

Furthermore, observe that \(Q({s}_{2}^{\scriptscriptstyle PR}) \leq Q(s_{h})\) if and only if

or equivalently

(A.3)

To prove the latter relation we separately consider two cases: the case \(\sum_{i=1}^{h} a_{i} \|r_{i}\|^{2} > 0\) and the case \(\sum_{i=1}^{h} a_{i} \|r_{i}\|^{2} < 0\). In the first case relation (A.3) holds if and only if

or equivalently if and only if

and the latter inequality holds since

In the second case the relation (A.3) is equivalent to

which holds since

This finally proves that

 □


Cite this article

Fasano, G., Roma, M. Preconditioning Newton–Krylov methods in nonconvex large scale optimization. Comput Optim Appl 56, 253–290 (2013). https://doi.org/10.1007/s10589-013-9563-6

