



A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization

Computational Optimization and Applications

Abstract

A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of this algorithm is established for minimizing a general nonlinear objective function with Lipschitz continuous derivatives. For a convex objective function, the algorithm maintains the optimal convergence rate of convex optimization. In the numerical experiments, the algorithm is instantiated with safe-guarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than the corresponding monotone algorithm on the unconstrained optimization problems in the CUTEr (Bongartz et al. in ACM Trans. Math. Softw. 21:123–160, 1995) library.
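The full paper is not reproduced on this page, so the sketch below is only a plausible illustration of the nonmonotone line-search idea the abstract refers to, not the paper's algorithm. It uses the weighted-average reference value of Zhang and Hager [14] in place of the paper's estimate-sequence construction (in Nesterov's sense [13], an estimate sequence is a pair of sequences {φ_k(x)} and {λ_k} with λ_k → 0 and φ_k(x) ≤ (1 − λ_k)f(x) + λ_k φ_0(x) for all x), and plain steepest-descent directions in place of the safe-guarded conjugate gradient directions used in the paper's experiments. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def nonmonotone_descent(f, grad, x0, eta=0.85, delta=1e-4,
                        tol=1e-6, max_iter=5000):
    """Steepest descent with a nonmonotone Armijo-type line search.

    The Armijo reference value C is a weighted average of past
    function values (the Q/C recursion of Zhang & Hager [14]), so
    individual steps may increase f while the averaged value still
    decreases.  Illustrative only: the paper instead builds its
    nonmonotone acceptance rule from Nesterov-style estimate
    sequences and uses safe-guarded CG search directions.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    C, Q = fx, 1.0                      # reference value and its normalizer
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g                          # steepest-descent direction
        gd = float(g @ d)               # directional derivative (< 0)
        a = 1.0
        for _ in range(60):             # backtracking, bounded for safety
            if f(x + a * d) <= C + delta * a * gd:
                break
            a *= 0.5
        x = x + a * d
        fx = f(x)
        # Nonmonotone reference update:
        #   Q_{k+1} = eta*Q_k + 1,  C_{k+1} = (eta*Q_k*C_k + f_{k+1}) / Q_{k+1}
        Q_new = eta * Q + 1.0
        C = (eta * Q * C + fx) / Q_new
        Q = Q_new
    return x, fx

# Example: minimize the two-dimensional Rosenbrock function.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
x_star, f_star = nonmonotone_descent(rosen, rosen_grad, np.array([-1.2, 1.0]))
```

Setting eta = 0 recovers the usual monotone Armijo rule (C_k = f_k), while eta close to 1 makes C_k a long-run average of past function values and so permits occasional ascending steps.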


References

  1. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21, 123–160 (1995)


  2. Dai, Y.H., Zhang, H.: An adaptive two-point stepsize gradient algorithm. Numer. Algorithms 27, 377–385 (2001)


  3. Dai, Y.H., Yuan, Y.Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)


  4. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)


  5. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic optimization. Technical Report, Department of Industrial and Systems Engineering, University of Florida (2013)

  6. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23, 707–716 (1986)


  7. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)


  8. Hager, W.W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17, 526–557 (2006)


  9. Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35–58 (2006)


  10. Hager, W.W., Zhang, H.: The limited memory conjugate gradient method. Technical Report (November 6, 2012)

  11. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)


  12. Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence \(\mathcal {O}(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.


  13. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic, Norwell (2004)


  14. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)



Author information

Correspondence to Hongchao Zhang.

Additional information

This material is based upon work supported by National Science Foundation grant 1016204.


About this article

Cite this article

Zhang, H. A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization. Comput Optim Appl 57, 27–43 (2014). https://doi.org/10.1007/s10589-013-9588-x
