Abstract
We propose implementable conjugate gradient type methods for solving a possibly nondifferentiable convex minimization problem. The original objective function is converted into a once continuously differentiable function by means of the Moreau–Yosida regularization. The proposed methods use approximate function and gradient values of the Moreau–Yosida regularization in place of the corresponding exact values. Global convergence is established under mild conditions.
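To make the smoothing step concrete, here is a minimal sketch of the Moreau–Yosida regularization. The one-dimensional choice f(x) = |x| is an illustrative assumption (it is not an example from the paper); for this f the proximal point is the soft-thresholding operator and the regularized function is the Huber function, which is once continuously differentiable even though f is not.

```python
import math

# Sketch, not the paper's algorithm: for convex f and parameter lam > 0,
# the Moreau-Yosida regularization is
#     F(x) = min_y { f(y) + (1/(2*lam)) * (y - x)^2 },
# and F is once continuously differentiable with gradient
#     F'(x) = (x - p(x)) / lam,
# where p(x) is the proximal point (the minimizer in y).

def prox_abs(x, lam):
    """Proximal point of f(y) = |y| at x: soft-thresholding."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def moreau_env_abs(x, lam):
    """Moreau-Yosida regularization F(x) of f(y) = |y|."""
    p = prox_abs(x, lam)
    return abs(p) + (p - x) ** 2 / (2.0 * lam)

def moreau_grad_abs(x, lam):
    """Gradient of F, computed as (x - p(x)) / lam."""
    return (x - prox_abs(x, lam)) / lam
```

Near the origin F(x) equals the quadratic x²/(2λ) and away from it F(x) = |x| − λ/2, so the kink of |x| is smoothed while the minimizer is preserved; a method can then work with (possibly approximate) values of F and its gradient instead of f.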
Li, Q. Conjugate gradient type methods for the nondifferentiable convex minimization. Optim Lett 7, 533–545 (2013). https://doi.org/10.1007/s11590-011-0437-5