Abstract
The focus of this work is solving a non-smooth constrained minimization problem with a primal-dual splitting algorithm involving proximity operators. The problem is penalized by the Bregman divergence associated with the non-smooth total variation (TV) functional. We analyze two aspects: first, the convergence of the regularized solution of the minimization problem to the minimum norm solution; second, the convergence of the iteratively regularized minimizer produced by a primal-dual algorithm to the minimum norm solution. For both aspects, we assume a variational source condition (VSC). This work emphasizes the impact of the parameter choices on the stabilization of a primal-dual algorithm. Rates of convergence are obtained in terms of a concave, positive definite index function. The algorithm is applied to a simple two-dimensional image processing problem. Error analysis profiles are provided based on the size of the forward operator and the noise level in the measurement.
Acknowledgements
The author is indebted to Ignace Loris for fruitful discussions throughout the development of this work. Furthermore, the author is grateful to Maria A. Gonzalez-Huici and David Mateos-Nunez for their encouragement and support in finalizing the work.
The major part of this work was done within the framework of an ARC grant at the Université Libre de Bruxelles during the author's postdoctoral research period, 2017–2019.
Appendices
Appendix A: VSC as upper bound for the Bregman distance
The total error estimate can also be stabilized under the following assumption, which has been derived in the literature listed in Section 4.2. Therefore, for the stabilization of E, we seek a stable upper bound for the Bregman distance (1.1).
Assumption A.1
[Variational Source Condition] Let \(T : \mathcal {X} \rightarrow \mathcal {Y}\) be a linear, injective forward operator and v†∈range(T). There exist a constant σ ∈ (0, 1] and a concave, monotonically increasing index function \({\Psi } : [0 , \infty ) \rightarrow [0 , \infty )\) with Ψ(0) = 0 such that, for \(q^{\dagger } \in \partial \mathcal {J}(u^{\dagger })\), the minimum norm solution u†∈BV(Ω) satisfies

$$\sigma D_{\mathcal{J}}^{q^{\dagger}}(u, u^{\dagger}) \leq \mathcal{J}(u) - \mathcal{J}(u^{\dagger}) + {\Psi}\left(\Vert T u - v^{\dagger} \Vert_{\mathcal{Y}}\right) \quad \text{for all } u \in \mathrm{BV}({\Omega}).$$
Recall that the quantitative stability analysis in the continuous mathematical setting aims to find an upper bound for the total error estimation functional E in (4.4). According to (1.1), finding a stable upper bound for the Bregman distance between the regularized minimizer \({u}_{\alpha }^{\delta }\) and the minimum norm solution u† yields one of the two convergence results of this section. With the established choice of the regularization parameter and the \(\mathcal {J}\)-difference estimate asserted in Lemma 6.2, the last ingredient of the Bregman distance bound following Assumption A.1 is formulated below.
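As a numerical illustration of the quantity being bounded, the generalized Bregman distance \(D_{\mathcal{J}}^{q}(u, u^{\dagger}) = \mathcal{J}(u) - \mathcal{J}(u^{\dagger}) - \langle q, u - u^{\dagger}\rangle\) can be sketched in a few lines. The code below is a minimal sketch, not the paper's implementation; the anisotropic forward-difference discretization of the TV functional is an assumption made for illustration.

```python
import numpy as np

def tv(u):
    """Anisotropic discrete total variation of a 2-D image (forward differences)."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def bregman_distance(J, q_dag, u, u_dag):
    """Generalized Bregman distance D_J^{q}(u, u_dag) = J(u) - J(u_dag) - <q, u - u_dag>,
    where q_dag is a subgradient of J at u_dag."""
    return J(u) - J(u_dag) - np.vdot(q_dag, u - u_dag).real
```

For a convex J and a valid subgradient q ∈ ∂J(u†), this distance is nonnegative; for instance, with J(u) = ‖u‖² and q = 2u† it reduces to the squared norm ‖u − u†‖².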
Lemma A.2
Let \(\alpha (\delta ,v^{\delta })\in \overline {S}\cap \underline {S}\) be the regularization parameter for the regularized solution \({u}_{\alpha }^{\delta }\) of the problem (4.2). If the minimum norm solution u† satisfies Assumption A.1, then
holds.
Proof
It follows from VSC (4.5) that
After arranging the terms,
□
Theorem A.3
Let the minimum norm solution u†∈BV(Ω) satisfy the VSC given by Assumption A.1. Then, in the light of the assumptions of Lemmas 6.1, 6.2, and A.2, the following estimate holds:
Proof
The proof follows from applying the previously established estimates to each component of the Bregman distance, as shown below:
□
Appendix B: Further numerical results
In this section, we present further numerical results to emphasize the condition on the step length μ in Theorem 7.2. Although the formulated condition allows one to choose the step length as \(\mu = \frac {2}{\Vert T \Vert ^{2}},\) we observed divergence for this choice of μ (see Fig. 7).
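The step-length bound can be probed numerically. The sketch below is an illustration under stated assumptions, not the paper's code: it estimates ‖T‖ by power iteration on T⊤T and selects a step length strictly below 2/‖T‖²; the random matrix standing in for the forward operator and the safety factor 0.95 are assumptions.

```python
import numpy as np

def spectral_norm(T, iters=500, seed=0):
    """Estimate ||T|| (largest singular value) by power iteration on T^T T."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[1])
    for _ in range(iters):
        y = T.T @ (T @ x)
        x = y / np.linalg.norm(y)
    # Rayleigh quotient of T^T T at the unit vector x approximates ||T||^2.
    return np.sqrt(x @ (T.T @ (T @ x)))

# A random matrix stands in for the forward operator T (illustration only).
rng = np.random.default_rng(1)
T = rng.standard_normal((30, 20))
norm_T = spectral_norm(T)
mu = 0.95 * 2.0 / norm_T ** 2  # strictly below the boundary value 2 / ||T||^2
```

Choosing μ strictly inside the bound, rather than at the boundary value 2/‖T‖², matches the stable behavior reported in the experiments.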
Altuntac, E. Choice of the parameters in a primal-dual algorithm for Bregman iterated variational regularization. Numer Algor 86, 729–759 (2021). https://doi.org/10.1007/s11075-020-00909-6