
Robust Low Transformed Multi-Rank Tensor Methods for Image Alignment

Published in: Journal of Scientific Computing

Abstract

Aligning a group of linearly correlated images is an important task in computer vision. In this paper, we propose a combination of the transformed tensor nuclear norm and the tensor \(\ell _1\) norm to deal with this image alignment problem, in which the observed images, stacked into a third-order tensor, are deformed by unknown domain transformations and corrupted by sparse noise such as impulse noise, partial occlusions, and illumination variation. The key advantage of the proposed method is that both the spatial correlation and the variation among the images can be captured by the transformed tensor nuclear norm. We show that when the underlying tensor of correlated images has low multi-rank, an upper error bound on the estimator of the proposed model can be established, and this bound improves on the previous result. Beyond the proposed convex transformed tensor model, the method can be further extended by incorporating nonconvex functions into the transformed tensor nuclear norm and the sparsity norm. Both the convex and nonconvex optimization models are solved by generalized Gauss–Newton algorithms, and the global convergence of the numerical methods for solving the subproblems of both models can be established under very mild conditions. Extensive numerical experiments on real images with misalignment and sparse corruptions demonstrate that the proposed methods outperform several state-of-the-art methods in terms of accuracy and efficiency.
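Schematically (in our own notation, which may differ from the paper's exact formulation), the convex model described in the abstract combines the two norms as

$$\begin{aligned} \min _{{\mathcal {L}},\,{\mathcal {E}},\,\tau }\ \Vert {\mathcal {L}}\Vert _{\mathrm {TTNN}}+\lambda \Vert {\mathcal {E}}\Vert _1 \quad \text {s.t.}\quad {\mathcal {D}}\circ \tau ={\mathcal {L}}+{\mathcal {E}}, \end{aligned}$$

where \({\mathcal {D}}\circ \tau \) denotes the observed image tensor after applying the domain transformations \(\tau \), \({\mathcal {L}}\) is the low transformed multi-rank component, \({\mathcal {E}}\) collects the sparse corruptions, and \(\lambda >0\) balances the two terms.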


[Figures 1–14 appear in the full article.]

Notes

  1. http://people.eecs.berkeley.edu/~yima/matrix-rank/rasl.html

  2. http://sites.google.com/site/hejunzz/t-grasta

  3. http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html

  4. http://yann.lecun.com/exdb/mnist/

References

  1. Aguerrebere, C., Delbracio, M., Bartesaghi, A., Sapiro, G.: Fundamental limits in multi-image alignment. IEEE Trans. Signal Process. 64(21), 5707–5722 (2016)

  2. Arica, N., Yarman-Vural, F.T.: Optical character recognition for cursive handwriting. IEEE Trans. Pattern Anal. Mach. Intell. 24(6), 801–813 (2002)

  3. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  4. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1–2), 91–129 (2013)

  5. Basri, R., Jacobs, D.: Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 218–233 (2003)

  6. Bengua, J.A., Phien, H.N., Tuan, H.D., Do, M.N.: Efficient tensor completion for color image and video recovery: low-rank tensor train. IEEE Trans. Image Process. 26(5), 2466–2479 (2017)

  7. Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17(4), 1205–1223 (2007)

  8. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1–2), 459–494 (2014)

  9. Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. 71(2), 179–194 (1995)

  10. Carroll, J.D., Chang, J.-J.: Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart–Young” decomposition. Psychometrika 35(3), 283–319 (1970)

  11. Chen, L., Sun, D., Toh, K.-C.: An efficient inexact symmetric Gauss–Seidel based majorized ADMM for high-dimensional convex composite conic programming. Math. Program. 161(1–2), 237–270 (2017)

  12. Chen, X., Han, Z., Wang, Y., Tang, Y., Yu, H.: Nonconvex plus quadratic penalized low-rank and sparse decomposition for noisy image alignment. Sci. China Inform. Sci. 59(5), 052107 (2016)

  13. Cox, M., Sridharan, S., Lucey, S., Cohn, J.: Least squares congealing for unsupervised alignment of images. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  14. Cox, M., Sridharan, S., Lucey, S., Cohn, J.: Least-squares congealing for large numbers of images. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 1949–1956 (2009)

  15. Cromme, L.: Strong uniqueness: a far-reaching criterion for the convergence analysis of iterative procedures. Numer. Math. 29(2), 179–193 (1978)

  16. Ding, C., Qi, H.-D.: Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction. Math. Program. 164(1–2), 341–381 (2017)

  17. Donoho, D.L., Grimes, C.: Image manifolds which are isometric to Euclidean space. J. Math. Imaging Vis. 23(1), 5–24 (2005)

  18. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96(456), 1348–1360 (2001)

  19. Fan, J., Xue, L., Zou, H.: Strong oracle optimality of folded concave penalized estimation. Ann. Stat. 42(3), 819–849 (2014)

  20. Goldfarb, D., Qin, Z.: Robust low-rank tensor recovery: models and algorithms. SIAM J. Matrix Anal. Appl. 35(1), 225–253 (2014)

  21. Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013)

  22. He, J., Zhang, D., Balzano, L., Tao, T.: Iterative Grassmannian optimization for robust image alignment. Image Vis. Comput. 32(10), 800–813 (2014)

  23. Hillar, C.J., Lim, L.-H.: Most tensor problems are NP-hard. J. ACM 60(6), 45 (2013)

  24. Huang, G.B., Jain, V., Learned-Miller, E.: Unsupervised joint alignment of complex images. In: 2007 IEEE International Conference on Computer Vision, pp. 1–8 (2007)

  25. Jiang, Q., Ng, M.: Robust low-tubal-rank tensor completion via convex optimization. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 2649–2655. AAAI Press (2019)

  26. Jittorntrum, K., Osborne, M.R.: Strong uniqueness and second order convergence in nonlinear discrete approximation. Numer. Math. 34(4), 439–455 (1980)

  27. Kernfeld, E., Kilmer, M., Aeron, S.: Tensor-tensor products with invertible linear transforms. Linear Algebra Appl. 485, 545–570 (2015)

  28. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)

  29. Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435(3), 641–658 (2011)

  30. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  31. Kurdyka, K.: On gradients of functions definable in o-minimal structures. Ann. Inst. Fourier 48(3), 769–783 (1998)

  32. Learned-Miller, E.G.: Data driven image models through continuous joint alignment. IEEE Trans. Pattern Anal. Mach. Intell. 28(2), 236–250 (2006)

  33. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  34. Lewis, A.S., Wright, S.J.: A proximal method for composite minimization. Math. Program. 158(1–2), 501–546 (2016)

  35. Li, P., Feng, J., Jin, X., Zhang, L., Xu, X., Yan, S.: Online robust low-rank tensor modeling for streaming data analysis. IEEE Trans. Neural Netw. Learn. Syst. 30(4), 1061–1075 (2019)

  36. Li, X., Sun, D., Toh, K.-C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155(1–2), 333–373 (2016)

  37. Li, Y., Chen, C., Yang, F., Huang, J.: Hierarchical sparse representation for robust image registration. IEEE Trans. Pattern Anal. Mach. Intell. 40(9), 2151–2164 (2018)

  38. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)

  39. Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels. In Les Équations aux Dérivées Partielles, Éditions du centre National de la Recherche Scientifique, pp. 87–89 (1963)

  40. Lu, C., Feng, J., Chen, Y., Liu, W., Lin, Z., Yan, S.: Tensor robust principal component analysis with a new tensor nuclear norm. IEEE Trans. Pattern Anal. Mach. Intell. 42(4), 925–938 (2020)

  41. Ma, Y., Soatto, S., Kosecka, J., Sastry, S.S.: An invitation to 3-D vision: from images to geometric models. Springer, New York (2004)

  42. Marjanovic, G., Solo, V.: On \(\ell _q\) optimization and matrix completion. IEEE Trans. Signal Process. 60(11), 5714–5724 (2012)

  43. Martin, C.D., Shafer, R., LaRue, B.: An order-p tensor factorization with applications in imaging. SIAM J. Sci. Comput. 35(1), A474–A490 (2013)

  44. Martinez, A.M.: The AR face database. Computer Vision Center, Technical Report, 24 (1998)

  45. Mu, C., Huang, B., Wright, J., Goldfarb, D.: Square deal: lower bounds and improved relaxations for tensor recovery. Proc. Int. Conf. Mach. Learn. 32, 73–81 (2014)

  46. Ng, M.K., Zhang, X., Zhao, X.-L.: Patched-tubes unitary transform for robust tensor completion. Pattern Recognit. 100, 107181 (2020)

  47. Nikolova, M., Ng, M.K., Tam, C.-P.: On \(\ell _1\) data fitting and concave regularization for image recovery. SIAM J. Sci. Comput. 35(1), A397–A430 (2013)

  48. Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)

  49. Peng, Y., Ganesh, A., Wright, J., Xu, W., Ma, Y.: RASL: robust alignment by sparse and low-rank decomposition for linearly correlated images. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2233–2246 (2012)

  50. Penney, G.P., Weese, J., Little, J.A., Desmedt, P., Hill, D.L., Hawkes, D.J.: A comparison of similarity measures for use in 2-D-3-D medical image registration. IEEE Trans. Med. Imaging 17(4), 586–595 (1998)

  51. Pluim, J.P., Maintz, J.A., Viergever, M.A.: Mutual-information-based registration of medical images: a survey. IEEE Trans. Med. Imaging 22(8), 986–1004 (2003)

  52. Qiu, D., Bai, M., Ng, M., Zhang, X.: Nonlocal robust tensor recovery with nonconvex regularization. Inverse Problems 37(3), 035001 (2021)

  53. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (2009)

  54. Romera-Paredes, B., Pontil, M.: A new convex relaxation for tensor completion. In: Advances in Neural Information Processing Systems, pp. 2967–2975 (2013)

  55. Song, G., Ng, M.K., Zhang, X.: Robust tensor completion using transformed tensor singular value decomposition. Numer. Linear Algebra Appl. 27(3), e2299 (2020)

  56. Song, W., Zhu, J., Li, Y., Chen, C.: Image alignment by online robust PCA via stochastic gradient descent. IEEE Trans. Circuits Syst. Video Technol. 26(7), 1241–1250 (2016)

  57. Szeliski, R.: Image alignment and stitching: a tutorial. Found. Trends Comput. Graph. Vis. 2(1), 1–104 (2006)

  58. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)

  59. Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31(3), 279–311 (1966)

  60. Turk, M. A., Pentland, A. P.: Face recognition using eigenfaces. In: Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586–587 (1991)

  61. Vedaldi, A., Guidi, G., Soatto, S.: Joint data alignment up to (lossy) transformations. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  62. Wu, Y., Shen, B., Ling, H.: Online robust image alignment via iterative convex optimization. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1808–1814. IEEE (2012)

  63. Zhang, C.-H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)

  64. Zhang, T.: Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 11(35), 1081–1107 (2010)

  65. Zhang, X.: A nonconvex relaxation approach to low-rank tensor completion. IEEE Trans. Neural Netw. Learn. Syst. 30(6), 1659–1671 (2019)

  66. Zhang, X., Bai, M., Ng, M.K.: Nonconvex-TV based image restoration with impulse noise removal. SIAM J. Imaging Sci. 10(3), 1627–1667 (2017)

  67. Zhang, X., Ng, M.K.: A corrected tensor nuclear norm minimization method for noisy low-rank tensor completion. SIAM J. Imaging Sci. 12(2), 1231–1273 (2019)

  68. Zhang, X., Wang, D., Zhou, Z., Ma, Y.: Simultaneous rectification and alignment via robust recovery of low-rank tensors. Adv. Neural Inform. Process. Syst. 26, 1637–1645 (2013)

  69. Zhang, X., Wang, D., Zhou, Z., Ma, Y.: Robust low-rank tensor recovery with rectification and alignment. IEEE Trans. Pattern Anal. Mach. Intell. 43(1), 238–255 (2021)

  70. Zhang, X., Zhou, Z., Wang, D., Ma, Y.: Hybrid singular value thresholding for tensor completion. Proc. AAAI Conf. Artif. Intell. 28, 1362–1368 (2014)

  71. Zhang, X.-D.: Matrix Analysis and Applications. Cambridge University Press, Cambridge (2017)

  72. Zhang, Z., Aeron, S.: Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 65(6), 1511–1526 (2017)

  73. Zhang, Z., Ely, G., Aeron, S., Hao, N., Kilmer, M.: Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3842–3849 (2014)

  74. Zhang, Z., Ganesh, A., Liang, X., Ma, Y.: TILT: transform invariant low-rank textures. Int. J. Comput. Vis. 99(1), 1–24 (2012)

  75. Zheng, Q., Wang, Y., Heng, P.A.: Online subspace learning from gradient orientations for robust image alignment. IEEE Trans. Image Process. 28(7), 3383–3394 (2019)

  76. Zou, H.: The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 101(476), 1418–1429 (2006)

Acknowledgements

We would like to thank Professor Xiaoqin Zhang and Dr. Xiai Chen for sending us the codes of [69] and [12], respectively. We would also like to thank Professor Di Wang for helpful discussions on the domain transformations and sharing the code of [70]. Thanks also go to the anonymous referees for their constructive suggestions to improve the presentation of the paper.

Author information

Corresponding author

Correspondence to Minru Bai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

M. Bai: The research of this author was partially supported by the National Natural Science Foundation of China under grant 11971159. M. K. Ng: The research of this author was partially supported by the HKRGC GRF 12306616, 12200317, 12300218, 12300519 and 17201020, and HKU Grant 104005583. X. Zhang: The research of this author was partially supported by the National Natural Science Foundation of China under grants 11801206, 11871025, Hubei Provincial Natural Science Foundation of China under grant 2018CFB105, and Fundamental Research Funds for the Central Universities under grant CCNU19ZN017.

Appendix

We recall the definitions of subdifferential of a nonconvex function and the Kurdyka–Łojasiewicz (KL) property. The most fundamental works on KL property are due to Łojasiewicz [39] and Kurdyka [31].

Consider a function \(f:{\mathbb {R}}^n\rightarrow (-\infty , +\infty ]\) and a point \({\mathbf {x}}\) with \(f({\mathbf {x}})\) finite. The subdifferential of f at \({\mathbf {x}}\), denoted by \(\partial f({\mathbf {x}})\), is defined as [53, Definition 8.3]

$$\begin{aligned} \partial f({\mathbf {x}}):=\left\{ {\mathbf {v}}\in {\mathbb {R}}^n:\ \exists \ {\mathbf {x}}^k\rightarrow {\mathbf {x}}, f({\mathbf {x}}^k)\rightarrow f({\mathbf {x}}), {\mathbf {v}}^k\rightarrow {\mathbf {v}} \ \ \text {with} \ \ {\mathbf {v}}^k\in {\widehat{\partial }} f({\mathbf {x}}^k) \ \ \text {as} \ \ k\rightarrow +\infty \right\} , \end{aligned}$$

where \({\widehat{\partial }} f({\mathbf {x}})\) denotes the Fréchet subdifferential of f at \({\mathbf {x}}\in \text {dom}(f)\), and is the set of all \({\mathbf {v}}\) satisfying

$$\begin{aligned} \liminf _{\begin{array}{c} {\mathbf {z}}\rightarrow {\mathbf {x}},\\ {\mathbf {z}}\ne {\mathbf {x}} \end{array}} \frac{f({\mathbf {z}})-f({\mathbf {x}})-\langle {\mathbf {v}},{\mathbf {z}}-{\mathbf {x}} \rangle }{\Vert {\mathbf {z}}-{\mathbf {x}}\Vert }\ge 0. \end{aligned}$$
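As a standard one-dimensional illustration (ours, not taken from the paper), consider \(f(x)=|x|\) at \(x=0\). A vector \(v\) satisfies the liminf condition if and only if

$$\begin{aligned} \liminf _{\begin{array}{c} z\rightarrow 0,\\ z\ne 0 \end{array}}\frac{|z|-vz}{|z|}=\liminf _{\begin{array}{c} z\rightarrow 0,\\ z\ne 0 \end{array}}\left( 1-v\,\mathrm {sign}(z)\right) =1-|v|\ge 0, \end{aligned}$$

that is, if and only if \(|v|\le 1\). Hence \({\widehat{\partial }} f(0)=\partial f(0)=[-1,1]\); here the Fréchet and limiting subdifferentials coincide because f is convex.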

Here the effective domain of a function f is defined as \(\text{ dom }(f):=\{x:f(x)<+\infty \}\). The proximal mapping of f at \({\mathbf {y}}\) is defined as

$$\begin{aligned} \text {Prox}_{tf}({\mathbf {y}}):=\arg \min _{\mathbf {x}}\left\{ tf({\mathbf {x}})+\frac{1}{2}\Vert {\mathbf {x}}-{\mathbf {y}}\Vert ^2\right\} , \end{aligned}$$

where \(t>0\) is a given parameter.
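For instance, the proximal mapping of \(f({\mathbf {x}})=\Vert {\mathbf {x}}\Vert _1\) separates across entries and has the closed form of scalar soft-thresholding. The following is a minimal NumPy sketch of this fact (our illustration, not code from the paper):

```python
import numpy as np

def prox_l1(y, t):
    """Proximal mapping of f(x) = ||x||_1 with parameter t > 0:
    argmin_x t*||x||_1 + 0.5*||x - y||^2, solved entrywise by
    soft-thresholding: entries with |y_i| <= t become 0, the rest
    shrink toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

y = np.array([3.0, -0.5, 1.2])
print(prox_l1(y, 1.0))  # approximately [2.0, 0.0, 0.2]
```

The same entrywise formula applies when \({\mathbf {y}}\) is replaced by a tensor of sparse residuals, which is why \(\ell _1\)-type subproblems in models of this kind admit cheap closed-form updates.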

Let \({\mathfrak {L}}\) be a Euclidean space and \({\mathfrak {D}}\subset {\mathfrak {L}}\). For any \({\mathcal {X}}\in {\mathfrak {L}}\), the distance from \({\mathcal {X}}\) to \({\mathfrak {D}}\) is defined as \(\text{ dist }({\mathcal {X}},{\mathfrak {D}}):=\inf _{{\mathcal {Y}}\in {\mathfrak {D}}}\{\Vert {\mathcal {X}}-{\mathcal {Y}}\Vert _F\}\). Next, we recall the KL property [3, Definition 3.1]; see also [4, Definition 2.4]. Let \(f:{\mathbb {R}}^n\rightarrow (-\infty ,+\infty ]\) be a proper lower semicontinuous function. For \(-\infty<\xi _1< \xi _2\le +\infty \), we define \([\xi _1<f({\mathbf {x}})<\xi _2] := \{{\mathbf {x}}\in {\mathbb {R}}^n: \xi _1< f({\mathbf {x}})<\xi _2\}\).

Definition 6

[3, Definition 3.1] The function f is said to have the Kurdyka–Łojasiewicz (KL) property at \({\mathbf {x}}\in \text{ dom }(\partial f)\) if there exist \(\eta \in (0,+\infty ]\), a neighborhood \({\mathfrak {U}}\) of \({\mathbf {x}}\), and a continuous concave function \(\varphi :[0,\eta )\rightarrow [0,+\infty )\) such that: (i) \(\varphi (0)=0\); (ii) \(\varphi \) is continuously differentiable on \((0,\eta )\); (iii) \(\varphi {'}(s)>0\) for all \(s\in (0,\eta )\); (iv) for all \({\mathbf {z}}\in {\mathfrak {U}}\cap [f({\mathbf {x}})<f({\mathbf {z}})<f({\mathbf {x}})+\eta ]\), the KL inequality holds: \( \varphi {'}(f({\mathbf {z}})-f({\mathbf {x}}))\,\text{ dist }(0,\partial f({\mathbf {z}}))\ge 1.\)
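As a simple check of this definition (our example, not the paper's), the function \(f(z)=z^2\) has the KL property at \({\mathbf {x}}=0\) with \(\varphi (s)=2\sqrt{s}\), \({\mathfrak {U}}={\mathbb {R}}\), and any \(\eta >0\): since \(\varphi '(s)=1/\sqrt{s}\) and \(\partial f(z)=\{2z\}\),

$$\begin{aligned} \varphi '(f(z)-f(0))\,\text{ dist }(0,\partial f(z))=\frac{1}{\sqrt{z^2}}\cdot |2z|=\frac{2|z|}{|z|}=2\ge 1 \end{aligned}$$

for every \(z\ne 0\) with \(0<f(z)<\eta \), so the KL inequality holds with exponent \(\theta =1/2\).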

About this article

Cite this article

Qiu, D., Bai, M., Ng, M.K. et al. Robust Low Transformed Multi-Rank Tensor Methods for Image Alignment. J Sci Comput 87, 24 (2021). https://doi.org/10.1007/s10915-021-01437-8

