A Fast Algorithm for Rank-(LMN) Block Term Decomposition of Multi-Dimensional Data

Published in: Journal of Scientific Computing

Abstract

Owing to its powerful representation ability, block term decomposition (BTD) has recently attracted much attention in multi-dimensional data processing, e.g., hyperspectral image unmixing and blind source separation. However, the popular alternating least squares algorithm for rank-(LMN) BTD (BTD-ALS) suffers from expensive time and space costs caused by Kronecker products and by solving low-rank approximation subproblems, which hinders the deployment of BTD in real applications, especially for large-scale data. In this paper, we propose a fast sketching-based Kronecker product-free algorithm for rank-(LMN) BTD (termed KPF-BTD), which is suitable for real-world multi-dimensional data. Specifically, we first decompose the original optimization problem into several rank-(LMN) approximation subproblems, and then design bilateral sketching to obtain approximate (rather than exact) solutions of these subproblems, which allows us to avoid Kronecker products and rapidly solve the rank-(LMN) approximation subproblems. Compared with BTD-ALS, the time and space complexities \(\mathcal {O}{(2(p+1)(I^3LR+I^2L^2R+IL^3R)+I^3LR)}\) and \(\mathcal {O}{(I^3)}\) of KPF-BTD are significantly lower than the \(\mathcal {O}{(I^3L^6R^2+I^3L^3R+I^3LR+I^2L^3R^2+I^2L^2R)}\) and \(\mathcal {O}{(I^3L^3R)}\) of BTD-ALS, where \(p \ll I\). Moreover, we establish a theoretical error bound for KPF-BTD. Extensive synthetic and real experiments show that KPF-BTD achieves substantial speedup and memory savings while maintaining accuracy (e.g., for a \(150\times 150\times 150\) synthetic tensor, KPF-BTD takes 0.2 seconds per iteration versus 96.2 seconds per iteration for BTD-ALS, with comparable accuracy).
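To make the bilateral sketching idea concrete, the following minimal NumPy example illustrates bilateral random projection for a rank-r approximation of a matrix (e.g., a mode unfolding of a tensor), which is the general style of sketching the abstract describes. This is a sketch of the generic technique under stated assumptions, not the authors' KPF-BTD implementation; the function name brp_lowrank and the parameters r (target rank) and p (number of power iterations, \(p \ll I\)) are placeholders chosen for this example.

    # Illustrative only: bilateral random projection (BRP) for a rank-r
    # matrix approximation, the kind of sketch that can replace an exact
    # low-rank solve on a tensor unfolding. All names are placeholders.
    import numpy as np

    def brp_lowrank(X, r, p=1, seed=0):
        """Approximate X (m x n) by a rank-r matrix via bilateral sketches."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        Y1 = X @ rng.standard_normal((n, r))  # right sketch: column space of X
        A2 = Y1                               # reuse right sketch as left test matrix
        Y2 = X.T @ A2                         # left sketch: row space of X
        for _ in range(p):                    # p power iterations sharpen the sketch
            Y1 = X @ Y2
            A2 = Y1
            Y2 = X.T @ Y1
        # Reconstruction L = Y1 (A2^T Y1)^+ Y2^T; pinv guards against
        # ill-conditioning of the small r x r middle matrix.
        return Y1 @ np.linalg.pinv(A2.T @ Y1) @ Y2.T

    # Toy check: mode-1 unfolding of a 150 x 150 x 150 tensor with
    # multilinear rank (5, 5, 5); the unfolding has rank at most 5.
    rng = np.random.default_rng(42)
    core = rng.standard_normal((5, 5, 5))
    U, V, W = (rng.standard_normal((150, 5)) for _ in range(3))
    T = np.einsum('abc,ia,jb,kc->ijk', core, U, V, W)
    X1 = T.reshape(150, -1)                   # mode-1 unfolding (150 x 22500)
    L = brp_lowrank(X1, r=5, p=1)
    print(np.linalg.norm(X1 - L) / np.linalg.norm(X1))  # ~1e-14: near-exact

Since only thin sketches Y1 and Y2 and an r x r middle matrix are formed, the cost is dominated by a few matrix-times-sketch products rather than by Kronecker products or full SVDs, which is the intuition behind the complexity gap quoted in the abstract.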



Data Availability Statement

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.


Funding

This research is supported by NSFC (Nos. 12171072, 12371456, 62131005), the Sichuan Science and Technology Program (Nos. 2024NSFSC0038, 2024NSFJQ0038, 2023ZYD0007), and the National Key Research and Development Program of China (No. 2020YFA0714001).

Author information


Corresponding author

Correspondence to Xi-Le Zhao.

Ethics declarations

Conflicts of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, H., Huang, TZ., Zhao, XL. et al. A Fast Algorithm for Rank-(LMN) Block Term Decomposition of Multi-Dimensional Data. J Sci Comput 101, 16 (2024). https://doi.org/10.1007/s10915-024-02653-8
