Abstract
A problem often encountered in the analysis of large-scale data is the approximation of a given matrix \(A\in \mathbf {R}^{m\times n}\) by \(UV^T\), where \(U\in \mathbf {R}^{m\times r}\), \(V\in \mathbf {R}^{n\times r}\) and \(r < \min \{ m, n \}\). This paper tackles the problem by proposing an accelerated gradient descent algorithm together with its stochastic counterpart. Both methods sidestep the computational burden of computing the SVD of large matrices. Moreover, big data are usually presented and stored in fixed-size blocks, which motivates a block-wise gradient descent algorithm for their low-rank approximation. A stochastic block-wise gradient method is further proposed to improve computational efficiency when the problem involves a large number of blocks. Under standard assumptions, we establish the convergence properties of the block-wise approach. Computational results on both synthetic and real-world data are reported.
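As a concrete illustration of the problem setting (a minimal sketch, not the algorithm analyzed in the paper), the following Python/NumPy code runs Nesterov-style accelerated gradient descent on the factorization objective \(f(U,V)=\frac{1}{2}\Vert A-UV^T\Vert _F^2\); the step size, momentum constant, iteration count, and random initialization are illustrative assumptions rather than the schedule studied by the authors.

import numpy as np

def accelerated_factorize(A, r, steps=500, lr=1e-3, momentum=0.9, seed=0):
    """Approximate A (m x n) by U V^T with U (m x r), V (n x r) via
    Nesterov-style accelerated gradient descent on
    f(U, V) = 0.5 * ||A - U V^T||_F^2. All hyperparameters here are
    illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.standard_normal((m, r)) / np.sqrt(r)
    V = rng.standard_normal((n, r)) / np.sqrt(r)
    dU = np.zeros_like(U)
    dV = np.zeros_like(V)
    for _ in range(steps):
        # Extrapolated (look-ahead) point, as in Nesterov's scheme.
        Ue, Ve = U + momentum * dU, V + momentum * dV
        R = Ue @ Ve.T - A          # residual at the look-ahead point
        gU = R @ Ve                # gradient of f w.r.t. U: (U V^T - A) V
        gV = R.T @ Ue              # gradient of f w.r.t. V: (U V^T - A)^T U
        dU = momentum * dU - lr * gU
        dV = momentum * dV - lr * gV
        U += dU
        V += dV
    return U, V

# Example use: factor a random 200 x 100 matrix with target rank 5 and
# report the Frobenius-norm approximation error.
A = np.random.default_rng(1).standard_normal((200, 100))
U, V = accelerated_factorize(A, r=5)
print(np.linalg.norm(A - U @ V.T, 'fro'))

A stochastic or block-wise variant would replace the full residual \(R\) with one computed on a sampled block of rows or columns of \(A\) at each iteration, updating only the corresponding rows of \(U\) or \(V\).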
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this work.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Peyghami, M.R., Yang, K., Chen, S., Yang, Z., Ataei, M. (2018). Accelerated Gradient and Block-Wise Gradient Methods for Big Data Factorization. In: Bagheri, E., Cheung, J. (eds.) Advances in Artificial Intelligence. Canadian AI 2018. Lecture Notes in Computer Science, vol. 10832. Springer, Cham. https://doi.org/10.1007/978-3-319-89656-4_20
DOI: https://doi.org/10.1007/978-3-319-89656-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-89655-7
Online ISBN: 978-3-319-89656-4
eBook Packages: Computer Science (R0)