Graph-dual Laplacian principal component analysis

  • Original Research
  • Published in the Journal of Ambient Intelligence and Humanized Computing

Abstract

Principal component analysis (PCA) is the most widely used method for linear dimensionality reduction, owing to its effectiveness in uncovering the low-dimensional global geometric structure embedded in data. To preserve the intrinsic local geometric structure of data, graph-Laplacian PCA (gLPCA) incorporates Laplacian embedding into the PCA framework to learn local similarities between data points, which leads to significant performance improvements in clustering and classification. Recent work has shown that not only do high-dimensional data reside on a low-dimensional manifold in the data space, but the features also lie on a manifold in the feature space. However, both PCA and gLPCA overlook the local geometric information contained in the feature space. Exploiting the duality between the data manifold and the feature manifold, graph-dual Laplacian PCA (gDLPCA) is proposed, which incorporates both data-graph and feature-graph regularization into the PCA framework so that the local geometric structures of the two manifolds are exploited simultaneously. Experimental results on four benchmark data sets confirm its effectiveness and show that gDLPCA outperforms gLPCA on classification and clustering tasks.
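The dual-regularization idea described above can be illustrated with a small numerical sketch. The code below is a hypothetical reconstruction, not the paper's algorithm: it assumes an objective of the form ||X - UQ^T||_F^2 + alpha * Tr(Q^T L_d Q) + beta * Tr(U^T L_f U), where L_d is a k-NN graph Laplacian over the samples (data manifold) and L_f one over the features (feature manifold), and it minimizes this objective by plain gradient descent; the paper may instead use a closed-form or eigendecomposition-based solver, and the function names here (`knn_laplacian`, `gdlpca`) are illustrative.

```python
import numpy as np

def knn_laplacian(Z, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized k-NN graph on rows of Z."""
    n = Z.shape[0]
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0           # skip self (distance 0)
    W = np.maximum(W, W.T)                               # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def gdlpca(X, r=2, alpha=0.1, beta=0.1, lr=1e-3, iters=500, k=5, seed=0):
    """Gradient-descent sketch of a dual-graph-regularized PCA objective.

    X is features x samples; returns U (features x r), Q (samples x r),
    and the per-iteration loss history."""
    p, n = X.shape
    L_d = knn_laplacian(X.T, k)   # data graph: one node per sample
    L_f = knn_laplacian(X, k)     # feature graph: one node per feature
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((p, r))
    Q = 0.1 * rng.standard_normal((n, r))
    history = []
    for _ in range(iters):
        R = X - U @ Q.T           # reconstruction residual
        loss = (R ** 2).sum() + alpha * np.trace(Q.T @ L_d @ Q) \
                              + beta * np.trace(U.T @ L_f @ U)
        history.append(loss)
        gU = -2 * R @ Q + 2 * beta * L_f @ U      # gradient w.r.t. U
        gQ = -2 * R.T @ U + 2 * alpha * L_d @ Q   # gradient w.r.t. Q
        U -= lr * gU
        Q -= lr * gQ
    return U, Q, history
```

Setting beta = 0 recovers a gLPCA-style objective (data-graph regularization only), and alpha = beta = 0 reduces to an unregularized matrix-factorization form of PCA, which makes the contribution of each regularizer easy to probe on toy data.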




Notes

  1. http://www-stat.stanford.edu/~tibs/ElemStatLearn/data.html.

  2. https://archive.ics.uci.edu/ml/datasets/ISOLET.

  3. http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php.

  4. http://archive.ics.uci.edu/ml/datasets/Semeion+Handwritten+Digit.


Acknowledgements

The authors would like to thank the anonymous reviewers and the editor for their helpful comments and suggestions, which improved the quality of this paper. We also thank Zhaolu Guo of Jiangxi University of Science and Technology for helpful discussions and Jie Su of the College of Information Engineering at Northwest A&F University for his extensive experimental work. This work was supported in part by the National Natural Science Foundation of China under Grant 61602388, the China Postdoctoral Science Foundation under Grant 2018M633585, the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2018JQ6060, the Doctoral Starting up Foundation of Northwest A&F University under Grant 2452015302, and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR) under Grant 201700009.

Author information


Corresponding author

Correspondence to Jinrong He.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

He, J., Bi, Y., Liu, B. et al. Graph-dual Laplacian principal component analysis. J Ambient Intell Human Comput 10, 3249–3262 (2019). https://doi.org/10.1007/s12652-018-1096-5

