
Sparse margin–based discriminant analysis for feature extraction

  • Review
  • Published in Neural Computing and Applications

Abstract

Existing margin-based discriminant analysis methods, such as nonparametric discriminant analysis, use the K-nearest neighbor (K-NN) technique to characterize the margin, and manifold learning–based methods likewise use K-NN to characterize local structure. These methods share a common problem: the neighborhood size K must be chosen in advance, and choosing an optimal K is theoretically difficult. In this paper, we present a new margin characterization method named sparse margin–based discriminant analysis (SMDA), which uses sparse representation and thereby avoids this parameter-selection difficulty. Sparse representation can be viewed as a generalization of the K-NN technique: for a test sample, it adaptively selects the training samples that give the most compact representation, and we use this representation to characterize the margin. The proposed method is evaluated on the AR and Extended Yale B face databases and the CENPARMI handwritten numeral database. Experimental results show the effectiveness of the proposed method; its performance is better than that of several other state-of-the-art feature extraction methods.
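The core idea described above, replacing a fixed-K neighborhood with the training samples selected by an L1-minimized (sparse) coding of the test sample, can be sketched as follows. This is a minimal illustration, not the paper's actual SMDA algorithm: it assumes scikit-learn's Lasso as a stand-in L1 solver, and toy random data in place of the face/numeral databases; the paper's margin construction and projection learning are not reproduced here.

```python
# Sketch: sparse representation as an adaptive alternative to K-NN.
# A test sample x is coded as a sparse linear combination of all training
# samples (columns of A); the samples with nonzero coefficients form its
# "neighborhood", with no K chosen in advance.
import numpy as np
from sklearn.linear_model import Lasso  # stand-in L1 solver (assumption)

rng = np.random.default_rng(0)

# Toy data: 20 training samples in R^10, two classes.
X_train = rng.standard_normal((20, 10))
y_train = np.array([0] * 10 + [1] * 10)

# A test sample lying near the first three class-0 training samples.
x_test = X_train[:3].mean(axis=0) + 0.01 * rng.standard_normal(10)

# Solve min_a 1/2 ||x - A a||^2 + alpha ||a||_1, with A = X_train.T,
# so each coefficient of `a` weights one training sample.
solver = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
solver.fit(X_train.T, x_test)
coef = solver.coef_

# Nonzero coefficients pick the sparse neighborhood adaptively.
support = np.flatnonzero(np.abs(coef) > 1e-6)
print("selected training samples:", support)
print("their labels:", y_train[support])
```

With only 10 observations and 20 candidate samples, the L1 penalty forces most coefficients to zero, so the size of `support` emerges from the data rather than being fixed by a user-chosen K.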


Figs. 1–7 (figure captions not available)



Acknowledgments

This work was partially supported by the Program for New Century Excellent Talents in University of China, the NUST Outstanding Scholar Supporting Program, and the National Science Foundation of China under Grant Nos. 60973098, 60632050, and 90820306.

Author information

Correspondence to Zhenghong Gu.

About this article

Cite this article

Gu, Z., Yang, J. Sparse margin–based discriminant analysis for feature extraction. Neural Comput & Applic 23, 1523–1529 (2013). https://doi.org/10.1007/s00521-012-1124-x
