Robust latent discriminative adaptive graph preserving learning for image feature extraction

Published: 23 May 2023

Abstract

Many feature extraction methods based on subspace learning have been proposed and applied with good performance. However, most existing methods fail to balance faithfully characterizing the data against the sparsity of the feature weights. At the same time, assuming a single specific type of noise may degrade feature extraction performance when the data contain complex noise. To address these issues, this paper proposes a robust latent discriminative adaptive graph preserving learning model for feature extraction (RLDAGP). The F-norm is used to preserve the global structure of the data instead of the widely used nuclear norm. Moreover, we prove that the proposed method has a low-dimensional grouping effect, meaning that highly correlated samples are grouped together. Further, a correntropy-induced metric (CIM) is applied to the noise matrix to suppress complex noise. In addition, an adaptive graph regularizer is integrated into the model to enhance its robustness while preserving the local structure and enhancing intra-class compactness. In particular, a transformed l2,1-norm regularization, which smoothly interpolates between the l2,1-norm and the F-norm, is imposed on the projection matrix to adaptively extract discriminative features from the data. To solve the proposed nonconvex model, we design an algorithm based on the nonconvex ADMM framework and prove its convergence theoretically. Experiments demonstrate the superiority of the proposed method over existing state-of-the-art methods.
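As a plain illustration of the correntropy-induced metric mentioned in the abstract (a minimal sketch with a Gaussian kernel, not the paper's implementation; the kernel width `sigma` and the mean reduction over matrix entries are assumptions of this sketch):

```python
import numpy as np

def cim(E, sigma=1.0):
    """Correntropy-induced metric between matrix E and the zero matrix.

    Uses a Gaussian kernel k(e) = exp(-e^2 / (2 * sigma^2)), so that
    CIM(E, 0) = sqrt(k(0) - mean_ij k(E_ij)).  Large entries (outliers)
    saturate the kernel, which is why the metric is robust to gross noise.
    """
    k = np.exp(-np.square(np.asarray(E, dtype=float)) / (2.0 * sigma**2))
    return float(np.sqrt(1.0 - k.mean()))
```

Unlike a squared-error loss, this quantity is bounded, so a few extreme noise entries cannot dominate the objective.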

Highlights

We propose a new RLDAGP method for image feature extraction.
An adaptive graph is included to enhance the grouping effect of low-dimensional data.
The transformed l2,1-norm improves the generalization of RLDAGP across image types.
Correntropy-induced metric improves the robustness of RLDAGP against noise.
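For context, the two standard matrix norms that the transformed l2,1-norm interpolates between can be sketched as follows (a minimal illustration, not code from the paper): the l2,1-norm promotes row-sparse projection matrices, while the F-norm is smooth and non-sparsifying.

```python
import numpy as np

def l21_norm(X):
    # Sum of the l2 norms of the rows: zeroing out a whole row (i.e.
    # discarding a feature) reduces the penalty, encouraging row sparsity.
    return float(np.linalg.norm(X, axis=1).sum())

def frobenius_norm(X):
    # Square root of the sum of squared entries: smooth everywhere,
    # spreads weight across all rows rather than selecting a few.
    return float(np.linalg.norm(X, 'fro'))
```

For any matrix, ||X||_F <= ||X||_{2,1}, with equality when at most one row is nonzero; a regularizer that moves between the two trades row sparsity against smoothness.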



Published In

Knowledge-Based Systems  Volume 268, Issue C
May 2023
554 pages

Publisher

Elsevier Science Publishers B. V.

Netherlands

Author Tags

  1. Subspace learning
  2. Feature extraction
  3. Low rank representation
  4. Frobenius norm minimization
  5. Adaptive graph

Qualifiers

  • Research-article
