Self-label correction for image classification with noisy labels

  • Theoretical Advances
  • Published:
Pattern Analysis and Applications

Abstract

Label noise is inevitable in image classification. Existing methods often select clean samples unreliably and rely on an auxiliary model to correct noisy labels, and the quality of that auxiliary model has a great impact on the classification results. In this paper, we propose the Dual-model and Self-Label Correction (DSLC) method, which selects clean samples and corrects labels without auxiliary models. First, we use a dual-model structure combined with contrastive learning to select clean samples. Then, we design a novel label correction method to modify the noisy labels. Finally, we propose a joint loss to improve the generalization ability of our models. In experiments, we demonstrate the effectiveness of DSLC on various datasets, where it achieves performance comparable to state-of-the-art methods.
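The abstract does not give the exact selection rule, correction rule, or loss weighting, so the following Python/PyTorch sketch only illustrates the three-step pipeline it describes: clean-sample selection by two peer models, self-label correction, and a joint loss. The small-loss and agreement criteria, the confidence threshold, the KL consistency term, and all names and parameters (dslc_step, select_ratio, conf_threshold, beta) are common conventions from noisy-label learning used here as assumptions, not the authors' formulation.

    import torch
    import torch.nn.functional as F

    def dslc_step(model_a, model_b, images, noisy_labels,
                  select_ratio=0.7, conf_threshold=0.9, beta=0.5):
        # Forward both peer models on the same batch.
        logits_a = model_a(images)
        logits_b = model_b(images)
        probs_a = logits_a.softmax(dim=1)
        probs_b = logits_b.softmax(dim=1)
        avg_probs = 0.5 * (probs_a + probs_b)

        # Step 1 (assumed selection rule): treat as "clean" the
        # small-loss samples on which the two models also agree.
        loss_a = F.cross_entropy(logits_a, noisy_labels, reduction="none")
        loss_b = F.cross_entropy(logits_b, noisy_labels, reduction="none")
        per_sample_loss = 0.5 * (loss_a + loss_b)
        n_keep = max(1, int(select_ratio * images.size(0)))
        clean_mask = torch.zeros(images.size(0), dtype=torch.bool,
                                 device=images.device)
        clean_mask[per_sample_loss.argsort()[:n_keep]] = True
        clean_mask &= probs_a.argmax(dim=1) == probs_b.argmax(dim=1)

        # Step 2 (assumed correction rule): for the remaining samples,
        # replace the given label with the ensemble prediction when the
        # ensemble is sufficiently confident.
        conf, pred = avg_probs.max(dim=1)
        corrected = noisy_labels.clone()
        relabel_mask = ~clean_mask & (conf > conf_threshold)
        corrected[relabel_mask] = pred[relabel_mask]

        # Step 3 (assumed joint loss): cross-entropy on the kept and
        # corrected labels plus a symmetric KL term that keeps the two
        # models consistent. In practice, skip batches where no sample
        # is selected to avoid an empty cross-entropy.
        used = clean_mask | relabel_mask
        ce = (F.cross_entropy(logits_a[used], corrected[used])
              + F.cross_entropy(logits_b[used], corrected[used]))
        consistency = (F.kl_div(probs_a.log(), avg_probs, reduction="batchmean")
                       + F.kl_div(probs_b.log(), avg_probs, reduction="batchmean"))
        return ce + beta * consistency

In a full training loop one would back-propagate this loss through both models, typically after a warm-up phase of standard training before selection and correction are enabled; this staging is likewise a convention of the field rather than a detail confirmed by the abstract.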



Data Availability Statement

The data are available.


Acknowledgements

This work was supported by the National Key Research and Development Program of China (2018AAA0100104, 2018AAA0100100) and the Natural Science Foundation of Jiangsu Province (BK20211164). We thank the Big Data Computing Center of Southeast University for providing facility support for the numerical calculations in this paper.

Author information


Corresponding author

Correspondence to Yu Zhang.

Ethics declarations

Conflict of interest statement

The authors declare that they have no conflict of interest regarding this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, Y., Lin, F., Mi, S. et al. Self-label correction for image classification with noisy labels. Pattern Anal Applic 26, 1505–1514 (2023). https://doi.org/10.1007/s10044-023-01180-w


  • DOI: https://doi.org/10.1007/s10044-023-01180-w
