Learning from noisy labels with coarse-to-fine sample credibility modeling

B Zhang, Y Li, Y Tu, J Peng, Y Wang, C Wu… - … on Computer Vision, 2022 - Springer
Abstract
Training deep neural networks (DNNs) with noisy labels is practically challenging, since inaccurate labels severely degrade the generalization ability of DNNs. Previous efforts tend to handle part or all of the data in a unified denoising flow, identifying noisy data with a coarse small-loss criterion to mitigate interference from noisy labels. This ignores the fact that noisy samples vary in difficulty, so a rigid, unified data selection pipeline cannot tackle the problem well. In this paper, we propose a coarse-to-fine robust learning method, called CREMA, that handles noisy data in a divide-and-conquer manner. At the coarse level, clean and noisy sets are first separated in terms of credibility in a statistical sense. Since it is practically impossible to categorize all noisy samples correctly, we further process them in a fine-grained manner by modeling the credibility of each sample. Specifically, for the clean set, we design a memory-based modulation scheme that dynamically adjusts the contribution of each sample according to its historical credibility sequence during training, thus alleviating the effect of noisy samples incorrectly grouped into the clean set. Meanwhile, for samples categorized into the noisy set, a selective label update strategy is proposed to correct noisy labels while mitigating correction errors. Extensive experiments are conducted on benchmarks of different modalities, including image classification (CIFAR, Clothing1M, etc.) and text recognition (IMDB), with either synthetic or natural semantic noise, demonstrating the superiority and generality of CREMA.
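The coarse-to-fine pipeline described above can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not CREMA's actual implementation: the exact split ratio, the history window, the exponential credibility weighting, and the confidence threshold for relabeling are all illustrative choices standing in for details the abstract does not specify.

```python
import numpy as np

def small_loss_split(losses, clean_ratio=0.5):
    """Coarse level: treat the samples with the smallest losses as clean.
    clean_ratio is an illustrative hyperparameter, not from the paper."""
    n_clean = int(len(losses) * clean_ratio)
    order = np.argsort(losses)
    return order[:n_clean], order[n_clean:]  # clean indices, noisy indices

class CredibilityHistory:
    """Fine level, clean set: track each sample's recent losses and derive
    a per-sample weight from its historical credibility sequence."""
    def __init__(self, n_samples, window=5):
        self.history = [[] for _ in range(n_samples)]
        self.window = window  # illustrative sliding-window length

    def update(self, idx, loss):
        h = self.history[idx]
        h.append(float(loss))
        if len(h) > self.window:
            h.pop(0)

    def weight(self, idx):
        h = self.history[idx]
        if not h:
            return 1.0
        # Lower mean historical loss -> higher credibility weight.
        # The exponential form is an assumption for illustration.
        return float(np.exp(-np.mean(h)))

def selective_relabel(probs, labels, threshold=0.9):
    """Fine level, noisy set: replace a label only when the model's
    prediction is confident, to limit correction errors."""
    confident = probs.max(axis=1) >= threshold
    new_labels = labels.copy()
    new_labels[confident] = probs.argmax(axis=1)[confident]
    return new_labels
```

In a training loop, `small_loss_split` would run once per epoch over per-sample losses, `CredibilityHistory.weight` would scale each clean sample's loss term, and `selective_relabel` would be applied to the noisy set using the model's softmax outputs.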