DOI: 10.1007/978-3-031-26293-7_41
Article

A2: Adaptive Augmentation for Effectively Mitigating Dataset Bias

Published: 11 March 2023

Abstract

Deep neural networks (DNNs) have recently become the de facto standard for achieving outstanding performance and have had a significant impact on various real-world computer vision tasks. However, trained networks often suffer from overfitting caused by unintended bias in a dataset, yielding inaccurate, unreliable, and untrustworthy results. Recent studies have therefore attempted to remove such bias by augmenting bias-conflict samples. This remains challenging, however, because generating bias-conflict samples without human supervision is generally difficult. To tackle this problem, we propose a novel augmentation framework, Adaptive Augmentation (A2), based on a generative model that helps classifiers learn debiased representations. Our framework consists of three steps: 1) extracting bias-conflict samples from a biased dataset in an unsupervised manner, 2) training a generative model on the biased dataset and adapting the learned biased distribution to the distribution of the extracted bias-conflict samples, and 3) augmenting bias-conflict samples by translating bias-aligned samples. As a result, our classifier can effectively learn debiased representations without human supervision. Extensive experimental results demonstrate that A2 effectively augments bias-conflict samples and mitigates widespread bias issues. The code is available at https://github.com/anjaeju/A2-Adaptive-Augmentation-for-Effectively-Mitigating-Dataset-Bias.
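
The abstract outlines a three-step pipeline: mine bias-conflict samples without supervision, adapt a generator trained on the biased data toward the bias-conflict distribution, and translate bias-aligned samples to synthesize additional bias-conflict training data. Below is a minimal, hypothetical PyTorch sketch of that control flow only; the function and module names (extract_bias_conflict, augment_and_train, the toy generator) and the loss-based mining heuristic are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Step 1 (assumed heuristic): score samples with an intentionally biased
    # classifier; the samples it fits worst (highest loss) are treated as
    # bias-conflict candidates, the rest as bias-aligned.
    def extract_bias_conflict(biased_clf, images, labels, ratio=0.05):
        with torch.no_grad():
            losses = F.cross_entropy(biased_clf(images), labels, reduction="none")
        k = max(1, int(ratio * len(images)))
        conflict_idx = losses.topk(k).indices
        aligned_idx = (-losses).topk(len(images) - k).indices
        return conflict_idx, aligned_idx

    # Step 2 (stand-in): the paper adapts a generator trained on the biased
    # data toward the bias-conflict distribution; here, a toy conv net.
    generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 3, 3, padding=1))

    # Step 3: translate bias-aligned images through the (adapted) generator to
    # synthesize extra bias-conflict-style samples, keeping their class labels,
    # then train the debiased classifier on real plus synthetic data.
    def augment_and_train(clf, generator, images, labels, aligned_idx, optimizer):
        synthetic = generator(images[aligned_idx]).detach()
        x = torch.cat([images, synthetic])
        y = torch.cat([labels, labels[aligned_idx]])
        optimizer.zero_grad()
        loss = F.cross_entropy(clf(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy run on random tensors, just to exercise the control flow.
    images, labels = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
    clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    biased_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(clf.parameters(), lr=0.1)
    _, aligned_idx = extract_bias_conflict(biased_clf, images, labels)
    print(augment_and_train(clf, generator, images, labels, aligned_idx, opt))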

      Published In

      Computer Vision – ACCV 2022: 16th Asian Conference on Computer Vision, Macao, China, December 4–8, 2022, Proceedings, Part VII
Dec 2022, 745 pages
ISBN: 978-3-031-26292-0
DOI: 10.1007/978-3-031-26293-7
Editors: Lei Wang, Juergen Gall, Tat-Jun Chin, Imari Sato, Rama Chellappa

      Publisher

Springer-Verlag, Berlin, Heidelberg

      Author Tags

      1. Computer vision
      2. Debiasing
      3. Image translation

      Qualifiers

      • Article
