
Robust Nearest Neighbors for Source-Free Domain Adaptation Under Class Distribution Shift

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

The goal of source-free domain adaptation (SFDA) is to retrain a model fitted on data from a source domain (e.g. drawings) so that it classifies data from a target domain (e.g. photos), using only the target samples. In addition to the domain shift, in a realistic scenario the number of samples per class in the source and target would also differ (i.e. class distribution shift, or CDS). Handling CDS without labels, using target data only, is challenging, and thus previous methods assume no class imbalance in the source data. We study the SFDA pipeline and, for the first time, propose an SFDA method that can deal with class imbalance in both the source and the target data. While pseudolabeling is the core technique in SFDA for estimating the distribution of the target data, it relies on nearest neighbors, which makes it sensitive to CDS. We compute robust nearest neighbors by leveraging additional generic features that are free of the source model’s CDS bias. This provides a “second opinion” regarding which nearest neighbors are more suitable for adaptation. We evaluate our method using various types of features, datasets and tasks, outperforming previous methods in SFDA under CDS. Our code is available at https://github.com/CyberAgentAILab/Robust_Nearest_Neighbors_SFDA-CDS.
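
To make the “second opinion” idea concrete, here is a minimal sketch of how nearest neighbors from the (possibly CDS-biased) source-model features could be cross-checked against neighbors computed from generic features (e.g. an ImageNet-pretrained backbone). The function names (`knn_indices`, `robust_neighbors`), the cosine-similarity kNN, and the consensus rule are illustrative assumptions, not the authors’ implementation; the linked repository contains the actual method.

```python
# Sketch only: illustrates cross-checking nearest neighbors from two feature
# spaces. Names and the intersection-based consensus rule are assumptions,
# not the paper's exact algorithm.
import numpy as np

def knn_indices(feats: np.ndarray, k: int) -> np.ndarray:
    """For each row, return the indices of its k nearest neighbors (cosine similarity)."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)          # exclude the sample itself
    return np.argsort(-sim, axis=1)[:, :k]

def robust_neighbors(source_feats: np.ndarray, generic_feats: np.ndarray, k: int = 5):
    """Keep only the neighbors on which the source-model features and the
    generic (source-bias-free) features agree."""
    nn_source = knn_indices(source_feats, k)
    nn_generic = knn_indices(generic_feats, k)
    return [
        sorted(set(s).intersection(g))      # consensus neighbors per target sample
        for s, g in zip(nn_source, nn_generic)
    ]

# Toy usage with random features standing in for target embeddings
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 256))           # features from the source model
gen = rng.normal(size=(100, 512))           # generic features (hypothetical backbone)
print(robust_neighbors(src, gen, k=5)[:3])
```

The intersection here simply illustrates how a biased and an unbiased feature space can jointly vet candidate neighbors before pseudolabeling; the paper’s neighbor-selection rule should be taken from its official code.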



Author information


Corresponding author

Correspondence to Antonio Tejero-de-Pablos.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1175 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tejero-de-Pablos, A., Togashi, R., Otani, M., Satoh, S. (2025). Robust Nearest Neighbors for Source-Free Domain Adaptation Under Class Distribution Shift. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15130. Springer, Cham. https://doi.org/10.1007/978-3-031-73220-1_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-73220-1_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73219-5

  • Online ISBN: 978-3-031-73220-1

  • eBook Packages: Computer Science, Computer Science (R0)
