Abstract
The goal of source-free domain adaptation (SFDA) is to retrain a model fitted on data from a source domain (e.g., drawings) so that it classifies data from a target domain (e.g., photos), using only the target samples. In a realistic scenario, beyond the domain shift, the number of samples per class also differs between source and target (i.e., class distribution shift, or CDS). Handling CDS without labels, using target data alone, is challenging, and thus previous methods assume no class imbalance in the source data. We study the SFDA pipeline and, for the first time, propose an SFDA method that can deal with class imbalance in both source and target data. While pseudo-labeling is the core technique in SFDA for estimating the distribution of the target data, it relies on nearest neighbors, which makes it sensitive to CDS. We compute robust nearest neighbors by leveraging additional generic features that are free of the source model's CDS bias. These provide a "second opinion" on which nearest neighbors are more suitable for adaptation. We evaluate our method on a variety of features, datasets, and tasks, outperforming previous SFDA methods under CDS. Our code is available at https://github.com/CyberAgentAILab/Robust_Nearest_Neighbors_SFDA-CDS.
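The "second-opinion" idea in the abstract can be illustrated with a minimal sketch: compute k-nearest neighbors independently in the source model's feature space and in a generic (e.g., self-supervised pre-trained) feature space, then keep only the neighbors both spaces agree on. This is a hypothetical illustration of the general principle, not the paper's actual algorithm; the function names and the intersection rule are assumptions for the sake of the example.

```python
import numpy as np

def knn_indices(feats: np.ndarray, k: int) -> np.ndarray:
    """Cosine-similarity k-nearest neighbors for each row (self excluded)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # never pick a sample as its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]

def robust_neighbors(source_feats: np.ndarray,
                     generic_feats: np.ndarray,
                     k: int = 3) -> list:
    """Keep only neighbors agreed upon by both feature spaces.

    The generic features act as a "second opinion" that is independent
    of any class-distribution bias in the source model's features.
    """
    nn_src = knn_indices(source_feats, k)
    nn_gen = knn_indices(generic_feats, k)
    return [sorted(set(a) & set(b)) for a, b in zip(nn_src, nn_gen)]
```

A stricter or softer combination rule (e.g., ranking by summed similarity instead of set intersection) would follow the same structure; the key point is that a neighbor ranked highly only because of the source model's bias is filtered out by the unbiased feature space.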
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Tejero-de-Pablos, A., Togashi, R., Otani, M., Satoh, S. (2025). Robust Nearest Neighbors for Source-Free Domain Adaptation Under Class Distribution Shift. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15130. Springer, Cham. https://doi.org/10.1007/978-3-031-73220-1_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-73219-5
Online ISBN: 978-3-031-73220-1
eBook Packages: Computer Science, Computer Science (R0)