Research article
DOI: 10.1145/3581783.3613821

Chaos to Order: A Label Propagation Perspective on Source-Free Domain Adaptation

Published: 27 October 2023

Abstract

Source-free domain adaptation (SFDA), in which only a pre-trained source model is available for adapting to the target distribution, is a more general approach to achieving domain adaptation in the real world. However, accurately capturing the inherent structure of the target features is challenging because no supervision is available on the target domain. By analyzing the clustering performance of the target features, we show that they still contain core features related to discriminative attributes but lack a coherent organization of semantic information. Inspired by this insight, we present Chaos to Order (CtO), a novel approach to SFDA that constrains semantic credibility and propagates label information among target subpopulations. CtO divides the target data into inner and outlier samples based on an adaptive threshold over the learning state, customizing the learning strategy to best fit the properties of each subset. Specifically, the inner samples, which are relatively well clustered, are used to learn the intra-class structure, while the low-density outlier samples are regularized by input consistency to achieve high accuracy with respect to the ground-truth labels. By employing these different learning strategies to propagate labels from the inner samples to the outlier instances, CtO clusters the global samples from chaos to order. We further adaptively regulate the neighborhood affinity of the inner samples to constrain local semantic credibility. Theoretical and empirical analyses demonstrate that our algorithm not only propagates labels from inner to outlier samples but also prevents local clustering from forming spurious clusters. Empirically, CtO outperforms the state of the art on three public benchmarks: Office-31, Office-Home, and VisDA.
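The inner/outlier split described in the abstract can be illustrated with a minimal sketch. The rule below is hypothetical, not the authors' exact method: it scales a base confidence threshold by each class's estimated learning status (its share of confident pseudo-labels), in the spirit of curriculum pseudo-labeling, and flags samples above their per-class threshold as "inner". The function name `split_inner_outlier` and the `base_tau` parameter are assumptions for illustration only.

```python
import numpy as np

def split_inner_outlier(probs, base_tau=0.9):
    """Split target samples into 'inner' and 'outlier' masks using an
    adaptive per-class confidence threshold (illustrative sketch only).

    probs: (N, C) array of softmax outputs of the source model on target data.
    """
    conf = probs.max(axis=1)      # prediction confidence per sample
    pred = probs.argmax(axis=1)   # hard pseudo-label per sample
    n_classes = probs.shape[1]

    # Learning status: fraction of samples confidently assigned to each class.
    status = np.array([
        np.mean((pred == c) & (conf > base_tau)) for c in range(n_classes)
    ])
    status = status / (status.max() + 1e-8)   # normalize to [0, 1]

    tau = base_tau * status[pred]   # adaptive threshold for each sample
    inner = conf >= tau             # relatively well-clustered samples
    return inner, ~inner            # inner mask, outlier mask
```

Inner samples would then feed the intra-class structure learning, while outliers receive input-consistency regularization, so labels can propagate from the well-clustered region outward as training progresses.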

Supplemental Material

MP4 File
Presentation video


Cited By

  • (2024) Rectifying self-training with neighborhood consistency and proximity for source-free domain adaptation. Neurocomputing 606 (Nov 2024), 128425. DOI: 10.1016/j.neucom.2024.128425


Published In

MM '23: Proceedings of the 31st ACM International Conference on Multimedia
October 2023, 9913 pages
ISBN: 9798400701085
DOI: 10.1145/3581783

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. cluster analysis
      2. label propagation
      3. source-free domain adaptation
      4. transfer learning


Conference

MM '23: The 31st ACM International Conference on Multimedia
October 29 - November 3, 2023
Ottawa, ON, Canada

      Acceptance Rates

      Overall Acceptance Rate 2,145 of 8,556 submissions, 25%
