

Source-free Unsupervised Domain Adaptation with Trusted Pseudo Samples

Published: 16 February 2023

Abstract

Source-free unsupervised domain adaptation (SFUDA) aims to adapt a model to the target domain using only a pre-trained source-domain model and unlabeled target-domain samples, without directly accessing any source-domain data. Although many SFUDA methods use pseudo-labeling strategies to improve the accuracy of pseudo-labels in the target domain, these strategies ignore the influence of domain shift on the reference distribution from which pseudo-labels are computed. In this article, we propose a novel kind of SFUDA with trusted pseudo samples (SFUDA-TPS), which solves the SFUDA problem with a reliable feature reference distribution. In SFUDA-TPS, we design a target feature correcting classifier to alleviate the deviation of the feature reference distribution from the distribution of target-domain samples. On this basis, a more reliable feature reference distribution is computed from the target-domain samples that both the fixed source-domain classifier and the target feature correcting classifier predict confidently, i.e., with low entropy. Implicit alignment between the source and target domains is achieved by learning the source-domain distributions hidden in the fixed source-domain classifier. Experimental evaluations illustrate the effectiveness of the proposed method on SFUDA tasks.
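The core selection step the abstract describes, keeping only target samples that both the fixed source classifier and the correcting classifier predict with low entropy, then building a per-class feature reference distribution from those trusted samples, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names (`select_trusted`, `class_prototypes`) and the fixed entropy threshold are assumptions for demonstration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    # Shannon entropy of each row of a probability matrix.
    return -(p * np.log(p + eps)).sum(axis=-1)

def select_trusted(logits_src, logits_tgt, threshold):
    # Trust a sample only if BOTH the fixed source classifier and the
    # target feature correcting classifier predict it with low entropy.
    h_src = entropy(softmax(logits_src))
    h_tgt = entropy(softmax(logits_tgt))
    return (h_src < threshold) & (h_tgt < threshold)

def class_prototypes(features, pseudo_labels, trusted, num_classes):
    # Feature reference distribution: per-class mean of trusted features.
    # Classes with no trusted samples keep a zero prototype here.
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        sel = trusted & (pseudo_labels == c)
        if sel.any():
            protos[c] = features[sel].mean(axis=0)
    return protos
```

In practice the trusted mask and prototypes would be recomputed as the target model is updated, so that pseudo-labels assigned by comparing features to the reference distribution become progressively more reliable.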


Cited By

  • (2024) Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–20. DOI:10.1145/3643860
  • (2023) A plug-and-play noise-label correction framework for unsupervised domain adaptation person re-identification. The Visual Computer 40, 6 (2024), 4493–4504. DOI:10.1007/s00371-023-03094-4

      Published In

ACM Transactions on Intelligent Systems and Technology, Volume 14, Issue 2
      April 2023
      430 pages
      ISSN:2157-6904
      EISSN:2157-6912
      DOI:10.1145/3582879
Editor: Huan Liu

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 16 February 2023
      Online AM: 03 November 2022
      Accepted: 28 October 2022
      Revised: 14 September 2022
      Received: 19 January 2022
      Published in TIST Volume 14, Issue 2


      Author Tags

      1. Unsupervised domain adaptation (UDA)
      2. source-free UDA (SFUDA)
      3. trusted pseudo samples
      4. data distributions
      5. knowledge transfer

      Qualifiers

      • Research-article

      Funding Sources

      • National Natural Science Foundation of China
      • Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University
      • Fundamental Research Funds for the Central Universities
      • Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)

      Article Metrics

• Downloads (last 12 months): 201
• Downloads (last 6 weeks): 11
Reflects downloads up to 30 Nov 2024
