Abstract
In this work, we focus on cross-domain few-shot classification (CDFSC), which is challenged both by scarce labeled data and by the extreme domain shift between base and novel target classes. Current methods typically employ a lightweight backbone together with a linear-probe-like traditional fine-tuning (Trad-FT) paradigm. For recently emerging large-scale pre-trained models (LPMs), however, which have far more parameters and considerable prior knowledge, Trad-FT carries significant risks of overfitting and of damaging that prior knowledge. In this paper, we propose semantic-guided robustness tuning (SRT), a novel fine-tuning paradigm comprising modulus-matching-based image-text mixup (MMIT-Mixup) and robustness-invariance fine-tuning (RI-FT), to address the CDFSC challenge for LPMs. Concretely, SRT focuses on learning robust class-specific representations. It first treats textual information as a robust, domain-invariant conductor, and MMIT-Mixup injects this domain-invariant, class-specific knowledge to obtain domain-invariant prototypes. RI-FT then optimizes the distance between visual features and these prototypes to enhance the robustness of the visual encoder. Extensive experiments across several types of LPMs reveal that SRT is a general solution to the CDFSC challenge of LPMs and outperforms existing methods by a large margin.
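To make the two components concrete, below is a minimal PyTorch-style sketch of one plausible reading of the abstract. The exact modulus-matching rule, the cosine-similarity cross-entropy form of the RI-FT objective, and the hyperparameters `lam` and `tau` are illustrative assumptions, not the paper's confirmed formulation.

```python
import torch
import torch.nn.functional as F

def mmit_mixup(image_protos, text_feats, lam=0.5):
    """Sketch of modulus-matching-based image-text mixup (MMIT-Mixup).

    Assumption: each class's text embedding is rescaled to the modulus
    (L2 norm) of the corresponding visual prototype before mixing, so the
    domain-invariant textual direction is injected without changing
    feature magnitude. Shapes: (num_classes, dim) for both inputs.
    """
    img_norm = image_protos.norm(dim=-1, keepdim=True)
    text_matched = F.normalize(text_feats, dim=-1) * img_norm
    # Convex combination of visual prototypes and modulus-matched text.
    return lam * image_protos + (1.0 - lam) * text_matched

def ri_ft_loss(features, labels, prototypes, tau=0.07):
    """Sketch of a robustness-invariance fine-tuning (RI-FT) objective:
    pull each support feature toward its domain-invariant class prototype
    and away from the others via cosine-similarity cross-entropy.
    features: (batch, dim); labels: (batch,); prototypes: (num_classes, dim).
    """
    feats = F.normalize(features, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    logits = feats @ protos.t() / tau  # (batch, num_classes)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy 5-way example with 512-dim features (random data, shapes only).
    protos_img = torch.randn(5, 512)   # per-class means of support features
    feats_text = torch.randn(5, 512)   # class-name prompt embeddings
    protos = mmit_mixup(protos_img, feats_text, lam=0.5)
    loss = ri_ft_loss(torch.randn(20, 512), torch.randint(0, 5, (20,)), protos)
    print(loss.item())
```

In an episodic setting, one would build `protos_img` by averaging the visual encoder's support features per class, obtain `feats_text` from class-name prompts through the text encoder, and fine-tune the visual encoder by minimizing `ri_ft_loss` against the mixed prototypes.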
Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grant 62176246, the Anhui Province Key Research and Development Plan (202304a05020045), and the Anhui Province Natural Science Foundation (2208085UD17).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xiao, K., Wang, Z., Li, J. (2025). Semantic-Guided Robustness Tuning for Few-Shot Transfer Across Extreme Domain Shift. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15107. Springer, Cham. https://doi.org/10.1007/978-3-031-72967-6_17
DOI: https://doi.org/10.1007/978-3-031-72967-6_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72966-9
Online ISBN: 978-3-031-72967-6
eBook Packages: Computer Science, Computer Science (R0)