Abstract
Exploring the capacity of large-scale pre-trained models to learn common features of multimodal data, and the effect of transferring that knowledge to downstream tasks, are two major trends in the multimedia field. However, existing studies typically use pre-trained models either as feature extractors or as teacher models for knowledge distillation on downstream tasks, so the cross-modal knowledge transfer mechanism and the knowledge forgetting problem of pre-trained large models have not been fully investigated. To address these issues, this paper explores fine-tuning strategies, feature selection strategies, and semantic guidance approaches for transferring pre-trained large models. To counter knowledge forgetting during fine-tuning, we propose PMHANet, an image classification algorithm that integrates a pre-trained large-scale model with heterogeneous feature alignment; more broadly, this provides a cross-modal knowledge transfer paradigm for multimodal pre-trained large models. Experiments on VireoFood-172 and NUS-WIDE show that large models pre-trained on datasets such as COCO perform better on NUS-WIDE, whose domain is close to the pre-training data, than on the more domain-specific VireoFood-172, and that PMHANet, built on a partially fine-tuned pre-trained large model, effectively enhances multimodal representations in downstream tasks and achieves state-of-the-art performance on both datasets.
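The abstract's notions of "partial fine-tuning" and "heterogeneous feature alignment" can be made concrete with a short sketch. PMHANet's actual implementation is not given on this page, so everything below is an illustrative assumption: a torchvision ResNet-50 stands in for the pre-trained large model, a CORAL-style loss (Sun and Saenko, ECCV 2016) stands in for the alignment objective, and the 172-class head matches VireoFood-172. Freezing all but the last residual block is one simple way to limit knowledge forgetting in the pre-trained layers while still adapting to the downstream task.

```python
import torch
import torch.nn as nn
from torchvision import models

def coral_loss(source, target):
    """CORAL-style second-order statistics alignment (Sun & Saenko, ECCV 2016)."""
    d = source.size(1)
    cov_s = torch.cov(source.T)  # (d, d) covariance of source-side features
    cov_t = torch.cov(target.T)  # (d, d) covariance of target-side features
    return (cov_s - cov_t).pow(2).sum() / (4 * d * d)

# Pre-trained backbone; only the last residual block and the classifier
# remain trainable, so earlier layers keep their pre-trained knowledge.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))
backbone.fc = nn.Linear(backbone.fc.in_features, 172)  # VireoFood-172 classes

optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4)

# One training step with random placeholder tensors; `text_feats` stands in
# for the text-side embeddings that the alignment term would pull the image
# features toward. The 0.5 weight is an arbitrary placeholder.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 172, (8,))
text_feats = torch.randn(8, 172)
logits = backbone(images)
loss = nn.functional.cross_entropy(logits, labels) + 0.5 * coral_loss(logits, text_feats)
loss.backward()
optimizer.step()
```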
Acknowledgments
This work is supported in part by the Excellent Youth Scholars Program of Shandong Province (Grant no. 2022HWYQ-048) and the Oversea Innovation Team Project of the “20 Regulations for New Universities” funding program of Jinan (Grant no. 2021GXRC073).