Abstract
Visual grounding (VG) is a representative multi-modal task that has recently gained increasing attention. Nevertheless, existing works still under-perform due to insufficient training data. To address this, some researchers have attempted to generate new samples by mixing pairs of (image, text) samples, inspired by the success of the uni-modal CutMix family of data augmentations. However, these methods mix images and texts separately and neglect their contextual correspondence. To overcome this limitation, we propose a novel data augmentation method for the visual grounding task, called Cross-Modal Mix (CMMix). Our approach adopts a fine-grained mix paradigm: sentence-structure analysis locates the central noun parts of texts, and the corresponding image patches are cropped via the noun-specific bounding boxes available in VG. In this way, CMMix maintains matching correspondence during the mix operation, retaining the coherent relationship between images and texts and yielding richer, more meaningful mixed samples. Furthermore, we employ a filter-samples-by-loss strategy to further enhance the effectiveness of our method. Experiments on four VG benchmarks, ReferItGame, RefCOCO, RefCOCO+, and RefCOCOg, fully verify the superiority of our method.
Supported by the Natural Science Foundation of China under grant 62071171.
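To make the mixing procedure concrete, below is a minimal Python sketch of the fine-grained cross-modal mix idea described in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the sample layout (image, expression, box), the last-noun heuristic, and the helper names central_noun and cross_modal_mix are all hypothetical, and NLTK part-of-speech tagging stands in for the paper's sentence-structure analysis.

```python
# A minimal sketch of cross-modal mixing between two (image, text) pairs,
# NOT the authors' exact CMMix implementation. Each sample is assumed to be
# (image, expression, box), where `box` = (x, y, w, h) is the bounding box
# of the region the expression refers to.
# Requires NLTK data packages: punkt, averaged_perceptron_tagger.
import nltk
from PIL import Image


def central_noun(expression: str) -> str:
    """Return the last noun token in the expression -- a simple heuristic
    for the 'central noun part'; the paper's sentence-structure analysis
    may be more elaborate."""
    tokens = nltk.word_tokenize(expression)
    tagged = nltk.pos_tag(tokens)
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    return nouns[-1] if nouns else tokens[-1]


def cross_modal_mix(sample_a, sample_b):
    """Mix sample_b's referred region and central noun into sample_a,
    keeping the pasted image patch and the text fragment in correspondence."""
    img_a, text_a, box_a = sample_a
    img_b, text_b, box_b = sample_b

    # Crop the patch of image B that its noun-specific box refers to.
    xb, yb, wb, hb = box_b
    patch = img_b.crop((xb, yb, xb + wb, yb + hb))

    # Paste it over the referred region of image A, resized to fit.
    xa, ya, wa, ha = box_a
    mixed_img = img_a.copy()
    mixed_img.paste(patch.resize((wa, ha)), (xa, ya))

    # Replace the central noun of text A with that of text B, so the mixed
    # expression still describes the pasted patch at the same location.
    mixed_text = text_a.replace(central_noun(text_a), central_noun(text_b), 1)

    # Box A is unchanged: it now bounds the pasted patch, so it remains the
    # ground-truth target for the mixed (image, text) sample.
    return mixed_img, mixed_text, box_a
```

A filter-samples-by-loss step, as mentioned in the abstract, would then discard mixed samples on which the current model's training loss is anomalous; its exact criterion is not specified here.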