Abstract
Image understanding is an essential research problem for many applications, such as text-to-image retrieval, visual question answering (VQA), and visual dialog. In all of these applications, comprehending images through natural-language queries remains a challenge. Most studies concentrate on general images and have achieved promising results, but those results do not carry over to specific real-world domains such as remote sensing. To tackle this issue, we propose an enhanced attention-based approach, entitled EAGLE, which seamlessly integrates the properties of aerial images with natural language processing (NLP). In addition, we contribute the first large-scale remote sensing question answering corpus (https://github.com/rsqa2018/corpus). Extensive experiments conducted on real data demonstrate that EAGLE outperforms state-of-the-art approaches.
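Because the abstract does not spell out the EAGLE architecture, the following is only a minimal, hypothetical sketch of question-guided attention over aerial image region features for answer classification, in the spirit of the attention-based VQA setting the paper addresses; the QuestionGuidedAttention class, the GRU question encoder, and all dimensions are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of question-guided attention for remote sensing VQA.
    # Assumed components (not from the paper): CNN region features, a GRU
    # question encoder, and one soft-attention layer over image regions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QuestionGuidedAttention(nn.Module):
        def __init__(self, vocab_size, num_answers, region_dim=512, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, 300)
            self.gru = nn.GRU(300, hidden_dim, batch_first=True)   # question encoder
            self.proj_v = nn.Linear(region_dim, hidden_dim)        # project region features
            self.att = nn.Linear(hidden_dim, 1)                    # one score per region
            self.classifier = nn.Linear(hidden_dim * 2, num_answers)

        def forward(self, regions, question):
            # regions: (B, R, region_dim) features of R image regions
            # question: (B, T) token ids
            _, q = self.gru(self.embed(question))                   # (1, B, hidden_dim)
            q = q.squeeze(0)                                        # (B, hidden_dim)
            v = torch.tanh(self.proj_v(regions))                    # (B, R, hidden_dim)
            scores = self.att(v * q.unsqueeze(1))                   # (B, R, 1)
            alpha = F.softmax(scores, dim=1)                        # attention over regions
            v_att = (alpha * v).sum(dim=1)                          # attended visual feature
            return self.classifier(torch.cat([v_att, q], dim=-1))  # answer logits

The design choice illustrated here is standard soft attention: the question representation re-weights region features so the classifier sees only the parts of the aerial image relevant to the query.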
Acknowledgements
We would like to thank the anonymous annotators for their professional annotations. This work is supported by the Natural Science Research of Jiangsu Higher Education Institutions of China under Grant 15KJA420001, the National Natural Science Foundation of China under Grant 61772278, and the Open Foundation of the Key Laboratory of Information Processing and Intelligent Control in Fujian Province under Grant MJUKF201705.
Copyright information
© 2023 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, Y. et al. (2023). EAGLE: An Enhanced Attention-Based Strategy by Generating Answers from Learning Questions to a Remote Sensing Image. In: Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. CICLing 2019. Lecture Notes in Computer Science, vol 13452. Springer, Cham. https://doi.org/10.1007/978-3-031-24340-0_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-24339-4
Online ISBN: 978-3-031-24340-0
eBook Packages: Computer Science, Computer Science (R0)