Salient Object Detection in Optical Remote Sensing Images Based on Global Context Mixed Attention

  • Research Article
  • Published in: Journal of the Indian Society of Remote Sensing

Abstract

Optical remote sensing images exhibit complex characteristics such as high object density, multiple scales, and varied viewing angles, which pose significant challenges for salient object detection. This paper introduces an integrated model for accurately detecting salient objects in optical remote sensing images. At its core is a feature aggregation module built on hybrid attention, which progressively fuses multi-layer feature maps and thereby reduces the information loss incurred along the skip connections of the U-shaped architecture. The framework also integrates a dual-channel attention mechanism that exploits the spatial contours of salient regions in optical remote sensing images to improve the module's effectiveness. A hybrid loss function further strengthens the approach by supervising network training at the pixel, region, and statistical levels. Comprehensive experiments on two widely used benchmark datasets for optical remote sensing scenarios validate the effectiveness and robustness of the proposed method, which compares favorably with existing methods.
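
The full text is not available in this preview, but the fusion step the abstract describes follows a recognizable pattern: channel and spatial attention jointly gate the merge of an encoder skip feature with an upsampled decoder feature inside a U-shaped network. The following minimal PyTorch sketch illustrates that pattern; the module names, reduction ratio, pooling choices, and fusion order are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hybrid-attention feature aggregation block.
# All design details here are assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Squeeze spatial dimensions, then excite per-channel weights.
        w = self.mlp(x.mean(dim=(2, 3)))              # (B, C)
        return x * torch.sigmoid(w)[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Max- and mean-pool across channels to emphasize salient contours.
        s = torch.cat([x.max(dim=1, keepdim=True).values,
                       x.mean(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

class HybridAttentionFusion(nn.Module):
    """Fuse an encoder skip feature with an upsampled decoder feature."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, skip, decoder):
        # Upsample the coarser decoder feature to the skip feature's size,
        # merge the two, then gate the result with both attention branches.
        decoder = F.interpolate(decoder, size=skip.shape[2:],
                                mode='bilinear', align_corners=False)
        fused = self.proj(torch.cat([skip, decoder], dim=1))
        return self.sa(self.ca(fused))
```

The spatial branch pools across channels before convolving, which is one common way to exploit the spatial contours of salient regions that the abstract highlights.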
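Likewise, supervision at the pixel, region, and statistical levels is often realized by summing binary cross-entropy, soft IoU, and SSIM terms. The sketch below shows one such combination under equal weights; the paper's actual loss terms and weights are not given in this preview, so everything here is an assumption.

```python
# Minimal sketch of a hybrid loss with pixel-, region-, and statistics-level
# terms. The choice of terms and their equal weighting are assumptions.
import torch
import torch.nn.functional as F

def iou_loss(pred, target, eps=1e-6):
    # Region-level term: 1 minus a soft intersection-over-union.
    inter = (pred * target).sum(dim=(2, 3))
    union = (pred + target - pred * target).sum(dim=(2, 3))
    return (1 - (inter + eps) / (union + eps)).mean()

def ssim_loss(pred, target, window=11, C1=0.01 ** 2, C2=0.03 ** 2):
    # Statistics-level term: 1 minus mean local SSIM, with average pooling
    # standing in for the usual Gaussian window.
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, 1, pad)
    mu_t = F.avg_pool2d(target, window, 1, pad)
    var_p = F.avg_pool2d(pred * pred, window, 1, pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, 1, pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, 1, pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / (
        (mu_p ** 2 + mu_t ** 2 + C1) * (var_p + var_t + C2))
    return (1 - ssim).mean()

def hybrid_loss(logits, target):
    # `logits` are raw network outputs; `target` is a {0, 1} float mask.
    pred = torch.sigmoid(logits)
    return (F.binary_cross_entropy_with_logits(logits, target)  # pixel level
            + iou_loss(pred, target)                            # region level
            + ssim_loss(pred, target))                          # statistics level
```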

Acknowledgements

We thank the reviewers, whose suggestions and comments were crucial to improving this manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62271393).

Author information

Contributions

Longquan Yan and Ruixiang Yan designed and implemented the model architecture and wrote the manuscript; Guohua Geng and Mingquan Zhou provided guidance on algorithm optimization and manuscript revision; and Rong Chen polished the manuscript.

Corresponding author

Correspondence to Guohua Geng.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Yan, L., Yan, R., Geng, G. et al. Salient Object Detection in Optical Remote Sensing Images Based on Global Context Mixed Attention. J Indian Soc Remote Sens 52, 1489–1499 (2024). https://doi.org/10.1007/s12524-024-01870-w


