Accurate and real-time visual detection algorithm for environmental perception of USVs under all-weather conditions

  • Research
  • Published:
Journal of Real-Time Image Processing

Abstract

Owing to the intricate and ever-changing nature of the marine environment, traditional marine survey methods are subject to numerous limitations. Unmanned surface vehicles (USVs) have gained significant popularity for their ability to automatically identify and localize targets at sea. To enhance the environmental perception of USVs in complex marine environments, vision-based sea surface object detection has emerged as a crucial technological approach. Given the unique challenges of sea surface object detection and the extreme conditions USVs encounter in real sea environments, YOLOv7 was chosen as the baseline model for its strong trade-off between speed and accuracy. We propose Efficient Multi-Scale Pyramid Attention Networks, which enable simple and rapid multi-scale feature fusion, and we improve the bounding-box loss function. Building on these optimizations, we develop a new detector, Marit-YOLO, which achieves a better accuracy–efficiency trade-off than prior detectors across a wide range of resource constraints. Marit-YOLO improves average precision (AP) by 5.0% on our independently collected Ocean Buoys Dataset, and in generalization experiments AP also increases by 7.7% on the open-source Singapore Maritime Dataset. Marit-YOLO runs at 69 frames per second on a single NVIDIA RTX 2080, enabling accurate, real-time target detection in complex sea environments.
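The abstract reports an improved bounding-box loss but does not give its formulation here. As a hedged point of reference only, the sketch below implements the standard Complete-IoU (CIoU) loss (Zheng et al., AAAI 2020), the kind of IoU-based regression loss that such refinements typically build on; it is an illustrative assumption, not the actual Marit-YOLO loss.

```python
import math

def ciou_loss(box_pred, box_gt, eps=1e-9):
    """Complete-IoU (CIoU) loss for two axis-aligned boxes given in
    (x1, y1, x2, y2) corner format. Illustrative sketch only: the
    exact Marit-YOLO loss is not specified in this abstract.
    """
    x1, y1, x2, y2 = box_pred
    gx1, gy1, gx2, gy2 = box_gt

    # Intersection-over-union of the two boxes.
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + eps)

    # Squared distance between box centers, normalised by the squared
    # diagonal of the smallest enclosing box (the DIoU penalty term).
    rho2 = ((x1 + x2 - gx1 - gx2) ** 2 + (y1 + y2 - gy1 - gy2) ** 2) / 4
    c2 = (max(x2, gx2) - min(x1, gx1)) ** 2 \
       + (max(y2, gy2) - min(y1, gy1)) ** 2 + eps

    # Aspect-ratio consistency term added by CIoU on top of DIoU.
    v = (4 / math.pi ** 2) * (
        math.atan((gx2 - gx1) / (gy2 - gy1 + eps))
        - math.atan((x2 - x1) / (y2 - y1 + eps))
    ) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v


# Identical boxes give a loss of ~0; a slightly shifted prediction
# incurs a small penalty that grows with misalignment.
print(ciou_loss((10, 10, 50, 50), (10, 10, 50, 50)))  # ~0.0
print(ciou_loss((12, 8, 48, 52), (10, 10, 50, 50)))   # ~0.17
```

Relative to a plain 1 − IoU loss, the center-distance term keeps a useful gradient even when predicted and ground-truth boxes do not overlap, which is why DIoU/CIoU-style losses tend to converge faster in bounding-box regression.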


Availability of data and materials

Dataset download link: https://sites.google.com/site/dilipprasad/home/singapore-maritime-dataset.


Acknowledgements

The authors would like to express their gratitude to all their colleagues.

Funding

This work was supported by the Equipment Pre-research Key Laboratory Fund (6142215190207) and the Key Research and Development Program of Heilongjiang Province (2022ZX01A05).

Author information


Contributions

Conceptualization, KD, TL, ZS and YZ; methodology, KD and TL; software, KD; validation, KD, TL and YZ; investigation, TL and ZS; resources, TL and ZS; data curation, TL; writing—original draft preparation, KD and YZ; writing—review and editing, KD and YZ; supervision, TL and ZS; funding acquisition, TL. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Tao Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in this study were in accordance with the ethical standards of the institution and the National Research Council. Approval was obtained from the ethics committee of Harbin Engineering University.

Consent to participate

Informed consent was obtained from all individual participants included in the study.

Consent for publication

The authors affirm that research participants provided informed consent for publication of their data and photographs.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Dong, K., Liu, T., Shi, Z. et al. Accurate and real-time visual detection algorithm for environmental perception of USVs under all-weather conditions. J Real-Time Image Proc 21, 36 (2024). https://doi.org/10.1007/s11554-024-01417-9

