Oriented Ship Detection Based on Intersecting Circle and Deformable RoI in Remote Sensing Images
"> Figure 1
<p>(<b>a</b>) The situation of missed detection due to too dense targets. The yellow box indicates the detected targets, and the red box indicates the missed targets. (<b>b</b>) The result of using the oriented bounding box.</p> "> Figure 2
<p>Overview of the our proposed OSCD-Net. Feature extraction and RPN are located on the left. The DRoI module is an intermediate grid rectangle used for enhancement to form the final detection feature. ICR-head on the right is used to detect the horizontal rectangle and a circle, which is decoded into a rotating rectangular box.</p> "> Figure 3
<p>(<b>a</b>) The circle marked with the radius of the circle. The circle and the horizontal rectangle will intersect into four inclined rectangles. (<b>b</b>) The vertex of the horizontal rectangle and the intersection with the circle, and explains that there are only two inclined rectangles using AF to draw the circle.</p> "> Figure 4
<p>Schematic diagram of the rectangle in the coordinate axis. <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>,</mo> <mi>w</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <mi>R</mi> <mo>,</mo> <mi>X</mi> <mo>)</mo> </mrow> </semantics></math> are known, and vertex coordinates (A, B, C, D, E, F, G, H) are unknown.</p> "> Figure 5
<p>Illustration of the process of the ICR-head module. Each part is marked in the red box on the right.</p> "> Figure 6
<p>Illustration of process of DRoI module. DRoI module uses three transformation operations and two deformation operations. Transformation operation and deformation operation are circled with red frames.</p> "> Figure 7
<p>(<b>a</b>) The step of transformation operation; (<b>b</b>) The process of deformation operation.</p> "> Figure 8
<p>(<b>a</b>) The case where the short side of the horizontal rectangle does not intersect the circle. (<b>b</b>) The parallelogram obtained by connecting the midpoint and intersection of the short side of the horizontal rectangle.</p> "> Figure 9
<p>(<b>a</b>) The tilt target represented by <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>,</mo> <mi>w</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <mi>θ</mi> <mo>)</mo> </mrow> </semantics></math>; (<b>b</b>) the tilt target represented by <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>,</mo> <mi>w</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <msub> <mi>α</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>α</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>α</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>α</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </semantics></math>; (<b>c</b>) the tilt target represented by <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>,</mo> <mi>w</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <mi>R</mi> <mo>,</mo> <mi>S</mi> <mo>)</mo> </mrow> </semantics></math>.</p> "> Figure 10
<p>Comparison of ablation study methods. In the first column, we mark the ground truth with green boxes. The second column shows the detection results of Faster-RCNN OBB. The detection result in blue boxes are the effect of our use of ICR-head. In the last column, the yellow box shows the detection results used by both the DRoI module and ICR-head.</p> "> Figure 11
<p>Comparison experiment detection results on HRSC2016 dataset. The green box in the first column represents the ground truth. The yellow box indicates the detection result. The red box indicates the missed detection area, and the blue box indicates the inaccurate detection area. (<b>a</b>) The detection result of Faster R-CNN. (<b>b</b>) The detection result of RoI transformer. (<b>c</b>) The detection result of Gliding vertex. (<b>d</b>) The detection result of OSCD-Net.</p> "> Figure 12
<p>Precision–recall curve of different methods on the HRSC2016 dataset.</p> "> Figure 13
<p>Comparison of experiment detection results on the DOTA-Ship dataset. The green box in the first column represents the ground truth. The yellow box indicates the detection result. The red box indicates the missed detection area, and the blue box indicates the inaccurate detection area. (<b>a</b>) The detection result of Faster R-CNN. (<b>b</b>) The detection result of RoI transformer. (<b>c</b>) The detection result of Gliding vertex. (<b>d</b>) The detection result of OSCD-Net.</p> "> Figure 14
<p>Precision–recall curve of different methods on the DOTA-Ship dataset.</p> "> Figure 15
<p>Precision–recall curve of different methods on the SSDD+ dataset.</p> "> Figure 16
<p>Comparison of experiment detection results on the SSDD+ dataset. The green box in the first column represents the ground truth. The yellow box indicates the detection result. The red box indicates the missed detection area, and the blue box indicates the inaccurate detection area. (<b>a</b>) The detection result of Faster R-CNN. (<b>b</b>) The detection result of RoI transformer. (<b>c</b>) The detection result of Gliding vertex. (<b>d</b>) The detection result of OSCD-Net.</p> "> Figure 17
<p>Some detection results on the test datasets.</p> ">
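The construction in Figures 3 and 4 — a circle intersecting a horizontal rectangle at up to eight points (A–H) — reduces to the Pythagorean theorem. Below is a minimal sketch, assuming the circle is centered at the rectangle center (x, y); the paper's exact parameterization may differ.

```python
import math

def circle_rect_intersections(x, y, w, h, R):
    """Intersection points of a circle centered at the rectangle center
    (x, y) with radius R and the sides of the axis-aligned w x h
    rectangle. Returns up to 8 points (A..H in Figure 4)."""
    pts = []
    # Vertical sides at x +- w/2: solve (w/2)^2 + dy^2 = R^2 for dy.
    if R >= w / 2:
        dy = math.sqrt(R**2 - (w / 2)**2)
        if dy <= h / 2:  # intersection lies on the side, not its extension
            for sx in (-1, 1):
                for sy in (-1, 1):
                    pts.append((x + sx * w / 2, y + sy * dy))
    # Horizontal sides at y +- h/2: solve dx^2 + (h/2)^2 = R^2 for dx.
    if R >= h / 2:
        dx = math.sqrt(R**2 - (h / 2)**2)
        if dx <= w / 2:
            for sy in (-1, 1):
                for sx in (-1, 1):
                    pts.append((x + sx * dx, y + sy * h / 2))
    return pts
```

When R lies between half the long side and half the diagonal of the rectangle, all eight intersection points exist, matching the four inclined rectangles of Figure 3a.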
Abstract
1. Introduction
- We develop an intersecting circle rotated detection head (ICR-head) for detecting rotated ships in remote sensing images. This detection head is also applicable to other rotated targets.
- We design a DRoI module that obtains features at different pooled sizes. DRoI addresses the problem of ships' excessively large length–width ratios.
- We propose a new end-to-end framework for oriented ship detection in remote sensing images.
2. Methods
2.1. Overall Architecture of OSCD-Net
2.2. ICR-Head
2.2.1. Obtaining a Rotated Rectangle
2.2.2. Encode and Decode
2.2.3. Double-Heads
2.3. Deformable RoI
2.4. Multitask Loss
2.5. Inference
Algorithm 1: The pseudocode for OSCD-Net.
3. Experiment
3.1. Datasets
- (1) HRSC2016 [46]: HRSC2016 is a public remote sensing ship dataset of 1070 images and 2976 instances collected from Google Earth. It contains 2886 ships docked at harbors with complex backgrounds, including warships, aircraft carriers, cargo ships, and fishing boats. The image resolution ranges from 0.4 m to 2 m, and the image sizes vary widely. We use the official HRSC2016 division: 646 images for training and 444 for testing.
- (2) DOTA-Ship: DOTA [47] is a large-scale dataset with 15 object categories for object detection in remote sensing images. It is annotated with four vertex coordinates and contains 2806 aerial images from different sensors and platforms, with sizes ranging from about 800 × 800 to 4000 × 4000 pixels. For DOTA-Ship, we extract all images containing ships from the DOTA trainset and validation set, 434 images in total with 39,028 instances. We randomly divide them into a trainset and a testset at a ratio of 6:4. During training, we crop the images into 1000 × 1000 pixel patches.
- (3) SSDD+ [14]: SSDD is the first publicly released dataset for ship detection in SAR images, containing 1160 images with 2456 ship instances. To support detection with rotated boxes, the SSDD annotations were extended with rotation-angle information in addition to category and position; the resulting dataset is called SSDD+. To verify the robustness of our algorithm on SAR imagery, we evaluate it on SSDD+ with these rotation annotations, randomly dividing the trainset and testset at a ratio of 7:3.
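The random trainset/testset divisions described above (6:4 for DOTA-Ship, 7:3 for SSDD+) can be reproduced with a simple shuffle. The helper and file names below are illustrative, not the authors' actual split script:

```python
import random

def split_dataset(image_names, train_ratio, seed=0):
    """Randomly divide a list of image names into train/test subsets."""
    names = list(image_names)
    random.Random(seed).shuffle(names)  # fixed seed for reproducibility
    n_train = round(len(names) * train_ratio)
    return names[:n_train], names[n_train:]

# e.g. the 434 DOTA-Ship images with a 6:4 ratio (hypothetical file names)
images = [f"ship_{i:04d}.png" for i in range(434)]
train, test = split_dataset(images, train_ratio=0.6)
```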
3.2. Evaluation Metrics
3.3. Training Details
3.4. Ablation Study
3.5. Comparison with Others
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zou, Z.; Shi, Z. Random Access Memories: A New Paradigm for Target Detection in High Resolution Aerial Remote Sensing Images. IEEE Trans. Image Process. 2018, 27, 1100–1111. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Bai, X.; Wang, S.; Zhou, J.; Ren, P. Multiscale visual attention networks for object detection in VHR remote sensing images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 310–314. [Google Scholar] [CrossRef]
- Liu, E.; Zheng, Y.; Pan, B.; Xu, X.; Shi, Z. DCL-Net: Augmenting the Capability of Classification and Localization for Remote Sensing Object Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7933–7944. [Google Scholar] [CrossRef]
- Yu, Y.; Wang, J.; Qiang, H.; Jiang, M.; Tang, E.; Yu, C.; Zhang, Y.; Li, J. Sparse anchoring guided high-resolution capsule network for geospatial object detection from remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102548. [Google Scholar] [CrossRef]
- Yuan, H.; Huang, K.; Ren, C.; Xiong, Y.; Duan, J.; Yang, Z. Pomelo Tree Detection Method Based on Attention Mechanism and Cross-Layer Feature Fusion. Remote Sens. 2022, 14, 3902. [Google Scholar] [CrossRef]
- Dong, X.; Qin, Y.; Gao, Y.; Fu, R.; Liu, S.; Ye, Y. Attention-Based Multi-Level Feature Fusion for Object Detection in Remote Sensing Images. Remote Sens. 2022, 14, 3735. [Google Scholar] [CrossRef]
- Li, Q.; Mou, L.; Liu, Q.; Wang, Y.; Zhu, X.X. HSF-Net: Multiscale Deep Feature Embedding for Ship Detection in Optical Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7147–7161. [Google Scholar] [CrossRef]
- Zhou, Q.; Song, F.; Chen, Z.; Zhang, R.; Jiang, P.; Lei, T. Harbor Ship Detection Based on Channel Weighting and Spatial Information Fusion. J. Phys. Conf. Ser. 2021, 1738, 012057. [Google Scholar] [CrossRef]
- Li, X.; Li, D.; Liu, H.; Wan, J.; Chen, Z.; Liu, Q. A-BFPN: An Attention-Guided Balanced Feature Pyramid Network for SAR Ship Detection. Remote Sens. 2022, 14, 3829. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhao, L.; Xiong, B.; Kuang, G. Attention Receptive Pyramid Network for Ship Detection in SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2738–2756. [Google Scholar] [CrossRef]
- Nie, X.; Duan, M.; Ding, H.; Hu, B.; Wong, E.K. Attention Mask R-CNN for Ship Detection and Segmentation from Remote Sensing Images. IEEE Access 2020, 8, 9325–9334. [Google Scholar] [CrossRef]
- Hu, J.; Zhi, X.; Shi, T.; Zhang, W.; Cui, Y.; Zhao, S. PAG-YOLO: A Portable Attention-Guided YOLO Network for Small Ship Detection. Remote Sens. 2021, 13, 3059. [Google Scholar] [CrossRef]
- Yao, Y.; Jiang, Z.; Zhang, H.; Zhao, D.; Cai, B. Ship detection in optical remote sensing images based on deep convolutional neural networks. J. Appl. Remote Sens. 2017, 11, 042611. [Google Scholar] [CrossRef]
- Li, J.; Qu, C.; Shao, J. Ship detection in SAR images based on an improved faster R-CNN. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Tang, G.; Zhuge, Y.; Claramunt, C.; Men, S. N-YOLO: A SAR Ship Detection Using Noise-Classifying and Complete-Target Extraction. Remote Sens. 2021, 13, 871. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Advances in Neural Information Processing Systems; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2015; Volume 28. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Xie, X.; Li, L.; An, Z.; Lu, G.; Zhou, Z. Small Ship Detection Based on Hybrid Anchor Structure and Feature Super-Resolution. Remote Sens. 2022, 14, 3530. [Google Scholar] [CrossRef]
- Jin, K.; Chen, Y.; Xu, B.; Yin, J.; Wang, X.; Yang, J. A Patch-to-Pixel Convolutional Neural Network for Small Ship Detection With PolSAR Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6623–6638. [Google Scholar] [CrossRef]
- Zhu, M.; Hu, G.; Zhou, H.; Wang, S.; Feng, Z.; Yue, S. A Ship Detection Method via Redesigned FCOS in Large-Scale SAR Images. Remote Sens. 2022, 14, 1153. [Google Scholar] [CrossRef]
- Sun, Z.; Dai, M.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. An Anchor-Free Detection Method for Ship Targets in High-Resolution SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7799–7816. [Google Scholar] [CrossRef]
- Chen, C.; He, C.; Hu, C.; Pei, H.; Jiao, L. MSARN: A Deep Neural Network Based on an Adaptive Recalibration Mechanism for Multiscale and Arbitrary-Oriented SAR Ship Detection. IEEE Access 2019, 7, 159262–159283. [Google Scholar] [CrossRef]
- Jiang, Y.; Zhu, X.; Wang, X.; Yang, S.; Li, W.; Wang, H.; Fu, P.; Luo, Z. R2CNN: Rotational region CNN for orientation robust scene text detection. arXiv 2017, arXiv:1706.09579. [Google Scholar]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q. Learning RoI Transformer for Oriented Object Detection in Aerial Images. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–17 June 2019; pp. 2844–2853. [Google Scholar] [CrossRef]
- Han, J.; Ding, J.; Xue, N.; Xia, G.S. ReDet: A Rotation-equivariant Detector for Aerial Object Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 2785–2794. [Google Scholar] [CrossRef]
- Yang, X.; Yang, X.; Yang, J.; Ming, Q.; Wang, W.; Tian, Q.; Yan, J. Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence. In Proceedings of the Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W., Eds.; Curran Associates, Inc.: New York, NY, USA, 2021; Volume 34, pp. 18381–18394. [Google Scholar]
- Li, L.; Zhou, Z.; Wang, B.; Miao, L.; Zong, H. A Novel CNN-Based Method for Accurate Ship Detection in HR Optical Remote Sensing Images via Rotated Bounding Box. IEEE Trans. Geosci. Remote Sens. 2021, 59, 686–699. [Google Scholar] [CrossRef]
- Ma, J.; Shao, W.; Ye, H.; Wang, L.; Wang, H.; Zheng, Y.; Xue, X. Arbitrary-Oriented Scene Text Detection via Rotation Proposals. IEEE Trans. Multimed. 2018, 20, 3111–3122. [Google Scholar] [CrossRef]
- Xu, Y.; Fu, M.; Wang, Q.; Wang, Y.; Chen, K.; Xia, G.S.; Bai, X. Gliding Vertex on the Horizontal Bounding Box for Multi-Oriented Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1452–1459. [Google Scholar] [CrossRef] [PubMed]
- Liu, Q.; Xiang, X.; Yang, Z.; Hu, Y.; Hong, Y. Arbitrary Direction Ship Detection in Remote-Sensing Images Based on Multitask Learning and Multiregion Feature Fusion. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1553–1564. [Google Scholar] [CrossRef]
- Yang, X.; Yan, J. Arbitrary-oriented object detection with circular smooth label. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 677–694. [Google Scholar]
- Yang, X.; Hou, L.; Zhou, Y.; Wang, W.; Yan, J. Dense Label Encoding for Boundary Discontinuity Free Rotation Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 15814–15824. [Google Scholar] [CrossRef]
- Shao, Z.; Zhang, X.; Zhang, T.; Xu, X.; Zeng, T. RBFA-Net: A Rotated Balanced Feature-Aligned Network for Rotated SAR Ship Detection and Classification. Remote Sens. 2022, 14, 3345. [Google Scholar] [CrossRef]
- Cui, Z.; Leng, J.; Liu, Y.; Zhang, T.; Quan, P.; Zhao, W. SKNet: Detecting Rotated Ships as Keypoints in Optical Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8826–8840. [Google Scholar] [CrossRef]
- Yu, Y.; Yang, X.; Li, J.; Gao, X. A Cascade Rotated Anchor-Aided Detector for Ship Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
- Zhu, M.; Hu, G.; Zhou, H.; Wang, S.; Zhang, Y.; Yue, S.; Bai, Y.; Zang, K. Arbitrary-Oriented Ship Detection Based on RetinaNet for Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6694–6706. [Google Scholar] [CrossRef]
- Pan, Z.; Yang, R.; Zhang, Z. MSR2N: Multi-Stage Rotational Region Based Network for Arbitrary-Oriented Ship Detection in SAR Images. Sensors 2020, 20, 2340. [Google Scholar] [CrossRef]
- Ran, B.; You, Y.; Li, Z.; Liu, F. Arbitrary-Oriented Ship Detection Method Based on Improved Regression Model for Target Direction Detection Network. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 964–967. [Google Scholar] [CrossRef]
- Wu, Y.; Chen, Y.; Yuan, L.; Liu, Z.; Wang, L.; Li, H.; Fu, Y. Rethinking Classification and Localization for Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10183–10192. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar] [CrossRef]
- Liu, Z.; Wang, H.; Weng, L.; Yang, Y. Ship Rotated Bounding Box Space for Ship Extraction From High-Resolution Optical Satellite Images with Complex Backgrounds. IEEE Geosci. Remote Sens. Lett. 2017, 13, 1074–1078. [Google Scholar] [CrossRef]
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A Large-Scale Dataset for Object Detection in Aerial Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983. [Google Scholar] [CrossRef]
- Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
- Bottou, L. Stochastic Gradient Descent Tricks. In Neural Networks: Tricks of the Trade, 2nd ed.; Montavon, G., Orr, G.B., Müller, K.R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436. [Google Scholar] [CrossRef] [Green Version]
- Wang, T.; Li, Y. Rotation-Invariant Task-Aware Spatial Disentanglement in Rotated Ship Detection Based on the Three-Stage Method. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
Method | Num | S | AP(07) | AP(12) |
---|---|---|---|---|
Faster R-CNN+FP | 5 | - | 80.90 | 82.90 |
Faster R-CNN+EP | 8 | - | 88.20 | 89.00 |
Faster R-CNN+RHC (ours) | 5 | 98.9 | 88.60 | 89.20 |
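The AP(07) and AP(12) columns in the tables follow the PASCAL VOC 2007 and 2012 protocols [48]: AP(07) averages the interpolated precision at 11 equally spaced recall levels, while AP(12) integrates precision over all recall points. A minimal sketch of the 11-point metric (a simplified illustration, not the full evaluation pipeline):

```python
def voc07_ap(recalls, precisions):
    """VOC 2007 11-point interpolated AP: average over the thresholds
    t in {0.0, 0.1, ..., 1.0} of the max precision at recall >= t."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        p = max((p for r, p in zip(recalls, precisions) if r >= t),
                default=0.0)
        ap += p / 11
    return ap

# A perfect detector (precision 1.0 at every recall level) yields AP ≈ 1.0
ap = voc07_ap([0.0, 0.5, 1.0], [1.0, 1.0, 1.0])
```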
RHC | ICR-Head | DRoI | AP(07) | AP(12) |
---|---|---|---|---|
√ | | | 88.60 | 89.21 |
√ | √ | | 89.17 | 92.10 |
√ | | √ | 89.21 | 91.73 |
√ | √ | √ | 89.98 | 93.52 |
Method | Data Aug | Backbone | FPN | AP(07) | AP(12) |
---|---|---|---|---|---|
Faster R-CNN OBB | √ | ResNet101 | √ | 80.90 | 82.91 |
RetinaNet OBB | √ | ResNet101 | √ | 79.17 | 82.10 |
R2CNN | - | ResNet101 | - | 72.36 | 74.35 |
RRPN | √ | VGG16 | - | 79.60 | - |
RoI transformer | √ | ResNet101 | √ | 86.22 | 87.18 |
Gliding vertex | √ | ResNet101 | √ | 88.20 | 89.02 |
CSL | √ | ResNet101 | √ | 89.62 | - |
RITSD | √ | ResNet101 | √ | 89.70 | 92.98 |
OSCD-Net (ours) | √ | ResNet101 | √ | 89.90 | 93.52 |
Method | P | R | F1 |
---|---|---|---|
Faster R-CNN OBB | 86.24 | 87.17 | 86.70 |
RetinaNet OBB | 87.12 | 85.21 | 86.15 |
RoI transformer | 88.01 | 90.15 | 89.07 |
Gliding vertex | 89.81 | 92.67 | 91.22 |
OSCD-Net (ours) | 90.12 | 94.54 | 92.27 |
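The F1 column in this and the following tables is the harmonic mean of precision (P) and recall (R). As a sanity check against the Faster R-CNN OBB row above:

```python
def f1_score(p, r):
    """F1 as the harmonic mean of precision and recall (in percent)."""
    return 2 * p * r / (p + r)

# Faster R-CNN OBB row: P = 86.24, R = 87.17 -> F1 ≈ 86.70, as tabulated
f1 = f1_score(86.24, 87.17)
```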
Method | Data Aug | FPN | AP(07) |
---|---|---|---|
Faster R-CNN OBB | √ | √ | 58.49 |
RetinaNet OBB | √ | √ | 56.21 |
R2CNN | - | - | 52.12 |
RoI transformer | √ | √ | 63.75 |
Gliding vertex | √ | √ | 66.53 |
OSCD-Net (ours) | √ | √ | 67.62 |
OSCD-Net (ours) | √ | √ | 67.95 |
Method | P | R | F1 |
---|---|---|---|
Faster R-CNN OBB | 62.92 | 70.19 | 66.36 |
RetinaNet OBB | 62.81 | 68.25 | 65.42 |
RoI transformer | 64.89 | 75.10 | 69.62 |
Gliding vertex | 69.01 | 77.50 | 73.01 |
OSCD-Net (ours) | 69.04 | 78.15 | 73.31 |
Method | Data Aug | FPN | AP(07) | AP(12) |
---|---|---|---|---|
Faster R-CNN OBB | √ | √ | 75.95 | 77.45 |
RetinaNet OBB | √ | √ | 69.17 | 72.18 |
R2CNN | - | - | 67.78 | 68.95 |
RoI transformer | √ | √ | 78.32 | 79.82 |
Gliding vertex | √ | √ | 81.22 | 83.20 |
OSCD-Net (ours) | √ | √ | 81.01 | 83.49 |
OSCD-Net (ours) | √ | √ | 82.90 | 84.52 |
Method | P | R | F1 |
---|---|---|---|
Faster R-CNN OBB | 85.24 | 83.42 | 84.32 |
RetinaNet OBB | 84.10 | 81.93 | 83.00 |
RoI transformer | 88.23 | 87.15 | 87.69 |
Gliding vertex | 90.04 | 88.24 | 89.13 |
OSCD-Net (ours) | 89.25 | 90.45 | 89.85 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, J.; Huang, R.; Li, Y.; Pan, B. Oriented Ship Detection Based on Intersecting Circle and Deformable RoI in Remote Sensing Images. Remote Sens. 2022, 14, 4749. https://doi.org/10.3390/rs14194749