A Recognition Model Incorporating Geometric Relationships of Ship Components
"> Figure 1
<p>The workflow of our proposed method.</p> "> Figure 2
<p>Network structure.</p> "> Figure 3
<p>Schematic of hierarchical clustering.</p> "> Figure 4
<p>Comparison of conventional convolution and deformable convolution.</p> "> Figure 5
<p>Specific examples for FGWC-18 categories.</p> "> Figure 6
<p>Distribution of the number of samples in each category of FGWC-18.</p> "> Figure 7
<p>Visualization of multi-scenario experimental results in ablation experiments.</p> "> Figure 8
<p>Visualization results for ship components.</p> ">
Abstract
1. Introduction
1. Generalized ship component samples. By selecting commonly found ship components such as the main gun, chimney, vertical launch system, and flight deck, the samples become more generalized. Most ships possess some or all of these components, reducing the need for extensive ship-specific data.
2. Simplified labeling and orientation determination. Ship components often have regular, symmetric shapes such as circles or squares. This leaves less background area inside horizontal labeling and prediction boxes. Furthermore, identifying ship components can assist in determining the actual orientation of the ship.
3. Utilization of regular geometric relationships. While ships of different types may share similarities in outward appearance, the distribution and arrangement of their components vary. There are also distinct characteristics in the relative positions and orientations of ship components, and in how the components align and integrate within the ship's structure. Exploiting these inherent geometric relationships can significantly aid detection and identification.
1. Smaller and more challenging detection. Ship components are typically much smaller in scale than the whole ship, making their detection more difficult. Their reduced size increases the complexity of accurately localizing and recognizing them.
2. Uneven sample distribution. The number of available samples may vary significantly across different ship components, resulting in imbalanced data. This imbalance can pose challenges during training and may degrade the performance of the recognition model.
3. Varied difficulty in feature extraction. Some components possess distinctive features, while others lack clear discriminative characteristics. Addressing these differences in feature extraction complexity is crucial for achieving accurate recognition across all ship components.
1. Geometric relationship constraints and attention mechanism. The model incorporates geometric relationship constraints among ship components, which are used to enhance accuracy and reduce the false alarm rate. Additionally, an attention mechanism weights the extracted sample features, effectively focusing on the most relevant features and improving overall detection accuracy.
2. Adaptive anchor box generation. A new hierarchical clustering approach is introduced to generate adaptive anchor boxes, improving the network's ability to detect multi-scale samples. By generating anchor boxes from the clustering result rather than fixing them in advance, the network becomes more effective at handling objects of varying sizes.
3. Small-target detection layer. A novel network structure incorporating a small-target detection layer is designed. Small targets pose challenges due to their limited visual information, and this layer specifically enhances the network's capability to detect them.
4. Deformable convolution for feature extraction. Deformable convolution is used for feature extraction from the input samples. By adapting the sampling locations of the convolutional filters, the network becomes more flexible in capturing informative features from ships of different shapes.
2. Materials and Methods
2.1. A Hierarchical Clustering Algorithm to Implement Adaptive Anchor Boxes
1. The number of clusters must be specified in advance: for ship detection based on ship parts in remote sensing imagery, the size distribution of the samples is not uniform, and selecting an appropriate number of clusters is difficult without prior knowledge.
2. Sensitivity to initial centroid selection: different initializations may lead to different clustering results, and a poor initial selection may trap the algorithm in a local optimum.
3. Sensitivity to outliers: outliers may disturb the clustering results, distorting some clusters or even splitting them into multiple clusters.
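Agglomerative (hierarchical) clustering sidesteps the limitations above: it needs no initial centroids, and the cluster count is chosen simply by cutting the merge tree at the desired level. As a rough illustration of how anchor boxes can be derived this way, the sketch below clusters ground-truth (width, height) pairs under a 1 − IoU distance with average linkage; the specific distance and linkage are assumptions for illustration, not necessarily the paper's exact algorithm.

```python
def iou_wh(a, b):
    """IoU of two boxes given as (w, h), aligned at a common corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def anchors_by_agglomeration(boxes, k):
    """Agglomerative clustering of (w, h) samples under 1 - IoU distance.
    Every box starts as its own cluster; the closest pair of cluster
    centroids is merged repeatedly until k clusters remain. Each final
    centroid is returned as an adaptive anchor box."""
    clusters = [[b] for b in boxes]

    def centroid(c):
        return (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))

    while len(clusters) > k:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = 1.0 - iou_wh(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return sorted(centroid(c) for c in clusters)
```

Cutting at k = number of detection scales yields one anchor set per scale; because merges are driven by IoU rather than Euclidean distance, elongated and small boxes are grouped by overlap behavior, which is what anchor matching actually uses.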
2.2. Deformable Convolution
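The core idea of deformable convolution (Dai et al. [32]) is that each kernel tap samples the input at its regular grid position plus a learned 2-D offset, with bilinear interpolation making fractional offsets usable. A minimal single-channel sketch, with illustrative names and shapes (not the paper's implementation), is:

```python
def bilinear(img, y, x):
    """Bilinearly interpolate a 2-D list img at fractional (y, x); zero outside."""
    h, w = len(img), len(img[0])
    y0, x0 = int(y // 1), int(x // 1)
    val = 0.0
    for yy, wy in ((y0, 1 - (y - y0)), (y0 + 1, y - y0)):
        for xx, wx in ((x0, 1 - (x - x0)), (x0 + 1, x - x0)):
            if 0 <= yy < h and 0 <= xx < w:
                val += wy * wx * img[yy][xx]
    return val

def deform_conv3x3(img, kernel, offsets):
    """Deformable 3x3 convolution, 'valid' output (no padding).
    kernel: 3x3 weights; offsets[y][x][tap] = (dy, dx), one learned offset
    per output pixel and per kernel tap."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc, tap = 0.0, 0
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    dy, dx = offsets[y - 1][x - 1][tap]
                    # Sample at the regular grid point shifted by the offset.
                    acc += kernel[ky + 1][kx + 1] * bilinear(img, y + ky + dy, x + kx + dx)
                    tap += 1
            row.append(acc)
        out.append(row)
    return out
```

With all offsets zero this reduces exactly to a standard convolution; in practice the offsets are predicted by a parallel convolutional branch, letting the sampling grid deform to the shape of components such as hulls and decks (in PyTorch, `torchvision.ops.DeformConv2d` provides the full multi-channel operator).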
2.3. Mechanisms of Relational Attention
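The relation-attention idea this section builds on (Hu et al. [33], following the scaled dot-product attention of [34]) can be sketched as follows: each detected component attends to all others, with weights that combine appearance similarity and a geometric term computed from relative box layout. The projection matrices and the learned geometric embedding are omitted here for brevity, so this is an illustration of the mechanism, not the paper's network.

```python
import math

def geometric_weight(box_a, box_b):
    """Illustrative geometry term from relative box layout (x, y, w, h):
    near, similarly sized pairs get weight close to 1; distant or badly
    size-mismatched pairs are down-weighted."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    dist = abs(xb - xa) / wa + abs(yb - ya) / ha
    shape = abs(math.log(wb / wa)) + abs(math.log(hb / ha))
    return math.exp(-(dist + shape))

def relational_attention(feats, boxes):
    """Each feature becomes a softmax-weighted average of all features,
    where logits mix scaled dot-product appearance similarity with the
    log of the geometric weight."""
    d = len(feats[0])
    out = []
    for fi, bi in zip(feats, boxes):
        logits = []
        for fj, bj in zip(feats, boxes):
            app = sum(a * b for a, b in zip(fi, fj)) / math.sqrt(d)
            logits.append(app + math.log(geometric_weight(bi, bj) + 1e-9))
        m = max(logits)
        w = [math.exp(l - m) for l in logits]  # numerically stable softmax
        s = sum(w)
        out.append([sum(wk * fk[c] for wk, fk in zip(w, feats)) / s
                    for c in range(d)])
    return out
```

Because the geometric term enters the logits additively, an implausible layout (e.g., a main gun far from any hull) suppresses the corresponding attention weight, which is how geometric relationship constraints can reduce false alarms.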
2.4. Datasets
1. Limited open-source datasets and small data volume: there is a scarcity of open-source datasets, particularly for fine-grained ship recognition, and the available datasets are often small, limiting the amount of training data.
2. Confusing dataset labeling: existing datasets, such as HRSC-2016 [35], suffer from confusing multilevel labeling, including missing labels for small ships and incorrectly labeled ship classes, which can affect the accuracy of ship recognition algorithms.
3. Imbalance in ship classes and appearance features: current ship datasets are imbalanced in the number of ship classes and appearance features. Most datasets contain far more civilian ships, while warships, which are strategically significant and more challenging to identify due to inter-class similarities and intra-class differences, have limited samples.
2.4.1. HRSC2016
1. Dense arrangement of ships along the shore: ships are often densely arranged along the shore, resulting in a high degree of overlap between labeling boxes, which makes it challenging to accurately detect and classify individual ships.
2. Complex background in remote sensing images: the background of the images is complex, and the ships to be detected are highly similar to nearby shore textures, adding to the difficulty of distinguishing ships from the background.
3. Variation in ship scales: the dataset contains ships of widely varying sizes, often within the same image, and detecting and classifying ships across scales adds complexity to the task.
4. Numerous ship categories: the dataset includes dozens of ship categories, each with distinct visual characteristics that must be learned and recognized, making classification challenging.
5. Within-category ship variations: each category contains multiple different ships, which further complicates classification; the model must be robust enough to distinguish between different ships within the same category.
6. Small sample size and insufficient learning: the number of ships per category is not large, so training data are limited; insufficient training data can reduce model performance and robustness.
7. Cloud masking issues: clouds may obstruct ship components or introduce additional visual noise.
2.4.2. FGSC-23
2.4.3. FGSCR-42
2.5. Evaluation Metrics
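The metrics used throughout Section 3 (IoU-based matching, TP/FP/FN counts, and per-class average precision combined into mAP@0.5) can be sketched generically as follows. This is a standard area-under-the-precision-recall-curve AP computation shown for illustration; the paper's exact evaluation protocol may differ in details such as interpolation.

```python
def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def average_precision(dets, gts, thr=0.5):
    """AP for one class. dets: list of (score, box); gts: list of boxes.
    Detections are matched greedily in descending score order; a detection
    is a TP if it overlaps an unmatched ground truth with IoU >= thr."""
    dets = sorted(dets, key=lambda d: -d[0])
    matched = set()
    tps = []
    for score, box in dets:
        best, best_iou = None, thr
        for gi, g in enumerate(gts):
            o = iou(box, g)
            if gi not in matched and o >= best_iou:
                best, best_iou = gi, o
        if best is not None:
            matched.add(best)
            tps.append(1)
        else:
            tps.append(0)
    # Accumulate precision at each recall increment (area under PR curve).
    ap, tp, prev_recall = 0.0, 0, 0.0
    for k, is_tp in enumerate(tps, start=1):
        tp += is_tp
        recall = tp / len(gts)
        if is_tp:
            ap += (recall - prev_recall) * (tp / k)
            prev_recall = recall
    return ap
```

mAP@0.5 is then the mean of `average_precision(..., thr=0.5)` over all classes, with precision = TP / (TP + FP) and recall = TP / (TP + FN) implicit in the accumulation above.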
3. Results
3.1. Experimental Parameters
3.2. Experimental Results and Analysis
3.2.1. Ablation Experiment
(1) Effectiveness of Adaptive Anchor Boxes
(2) Effectiveness of Multiscale Detection Layers
(3) Effectiveness of Deformable Convolution
(4) Effectiveness of the Relational Attention Module
3.2.2. Comparison with the State of the Art
3.2.3. Robustness Test
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Full Form |
|---|---|
| CNN | convolutional neural network |
| HBBs | horizontal bounding boxes |
| OBBs | oriented bounding boxes |
| IoU | Intersection over Union |
| NMS | Non-Maximum Suppression |
| ROIs | regions of interest |
| CSL | Circular Smooth Label |
| YOLO | You Only Look Once |
| FGWC | Fine-Grained Warship Classification |
| FGSC | Fine-Grained Ship Classification |
| FGSCR | Fine-Grained Ship Classification in Remote sensing images |
| FGVC | fine-grained visual classification |
| mAP | mean average precision |
| TP | true positive |
| FP | false positive |
| FN | false negative |
| R2CNN | rotational region CNN |
| RRPN | Radar Region Proposal Network |
| SCRDet | small, cluttered and rotated objects Detector |
| R3Det | Refined Single-Stage Detector |
| ReDet | Rotation-equivariant Detector |
References
- Li, J.; Li, Z.; Chen, M.; Wang, Y.; Luo, Q. A new ship detection algorithm in optical remote sensing images based on improved R3Det. Remote Sens. 2022, 14, 5048. [Google Scholar] [CrossRef]
- Xu, F.; Liu, J.; Dong, C.; Wang, X. Ship detection in optical remote sensing images based on wavelet transform and multi-level false alarm identification. Remote Sens. 2017, 9, 985. [Google Scholar] [CrossRef]
- He, H.; Lin, Y.; Chen, F.; Tai, H.M.; Yin, Z. Inshore ship detection in remote sensing images via weighted pose voting. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3091–3107. [Google Scholar] [CrossRef]
- Li, S.; Zhou, Z.; Wang, B.; Wu, F. A novel inshore ship detection via ship head classification and body boundary determination. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1920–1924. [Google Scholar] [CrossRef]
- Zhang, Z.; Zhang, L.; Wang, Y.; Feng, P.; He, R. ShipRSImageNet: A large-scale fine-grained dataset for ship detection in high-resolution optical remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8458–8472. [Google Scholar] [CrossRef]
- Chua, L.O.; Roska, T. The CNN paradigm. IEEE Trans. Circuits Syst. Fundam. Theory Appl. 1993, 40, 147–156. [Google Scholar] [CrossRef]
- Van Etten, A. You only look twice: Rapid multi-scale object detection in satellite imagery. arXiv 2018, arXiv:1805.09512. [Google Scholar]
- Pang, J.; Li, C.; Shi, J.; Xu, Z.; Feng, H. R2-CNN: Fast tiny object detection in large-scale remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5512–5524. [Google Scholar] [CrossRef]
- Yang, X.; Yan, J.; Feng, Z.; He, T. R3det: Refined single-stage detector with feature refinement for rotating object. Proc. AAAI Conf. Artif. Intell. 2021, 35, 3163–3171. [Google Scholar] [CrossRef]
- Zhang, G.; Lu, S.; Zhang, W. CAD-Net: A context-aware detection network for objects in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10015–10024. [Google Scholar] [CrossRef]
- Wu, Y.; Ma, W.; Gong, M.; Bai, Z.; Zhao, W.; Guo, Q.; Chen, X.; Miao, Q. A coarse-to-fine network for ship detection in optical remote sensing images. Remote Sens. 2020, 12, 246. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery. Remote Sens. 2019, 11, 531. [Google Scholar] [CrossRef]
- Yang, X.; Zhang, X.; Wang, N.; Gao, X. A robust one-stage detector for multiscale ship detection with complex background in massive SAR images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5217712. [Google Scholar] [CrossRef]
- Li, Y.; Xu, Q.; Kong, Z.; Li, W. MULS-Net: A Multilevel Supervised Network for Ship Tracking From Low-Resolution Remote-Sensing Image Sequences. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5624214. [Google Scholar] [CrossRef]
- Li, Y.; Xu, Q.; He, Z.; Li, W. Progressive Task-based Universal Network for Raw Infrared Remote Sensing Imagery Ship Detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5610013. [Google Scholar] [CrossRef]
- Xu, Q.; Li, Y.; Zhang, M.; Li, W. COCO-Net: A Dual-Supervised Network With Unified ROI-Loss for Low-Resolution Ship Detection From Optical Satellite Image Sequences. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5629115. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015. [Google Scholar] [CrossRef]
- Jiang, Y.; Zhu, X.; Wang, X.; Yang, S.; Li, W.; Wang, H.; Fu, P.; Luo, Z. R2CNN: Rotational region CNN for orientation robust scene text detection. arXiv 2017, arXiv:1706.09579. [Google Scholar]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2849–2858. [Google Scholar]
- Yang, X.; Yang, J.; Yan, J.; Zhang, Y.; Zhang, T.; Guo, Z.; Sun, X.; Fu, K. Scrdet: Towards more robust detection for small, cluttered and rotated objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8232–8241. [Google Scholar]
- Yang, X.; Yan, J. Arbitrary-oriented object detection with circular smooth label. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VIII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 677–694. [Google Scholar]
- Yi, J.; Wu, P.; Liu, B.; Huang, Q.; Qu, H.; Metaxas, D. Oriented object detection in aerial images with box boundary-aware vectors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 2150–2159. [Google Scholar]
- Xie, X.; Cheng, G.; Wang, J.; Yao, X.; Han, J. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3520–3529. [Google Scholar]
- Zhang, F.; Wang, X.; Zhou, S.; Wang, Y.; Hou, Y. Arbitrary-oriented ship detection through center-head point extraction. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5612414. [Google Scholar] [CrossRef]
- Zhang, C.; Xiong, B.; Li, X.; Kuang, G. TCD: Task-collaborated detector for oriented objects in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4700714. [Google Scholar] [CrossRef]
- Fang, F.; Li, L.; Zhu, H.; Lim, J.H. Combining faster R-CNN and model-driven clustering for elongated object detection. IEEE Trans. Image Process. 2019, 29, 2052–2065. [Google Scholar] [CrossRef] [PubMed]
- Xiong, W.; Xiong, Z.; Cui, Y. An explainable attention network for fine-grained ship classification using remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5620314. [Google Scholar] [CrossRef]
- Sumbul, G.; Cinbis, R.G.; Aksoy, S. Multisource region attention network for fine-grained object recognition in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4929–4937. [Google Scholar] [CrossRef]
- Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; Kwon, Y.; Michael, K.; Fang, J.; Wong, C.; Yifu, Z.; Montes, D.; et al. ultralytics/yolov5: v6.2 - YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci.ai Integrations. Zenodo 2022. [Google Scholar] [CrossRef]
- Yue, T.; Zhang, Y.; Liu, P.; Xu, Y.; Yu, C. A generating-anchor network for small ship detection in SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7665–7676. [Google Scholar] [CrossRef]
- Nielsen, F. Hierarchical clustering. In Introduction to HPC with MPI for Data Science; Springer: Berlin/Heidelberg, Germany, 2016; pp. 195–211. [Google Scholar]
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773. [Google Scholar]
- Hu, H.; Gu, J.; Zhang, Z.; Dai, J.; Wei, Y. Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3588–3597. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017. [Google Scholar] [CrossRef]
- Liu, Z.; Wang, H.; Weng, L.; Yang, Y. Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1074–1078. [Google Scholar] [CrossRef]
- Zhang, X.; Lv, Y.; Yao, L.; Xiong, W.; Fu, C. A new benchmark and an attribute-guided multilevel feature representation network for fine-grained ship classification in optical remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1271–1285. [Google Scholar] [CrossRef]
- Di, Y.; Jiang, Z.; Zhang, H. A public dataset for fine-grained ship classification in optical remote sensing images. Remote Sens. 2021, 13, 747. [Google Scholar] [CrossRef]
- Nabati, R.; Qi, H. Rrpn: Radar region proposal network for object detection in autonomous vehicles. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3093–3097. [Google Scholar]
- Yang, X.; Yan, J. On the arbitrary-oriented object detection: Classification based approaches revisited. Int. J. Comput. Vis. 2022, 130, 1340–1365. [Google Scholar] [CrossRef]
| Method | YOLO V5 | YOLO V5 | YOLO V5 | YOLO V5 | YOLO V5 |
|---|---|---|---|---|---|
| Hierarchical clustering | | ✓ | ✓ | ✓ | ✓ |
| Multiscale detection layer | | | ✓ | ✓ | ✓ |
| Deformable convolution | | | | ✓ | ✓ |
| Relational attention module | | | | | ✓ |
| Arl. | 89.5% | 90.1% | 90.7% | 91.3% | 95.5% |
| Whi. | 99.5% | 95.3% | 94.2% | 96.3% | 99.5% |
| Per. | 96.6% | 97.1% | 96.5% | 98.8% | 98.8% |
| San. | 79.2% | 79.5% | 80.1% | 81.3% | 83.5% |
| Tic. | 86.2% | 89.3% | 88.2% | 89.7% | 90.5% |
| Abu. | 97.3% | 95.5% | 96.3% | 97.8% | 98.8% |
| Tar. | 22.2% | 41.6% | 54.7% | 61.0% | 68.4% |
| Aus. | 82.0% | 82.5% | 83.4% | 85.6% | 89.2% |
| Was. | 99.1% | 98.5% | 97.2% | 99.5% | 99.5% |
| Fre. | 92.1% | 93.5% | 94.3% | 94.8% | 95.3% |
| Ind. | 99.5% | 96.5% | 97.1% | 98.9% | 98.6% |
| Hor. | 55.4% | 57.7% | 58.9% | 59.1% | 79.7% |
| Ata. | 65.7% | 70.6% | 72.4% | 73.6% | 84.4% |
| Mae. | 44.8% | 62.5% | 64.2% | 68.9% | 80.5% |
| Aki. | 67.5% | 69.3% | 69.5% | 72.4% | 75.9% |
| Asa. | 99.5% | 98.4% | 99.6% | 98.2% | 99.2% |
| Kid. | 69.3% | 70.5% | 72.4% | 75.5% | 80.1% |
| Kon. | 63.8% | 68.7% | 69.4% | 75.5% | 80.1% |
| mAP@0.5 | 78.2% | 81.1% | 82.1% | 84.3% | 88.8% |
| Method | YOLO V5 | YOLO V5 + K-Means | YOLO V5 + Hierarchical Clustering + Relational Attention Module |
|---|---|---|---|
| Main gun | 33.9% | 43.8% | 49.2% |
| Vertical launch system | 62.4% | 62.5% | 62.8% |
| Chimney | 19.9% | 30.4% | 39.7% |
| Flight deck | 69.8% | 70.1% | 70.5% |
| mAP@0.5 | 49.5% | 52.2% | 55.3% |
| Method | YOLO V5 | YOLO V5 + Multiscale Detection Layers |
|---|---|---|
| Main gun | 49.2% | 52.5% |
| Vertical launch system | 62.8% | 63.9% |
| Chimney | 39.7% | 44.7% |
| Flight deck | 70.5% | 71.4% |
| mAP@0.5 | 55.3% | 58.1% |
| Method | Baseline | Layer 1 | Layer 3 | Layer 5 | Layer 7 |
|---|---|---|---|---|---|
| Main gun | 52.5% | 55.8% | 52.1% | 48.2% | 46.3% |
| Vertical launch system | 63.9% | 69.2% | 59.2% | 58.7% | 55.9% |
| Chimney | 44.7% | 54.4% | 48.2% | 46.7% | 42.1% |
| Flight deck | 71.4% | 77.3% | 71.1% | 69.3% | 67.7% |
| mAP@0.5 | 58.1% | 64.2% | 57.9% | 55.7% | 53.0% |
| Method | YOLO V5 | YOLO V5 + Relational Attention Module |
|---|---|---|
| Main gun | 55.8% | 69.5% |
| Vertical launch system | 69.2% | 75.4% |
| Chimney | 54.4% | 68.3% |
| Flight deck | 77.3% | 83.2% |
| mAP@0.5 | 64.2% | 74.1% |
| Method | Ours | R2CNN | RRPN | SCRDet | R3Det | ReDet | CSL | ROI Transformer |
|---|---|---|---|---|---|---|---|---|
| Arl. | 95.5% | 87.1% | 88.4% | 84.6% | 84.9% | 88.1% | 74.6% | 84.9% |
| Whi. | 99.5% | 63.9% | 67.1% | 63.9% | 53.5% | 72.7% | 60.9% | 78.5% |
| Per. | 98.8% | 70.8% | 75.2% | 80.4% | 83.6% | 88.0% | 80.6% | 84.9% |
| San. | 83.5% | 87.7% | 86.7% | 57.7% | 67.3% | 86.4% | 77.2% | 88.9% |
| Tic. | 90.5% | 87.8% | 86.2% | 81.2% | 87.1% | 90.6% | 87.7% | 89.7% |
| Abu. | 98.8% | 95.5% | 96.3% | 94.7% | 92.1% | 93.6% | 97.8% | 90.1% |
| Tar. | 68.4% | 80.5% | 98.6% | 94.7% | 88.6% | 99.7% | 99.1% | 87.2% |
| Aus. | 89.2% | 80.9% | 85.1% | 82.0% | 82.0% | 88.4% | 84.6% | 89.5% |
| Was. | 99.5% | 80.1% | 82.5% | 83.7% | 84.2% | 86.5% | 86.4% | 87.2% |
| Fre. | 95.3% | 82.4% | 89.3% | 82.5% | 84.3% | 87.2% | 82.1% | 83.3% |
| Ind. | 98.6% | 87.1% | 88.5% | 84.3% | 89.7% | 89.6% | 90.1% | 92.2% |
| Hor. | 79.7% | 75.6% | 73.2% | 72.1% | 78.8% | 72.1% | 73.2% | 74.1% |
| Ata. | 84.4% | 69.9% | 70.1% | 68.2% | 66.8% | 63.5% | 75.4% | 78.6% |
| Mae. | 80.5% | 72.1% | 70.2% | 73.6% | 77.6% | 67.8% | 79.5% | 80.0% |
| Aki. | 75.9% | 70.3% | 70.5% | 74.1% | 73.4% | 79.2% | 84.2% | 82.1% |
| Asa. | 99.2% | 50.2% | 20.3% | 53.1% | 56.2% | 49.3% | 44.8% | 60.7% |
| Kid. | 80.1% | 80.6% | 84.1% | 85.5% | 86.3% | 89.7% | 80.5% | 83.1% |
| Kon. | 80.1% | 76.8% | 70.1% | 53.2% | 40.1% | 67.4% | 70.9% | 68.3% |
| mAP@0.5 | 88.8% | 78.9% | 77.9% | 77.8% | 78.7% | 83.3% | 82.2% | 83.5% |
| Method | Ours | R2CNN | RRPN | SCRDet | R3Det | ReDet | CSL | ROI Transformer |
|---|---|---|---|---|---|---|---|---|
| FGSCR-42 | 95.5% | 77.4% | 87.2% | 88.1% | 89.3% | 91.6% | 92.3% | 93.1% |
| FGSC-23 | 92.1% | 77.9% | 81.7% | 85.9% | 82.1% | 84.2% | 82.6% | 87.8% |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ma, S.; Wang, W.; Pan, Z.; Hu, Y.; Zhou, G.; Wang, Q. A Recognition Model Incorporating Geometric Relationships of Ship Components. Remote Sens. 2024, 16, 130. https://doi.org/10.3390/rs16010130