Probabilistic Ship Detection and Classification Using Deep Learning
Figure 1. Structures of (a) the Faster region-based convolutional neural network (Faster R-CNN) and (b) the region proposal network (RPN).
Figure 2. An illustration of two bounding boxes with an Intersection over Union (IoU) of (a) 0.3, (b) 0.5, and (c) 0.9.
Figure 3. IoU tracking: (a) the IoU is equal to or larger than the threshold; (b) the IoU is less than the threshold. If the IoU is less than the threshold, the bounding box from frame t − 1 is slightly enlarged and used as the target ship-bounding box to avoid missing the ship.
Figure 4. Initial bounding box selection in the first frame.
Figure 5. Robust video-based single-ship detection system of the proposed algorithm.
Figure 6. Results of ship detection using the Faster R-CNN on the test images.
Figure 7. The change of confidences in the image sequence without environmental factors: (a) Faster R-CNN; (b) Faster R-CNN and IoU tracking; (c) Faster R-CNN, IoU tracking, and Bayesian fusion.
Figure 8. The change of confidences in the image sequence with environmental factors: (a) Faster R-CNN; (b) Faster R-CNN and IoU tracking; (c) Faster R-CNN, IoU tracking, and Bayesian fusion.
Figure 9. Captured images from Interval 1.
Figure 10. Captured images from Interval 2.
Figure 11. Captured images from Interval 4.
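The IoU values illustrated in Figure 2 can be computed directly from box coordinates. A minimal sketch, assuming corner-format (x1, y1, x2, y2) boxes (the box format and function name are not specified in the paper):

```python
# Sketch of the Intersection-over-Union (IoU) measure used in Figures 2 and 3.
# Boxes are (x1, y1, x2, y2) corner tuples; this format is an assumption.

def iou(box_a, box_b):
    """Area of overlap divided by area of union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes offset by half their width share 2 of 6 units of area.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))
```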
Abstract
1. Introduction
2. Ship Detection and Classification from an Image
3. Building a Sequence of Bounding Boxes
4. Probabilistic Ship Detection and Classification in a Sequence of Images
Algorithm 1: Probabilistic ship detection and classification from video.
Step 1: At frame 1, initialize the target ship-bounding box.
Step 2: For a given image at frame t, update the bounding boxes:
  if the IoU between the detected bounding box and the target ship-bounding box of frame t − 1 is equal to or larger than the threshold, accept the detected box as the new target ship-bounding box;
  else, slightly enlarge the bounding box of frame t − 1 and use it as the target ship-bounding box.
Step 3: Evaluate the class confidence of the sequence of bounding boxes recursively by Bayesian fusion.
Step 4: Determine the class at frame t from the fused class confidence.
Step 5: For every subsequent frame, repeat Steps 2, 3, and 4.
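The recursive confidence evaluation in Steps 3 and 4 can be sketched as a standard Bayesian filter over the class belief: the per-frame detector confidence acts as a likelihood that multiplies the running belief, which is then renormalized. The function names and the uniform initial belief below are assumptions made for illustration; the paper's exact update equations are not reproduced here.

```python
# Sketch of the recursive class-confidence fusion (Algorithm 1, Steps 3-5).
# CLASSES matches the paper's eight labels; the uniform prior is an assumption.

CLASSES = ["aircraft carrier", "destroyer", "submarine", "bulk carrier",
           "container ship", "cruise ship", "tugboat", "background"]

def bayes_fuse(prior, likelihood):
    """One recursive Bayesian step: posterior ∝ per-frame confidence × prior."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [x / total for x in posterior]

def run_sequence(per_frame_confidences):
    """Fuse a sequence of per-frame class-confidence vectors (the Step 5 loop)."""
    belief = [1.0 / len(CLASSES)] * len(CLASSES)  # uniform belief at frame 1
    for confidences in per_frame_confidences:
        belief = bayes_fuse(belief, confidences)
    winner = max(range(len(CLASSES)), key=lambda i: belief[i])
    return CLASSES[winner], belief
```

With a noisy per-frame detector that occasionally flips between destroyer and aircraft carrier, the fused belief settles on the class that dominates across frames rather than following each single-frame decision.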
5. Experimental Results
5.1. Ship Dataset
5.2. Performance
5.2.1. Results of the Single Image Detection
5.2.2. Results of Detection Based on Video
6. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
| Label | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Class | Aircraft carrier | Destroyer | Submarine | Bulk carrier | Container ship | Cruise ship | Tugboat | Background |
Class | Training Set | Test Set | Subtotal |
---|---|---|---|
Aircraft carrier | 750 | 250 | 1000 |
Destroyer | 750 | 250 | 1000 |
Submarine | 750 | 250 | 1000 |
Container ship | 750 | 250 | 1000 |
Bulk carrier | 750 | 250 | 1000 |
Cruise ship | 750 | 250 | 1000 |
Tugboat | 750 | 250 | 1000 |
Total | 5250 | 1750 | 7000 |
Class | AP (%) |
---|---|
Aircraft carrier | 90.56 |
Destroyer | 87.98 |
Submarine | 90.22 |
Container ship | 99.60 |
Bulk carrier | 99.59 |
Cruise ship | 99.59 |
Tugboat | 95.01 |
mAP | 94.65 |
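The mAP row above is the unweighted mean of the seven per-class AP values, which can be verified directly:

```python
# Per-class AP values (%) from the single-image test table above.
ap = {
    "aircraft carrier": 90.56, "destroyer": 87.98, "submarine": 90.22,
    "container ship": 99.60, "bulk carrier": 99.59, "cruise ship": 99.59,
    "tugboat": 95.01,
}
# mAP is the unweighted mean over the seven ship classes.
map_score = sum(ap.values()) / len(ap)
print(round(map_score, 2))  # → 94.65
```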
| Interval | Frame | Faster R-CNN Class | Faster R-CNN Confidence | Faster R-CNN + IoU Class | Faster R-CNN + IoU Confidence | Faster R-CNN + IoU + Bayes Class | Faster R-CNN + IoU + Bayes Confidence |
|---|---|---|---|---|---|---|---|
| 1 | 19 | Aircraft Carrier | 0.420 | Aircraft Carrier | 0.420 | Destroyer | 0.999 |
| | 20 | Aircraft Carrier | 0.403 | Aircraft Carrier | 0.403 | Destroyer | 0.999 |
| | 21 | Aircraft Carrier | 0.539 | Aircraft Carrier | 0.539 | Destroyer | 0.997 |
| | 22 | Destroyer | 0.403 | Destroyer | 0.403 | Destroyer | 0.998 |
| | 23 | Destroyer | 0.433 | Destroyer | 0.433 | Destroyer | 0.999 |
| | 24 | Aircraft Carrier | 0.548 | Aircraft Carrier | 0.548 | Destroyer | 0.997 |
| | 25 | Destroyer | 0.463 | Destroyer | 0.463 | Destroyer | 0.998 |
| | 26 | Destroyer | 0.609 | Destroyer | 0.609 | Destroyer | 0.999 |
| | 27 | Destroyer | 0.611 | Destroyer | 0.611 | Destroyer | 0.999 |
| | 28 | Destroyer | 0.420 | Destroyer | 0.420 | Destroyer | 0.999 |
| | 29 | Aircraft Carrier | 0.434 | Aircraft Carrier | 0.434 | Destroyer | 0.999 |
| 2 | 118 | Destroyer | 0.546 | Destroyer | 0.546 | Destroyer | 0.999 |
| | 119 | Misdetection | 0 | Background | 0.900 | Destroyer | 0.999 |
| | 120 | Destroyer | 0.508 | Destroyer | 0.508 | Destroyer | 0.999 |
| | 121 | Destroyer | 0.454 | Destroyer | 0.454 | Destroyer | 0.999 |
| | 122 | Destroyer | 0.672 | Destroyer | 0.672 | Destroyer | 0.999 |
| | 123 | Destroyer | 0.432 | Destroyer | 0.432 | Destroyer | 0.999 |
| | 124 | Misdetection | 0 | Background | 0.900 | Destroyer | 0.999 |
| | 125 | Misdetection | 0 | Background | 0.900 | Destroyer | 0.995 |
| | 126 | Misdetection | 0 | Background | 0.900 | Destroyer | 0.966 |
| | 127 | Misdetection | 0 | Background | 0.900 | Destroyer | 0.806 |
| | 128 | Destroyer | 0.486 | Destroyer | 0.486 | Destroyer | 0.847 |
| 4 | 234 | Destroyer | 0.612 | Destroyer | 0.612 | Destroyer | 0.999 |
| | 235 | Aircraft Carrier | 0.611 | Aircraft Carrier | 0.611 | Destroyer | 0.999 |
| | 236 | Aircraft Carrier | 0.648 | Aircraft Carrier | 0.648 | Destroyer | 0.999 |
| | 237 | Aircraft Carrier | 0.616 | Aircraft Carrier | 0.616 | Destroyer | 0.990 |
| | 238 | Aircraft Carrier | 0.521 | Aircraft Carrier | 0.521 | Destroyer | 0.983 |
| | 239 | Aircraft Carrier | 0.630 | Aircraft Carrier | 0.630 | Destroyer | 0.955 |
| | 240 | Aircraft Carrier | 0.642 | Aircraft Carrier | 0.642 | Destroyer | 0.758 |
| | 241 | Destroyer | 0.858 | Destroyer | 0.858 | Destroyer | 0.974 |
| | 242 | Destroyer | 0.870 | Destroyer | 0.870 | Destroyer | 0.997 |
| | 243 | Destroyer | 0.814 | Destroyer | 0.814 | Destroyer | 0.999 |
| | 244 | Aircraft Carrier | 0.575 | Aircraft Carrier | 0.575 | Destroyer | 0.998 |
| | 245 | Destroyer | 0.766 | Destroyer | 0.766 | Destroyer | 0.999 |
| | 246 | Destroyer | 0.801 | Destroyer | 0.801 | Destroyer | 0.999 |
| | 247 | Destroyer | 0.689 | Destroyer | 0.689 | Destroyer | 0.999 |
| | 248 | Destroyer | 0.639 | Destroyer | 0.639 | Destroyer | 0.999 |
| | 249 | Aircraft Carrier | 0.601 | Aircraft Carrier | 0.601 | Destroyer | 0.999 |
| | 250 | Destroyer | 0.670 | Destroyer | 0.670 | Destroyer | 0.999 |
| | 251 | Aircraft Carrier | 0.558 | Aircraft Carrier | 0.558 | Destroyer | 0.999 |
| | 252 | Aircraft Carrier | 0.632 | Aircraft Carrier | 0.632 | Destroyer | 0.998 |
| | 253 | Aircraft Carrier | 0.553 | Aircraft Carrier | 0.553 | Destroyer | 0.997 |
| | 254 | Aircraft Carrier | 0.651 | Aircraft Carrier | 0.651 | Destroyer | 0.991 |
| Class | AP (%), Faster R-CNN | AP (%), Faster R-CNN + IoU + Bayes |
|---|---|---|
| Aircraft carrier | 99.33 | 100.00 |
| Destroyer | 68.67 | 77.61 |
| Submarine | 98.00 | 100.00 |
| Container ship | 76.69 | 88.19 |
| Bulk carrier | 88.00 | 94.67 |
| Cruise ship | 96.33 | 96.97 |
| Tugboat | 98.67 | 100.00 |
| mAP | 89.38 | 93.92 |
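Both mAP figures for the video-based test follow as unweighted means of the per-class columns, and the gain from adding IoU tracking and Bayesian fusion can be checked directly:

```python
# Per-class AP values (%) for video-based detection, copied from the table above,
# in the order: carrier, destroyer, submarine, container, bulk, cruise, tugboat.
ap_faster_rcnn = [99.33, 68.67, 98.00, 76.69, 88.00, 96.33, 98.67]
ap_fused = [100.00, 77.61, 100.00, 88.19, 94.67, 96.97, 100.00]

map_faster_rcnn = sum(ap_faster_rcnn) / len(ap_faster_rcnn)
map_fused = sum(ap_fused) / len(ap_fused)
print(round(map_faster_rcnn, 2), round(map_fused, 2))  # → 89.38 93.92
print(round(map_fused - map_faster_rcnn, 2))           # → 4.54
```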
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, K.; Hong, S.; Choi, B.; Kim, E. Probabilistic Ship Detection and Classification Using Deep Learning. Appl. Sci. 2018, 8, 936. https://doi.org/10.3390/app8060936