Automatic Annotation of Subsea Pipelines Using Deep Learning
Figure 1. Examples of events in subsea pipeline surveys with varying scene conditions; from left to right: burial, exposure, anode, field joint, free span.
Figure 2. Label distribution of the complete dataset of 23,570 frames.
Figure 3. ResNet-50 architecture with modified head.
Figure 4. Model training and evaluation process.
Figure 5. Ground truth label, image, heatmap and predicted confidence scores for the five different event types.
Figure 6. Steps for evaluating the model's performance: (1) validation set, (2) feature extraction, (3) classifier, (4) precision–recall curves for optimal threshold selection, (5) applying optimal thresholds, (6) comparison with ground truth.
Figure 7. Precision–recall curves for all labels. The inset shows a zoomed view of the top-right corner.
Figure 8. Confusion matrices on the test set for each class: anode, burial, exposure, field joint and free span.
Abstract
1. Introduction
2. Materials and Methods
- Burial (B): the pipeline is buried underneath the seabed and thus protected.
- Exposure (E): the pipeline is exposed; visible and prone to damage. When the pipeline is exposed, other pipeline features/events become visible:
  - Anode (A): pipeline bracelet anodes are specifically designed to protect subsea pipelines from corrosion [18]. Data Coordinators visually recognize anodes by the banding that appears orthogonal to the pipeline direction; anodes have no surface vegetation growth.
  - Field joint (FJ): the point where two pipe sections meet and are welded together, typically occurring every 12 m. Data Coordinators recognize field joints by the depression on the pipeline surface.
  - Free span (FS): pipeline segments that are elevated and not supported by the seabed (either due to seabed erosion/scouring or due to uneven seabed during installation). Free spans pose a significant risk to the asset, since currents or moving objects (debris, nets, etc.) could damage the pipeline. FS is more apparent on the starboard and port video feeds; the center camera is used to judge the seabed depth against the pipeline.
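Because exposure-related events can co-occur in a single frame, the task above is a multi-label (rather than multi-class) problem. A minimal sketch of the multi-hot target encoding this implies; the class ordering and helper name are illustrative assumptions, not taken from the paper's code:

```python
# Illustrative class order for the five events (assumption, not the authors' code).
EVENTS = ["anode", "burial", "exposure", "field_joint", "free_span"]

def encode_events(present):
    """Return a multi-hot vector marking which events are visible in one frame."""
    return [1 if e in present else 0 for e in EVENTS]

# An exposed pipeline section showing a field joint:
print(encode_events({"exposure", "field_joint"}))  # [0, 0, 1, 1, 0]
```

Note that a single frame can carry several positive labels (e.g. exposure plus field joint), which is why per-class thresholds rather than a single argmax are used later.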
2.1. Model Architecture
2.2. Performance Evaluation Methodology
3. Model Training
4. Hyperparameter Tuning and Model Validation
5. Model Performance on Test Set
6. Effect of Model Size
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Zingaretti, P.; Zanoli, S.M. Robust real-time detection of an underwater pipeline. Eng. Appl. Artif. Intell. 1998, 11, 257–268. [Google Scholar] [CrossRef]
- Jacobi, M.; Karimanzira, D. Underwater pipeline and cable inspection using autonomous underwater vehicles. In Proceedings of the 2013 MTS/IEEE OCEANS, Bergen, Norway, 10–14 June 2013; pp. 1–6. [Google Scholar] [CrossRef]
- Jacobi, M.; Karimanzira, D. Multi Sensor Underwater Pipeline Tracking with AUVs; 2014 Oceans—St. John’s; IEEE: St. John’s, NL, Canada, 2014; pp. 1–6. [Google Scholar] [CrossRef]
- Narimani, M.; Nazem, S.; Loueipour, M. Robotics vision-based system for an underwater pipeline and cable tracker. In Proceedings of the OCEANS 2009-EUROPE, Bremen, Germany, 11–14 May 2009; pp. 1–6. [Google Scholar] [CrossRef]
- Conte, G.; Zanoli, S.; Perdon, A.M.; Tascini, G.; Zingaretti, P. Automatic analysis of visual data in submarine pipeline inspection. In Proceedings of the OCEANS 96 MTS/IEEE Conference Proceedings. The Coastal Ocean- Prospects for the 21st Century, Fort Lauderdale, FL, USA, 23–26 September 1996; Volume 3, pp. 1213–1219. [Google Scholar] [CrossRef]
- Ortiz, A.; Simó, M.; Oliver, G. A vision system for an underwater cable tracker. Mach. Vis. Appl. 2002, 13, 129–140. [Google Scholar] [CrossRef]
- Ortiz, A.; Antich, J.; Oliver, G. Experimental Evaluation of a Particle Filter-based Approach for Visually Tracking Undersea Cables. IFAC Proc. Vol. 2009, 42, 140–145. [Google Scholar] [CrossRef]
- Asif, M.; Rizal, M. An Active Contour and Kalman Filter for Underwater Target Tracking and Navigation. In Mobile Robots: Towards New Applications; Lazinica, A., Ed.; I-Tech Education and Publishing: Seattle, WA, USA, 2006. [Google Scholar] [CrossRef] [Green Version]
- Nguyen, V.N.; Jenssen, R.; Roverso, D. Automatic autonomous vision-based power line inspection: A review of current status and the potential role of deep learning. Int. J. Electr. Power Energy Syst. 2018, 99, 107–120. [Google Scholar] [CrossRef] [Green Version]
- Zhang, W.; Witharana, C.; Li, W.; Zhang, C.; Li, X.; Parent, J. Using deep learning to identify utility poles with crossarms and estimate their locations from google street view images. Sensors 2018, 18, 2484. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jalil, B.; Leone, G.R.; Martinelli, M.; Moroni, D.; Pascali, M.A.; Berton, A. Fault Detection in Power Equipment via an Unmanned Aerial System Using Multi Modal Data. Sensors 2019, 19, 3014. [Google Scholar] [CrossRef] [Green Version]
- Miao, X.; Liu, X.; Chen, J.; Zhuang, S.; Fan, J.; Jiang, H. Insulator detection in aerial images for transmission line inspection using single shot multibox detector. IEEE Access 2019. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN. 2017. Available online: http://xxx.lanl.gov/abs/1506.01497 (accessed on 12 January 2020).
- Bonnin-Pascual, F.; Ortiz, A. A novel approach for defect detection on vessel structures using saliency-related features. Ocean Eng. 2018, 149, 397–408. [Google Scholar] [CrossRef]
- Bonin-Font, F.; Campos, M.M.; Codina, G.O. Towards Visual Detection, Mapping and Quantification of Posidonia Oceanica using a Lightweight AUV. IFAC-PapersOnLine 2016, 49, 500–505. [Google Scholar] [CrossRef]
- Martin-Abadal, M.; Guerrero-Font, E.; Bonin-Font, F.; Gonzalez-Cid, Y. Deep Semantic Segmentation in an AUV for Online Posidonia Oceanica Meadows Identification. IEEE Access 2018, 6, 60956–60967. [Google Scholar] [CrossRef]
- Petraglia, F.R.; Campos, R.; Gomes, J.G.R.C.; Petraglia, M.R. Pipeline tracking and event classification for an automatic inspection vision system. In Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA, 28–31 May 2017; pp. 1–4. [Google Scholar] [CrossRef]
- Fang, H.; Duan, M. Submarine Pipelines and Pipeline Cable Engineering. In Offshore Operation Facilities; Elsevier: Amsterdam, The Netherlands, 2014; pp. e1–e181. [Google Scholar] [CrossRef]
- Boutell, M.R.; Luo, J.; Shen, X.; Brown, C.M. Learning multi-label scene classification. Pattern Recognit. 2004, 37, 1757–1771. [Google Scholar] [CrossRef] [Green Version]
- Sinha, R.K.; Pandey, R.; Pattnaik, R. Deep Learning For Computer Vision Tasks: A review. arXiv 2018, arXiv:1804.03928. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. (IJCV) 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. arXiv 2017, arXiv:1708.02002. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2017, arXiv:1703.06870. [Google Scholar]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Curran Associates, Inc.: New York, NY, USA, 2019; pp. 8024–8035. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning (ICML’10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation Functions: Comparison of trends in Practice and Research for Deep Learning. arXiv 2018, arXiv:1811.03378. [Google Scholar]
- Geisser, S. The Predictive Sample Reuse Method with Applications. J. Am. Stat. Assoc. 1975, 70, 320–328. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef] [Green Version]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. In Proceedings of the NIPS Autodiff Workshop, Long Beach, CA, USA, 9 December 2017. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Reddi, S.J.; Kale, S.; Kumar, S. On the Convergence of Adam and Beyond. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Smith, L.N. Cyclical Learning Rates for Training Neural Networks. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; pp. 464–472. [Google Scholar] [CrossRef] [Green Version]
- Smith, L.N. Cyclical Learning Rates for Training Neural Networks. arXiv 2015, arXiv:1506.01186. [Google Scholar]
- Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
- Sorower, M.S. A Literature Survey on Algorithms for Multi-Label Learning; Technical Report; Oregon State University: Corvallis, OR, USA, 2010. [Google Scholar]
- Gharroudi, O.; Elghazel, H.; Aussem, A. Ensemble Multi-label Classification: A Comparative Study on Threshold Selection and Voting Methods. In Proceedings of the 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), Vietri sul Mare, Italy, 9–11 November 2015; pp. 377–384. [Google Scholar] [CrossRef]
- Flach, P.A.; Kull, M. Precision-Recall-Gain Curves: PR Analysis Done Right. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS’15), Cambridge, MA, USA, 12–14 December 2015; Volume 1, pp. 838–846. [Google Scholar]
- Saito, T.; Rehmsmeier, M. The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets. PLoS ONE 2015, 10, 1–21. [Google Scholar] [CrossRef] [Green Version]
- Zhang, M.L.; Zhou, Z.H. A Review on Multi-Label Learning Algorithms. IEEE Trans. Knowl. Data Eng. 2014, 26, 1819–1837. [Google Scholar] [CrossRef]
- Yang, Y. An Evaluation of Statistical Approaches to Text Categorization. Inf. Retr. 1999, 1, 69–90. [Google Scholar] [CrossRef]
Event | Anode | Burial | Exposure | Field Joint | Free Span |
---|---|---|---|---|---|
Threshold | 0.357 | 0.367 | 0.632 | 0.542 | 0.430 |
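The thresholds in the table above are applied per class to the model's sigmoid confidence scores. A hedged sketch of that step, assuming the threshold ordering anode, burial, exposure, field joint, free span; variable and function names are illustrative:

```python
import numpy as np

# Per-event optimal thresholds from the table above (A, B, E, FJ, FS).
THRESHOLDS = np.array([0.357, 0.367, 0.632, 0.542, 0.430])

def binarize(scores):
    """Turn per-class sigmoid confidence scores into multi-label predictions."""
    return (np.asarray(scores) >= THRESHOLDS).astype(int)

print(binarize([0.40, 0.10, 0.90, 0.30, 0.05]))  # [1 0 1 0 0]
```

Using a separate threshold per label (selected from the precision–recall curves) lets each class trade precision against recall independently, instead of a single global cut-off such as 0.5.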
Fold # | Exact Match Ratio | Precision | Recall | F1-Score |
---|---|---|---|---|
1 | 0.907 | 0.958 | 0.961 | 0.960 |
2 | 0.890 | 0.949 | 0.956 | 0.953 |
3 | 0.920 | 0.972 | 0.961 | 0.967 |
4 | 0.914 | 0.962 | 0.967 | 0.964 |
5 | 0.899 | 0.954 | 0.958 | 0.956 |
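The per-fold metrics above can be reproduced from multi-hot ground truth and predictions. A minimal sketch (not the authors' code) of the exact match ratio (all five labels correct for a frame) and micro-averaged precision, recall and F1 pooled over all label slots:

```python
import numpy as np

def multilabel_metrics(y_true, y_pred):
    """Exact match ratio plus micro-averaged precision/recall/F1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    exact = np.mean(np.all(y_true == y_pred, axis=1))      # whole row must match
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return exact, prec, rec, f1

# Two frames: the second misses its field joint label.
y_true = [[0, 1, 0, 0, 0], [0, 0, 1, 1, 0]]
y_pred = [[0, 1, 0, 0, 0], [0, 0, 1, 0, 0]]
print(multilabel_metrics(y_true, y_pred))  # (0.5, 1.0, 0.666..., 0.8)
```

The exact match ratio is stricter than per-label accuracy, which is why it sits noticeably below the precision/recall figures in the table.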
Event | Threshold | Accuracy (Avg) | Accuracy (Std) | Recall (Avg) | Recall (Std) | Precision (Avg) | Precision (Std) | F1-Score (Avg) | F1-Score (Std)
---|---|---|---|---|---|---|---|---|---
Anode | 0.357 | 0.981 | 0.006 | 0.910 | 0.028 | 0.912 | 0.046 | 0.911 | 0.028
Burial | 0.367 | 0.978 | 0.001 | 0.959 | 0.011 | 0.953 | 0.013 | 0.956 | 0.004
Exposure | 0.632 | 0.978 | 0.001 | 0.984 | 0.004 | 0.986 | 0.003 | 0.985 | 0.001
Field Joint | 0.542 | 0.942 | 0.008 | 0.893 | 0.020 | 0.885 | 0.024 | 0.889 | 0.015
Free Span | 0.430 | 0.995 | 0.002 | 0.988 | 0.002 | 0.988 | 0.013 | 0.988 | 0.007
Aggregate | | 0.906 | 0.011 | 0.961 | 0.004 | 0.959 | 0.008 | 0.960 | 0.005
Event | Threshold | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|---|
Anode | 0.357 | 0.986 | 0.952 | 0.912 | 0.931 |
Burial | 0.367 | 0.980 | 0.955 | 0.966 | 0.961 |
Exposure | 0.632 | 0.980 | 0.988 | 0.984 | 0.986 |
Field Joint | 0.542 | 0.951 | 0.928 | 0.882 | 0.904 |
Free Span | 0.430 | 0.997 | 0.997 | 0.990 | 0.994 |
Aggregate | | 0.919 | 0.972 | 0.960 | 0.966
Network | # Parameters | Inference Time (ms) | Exact Match Ratio | Precision | Recall | F1-Score |
---|---|---|---|---|---|---|
ResNet-18 | 11,706,949 | 17.7 | 0.872 | 0.945 | 0.947 | 0.946 |
ResNet-34 | 21,815,109 | 20.8 | 0.903 | 0.953 | 0.966 | 0.960 |
ResNet-50 | 25,617,477 | 23.6 | 0.919 | 0.972 | 0.960 | 0.966 |
ResNet-101 | 44,609,605 | 31.2 | 0.916 | 0.956 | 0.973 | 0.965 |
ResNet-152 | 60,253,253 | 39.1 | 0.833 | 0.931 | 0.927 | 0.929 |
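The table above frames backbone choice as an accuracy/latency trade-off. A hedged illustration of one possible selection rule (our sketch, not the paper's method): pick the model with the highest F1-score whose inference time fits a latency budget, using the figures copied from the table:

```python
# Metrics copied from the model-size table; the selection rule is illustrative.
MODELS = {
    "ResNet-18":  {"ms": 17.7, "f1": 0.946},
    "ResNet-34":  {"ms": 20.8, "f1": 0.960},
    "ResNet-50":  {"ms": 23.6, "f1": 0.966},
    "ResNet-101": {"ms": 31.2, "f1": 0.965},
    "ResNet-152": {"ms": 39.1, "f1": 0.929},
}

def best_under_budget(budget_ms):
    """Best F1-score among models meeting the per-frame latency budget."""
    eligible = {k: v for k, v in MODELS.items() if v["ms"] <= budget_ms}
    return max(eligible, key=lambda k: eligible[k]["f1"])

print(best_under_budget(30.0))  # ResNet-50
```

Under this rule ResNet-50 dominates: the larger ResNet-101 and ResNet-152 cost more inference time without improving F1-score.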
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Stamoulakatos, A.; Cardona, J.; McCaig, C.; Murray, D.; Filius, H.; Atkinson, R.; Bellekens, X.; Michie, C.; Andonovic, I.; Lazaridis, P.; et al. Automatic Annotation of Subsea Pipelines Using Deep Learning. Sensors 2020, 20, 674. https://doi.org/10.3390/s20030674