Abstract
The identification of salt ore has clear practical significance for oil and gas exploitation. Traditionally, salt boundaries are picked by manual visual inspection, which can introduce serious systematic bias. With the progress of machine vision in image analysis, human effort in salt mine recognition is increasingly being replaced by automated methods, and the deepening application of deep learning in this field is making image-based salt mine recognition markedly more efficient and accurate. To this end, this paper investigates a deep convolutional neural network based image segmentation model for salt mine recognition, using exploratory data analysis to mine the data characteristics and data augmentation to enlarge the image set, thereby enhancing the generalization capability of the designed model. Concretely, a U-Net model integrated with a modified ResNet34 is first designed as the basic recognition model, and several refinements are then applied according to the data characteristics, including an auxiliary function, hyper-column features, scSE blocks, and a depth supervision scheme. In addition, multiple loss functions are adapted to further improve the generalization capacity of the model. Numerical analysis and evaluation show the effectiveness of these investigations in terms of loss value and recognition accuracy.
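For illustration, the sketch below shows how two of the components named above, the scSE recalibration block and a skip-connected decoder stage of a U-Net-style model, are commonly implemented in PyTorch. The class names, channel sizes, and layer choices are assumptions made for this example, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation recalibration."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel excitation: global average pooling followed by a bottleneck gate.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial excitation: a per-pixel gate produced by a 1x1 convolution.
        self.sse = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)


class DecoderBlock(nn.Module):
    """One U-Net decoder stage: upsample, fuse the encoder skip, recalibrate."""

    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + skip_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.scse = SCSEBlock(out_channels)

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = torch.cat([x, skip], dim=1)
        return self.scse(self.conv(x))


if __name__ == "__main__":
    # Toy shapes: a 256-channel bottleneck map fused with a 128-channel skip map.
    bottleneck = torch.randn(1, 256, 16, 16)
    skip = torch.randn(1, 128, 32, 32)
    out = DecoderBlock(256, 128, 64)(bottleneck, skip)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In a full model of this kind, decoder outputs at several resolutions would typically be upsampled and concatenated as hyper-column features before a final 1x1 classification layer, with auxiliary heads attached to intermediate stages for additional supervision.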
Acknowledgments
This work was supported in part by the Natural Science Foundation of Guangdong Province (Grant No. 2018A030313014), the Guangdong University Key Project (2019KZDXM012), and the research team project of Dongguan University of Technology (Grant No. TDY-B2019009).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Tao, M., Li, X., Ding, K. (2020). Deep Convolutional Neural Network Based Image Segmentation for Salt Mine Recognition. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol 12488. Springer, Cham. https://doi.org/10.1007/978-3-030-62463-7_1
DOI: https://doi.org/10.1007/978-3-030-62463-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62462-0
Online ISBN: 978-3-030-62463-7