Learning Visual Free Space Detection for Deep-Diving Robots

  • Conference paper
In: Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12662)

Abstract

Since sunlight only penetrates a few hundred meters into the ocean, deep-diving robots have to bring their own light sources for imaging the deep sea, e.g., to inspect hydrothermal vent fields. Such co-moving light sources, mounted not far from the camera, introduce uneven illumination and dynamic patterns on seafloor structures, but also illuminate particles in the water column and create scattered light in the illuminated volume in front of the camera. In this scenario, a key challenge for forward-looking robots inspecting vertical structures in complex terrain is to identify free space (water) for navigation. At the same time, visual SLAM and 3D reconstruction algorithms should map only rigid structures and not get distracted by apparent patterns in the water, which often result in very noisy maps or 3D models with many artefacts. Both challenges, free space detection and clean mapping, could benefit from pre-segmenting the images before maneuvering or 3D reconstruction. We derive a training scheme that exploits depth maps of a reconstructed 3D model of a black smoker field in 1400 m water depth, resulting in a carefully selected, ground-truthed data set of 1000 images. Using this set, we compare the advantages and drawbacks of a classical Markov Random Field-based segmentation solution (graph cut) and a deep learning-based scheme (U-Net) for finding free space in forward-looking cameras in the deep ocean.
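To make the training scheme sketched above more concrete, here is a minimal Python sketch of the two ingredients it implies: deriving a binary water/structure ground-truth mask from a depth map rendered from the reconstructed 3D model, and scoring a predicted segmentation with the Dice and Jaccard overlap measures (cf. reference 19 below). All names and the 10 m range threshold are illustrative assumptions, not the authors' actual pipeline.

import numpy as np

MAX_RANGE = 10.0  # assumed maximum usable imaging range in metres (hypothetical)

def free_space_mask(depth):
    # A pixel counts as free space (water) where the rendered 3D model
    # yields no valid depth: NaN (no geometry hit), non-positive, or
    # farther away than the usable imaging range.
    return ~np.isfinite(depth) | (depth <= 0) | (depth > MAX_RANGE)

def dice_and_jaccard(pred, gt):
    # Label overlap measures for binary masks, cf. Tustison and Gee (ref. 19).
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / max(pred.sum() + gt.sum(), 1)
    jaccard = inter / max(union, 1)
    return float(dice), float(jaccard)

# Tiny synthetic example: a 4x4 depth map whose upper-left block has no
# geometry within range, i.e. open water.
depth = np.full((4, 4), 2.5)
depth[:2, :2] = np.nan
gt = free_space_mask(depth)
pred = gt.copy()
pred[0, 3] = True                  # one simulated false-positive water pixel
print(dice_and_jaccard(pred, gt))  # (0.888..., 0.8)

In the paper itself, such masks are derived from depth maps of the black smoker reconstruction and serve as ground truth for both the graph cut baseline and the U-Net; the sketch only illustrates the thresholding and evaluation logic.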

References

  1. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017)

  2. Baker, E.T., German, C.R.: On the global distribution of mid-ocean ridge hydrothermal vent-fields. Am. Geophys. Union Geophys. Monograph 148, 245–266 (2004)

  3. Braginsky, B., Guterman, H.: Obstacle avoidance approaches for autonomous underwater vehicle: Simulation and experimental results. IEEE J. Oceanic Eng. 41(4), 882–892 (2016)

  4. Drews, P., Hernández, E., Elfes, A., Nascimento, E.R., Campos, M.: Real-time monocular obstacle avoidance using underwater dark channel prior. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4672–4677 (2016)

  5. Simon-Lledó, E., et al.: Biological effects 26 years after simulated deep-sea mining. Sci. Rep. 9, 8040 (2019). https://doi.org/10.1038/s41598-019-44492-w

  6. Gaya, J.O., Gonçalves, L.T., Duarte, A.C., Zanchetta, B., Drews, P., Botelho, S.S.C.: Vision-based obstacle avoidance using deep learning. In: 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), pp. 7–12 (2016)

  7. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  8. He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)

  9. Hernández, J.D., et al.: Autonomous underwater navigation and optical mapping in unknown natural environments. Sensors 16(8), 1174 (2016). https://doi.org/10.3390/s16081174

  10. Jaffe, J.S.: Computer modeling and the design of optimal underwater imaging systems. IEEE J. Oceanic Eng. 15(2), 101–111 (1990). https://doi.org/10.1109/48.50695

  11. Jerlov, N.G.: Marine Optics. Elsevier Scientific Publishing Company (1976)

  12. Jordt, A., Köser, K., Koch, R.: Refractive 3D reconstruction on underwater images. Methods in Oceanography 15–16, 90–113 (2016). https://doi.org/10.1016/j.mio.2016.03.001

  13. Köser, K., Frese, U.: Challenges in underwater visual navigation and SLAM. In: Kirchner, F., Straube, S., Kühn, D., Hoyer, N. (eds.) AI Technology for Underwater Robots. ISCASE, vol. 96, pp. 125–135. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-30683-0_11

  14. Li, Y., Lu, H., Li, J., Li, X., Li, Y., Serikawa, S.: Underwater image descattering and classification by deep neural network. Comput. Electr. Eng. 54, 68–77 (2016). https://doi.org/10.1016/j.compeleceng.2016.08.008

  15. McGlamery, B.L.: Computer analysis and simulation of underwater camera system performance. Technical report, Visibility Laboratory, Scripps Institution of Oceanography, University of California in San Diego (1975)

  16. Mobley, C.D.: Light and Water: Radiative Transfer in Natural Waters. Academic Press, San Diego (1994)

  17. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  18. Rother, C., Kolmogorov, V., Blake, A.: “GrabCut”: interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23(3), 309–314 (2004). https://doi.org/10.1145/1015706.1015720

  19. Tustison, N., Gee, J.: Introducing Dice, Jaccard, and other label overlap measures to ITK. Insight J. (2009)

  20. Yi, F., Moon, I.: Image segmentation: a survey of graph-cut methods. In: International Conference on Systems and Informatics (ICSAI2012), pp. 1936–1941 (2012)

  21. Yuan, Y., Chen, X., Wang, J.: Object-contextual representations for semantic segmentation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12351, pp. 173–190. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58539-6_11

  22. Chuang, Y.-Y., Curless, B., Salesin, D.H., Szeliski, R.: A Bayesian approach to digital matting. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 2 (2001)

  23. Zaitoun, N.M., Aqel, M.J.: Survey on image segmentation techniques. Procedia Comput. Sci. 65, 797–806 (2015)

Acknowledgements

This work has been funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), Projektnummer 396311425 (DEEP QUANTICAMS), through the Emmy Noether Programme. We would also like to thank the ROPOS team and the crew of RV Falkor, as well as the Schmidt Ocean Institute, for supporting the cruise “Virtual Vents” to the Niua South Hydrothermal Vent Field.

Author information

Corresponding author

Correspondence to Kevin Köser.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Shivaswamy, N., Kwasnitschka, T., Köser, K. (2021). Learning Visual Free Space Detection for Deep-Diving Robots. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12662. Springer, Cham. https://doi.org/10.1007/978-3-030-68790-8_31

  • DOI: https://doi.org/10.1007/978-3-030-68790-8_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68789-2

  • Online ISBN: 978-3-030-68790-8

  • eBook Packages: Computer Science, Computer Science (R0)
