Fast Omnidirectional Depth Densification

  • Conference paper

Advances in Visual Computing (ISVC 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11844)

Abstract

Omnidirectional cameras are commonly equipped with fisheye lenses to capture 360-degree visual information, and severe spherical projective distortion occurs when a 360-degree image is stored as a two-dimensional image array. As a consequence, traditional depth estimation methods are not directly applicable to omnidirectional cameras. Dense depth estimation for omnidirectional imaging has been achieved by applying several offline processes, such as patch matching, optical flow, and convolutional propagation filtering, at the cost of heavy additional computation; no dense depth estimation method suitable for real-time applications has been available. In response, we propose an efficient depth densification method designed for omnidirectional imaging to achieve 360-degree dense depth video with an omnidirectional camera. First, we compute sparse depth estimates using a conventional simultaneous localization and mapping (SLAM) method, and then feed these estimates into our depth densification method. We propose a novel densification method based on a spherical pull-push algorithm, devising a joint spherical pyramid for color and depth built on multi-level icosahedron subdivision surfaces. This allows us to propagate the sparse depth continuously and efficiently over the full 360-degree field of view in an edge-aware manner. Our results demonstrate that our real-time densification method is comparable to state-of-the-art offline methods in terms of per-pixel depth accuracy. Combining our depth densification with conventional SLAM allows us to capture real-time 360-degree RGB-D video with a single omnidirectional camera.
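The pull-push scheme at the heart of the densification can be illustrated in a simplified planar setting. The sketch below is a hypothetical NumPy implementation of the classic lumigraph-style pull-push (Gortler et al.), not the paper's spherical, edge-aware version: it fills holes in a sparse depth map by pulling confidence-weighted averages up a pyramid, then pushing coarse estimates back down into empty regions.

```python
import numpy as np

def pull_push(depth, weight, levels=8):
    """Fill holes in a sparse depth map via pull-push on a planar grid.

    depth:  2D array; valid samples are where weight > 0
    weight: 2D array of per-sample confidences in [0, 1]
    """
    # Pull phase: build coarser levels by weighted 2x2 averaging,
    # clamping the accumulated weights to 1 at each level.
    pyr = [(depth, np.clip(weight, 0.0, 1.0))]
    for _ in range(levels):
        d, w = pyr[-1]
        if min(d.shape) < 2:
            break
        h, ww = (d.shape[0] // 2) * 2, (d.shape[1] // 2) * 2
        dw, wv = (d * w)[:h, :ww], w[:h, :ww]
        ds = dw[0::2, 0::2] + dw[1::2, 0::2] + dw[0::2, 1::2] + dw[1::2, 1::2]
        ws = wv[0::2, 0::2] + wv[1::2, 0::2] + wv[0::2, 1::2] + wv[1::2, 1::2]
        dc = np.divide(ds, ws, out=np.zeros_like(ds), where=ws > 0)
        pyr.append((dc, np.minimum(ws, 1.0)))
    # Push phase: upsample coarse estimates and blend them into the
    # finer level wherever the finer level lacks confident samples.
    est = pyr[-1][0]
    for d, w in reversed(pyr[:-1]):
        up = np.repeat(np.repeat(est, 2, axis=0), 2, axis=1)
        # Pad by edge replication if the finer level had odd dimensions.
        up = np.pad(up, ((0, d.shape[0] - up.shape[0]),
                         (0, d.shape[1] - up.shape[1])), mode='edge')
        est = w * d + (1.0 - w) * up
    return est
```

On the sphere, the same two passes run over the joint spherical pyramid, with the 2x2 block averaging replaced by averaging over the icosahedron subdivision hierarchy and the blending weights modulated by color similarity to keep propagation edge-aware.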


Notes

  1. Note that, contrary to the labeling convention of image pyramids, we label each level from coarse to fine in ascending order, following the subdivision labeling convention.
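Under this convention, level 0 is the base icosahedron and each subdivision step splits every triangular face into four. The helper below (a hypothetical sketch, not code from the paper) gives the mesh size at each pyramid level, which is useful for budgeting memory and propagation cost per level.

```python
def icosphere_counts(level):
    """Vertex/edge/face counts of an icosahedron subdivided `level` times.

    Each subdivision splits every triangle into four, so faces and edges
    grow by 4x per level; the vertex count follows from Euler's formula
    V - E + F = 2 for a closed genus-0 surface.
    """
    faces = 20 * 4 ** level
    edges = 30 * 4 ** level
    vertices = 2 + edges - faces  # Euler's formula rearranged
    return vertices, edges, faces
```

At level 0 this gives the icosahedron's 12 vertices and 20 faces; three subdivisions already yield 1280 spherical triangles, so per-level work in the pyramid grows roughly fourfold with each finer level.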


Acknowledgements

Min H. Kim acknowledges Korea NRF grants (2019R1A2C3007229, 2013M3A6A-6073718) and additional support by Cross-Ministry Giga KOREA Project (GK17-P0200), Samsung Electronics (SRFC-IT1402-02), ETRI (19ZR1400), and an ICT R&D program of MSIT/IITP of Korea (2016-0-00018).

Author information

Correspondence to Min H. Kim.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1370 KB)

Supplementary material 2 (mp4 57316 KB)


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Jang, H., Jeon, D.S., Ha, H., Kim, M.H. (2019). Fast Omnidirectional Depth Densification. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2019. Lecture Notes in Computer Science(), vol 11844. Springer, Cham. https://doi.org/10.1007/978-3-030-33720-9_53

  • DOI: https://doi.org/10.1007/978-3-030-33720-9_53

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33719-3

  • Online ISBN: 978-3-030-33720-9

  • eBook Packages: Computer Science, Computer Science (R0)
