Direct RGB-D visual odometry with point features

  • Original Research Paper
  • Published:
Intelligent Service Robotics

Abstract

In this paper, we propose a traditional semi-dense direct visual odometry (VO) method, based on our preliminary study of low-order Gaussian derivative functions, that solves the VO problem with pure frame-by-frame point tracking. Building on an off-line fitting analysis of residual sets, which we performed first to determine the coarse-to-fine framework, the method employs a simple local interpolation to enrich the search space of the subsampled image. Even without any dedicated handling of implementation acceleration, tracking loss, or divergence, the proposed approach achieves acceptable performance compared with baseline algorithms from both the direct family and the matching-based data-association family. An experimental study is conducted on a group of TUM datasets against the reference VO algorithms.
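
For a concrete picture of the two ingredients the abstract names, a coarse-to-fine pyramid and local interpolation for subpixel sampling, the following is a minimal, generic sketch in Python/NumPy. It is not the authors' implementation; the function names (bilinear, pyramid, track) and the stand-in refine_pose callback are illustrative assumptions, not the paper's code.

    import numpy as np

    def bilinear(img, x, y):
        # Sample intensity at subpixel (x, y) by bilinear interpolation;
        # assumes 0 <= x < W-1 and 0 <= y < H-1 (no bounds checks here).
        x0, y0 = int(x), int(y)
        ax, ay = x - x0, y - y0
        return ((1 - ax) * (1 - ay) * img[y0, x0]
                + ax * (1 - ay) * img[y0, x0 + 1]
                + (1 - ax) * ay * img[y0 + 1, x0]
                + ax * ay * img[y0 + 1, x0 + 1])

    def pyramid(img, levels=3):
        # Half-resolution pyramid, finest level first; a plain 2x2 mean
        # stands in for whatever downsampling filter the paper uses.
        pyr = [np.asarray(img, dtype=np.float64)]
        for _ in range(levels - 1):
            p = pyr[-1]
            h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
            pyr.append(p[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
        return pyr

    def track(prev_img, curr_img, pose0, refine_pose, levels=3):
        # Coarse-to-fine skeleton: refine the frame-to-frame pose on the
        # coarsest level first, then hand the estimate down to finer levels.
        # refine_pose is a placeholder for the photometric refinement step.
        pyr_prev = pyramid(prev_img, levels)
        pyr_curr = pyramid(curr_img, levels)
        pose = pose0
        for lv in reversed(range(levels)):  # coarsest index is levels - 1
            pose = refine_pose(pyr_prev[lv], pyr_curr[lv], pose, lv)
        return pose

In this reading, bilinear supplies the "enriched search space" between pixel centers, while track realizes the coarse-to-fine schedule the abstract attributes to the off-line residual analysis.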


Notes

  1. One can zoom the PDF to around 330% for a clear view.

  2. Note that \(\varvec{T}\) is \(4\times 4\). We give \(\varvec{T}\) directly in Eq. (2) simply to illustrate the notation (the standard homogeneous form is written out after these notes).

  3. Note that this is not the same term as the one in weighting function (12) (a representative robust weight is sketched after these notes for context).

  4. Unless stated otherwise, "RMSE of RPE", or simply "RPE", refers to the RMSE of the translational RPE from here on (a minimal computation sketch follows these notes).
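
On note 2: Eq. (2) is not reproduced on this page, but assuming the SE(3) convention standard throughout direct VO, the homogeneous form of a rigid-body transform is

\[ \varvec{T} = \begin{pmatrix} \varvec{R} & \varvec{t} \\ \varvec{0}^{\top } & 1 \end{pmatrix} \in SE(3), \qquad \varvec{R} \in SO(3), \quad \varvec{t} \in \mathbb {R}^{3}, \]

which is indeed \(4\times 4\): a \(3\times 3\) rotation block, a translation vector, and the fixed bottom row \((0\;0\;0\;1)\).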
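On note 3: weighting function (12) itself does not appear on this page. Purely for context, a robust weight commonly used in direct RGB-D odometry (e.g., the t-distribution weight popularized by Kerl et al. 2013) has the form

\[ w(r_{i}) = \frac{\nu + 1}{\nu + \left( r_{i}/\sigma \right)^{2}}, \]

where \(r_{i}\) is a photometric residual, \(\nu\) the degrees of freedom, and \(\sigma\) the residual scale. Whether (12) takes this exact form is an assumption, not a claim about the paper.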
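On note 4: the translational RPE and its RMSE follow the TUM benchmark definition (Sturm et al. 2012). Below is a minimal NumPy sketch of that definition, not necessarily the evaluation code used in the paper:

    import numpy as np

    def rpe_translation_rmse(gt, est, delta=1):
        # gt, est: sequences of time-aligned 4x4 homogeneous camera poses.
        # For each frame pair (i, i + delta), compare the estimated relative
        # motion against the ground-truth relative motion and keep the norm
        # of the translational part of the error; return the RMSE.
        errs = []
        for i in range(len(gt) - delta):
            dg = np.linalg.inv(gt[i]) @ gt[i + delta]   # ground-truth motion
            de = np.linalg.inv(est[i]) @ est[i + delta] # estimated motion
            e = np.linalg.inv(dg) @ de                  # relative pose error
            errs.append(np.linalg.norm(e[:3, 3]))       # translational part
        return float(np.sqrt(np.mean(np.square(errs))))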


Author information


Corresponding author

Correspondence to Xu An.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yao, Z., An, X., Charrier, C. et al. Direct RGB-D visual odometry with point features. Intel Serv Robotics 17, 1077–1089 (2024). https://doi.org/10.1007/s11370-024-00559-w


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11370-024-00559-w

Keywords

Navigation