Fast Reconstruction of 3D Point Cloud Model Using Visual SLAM on Embedded UAV Development Platform
"> Figure 1
<p>Three-dimensional (3D) reconstruction process of visual simultaneous localization and mapping (SLAM).</p> "> Figure 2
<p>The framework of the graphics processing unit (GPU)-enabled unmanned aerial vehicle (UAV) platform.</p> "> Figure 3
<p>The hardware composition of the research system.</p> "> Figure 4
<p>Calling OpenCV in robot operating system (ROS).</p> "> Figure 5
<p>Functions and process nodes in this study.</p> "> Figure 6
<p>The relationship between image pixels and threads.</p> "> Figure 7
<p>Topic communication design implemented in this study.</p> "> Figure 8
<p>Overview of system architecture.</p> "> Figure 9
<p>The flowchart of system processes.</p> "> Figure 10
<p>Flowchart of the front end of visual SLAM.</p> "> Figure 11
<p>Flowchart of the reconstruction module of the visual SLAM algorithm.</p> "> Figure 12
<p>Feature extraction and matching process.</p> "> Figure 13
<p>An example of ORB feature matching (no rejection of mismatch).</p> "> Figure 14
<p>An example of ORB feature matching (rejected by error).</p> "> Figure 15
<p>Keyframe extraction process.</p> "> Figure 16
<p>Relative poses of the keyframes corresponding to four sets of solutions.</p> "> Figure 17
<p>The number of iterated keyframes versus the number of pixels per iteration.</p> "> Figure 18
<p>The relationship between the number of iterated keyframes and the computing time of each iteration.</p> "> Figure 19
<p>Reconstruction results when NCC = 0.9 ((<b>a</b>) depth maps of the original image, (<b>b</b>–<b>f</b>) iteration for one frame, five frames, 10 frames, 15 frames, and 25 frames, respectively).</p> "> Figure 20
<p>Schematic diagram of running nodes through the launch file.</p> "> Figure 21
<p>Innovation Center Building of Qingshuihe Campus, UESTC, from the perspective of UAV.</p> "> Figure 22
<p>Resulting map of the 3D point cloud reconstruction experiment based on UAV platform. (<b>a</b>) Front view (<b>b</b>) Vertical view (<b>c</b>) Right-side view.</p> "> Figure 22 Cont.
<p>Resulting map of the 3D point cloud reconstruction experiment based on UAV platform. (<b>a</b>) Front view (<b>b</b>) Vertical view (<b>c</b>) Right-side view.</p> "> Figure 23
<p>The evaluation results in terms of reconstruction accuracy.</p> ">
Abstract
1. Introduction
2. Proposed Method and Key Technologies
2.1. Proposed Methodology and Framework
2.2. Components and Advantages of the Methodology
2.2.1. Fast Processing
2.2.2. Loose Coupling of the Algorithm
2.2.3. Extensible System
2.3. Related Key Technologies
2.3.1. Hardware Layer Abstraction
2.3.2. Algorithm Decoupling and Distributed Computing
2.3.3. Parallel Computing Based on Graphics Processing Unit (GPU)
2.3.4. Communication and Messaging
3. Design and Implementation of Proposed Methodology
3.1. Overall Design
3.1.1. Overall System Architecture
3.1.2. System Workflow
3.1.3. Algorithmic Process
3.1.4. Simultaneous Localization and Mapping (SLAM) Front End
Design of Feature Extraction and Matching Method
Design of Keyframe Extraction
Estimating Camera Motion
3.1.5. SLAM Back End
Design of Loop-Closure Detection
Design of Pose Optimization Module
Dense Reconstruction
3.2. Design and Implementation of Parallel Algorithms
3.2.1. Implementation and Testing of Serial Algorithms
3.2.2. Parallel Strategy and Implementation
3.3. Node Communication and Its Implementation
3.3.1. Node Implementation
3.3.2. Three-Dimensional (3D) Point Cloud Visualization
3.4. System Environment, Deployment, and Operation
3.4.1. Robot Operating System (ROS) Network
3.4.2. Algorithm Deployment and Operation
4. Description of the Experiments
4.1. Experimental Platform and Configuration
4.2. Experimental Design
4.2.1. Experiment 1: Visual Effect of 3D Point Cloud Reconstruction
4.2.2. Experiment 2: Time and Accuracy Evaluation of the Dense Point Cloud
4.3. Experimental Data
4.3.1. Data for Experiment #1
4.3.2. Data for Experiment #2
5. Experimental Results and Analysis
5.1. Experimental Results of Experiment #1 and Their Analysis
5.2. Experimental Results of Experiment #2 and Their Analysis
6. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
Appendix A
Appendix B
bool update()                                   // Update the entire depth map
{
    for (pixel (x, y) from (0, 0) to (width - 1, height - 1))
    {                                           // Traverse each pixel
        if (converged or diverged)              // Skip pixels whose depth estimate has already converged or diverged
            continue;
        bool ret ← epipolarSearch();            // Search for the match of (x, y) along the epipolar line
        if (ret)
            updateDepth();                      // If matched successfully, update the depth map
    }
}

bool epipolarSearch()                           // Epipolar line search
{
    Vector3d P_ref;                             // Bearing vector P of the reference frame
    Vector2d epipolar_line;                     // Epipolar line (segment form)
    Vector2d epipolar_direction;                // Epipolar line direction
    for (each candidate along the epipolar line)    // Compute NCC along the epipolar line
    {
        Vector2d px_curr;                       // Candidate matching point
        double ncc ← NCC();                     // NCC between the candidate patch and the reference patch
    }
    if (best_ncc < N) return false;             // Accept only matches whose best NCC exceeds the threshold N
}
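For reference, a minimal C++/OpenCV sketch of the NCC score evaluated inside epipolarSearch() might look as follows; the function name, patch handling, and zero guard are our own illustration, not the authors' implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <opencv2/core.hpp>

// Hypothetical helper: zero-mean normalized cross-correlation between two
// equally sized grayscale patches (the window around the reference pixel and
// around a candidate pixel on the epipolar line).
double computeNCC(const cv::Mat& ref_patch, const cv::Mat& cur_patch)
{
    CV_Assert(ref_patch.size() == cur_patch.size() && ref_patch.type() == CV_8UC1);
    const double mean_ref = cv::mean(ref_patch)[0];
    const double mean_cur = cv::mean(cur_patch)[0];
    double numerator = 0.0, den_ref = 0.0, den_cur = 0.0;
    for (int y = 0; y < ref_patch.rows; ++y) {
        for (int x = 0; x < ref_patch.cols; ++x) {
            const double a = ref_patch.at<uint8_t>(y, x) - mean_ref;
            const double b = cur_patch.at<uint8_t>(y, x) - mean_cur;
            numerator += a * b;
            den_ref += a * a;
            den_cur += b * b;
        }
    }
    // Guard against division by zero on textureless patches
    return numerator / std::sqrt(den_ref * den_cur + 1e-10);
}
```

A score close to 1 indicates a strong photometric match; the threshold N in the pseudocode (0.9 in the experiment of Figure 19) rejects weaker candidates.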
Appendix C
__global__ void EpipolarMatchKernel()
{   // Assign the computing tasks to threads through blockIdx and threadIdx
    int x ← blockIdx.x * blockDim.x + threadIdx.x;
    int y ← blockIdx.y * blockDim.y + threadIdx.y;
    double xx ← x + 0.5f;                       // Offset to the texel center for texture fetches
    double yy ← y + 0.5f;
    double mu ← tex2D();                        // Retrieve the estimated depth
    const float sum_templ ← tex2D(sum_templ_tex, xx, yy);    // Retrieve the precomputed NCC matching statistics

    for (every candidate on the epipolar line)  // Search along the epipolar line
    {
        double ncc ← ncc_numerator * rsqrtf(ncc_denominator + FLT_MIN);    // Compute the NCC score of the candidate
        if (the match succeeds)                 // i.e., the NCC score is accepted
            updateDepth();                      // Update the depth map
    }
}
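The kernel above maps one CUDA thread to one pixel (the relationship sketched in Figure 6). A self-contained CUDA sketch of this mapping and its launch configuration, with placeholder names of our own rather than the authors' code, would be:

```cuda
#include <cuda_runtime.h>

// One thread per pixel; the image is tiled into 16x16 thread blocks.
__global__ void perPixelKernel(float* depth, int width, int height)
{
    const int x = blockIdx.x * blockDim.x + threadIdx.x;
    const int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;      // Guard threads that fall outside the image
    depth[y * width + x] += 0.0f;               // Placeholder for the per-pixel depth update
}

void launchPerPixel(float* d_depth, int width, int height)
{
    const dim3 block(16, 16);                   // blockDim: threads per block
    const dim3 grid((width + block.x - 1) / block.x,
                    (height + block.y - 1) / block.y);  // Round up to cover every pixel
    perPixelKernel<<<grid, block>>>(d_depth, width, height);
    cudaDeviceSynchronize();                    // Wait for the kernel to finish
}
```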
Appendix D
Appendix E
cv::Mat img_8uC1;                               // Initialize the depth map matrix
cv_bridge::toCvShare();                         // Convert the ROS image message to an OpenCV image via cv_bridge
slam::SE3<float> T_world_curr(dense_input->pose.orientation.w, ...);    // Acquire the pose information
case State::TAKE_REFERENCE_FRAME:               // Select the reference frame
case State::UPDATE:                             // Update the depth map
PublishResults();                               // Publish the processed results to the topic
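A minimal sketch of how such a node might receive the dense-input image, assuming a standard ROS subscriber callback; the callback name and the MONO8 encoding choice are hypothetical:

```cpp
#include <cv_bridge/cv_bridge.h>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>

// Hypothetical callback: cv_bridge exposes the ROS image message as a cv::Mat
// without copying the underlying buffer.
void denseInputCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImageConstPtr cv_ptr;
    try {
        cv_ptr = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::MONO8);
    } catch (const cv_bridge::Exception& e) {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    const cv::Mat& img_8uC1 = cv_ptr->image;    // 8-bit grayscale frame for the SLAM pipeline
    (void)img_8uC1;
}
```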
Appendix F
uint64_t timestamp;                             // Define the timestamp
pcl_conversions::toPCL(ros::Time::now(), timestamp);    // Convert the current ROS time to a PCL timestamp
pc_->header.frame_id ← "/world";                // Express the point cloud in the world coordinate system (the camera optical center is given in world coordinates)
pc_->header.stamp ← timestamp;                  // Timestamp corresponding to the reference-frame point cloud
pub_pc_.publish(pc_);                           // Map the point cloud into space by publishing it
print pc_->size() as the number of output points;
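For context, a self-contained sketch of such a point-cloud publisher, assuming the common pcl_ros pattern (node, topic, and variable names are our own, not the authors'):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl_ros/point_cloud.h>
#include <ros/ros.h>

// pcl_ros lets a pcl::PointCloud be published directly; it is converted to a
// sensor_msgs/PointCloud2 message on the wire.
using PointType  = pcl::PointXYZI;
using PointCloud = pcl::PointCloud<PointType>;

int main(int argc, char** argv)
{
    ros::init(argc, argv, "pointcloud_publisher");
    ros::NodeHandle nh;
    ros::Publisher pub_pc = nh.advertise<PointCloud>("pointcloud", 1);

    PointCloud::Ptr pc(new PointCloud);
    pc->header.frame_id = "world";                            // World coordinate frame
    pcl_conversions::toPCL(ros::Time::now(), pc->header.stamp);  // PCL timestamp
    pub_pc.publish(pc);                                       // Map the cloud into space
    ROS_INFO("Published %zu points", pc->size());
    return 0;
}
```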
Appendix G
for (int y from 0 to depth.rows - 1) {          // Traverse the reference-frame pixels row by row
    for (int x from 0 to depth.cols - 1) {      // Traverse the pixels column by column
        // Recover the world position of the point from the camera intrinsics,
        // the estimated depth, and the world pose of the reference frame
        const float3 xyz ← T_world_ref * ( f * depth.at<float>(y, x) );
        if (slam::convergence.at<int>(y, x))    // If the depth of this pixel has converged
        {
            PointType p;                        // Create a point
            p.x ← xyz.x; p.y ← xyz.y; p.z ← xyz.z;    // World coordinates of the point
            p.intensity ← ref_img.at<uint8_t>(y, x);  // Gray value at this pixel in the reference frame
            pc_->push_back(p);                  // Save the point
        }
    }
}
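The back-projection in this loop can be written out explicitly. A minimal Eigen-based sketch, with hypothetical names (backProject; fx, fy, cx, cy as the pinhole intrinsics), under the assumption that f is the unit-depth bearing vector of pixel (x, y):

```cpp
#include <Eigen/Core>
#include <Eigen/Geometry>

// A pixel (x, y) with estimated depth d is lifted to a camera-frame ray using
// the intrinsics, scaled by the depth, and mapped into world coordinates with
// the reference-frame pose T_world_ref.
Eigen::Vector3f backProject(int x, int y, float d,
                            float fx, float fy, float cx, float cy,
                            const Eigen::Isometry3f& T_world_ref)
{
    // Bearing vector f of the pixel at unit depth in the camera frame
    const Eigen::Vector3f f((x - cx) / fx, (y - cy) / fy, 1.0f);
    // Point in the camera frame (f * d), then in the world frame
    return T_world_ref * (f * d);
}
```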
Platform | Hardware Information | Supporting CUDA | Software Information |
---|---|---|---|
NVIDIA Jetson TX2 | GPU: NVIDIA Pascal, 256 CUDA cores; CPU: HMP dual-core Denver 2 (2 MB L2) + quad-core ARM Cortex-A57 (2 MB L2); Memory: 8 GB | Yes | Ubuntu 16.04, ROS Kinetic, OpenCV 3.3.1 |
Number of Frames | Pixel Depth Range (m) | Average Depth Value (m) | Maximum Motion (m) |
---|---|---|---|
200 | 0.827–2.84 | 1.531 | 4.576 |
Time of Serial SLAM (s) | Time of Proposed Method (s) | Acceleration Ratio |
---|---|---|
63.53 | 8.41 | 7.55 |
SfM (s) | Time Taken by Proposed Method (s) |
---|---|
136.00 | 22.00 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Huang, F.; Yang, H.; Tan, X.; Peng, S.; Tao, J.; Peng, S. Fast Reconstruction of 3D Point Cloud Model Using Visual SLAM on Embedded UAV Development Platform. Remote Sens. 2020, 12, 3308. https://doi.org/10.3390/rs12203308