
USTC FLICAR: A sensors fusion dataset of LiDAR-inertial-camera for heavy-duty autonomous aerial work robots

Published: 01 September 2023

Abstract

In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping (SLAM) and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots. In recent years, numerous public datasets have played significant roles in the advancement of autonomous cars and unmanned aerial vehicles (UAVs). However, these two platforms differ from aerial work robots: UAVs are limited in payload capacity, while cars are restricted to two-dimensional motion. To fill this gap, we build the “Giraffe” mapping robot on a bucket truck, equipped with a variety of well-calibrated and synchronized sensors: four 3D LiDARs, two stereo cameras, two monocular cameras, Inertial Measurement Units (IMUs), and a GNSS/INS system. A laser tracker records millimeter-level ground-truth positions. We also build its ground twin, the “Okapi” mapping robot, to gather data for comparison. The proposed dataset extends the typical autonomous driving sensing suite to aerial scenes, demonstrating the potential of combining autonomous driving perception systems with bucket trucks to create a versatile autonomous aerial working platform. Moreover, based on the Segment Anything Model (SAM), we produce the Semantic FLICAR dataset, which provides fine-grained semantic segmentation annotations for multimodal continuous data in both the temporal and spatial dimensions. The dataset is available for download at https://ustc-flicar.github.io/.
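
To make the SAM-based annotation step above concrete, here is a minimal sketch using Meta's publicly released segment-anything package in automatic mask-generation mode on a single camera frame. The model variant ("vit_h"), checkpoint filename, and image path are placeholder assumptions; the authors' full Semantic FLICAR pipeline, including how masks are kept consistent across time and sensor modalities, is not reproduced here.

```python
# Minimal sketch: class-agnostic mask generation with the open-source
# segment-anything package (github.com/facebookresearch/segment-anything).
# Checkpoint and image filenames below are placeholders, not dataset files.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a ViT-H SAM checkpoint (downloaded separately from the SAM repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an HxWx3 uint8 RGB array; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("camera_frame.png"), cv2.COLOR_BGR2RGB)

# Each dict in `masks` holds a binary 'segmentation' array plus quality
# scores ('predicted_iou', 'stability_score') and a bounding box ('bbox').
masks = mask_generator.generate(image)
print(f"Generated {len(masks)} class-agnostic masks for this frame")
```

Note that SAM by itself yields class-agnostic masks; producing the fine-grained semantic labels described in the abstract requires an additional class-assignment step on top of these masks.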



Published In

International Journal of Robotics Research, Volume 42, Issue 11
September 2023, 71 pages

Publisher

Sage Publications, Inc., United States


Author Tags

  1. Dataset
  2. aerial work
  3. aerial robot
  4. mobile robot
  5. Simultaneous Localization and Mapping
  6. semantic segmentation
  7. computer vision
  8. LiDAR
  9. cameras
  10. bucket truck

Qualifiers

  • Research-article
