Complex urban dataset with multi-level sensors from highly diverse urban environments

Published: 01 May 2019

Abstract

Urban environments are highly diverse, both between cities and within a single city, and this diversity poses challenges for robotics research: urban features differ from one city to another, and sensor measurements can deteriorate within a city. With this diversity in mind, this paper provides Light Detection and Ranging (LiDAR) and image data acquired in complex urban environments. In contrast to existing datasets, the presented dataset captures a wide range of complex urban features and addresses the major difficulties of complex urban areas, such as unreliable and sporadic Global Positioning System (GPS) reception, multi-lane roads, complex building structures, and an abundance of highly dynamic objects. The paper provides both 2D and 3D LiDAR data, together with navigation sensor data at two accuracy levels, commercial-grade and high-grade; the two levels are supplied so that algorithms can also be fully validated with consumer-grade sensors. A forward-facing stereo camera captured visual images of the environment, and the vehicle position estimated through simultaneous localization and mapping (SLAM) is offered as a baseline. The paper also presents 3D map data generated by the SLAM algorithm in the LASer (LAS) format for a wide array of research purposes. A file player and a data viewer are available on the GitHub page so that researchers can conveniently use the data in a Robot Operating System (ROS) environment; the file player sequentially publishes large quantities of data, similar to the rosbag player. The dataset in its entirety can be found at http://irap.kaist.ac.kr/dataset.
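The file player mentioned above replays recordings in global timestamp order, much as `rosbag play` does. The merging step at the heart of such a player can be sketched without ROS; the sensor names, timestamps, and payloads below are hypothetical illustrations, not the dataset's actual topics or API.

```python
import heapq

def replay(streams):
    """Merge per-sensor message lists, each already sorted by timestamp,
    into one globally time-ordered publish sequence (rosbag-play style).
    `streams` maps a sensor name to a list of (timestamp, payload) pairs."""
    tagged = [
        [(stamp, name, payload) for stamp, payload in msgs]
        for name, msgs in streams.items()
    ]
    # heapq.merge interleaves the already-sorted lists in timestamp order
    # without concatenating and re-sorting the whole log.
    for stamp, name, payload in heapq.merge(*tagged):
        yield stamp, name, payload  # a real player would publish and sleep here

# Hypothetical mini-log: two sensors with interleaved timestamps.
log = {
    "sick_2d":     [(0.00, "scan0"),  (0.05, "scan1")],
    "velodyne_3d": [(0.02, "cloud0"), (0.07, "cloud1")],
}
order = [name for _, name, _ in replay(log)]
print(order)  # ['sick_2d', 'velodyne_3d', 'sick_2d', 'velodyne_3d']
```

Because Python compares tuples element-wise, messages are ordered by timestamp first, with the sensor name as a tie-breaker; this is why each message is tagged as `(stamp, name, payload)` before merging.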



Published In

International Journal of Robotics Research, Volume 38, Issue 6, May 2019, 133 pages

Publisher

Sage Publications, Inc., United States

Author Tags

  1. Dataset
  2. urban
  3. LiDARs
  4. cameras
  5. SLAM
