DOI: 10.1145/3412841.3441954

Cooperative place recognition in robotic swarms

Published: 22 April 2021

Abstract

In this paper, we propose a study on landmark identification as a step towards a localization setup for real-world robotic swarms. In the real world, landmark identification is often tackled as a place recognition problem through the use of computationally intensive convolutional neural networks. However, the components of a robotic swarm usually have limited computational and sensing capabilities that allow only for the application of relatively shallow networks, which results in a large percentage of recognition errors. In a previous attempt at solving a similar setup, cooperative object recognition, the authors of [1] demonstrated how the use of communication within a swarm and a naive Bayes classifier can substantially improve the correct recognition rate. An assumption of that paper that is not compatible with a swarm localization setup is that all swarm components are looking at the same object. In this paper, we propose the use of a weighting factor to relax this assumption. Through the use of simulation data, we show that our approach provides high recognition rates even in situations in which the robots look at different objects.
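The weighted fusion idea described in the abstract can be sketched as a naive Bayes combination of per-robot class likelihoods, with each robot's contribution tempered by a weight. This is a minimal illustration under assumed inputs, not the authors' implementation; the function name, the likelihood values, and the weights are hypothetical:

```python
import numpy as np

def fuse_observations(likelihoods, weights):
    """Fuse per-robot class likelihoods with a naive Bayes product,
    tempering each robot's contribution by a weight in [0, 1] that
    reflects confidence that it observes the same landmark."""
    log_post = np.zeros(likelihoods.shape[1])
    for lik, w in zip(likelihoods, weights):
        # A weight of 1 applies the full Bayes update; a weight of 0
        # ignores that robot's observation entirely.
        log_post += w * np.log(np.clip(lik, 1e-12, None))
    post = np.exp(log_post - log_post.max())  # stabilize before normalizing
    return post / post.sum()

# Three robots, four candidate landmarks. Robots 0 and 1 agree on
# landmark 2; robot 2, given a low weight, observes something else.
liks = np.array([
    [0.1, 0.1, 0.7, 0.1],
    [0.2, 0.1, 0.6, 0.1],
    [0.6, 0.2, 0.1, 0.1],
])
weights = np.array([1.0, 1.0, 0.3])
posterior = fuse_observations(liks, weights)
print(posterior.argmax())  # landmark 2
```

With uniform weights this reduces to the plain naive Bayes fusion of [1]; down-weighting a robot limits the damage of an observation taken of a different object.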

References

[1]
P. Stegagno, C. Massidda, and H. H. Bülthoff, "Distributed target identification in robotic swarms," in Proceedings of the 30th Annual ACM Symposium on Applied Computing, ser. SAC '15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 307--313.
[2]
L. Bayindir, "A review of swarm robotics tasks," Neurocomputing, vol. 172, 2015.
[3]
M. Bakhshipour, M. J. Ghadi, and F. Namdari, "Swarm robotics search & rescue: A novel artificial intelligence-inspired optimization approach," Applied Soft Computing, vol. 57, pp. 708--726, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1568494617301072
[4]
K. N. McGuire, C. De Wagter, K. Tuyls, H. J. Kappen, and G. C. H. E. de Croon, "Minimal navigation solution for a swarm of tiny flying robots to explore an unknown environment," Science Robotics, vol. 4, no. 35, 2019. [Online]. Available: https://robotics.sciencemag.org/content/4/35/eaaw9710
[5]
E. Zahugi, M. Shanta, and T. Prasad, "Oil spill cleaning up using swarm of robots," Advances in Intelligent Systems and Computing, vol. 178, pp. 215--224, 2013.
[6]
N. Kakalis and Y. Ventikos, "Robotic swarm concept for efficient oil spill confrontation," Journal of Hazardous Materials, vol. 154, pp. 880--887, 2008.
[7]
M. Senanayake, I. Senthooran, J. C. Barca, H. Chung, J. Kamruzzaman, and M. Murshed, "Search and tracking algorithms for swarms of robots: A survey," Robotics and Autonomous Systems, vol. 75, pp. 422--434, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0921889015001876
[8]
M. Kayser, L. Cai, S. Falcone, C. Bader, N. Inglessis, B. Darweesh, and N. Oxman, "Design of a multi-agent, fiber composite digital fabrication system," Science Robotics, vol. 3, no. 22, 2018. [Online]. Available: https://robotics.sciencemag.org/content/3/22/eaau5630
[9]
R. Dasgupta, S. O'Hara, and P. Petrov, "A multi-agent UAV swarm for automatic target recognition," 2005, pp. 80--91.
[10]
R. Sahdev and J. K. Tsotsos, "Indoor place recognition system for localization of mobile robots," in 2016 13th Conference on Computer and Robot Vision (CRV), June 2016, pp. 53--60.
[11]
M. Betke and L. Gurvits, "Mobile robot localization using landmarks," IEEE Transactions on Robotics and Automation, vol. 13, pp. 251--263, 1997.
[12]
X. Xu, Y. Luo, and H. Hao, "Vision-based mobile robot localization using natural landmarks," in 2012 International Conference on Systems and Informatics (ICSAI2012), 2012, pp. 2012--2015.
[13]
Pifu Zhang, E. E. Milios, and J. Gu, "Underwater robot localization using artificial visual landmarks," in 2004 IEEE International Conference on Robotics and Biomimetics, 2004, pp. 705--710.
[14]
D. Heo, A. Oh, and T. Park, "A localization system of mobile robots using artificial landmarks," in 2011 IEEE International Conference on Automation Science and Engineering, 2011, pp. 139--144.
[15]
S. Lowry, N. Sünderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford, "Visual place recognition: A survey," IEEE Transactions on Robotics, vol. 32, no. 1, pp. 1--19, Feb 2016.
[16]
D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150--1157 vol.2.
[17]
J. Guo, P. Borges, C. Park, and A. Gawel, "Local descriptor for robust place recognition using lidar intensity," IEEE Robotics and Automation Letters, 2019.
[18]
D. Galvez-López and J. D. Tardos, "Bags of binary words for fast place recognition in image sequences," IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188--1197, 2012.
[19]
M. J. Milford and G. F. Wyeth, "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights," in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 1643--1649.
[20]
E. Pepperell, P. I. Corke, and M. J. Milford, "All-environment visual place recognition with SMART," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 1612--1618.
[21]
P. Newman, D. Cole, and K. Ho, "Outdoor SLAM using visual appearance and laser ranging," in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006., 2006, pp. 1180--1187.
[22]
T. Naseer, W. Burgard, and C. Stachniss, "Robust visual localization across seasons," IEEE Transactions on Robotics, vol. 34, no. 2, pp. 289--302, 2018.
[23]
J. Facil, D. Olid, L. Montesano, and J. Civera, "Condition-invariant multi-view place recognition," 2019.
[24]
J. Collier, S. Se, and V. Kotamraju, "Multi-sensor appearance-based place recognition," 2013.
[25]
X. Shen, "A survey of object classification and detection based on 2d/3d data," 2019.
[26]
K. Welke, J. Issac, D. Schiebener, T. Asfour, and R. Dillmann, "Autonomous acquisition of visual multi-view object representations for object recognition on a humanoid robot," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 2012--2019.
[27]
C. Baillard, C. Schmid, A. Zisserman, and A. Fitzgibbon, "Automatic line matching and 3D reconstruction of buildings from multiple views," in ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, ser. International Archives of Photogrammetry and Remote Sensing, vol. 32, Part 3-2W5, Munich, Germany, Sep. 1999, pp. 69--80. [Online]. Available: https://hal.inria.fr/inria-00590111
[28]
A. Mittal and L. Davis, "M2Tracker: A multi-view approach to segmenting and tracking people in a cluttered scene using region-based stereo," vol. 2350, 2002.
[29]
S. Huang, Y. Chen, T. Yuan, S. Qi, Y. Zhu, and S.-C. Zhu, "Perspectivenet: 3d object detection from a single rgb image via perspective points," in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8905--8917. [Online]. Available: http://papers.nips.cc/paper/9093-perspectivenet-3d-object-detection-from-a-single-rgb-image-via-perspective-points.pdf
[30]
Y. Xiang, W. Kim, W. Chen, J. Ji, C. Choy, H. Su, R. Mottaghi, L. Guibas, and S. Savarese, "ObjectNet3D: A large scale database for 3D object recognition," vol. 9912, 2016, pp. 160--176.
[31]
A. C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, "Object detection, tracking and recognition for multiple smart cameras," Proceedings of the IEEE, vol. 96, no. 10, pp. 1606--1624, 2008.
[32]
N. Naikal, A. Y. Yang, and S. S. Sastry, "Towards an efficient distributed object recognition system in wireless smart camera networks," in 2010 13th International Conference on Information Fusion, 2010, pp. 1--8.
[33]
D. McGibney, T. Umeda, K. Sekiyama, H. Mukai, and T. Fukuda, "Cooperative distributed object classification for multiple robots with audio features," in 2011 International Symposium on Micro-NanoMechatronics and Human Science, 2011, pp. 134--139.
[34]
Piyush P., R. Rajan, L. Mary, and B. I. Koshy, "Vehicle detection and classification using audio-visual cues," in 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), 2016, pp. 726--730.
[35]
R. F. Carpio, L. Di Giulio, E. Garone, G. Ulivi, and A. Gasparri, "A distributed swarm aggregation algorithm for bar shaped multi-agent systems," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2018, pp. 4303--4308.
[36]
W. Rawat and Z. Wang, "Deep convolutional neural networks for image classification: A comprehensive review," Neural Computation, vol. 29, no. 9, pp. 2352--2449, 2017.
[37]
R. O. Chavez-Garcia, J. Guzzi, L. M. Gambardella, and A. Giusti, "Image classification for ground traversability estimation in robotics," in Advanced Concepts for Intelligent Vision Systems, J. Blanc-Talon, R. Penne, W. Philips, D. Popescu, and P. Scheunders, Eds. Cham: Springer International Publishing, 2017, pp. 325--336.
[38]
A. Giusti, J. Guzzi, D. C. Cireşan, F. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. D. Caro, D. Scaramuzza, and L. M. Gambardella, "A machine learning approach to visual perception of forest trails for mobile robots," IEEE Robotics and Automation Letters, vol. 1, no. 2, pp. 661--667, 2016.

Cited By

  • Swarm Localization Through Cooperative Landmark Identification, Distributed Autonomous Robotic Systems, pp. 429--441, 3 January 2022. DOI: 10.1007/978-3-030-92790-5_33

Published In

SAC '21: Proceedings of the 36th Annual ACM Symposium on Applied Computing
March 2021
2075 pages
ISBN:9781450381048
DOI:10.1145/3412841

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. localization
  2. sensor fusion
  3. swarm

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation

Conference

SAC '21: The 36th ACM/SIGAPP Symposium on Applied Computing
March 22 - 26, 2021
Virtual Event, Republic of Korea

Acceptance Rates

Overall Acceptance Rate 1,650 of 6,669 submissions, 25%

