Abstract
Autonomous battery charging is crucial for mobile service robots in human-centered indoor environments, enabling them to extend operational hours and coverage without human assistance. This paper presents an innovative approach for mobile service robots to charge their batteries using standard wall outlets, introducing no additional maintenance cost and requiring no modification of the environment. A portable self-charging device, equipped with cameras, a force sensor, and a 2-degree-of-freedom end-effector carrying a standard 3-pin 120V power plug, is attached to an existing mobile robot. The robot identifies a wall outlet and navigates to it using an onboard depth camera. It then inserts the plug into the wall outlet even while its vision is obstructed. The plug-insertion operation is guided by a control policy trained in simulation using a deep reinforcement learning technique. The approach achieved a success rate of nearly \(90\%\) in experiments of inserting a power plug into a wall outlet, eliminating the need for an installed docking station for autonomous charging or for a human to plug the robot in manually.
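To illustrate the force-guided, vision-obstructed insertion step described above, the following is a minimal, hypothetical sketch of how such a policy could be framed as a reinforcement learning problem. The action set, force discretization, and tabular Q-learning used here are illustrative assumptions only; they stand in for the deep reinforcement learning policy and simulation environment used in the paper, whose actual implementation is in the repository linked under Code availability.

```python
import numpy as np

# Hypothetical sketch (not the authors' implementation): the 2-DOF end-effector
# is driven by discrete motion primitives, and the force sensor reading is the
# only observation while the plug occludes the camera view.
ACTIONS = ["move_up", "move_down", "move_left", "move_right", "push_forward"]

def discretize_force(force_xyz, bin_edges=(-5.0, -1.0, 1.0, 5.0)):
    """Map a 3-axis force reading (N) to a small discrete state index."""
    bins = [int(np.digitize(f, bin_edges)) for f in force_xyz]  # 5 bins per axis
    return bins[0] * 25 + bins[1] * 5 + bins[2]                 # 125 states total

class InsertionAgent:
    """Tabular Q-learning agent used here as a simplified stand-in for a deep RL policy."""

    def __init__(self, n_states=125, n_actions=len(ACTIONS),
                 lr=0.1, gamma=0.95, epsilon=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy selection over the discrete motion primitives.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.lr * (target - self.q[state, action])

if __name__ == "__main__":
    # Placeholder training loop: in simulation, applying an action would return
    # a new force reading and a reward (e.g., a bonus on successful insertion).
    agent = InsertionAgent()
    state = discretize_force([0.0, 0.0, 0.0])
    for _ in range(10):
        action = agent.act(state)
        next_force = np.random.uniform(-5.0, 5.0, size=3)  # placeholder sensor reading
        reward = 0.0                                        # placeholder reward signal
        next_state = discretize_force(next_force)
        agent.update(state, action, reward, next_state)
        state = next_state
```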
Data availability
Not applicable
Code availability
The software that supports this study is available at https://gazebosim.org/home. The code that supports this study is available at https://github.com/suneric/indoor_service.
Acknowledgements
This research was supported by the University of Cincinnati’s Alan Shepard endowment funding awarded to Dr. Ou Ma, Alan Shepard Chair Professor. We would like to acknowledge the contribution of students Charlie Pritz and Olatz Rodriguez Arechabala in capturing and labeling thousands of images used to train our machine learning model. Their work has significantly improved the quality of our dataset, and we are grateful for their efforts.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Author information
Contributions
Yufeng Sun and Ou Ma contributed equally to the original ideas leading to the presented solution methods. Yufeng Sun developed the algorithms, wrote the software, performed the simulations and experiments, and took the lead in writing the manuscript. Ou Ma advised the research project, provided critical feedback, and helped shape the research, analysis, and manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file 1 (mp4 45222 KB)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Sun, Y., Ma, O. Learning-based approach to enable mobile robots to charge batteries using standard wall outlets. Intel Serv Robotics 17, 981–991 (2024). https://doi.org/10.1007/s11370-024-00551-4