Learning-based approach to enable mobile robots to charge batteries using standard wall outlets

  • Original Research Paper
  • Published in: Intelligent Service Robotics

Abstract

Autonomous battery charging is crucial for mobile service robots operating in human-centered indoor environments, enabling them to extend their operational hours and coverage without human assistance. This paper presents a novel approach for mobile service robots to charge their batteries using standard wall outlets, introducing no additional maintenance cost and requiring no modification to the environment. A portable self-charging device, equipped with cameras, a force sensor, and a 2-degree-of-freedom end-effector carrying a standard 3-pin 120 V power plug, is attached to an existing mobile robot. The robot identifies a wall outlet and navigates to it using an onboard depth camera, then inserts the plug into the outlet even while its vision is obstructed. The plug-insertion operation is guided by a control policy trained in simulation with a deep reinforcement learning technique. The approach achieved a success rate of nearly 90% in experiments of inserting a power plug into a wall outlet, eliminating the need for an installed docking station for autonomous charging or for a human to plug the robot in manually.
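To illustrate how such a learned insertion policy is typically structured, the following is a minimal, hypothetical sketch (in Python with PyTorch and NumPy; not the authors' released code) of an actor network that maps a low-dimensional observation, here assumed to combine force-sensor readings and the 2-DOF end-effector position, to bounded displacement commands, together with an illustrative shaped reward. All dimensions, names, and reward weights are assumptions for illustration only; the actual implementation is in the repository listed under Code availability.

# Hypothetical sketch of a force-guided 2-DOF plug-insertion policy.
# Observation and action dimensions, network sizes, and reward weights are assumed.
import numpy as np
import torch
import torch.nn as nn

OBS_DIM = 6   # assumed: 3-axis force reading + 2-DOF end-effector position + 1 depth cue
ACT_DIM = 2   # assumed: horizontal and vertical displacement commands of the end-effector

class InsertionPolicy(nn.Module):
    """Maps a low-dimensional observation to a bounded 2-DOF displacement command."""
    def __init__(self, obs_dim=OBS_DIM, act_dim=ACT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),   # actions scaled to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

def shaped_reward(contact_force, misalignment, inserted):
    """Illustrative shaped reward: penalize contact force and misalignment, bonus on success."""
    return (-0.01 * float(np.linalg.norm(contact_force))
            - float(np.linalg.norm(misalignment))
            + (10.0 if inserted else 0.0))

if __name__ == "__main__":
    policy = InsertionPolicy()
    obs = torch.zeros(1, OBS_DIM)      # placeholder observation from simulation
    action = policy(obs)               # 2-DOF displacement command in [-1, 1]
    print(action.detach().numpy())

In practice, such a policy would be trained in a Gazebo simulation of the outlet and plug before being deployed on the physical robot, with the shaped reward guiding the agent toward low-force, well-aligned insertions.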

Data availability

Not applicable

Code availability

The simulation software that supports this study is available at https://gazebosim.org/home. The code that supports this study is available at https://github.com/suneric/indoor_service.


Acknowledgements

This research was supported by the University of Cincinnati’s Alan Shepard endowment funding awarded to Dr. Ou Ma, Alan Shepard Chair Professor. We would like to acknowledge the contribution of students Charlie Pritz and Olatz Rodriguez Arechabala in capturing and labeling thousands of images used to train our machine learning model. Their work has significantly improved the quality of our dataset, and we are grateful for their efforts.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

Yufeng Sun and Ou Ma contributed equally to the original ideas leading to the presented solution methods. Yufeng Sun developed the algorithms, wrote the software code, performed the simulations and experiments, and took the lead in writing the manuscript. Ou Ma advised the research project, provided critical feedback, and helped shape the research, analysis, and manuscript.

Corresponding author

Correspondence to Yufeng Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mp4 45222 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sun, Y., Ma, O. Learning-based approach to enable mobile robots to charge batteries using standard wall outlets. Intel Serv Robotics 17, 981–991 (2024). https://doi.org/10.1007/s11370-024-00551-4


Keywords

Navigation