DOI: 10.1109/ICRA46639.2022.9812338
Research article

Insulator Aiming Using Multi-Feature Fusion-Based Visual Servo Control for Washing Drone

Published: 23 May 2022

Abstract

Insulator visual aiming is difficult for washing drones due to the complex washing environment, strong disturbances, the lack of a debugging environment, and other factors. Conventional visual servo control methods often fail to consider these complex factors adequately and fall short of reliable insulator visual aiming. To address these problems, we propose a novel multi-feature fusion-based drone visual servo control method for accurate insulator visual aiming. A multi-feature fusion neural network (MFFNet) is proposed to map the different input modalities into an embedding space spanned by the learned deep features. Suitable control commands are generated by a simple combination of the learned deep features, which represent the intrinsic structural properties of the insulator and the motion pattern of the drone. Notably, our method is trained purely in simulation and transferred directly to a real drone. Moreover, accurate visual aiming is guaranteed even in environments with strong disturbances. Simulation and experimental results verify the highly accurate insulator aiming, anti-disturbance, and sim-to-real transfer capabilities of the proposed method. Video: https://youtu.be/Ptlajzvp46A.
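
To make the fusion idea concrete, below is a minimal PyTorch sketch of how two input modalities (a camera image of the insulator and a drone state vector) might be encoded into a shared embedding space and combined into a velocity command. This is an illustration only, not the authors' MFFNet: the layer sizes, input dimensions, and the 4-DoF command output are assumptions introduced here.

    # Hypothetical sketch of a multi-feature fusion network for insulator aiming.
    # Assumptions (not from the paper): an RGB image and a 9-dimensional drone
    # state vector as inputs, a 128-dimensional shared embedding, and a 4-DoF
    # velocity command [vx, vy, vz, yaw_rate] as output.
    import torch
    import torch.nn as nn

    class FusionSketch(nn.Module):
        def __init__(self, state_dim=9, embed_dim=128, cmd_dim=4):
            super().__init__()
            # Convolutional encoder for the image modality.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
            # MLP encoder for the drone state / motion modality.
            self.state_encoder = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, embed_dim),
            )
            # Head that combines the two deep features into a control command.
            self.command_head = nn.Sequential(
                nn.Linear(2 * embed_dim, 64), nn.ReLU(),
                nn.Linear(64, cmd_dim),
            )

        def forward(self, image, state):
            z_img = self.image_encoder(image)            # image-modality embedding
            z_state = self.state_encoder(state)          # state-modality embedding
            fused = torch.cat([z_img, z_state], dim=-1)  # simple combination of deep features
            return self.command_head(fused)              # velocity command

    # Example forward pass with dummy inputs (batch of 1).
    model = FusionSketch()
    cmd = model(torch.randn(1, 3, 128, 128), torch.randn(1, 9))
    print(cmd.shape)  # torch.Size([1, 4])

In the setting described by the abstract, such a network would be trained entirely in simulation, with the learned embedding and command head then deployed directly on the real drone.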

Published In

2022 International Conference on Robotics and Automation (ICRA)
May 2022
6634 pages

Publisher

IEEE Press

