Abstract
Aerial robotics is a growing field with significant civil and military applications, including surveying and maintenance tasks, aerial transportation and manipulation, search and rescue, and surveillance. The challenges of tackling robotics tasks in complex, three-dimensional, indoor and outdoor environments expose the limitations of accepted solutions to classical robotics problems in sensing, planning, localization, and mapping. We present a quadcopter capable of autonomous landing on a stationary platform using only onboard sensing, recognition, and computation. Our system combines state-of-the-art computer vision with a deep-learning Inception model for detection and state estimation of the landing target. We deployed and tested the system in an indoor environment, owing to limited resources and the need for controlled environmental conditions. We rely on the Faster-RCNN-Inception-V2-COCO model, but other robust detection models could be substituted, following the same procedure, to obtain improved results on devices with limited computational power such as the Raspberry Pi. TensorFlow is evolving rapidly; the current version, 1.12.0, with extended Keras support, makes our pipeline more flexible and easier to extend. To the best of our knowledge, this is the first demonstration of a fully autonomous quadrotor capable of landing on a stationary target using only onboard sensing and computing, without relying on external infrastructure, while employing deep learning for target recognition, state estimation, and target tracking.
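The abstract mentions state estimation of the landing target from vision-based detections. As a hedged illustration only (not the paper's actual estimator, whose details are not given here), the sketch below smooths noisy per-frame detections of the pad centre with a minimal alpha-beta filter under a constant-velocity assumption; all gains and values are hypothetical.

```python
# Minimal sketch: alpha-beta filtering of one coordinate of a noisy
# landing-pad detection stream. Gains (alpha, beta) are illustrative,
# not values from the paper.

class AlphaBetaFilter:
    """Tracks one coordinate of the pad centre with a constant-velocity model."""

    def __init__(self, alpha=0.85, beta=0.005, dt=1.0):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = None   # estimated position (e.g. pixels)
        self.v = 0.0    # estimated velocity (e.g. pixels per frame)

    def update(self, measurement):
        if self.x is None:              # initialise on the first detection
            self.x = measurement
            return self.x
        # Predict forward one frame, then correct with the detection residual.
        predicted = self.x + self.v * self.dt
        residual = measurement - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x


# Usage: feed noisy centre detections; the estimate settles near the truth.
f = AlphaBetaFilter()
estimates = [f.update(m) for m in [100.0, 102.0, 98.0, 101.0, 99.0]]
```

In a full system, one such filter per image axis (or a Kalman filter over the full target state) would sit between the detector output and the landing controller, rejecting frame-to-frame detection jitter.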
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Anand, A., Barman, S., Prakash, N.S., Peyada, N.K., Sinha, J.D. (2020). Vision Based Automatic Landing of Unmanned Aerial Vehicle. In: Castillo, O., Jana, D., Giri, D., Ahmed, A. (eds) Recent Advances in Intelligent Information Systems and Applied Mathematics. ICITAM 2019. Studies in Computational Intelligence, vol 863. Springer, Cham. https://doi.org/10.1007/978-3-030-34152-7_8
DOI: https://doi.org/10.1007/978-3-030-34152-7_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34151-0
Online ISBN: 978-3-030-34152-7
eBook Packages: Intelligent Technologies and Robotics (R0)