Drone Detection and Pose Estimation Using Relational Graph Networks
Figure 1. Parrot Bebop-2 quadrotor and the definition of its eight keypoints.
Figure 2. The quadrotor and its keypoint detection framework. We validate three kinds of keypoint heads; see the text description for the details.
Figure 3. Resolving the Z-axis problem in PnP. In two adjacent frames, t and t+1, under similar circumstances, the Z axis recovered for frame t points vertically upward (a), while that recovered for frame t+1 points downward (b). Since the Z axis we define is upward, our improved PnP algorithm resolves the ambiguity, as shown by a test case (c,d); see the text for details.
Figure 4. The projection of the reference points and the three two-point constraints.
Figure 5. Some examples of quadrotor simulation data. Note that the order of the motor shafts must be deduced from the nose and tail.
Figure 6. Some keypoint annotation examples from our Parrot dataset.
Figure 7. The position estimation in the camera frame, compared with OptiTrack (as ground truth).
Figure 8. The attitude estimation in the camera frame, compared with OptiTrack (as ground truth).
Figure 9. The position estimation in the world frame, compared with OptiTrack.
Figure 10. The velocity estimation in the world frame, compared with OptiTrack.
Figure 11. Representative results of quadrotor pose estimation by our relational graph keypoint detection model and the PnP algorithm. The quadrotor pose can be estimated from a variety of observation angles.
Abstract
1. Introduction
2. 6D Drone Pose Estimation
2.1. Relational Keypoints
2.2. Detection Framework
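The Lh-rcnn-k8-NL baseline that appears in the result tables below augments the keypoint head with the non-local block of Wang et al. [41]. As a reference point, here is a minimal PyTorch sketch of that block in its embedded-Gaussian form; the block structure follows [41], but the channel sizes and where it plugs into the head are our assumptions.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Non-local block (Wang et al. [41], embedded-Gaussian form), the
    relation module used by the Lh-rcnn-k8-NL baseline. Channel sizes
    are illustrative."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, 1)
        self.phi = nn.Conv2d(channels, reduced, 1)
        self.g = nn.Conv2d(channels, reduced, 1)
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise relations over positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection
```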
2.3. PnP Pose Estimation
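Given the eight detected keypoints and their known 3D positions on the airframe, the 6D pose follows from a Perspective-n-Point solve. Below is a minimal sketch using OpenCV's solver; the 3D model coordinates are illustrative placeholders, not the measured Bebop-2 geometry.

```python
import numpy as np
import cv2

# 3D keypoints in the body frame (metres): nose, tail, four motor shafts,
# plus two body points. Values are placeholders; replace with the real model.
MODEL_POINTS = np.array([
    [ 0.15,  0.00, 0.00],   # nose
    [-0.15,  0.00, 0.00],   # tail
    [ 0.10,  0.12, 0.02],   # motor 1
    [ 0.10, -0.12, 0.02],   # motor 2
    [-0.10,  0.12, 0.02],   # motor 3
    [-0.10, -0.12, 0.02],   # motor 4
    [ 0.00,  0.05, 0.03],   # body left
    [ 0.00, -0.05, 0.03],   # body right
], dtype=np.float64)

def estimate_pose(keypoints_2d, camera_matrix, dist_coeffs=None):
    """keypoints_2d: (8, 2) pixel coordinates from the detection network."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_POINTS, keypoints_2d.astype(np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix, body frame -> camera frame
    return R, tvec               # tvec: body origin expressed in camera frame
```

The returned rotation and translation give the body pose in the camera frame; converting to the world frame requires the camera extrinsics.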
Algorithm 1: Choosing the most suitable solution from possible solutions.
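A sketch of the selection idea behind Algorithm 1, as we read it from Figure 3: among the candidate pose hypotheses returned by PnP, prefer the one whose body Z axis is consistent with the previous frame (or points upward when there is no history), with a small penalty on reprojection error. The scoring rule and weights here are assumptions, not the paper's exact procedure.

```python
import numpy as np

def choose_pose(candidates, prev_R=None):
    """candidates: list of (R, t, reproj_err) pose hypotheses from PnP.
    Picks the hypothesis whose body Z axis best agrees with the previous
    frame, breaking near-ties by reprojection error."""
    def z_axis(R):
        return R[:, 2]                     # body Z axis expressed in camera frame
    # With no history, take "up" as -Y: in a typical camera frame, image Y points down.
    ref = z_axis(prev_R) if prev_R is not None else np.array([0.0, -1.0, 0.0])
    best, best_score = None, -np.inf
    for R, t, err in candidates:
        continuity = float(np.dot(z_axis(R), ref))   # alignment in [-1, 1]
        score = continuity - 0.01 * err              # small reprojection penalty (assumed weight)
        if score > best_score:
            best, best_score = (R, t), score
    return best
```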
3. Experiments
3.1. Implementation Details
3.2. Keypoints Detection on Simulation Dataset
3.3. Keypoints Detection on Parrot Dataset
3.4. Experiments on State Estimation
4. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Sun, J.; Li, B.; Jiang, Y.; Wen, C. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes. Sensors 2016, 16, 1778.
2. Rivas, A.; Chamoso, P.; González-Briones, A.; Corchado, J.M. Detection of Cattle Using Drones and Convolutional Neural Networks. Sensors 2018, 18, 2048.
3. Pestana, J.; Sanchez-Lopez, J.L.; Puente, P.D.L.; Carrio, A.; Campoy, P. A Vision-based Quadrotor Swarm for the participation in the 2013 International Micro Air Vehicle Competition. In Proceedings of the International Conference on Unmanned Aircraft Systems, Orlando, FL, USA, 27–30 May 2014.
4. Aker, C.; Kalkan, S. Using deep networks for drone detection. In Proceedings of the 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–6.
5. Rozantsev, A.; Lepetit, V.; Fua, P. Detecting Flying Objects Using a Single Moving Camera. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 879–892.
6. Lee, T.J.; Yi, D.H.; Cho, D.I.D. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots. Sensors 2016, 16, 311.
7. Mohamed Bin Zayed International Robotic Challenge, Challenge 1, Track a UAV. Available online: http://www.mbzirc.com/ (accessed on 4 January 2019).
8. Dey, D.; Geyer, C.; Singh, S.; Digioia, M. A cascaded method to detect aircraft in video imagery. Int. J. Robot. Res. 2011, 30, 1527–1540.
9. Yoshihashi, R.; Trinh, T.T.; Kawakami, R.; You, S.; Iida, M.; Naemura, T. Learning Multi-frame Visual Representation for Joint Detection and Tracking of Small Objects. arXiv 2017, arXiv:1709.04666v2.
10. Hajri, R. UAV to UAV Target Detection and Pose Estimation. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2012.
11. Xie, Y.; Pan, F.; Xing, B.; Gao, Q.; Feng, X.; Li, W. A New On-Board UAV Pose Estimation System Based on Monocular Camera. In Proceedings of the 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 27–28 August 2016; pp. 504–508.
12. Fu, Q.; Quan, Q.; Cai, K.Y. Robust Pose Estimation for Multirotor UAVs Using Off-Board Monocular Vision. IEEE Trans. Ind. Electron. 2017, 64, 7942–7951.
13. Su, W.; Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Emaru, T. UAV pose estimation using IR and RGB cameras. In Proceedings of the IEEE/SICE International Symposium on System Integration (SII), Taipei, Taiwan, 11–14 December 2017; pp. 151–156.
14. Saxena, A.; Driemeyer, J.; Ng, A.Y. Learning 3-D object orientation from images. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009.
15. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
16. Rothganger, F.; Lazebnik, S.; Schmid, C.; Ponce, J. 3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints. Int. J. Comput. Vis. 2006, 66, 231–259.
17. Collet, A.; Martinez, M.; Srinivasa, S.S. The MOPED framework: Object recognition and pose estimation for manipulation. Int. J. Robot. Res. 2011, 30, 1284–1306.
18. Liu, T.; Guo, Y.; Yang, S.; Yin, S.; Zhu, J. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems. Sensors 2017, 17, 334.
19. Kehl, W.; Manhardt, F.; Tombari, F.; Ilic, S.; Navab, N. SSD-6D: Making RGB-Based 3D Detection and 6D Pose Estimation Great Again. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1530–1538.
20. Mahendran, S.; Ali, H.; Vidal, R. 3D Pose Regression Using Convolutional Neural Networks. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 494–495.
21. Rad, M.; Lepetit, V. BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging Objects without Using Depth. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3848–3856.
22. Kendall, A.; Grimes, M.; Cipolla, R. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 2938–2946.
23. Kendall, A.; Cipolla, R. Modelling uncertainty in deep learning for camera relocalization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4762–4769.
24. Brachmann, E.; Krull, A.; Nowozin, S.; Shotton, J.; Michel, F.; Gumhold, S.; Rother, C. DSAC—Differentiable RANSAC for Camera Localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2492–2500.
25. Brachmann, E.; Rother, C. Learning Less is More—6D Camera Localization via 3D Surface Regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4654–4662.
26. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
27. Xiang, Y.; Mottaghi, R.; Savarese, S. Beyond PASCAL: A benchmark for 3D object detection in the wild. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014; pp. 75–82.
28. Crivellaro, A.; Rad, M.; Verdie, Y.; Yi, K.M.; Fua, P.; Lepetit, V. A Novel Representation of Parts for Accurate 3D Object Detection and Tracking in Monocular Images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 4391–4399.
29. Gong, W.; Zhang, X.; Gonzàlez, J.; Sobral, A.; Bouwmans, T.; Tu, C.; Zahzah, E. Human Pose Estimation from Monocular Images: A Comprehensive Survey. Sensors 2016, 16, 1966.
30. Lepetit, V.; Fua, P. Keypoint recognition using randomized trees. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1465–1479.
31. Toshev, A.; Szegedy, C. DeepPose: Human Pose Estimation via Deep Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660.
32. Tompson, J.J.; Jain, A.; LeCun, Y.; Bregler, C. Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 1799–1807.
33. Newell, A.; Yang, K.; Deng, J. Stacked Hourglass Networks for Human Pose Estimation. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 483–499.
34. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P.V.; Schiele, B. DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 4929–4937.
35. Papandreou, G.; Zhu, T.; Kanazawa, N.; Toshev, A.; Tompson, J.; Bregler, C.; Murphy, K. Towards accurate multi-person pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Volume 3, p. 6.
36. Yang, W.; Li, S.; Ouyang, W.; Li, H.; Wang, X. Learning Feature Pyramids for Human Pose Estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1290–1299.
37. Huang, S.; Gong, M.; Tao, D. A Coarse-Fine Network for Keypoint Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3047–3056.
38. Zhang, H.; Goodfellow, I.J.; Metaxas, D.N.; Odena, A. Self-Attention Generative Adversarial Networks. arXiv 2018, arXiv:1805.08318.
39. Yang, Z.; Zhao, J.J.; Dhingra, B.; He, K.; Cohen, W.W.; Salakhutdinov, R.; LeCun, Y. GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations. arXiv 2018, arXiv:1806.05662.
40. Vaswani, A.; Parmar, N.; Uszkoreit, J.; Shazeer, N.; Kaiser, L. Image Transformer. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018.
41. Wang, X.; Girshick, R.B.; Gupta, A.; He, K. Non-Local Neural Networks. In Proceedings of the Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7794–7803.
42. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
43. Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
44. Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
45. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
46. Fu, C.; Liu, W.; Ranga, A.; Tyagi, A.; Berg, A.C. DSSD: Deconvolutional Single Shot Detector. arXiv 2017, arXiv:1701.06659.
47. Lin, T.; Goyal, P.; Girshick, R.B.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2999–3007.
48. Girshick, R.B. Fast R-CNN. In Proceedings of the International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
49. Lin, T.; Dollár, P.; Girshick, R.B.; He, K.; Hariharan, B.; Belongie, S.J. Feature Pyramid Networks for Object Detection. In Proceedings of the Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
50. Li, Z.; Peng, C.; Yu, G.; Zhang, X.; Deng, Y.; Sun, J. Light-Head R-CNN: In Defense of Two-Stage Object Detector. arXiv 2017, arXiv:1711.07264.
51. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
52. Li, S.; Xu, C.; Xie, M. A Robust O(n) Solution to the Perspective-n-Point Problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450.
53. Quan, L.; Lan, Z. Linear N-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 774–780.
54. Bebop 2, Parrot Drones. Available online: http://www.parrot.com/product/parrot-bebop-2/ (accessed on 4 January 2019).
55. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.S.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
56. Shrivastava, A.; Gupta, A.; Girshick, R.B. Training Region-Based Object Detectors with Online Hard Example Mining. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 761–769.
57. Lin, T.; Maire, M.; Belongie, S.J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
58. OptiTrack, NaturalPoint, Inc. Available online: https://www.optitrack.com/ (accessed on 4 January 2019).
59. Cao, Z.; Simon, T.; Wei, S.; Sheikh, Y. Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields. In Proceedings of the Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1302–1310.
60. Bar-Shalom, Y.; Kirubarajan, T.; Li, X.R. Estimation with Applications to Tracking and Navigation: Theory, Algorithms, and Software; John Wiley & Sons: Hoboken, NJ, USA, 2001.
61. Kluge, S.; Reif, K.; Brokate, M. Stochastic Stability of the Extended Kalman Filter With Intermittent Observations. IEEE Trans. Autom. Control 2010, 55, 514–518.
| Layer | Output Size | Kernel Size | Stride | Repeat | Output Channels |
|---|---|---|---|---|---|
| Image | 224 × 224 | | | | |
| Conv1 | 112 × 112 | 3 × 3 | 2 | 1 | 24 |
| Max Pool | 56 × 56 | 3 × 3 | 2 | | |
| Conv2 | 28 × 28 | | 2 | 1 | 144 |
| | 28 × 28 | | 1 | 3 | 144 |
| Conv3 | 14 × 14 | | 2 | 1 | 288 |
| | 14 × 14 | | 1 | 7 | 288 |
| Conv4 | 7 × 7 | | 2 | 1 | 576 |
| | 7 × 7 | | 1 | 3 | 576 |
| GAP | 1 × 1 | 7 × 7 | | | |
| FC | 1000 | | | | |
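For concreteness, here is a minimal PyTorch sketch that reproduces the table's shape progression (224 → 112 → 56 → 28 → 14 → 7 → 1, with 24/144/288/576 channels). Plain 3 × 3 convolution blocks stand in for the backbone's actual units, which the table does not specify.

```python
import torch
import torch.nn as nn

def stage(c_in, c_out, repeat):
    """One stage from the table: a stride-2 block followed by `repeat`
    stride-1 blocks. Plain conv/BN/ReLU blocks are stand-ins."""
    layers = [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1, bias=False),
              nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
    for _ in range(repeat):
        layers += [nn.Conv2d(c_out, c_out, 3, stride=1, padding=1, bias=False),
                   nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class Backbone(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.conv1 = nn.Sequential(                       # 224 -> 112, 24 channels
            nn.Conv2d(3, 24, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(24), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(3, stride=2, padding=1)  # 112 -> 56
        self.conv2 = stage(24, 144, repeat=3)             # 56 -> 28
        self.conv3 = stage(144, 288, repeat=7)            # 28 -> 14
        self.conv4 = stage(288, 576, repeat=3)            # 14 -> 7
        self.gap = nn.AdaptiveAvgPool2d(1)                # 7 -> 1
        self.fc = nn.Linear(576, num_classes)

    def forward(self, x):
        x = self.conv4(self.conv3(self.conv2(self.pool(self.conv1(x)))))
        return self.fc(self.gap(x).flatten(1))
```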
| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.8176 | 0.8412 | 0.8001 |
| Lh-rcnn-k8 | 0.8123 | 0.8397 | 0.7985 |
| Lh-rcnn-k8-NL | 0.8189 | 0.8476 | 0.8091 |
| Lh-rcnn-k4-4 | 0.9083 | 0.9346 | 0.9055 |
| Lh-rcnn-k4-4-RG (ours) | 0.9366 | 0.9437 | 0.9212 |

| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.8345 | 0.8551 | 0.8152 |
| Lh-rcnn-k8 | 0.8252 | 0.8435 | 0.8124 |
| Lh-rcnn-k8-NL | 0.8358 | 0.8576 | 0.8173 |
| Lh-rcnn-k4-4 | 0.9107 | 0.9407 | 0.9060 |
| Lh-rcnn-k4-4-RG (ours) | 0.9326 | 0.9523 | 0.9188 |

| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.6453 | 0.8897 | 0.6811 |
| Lh-rcnn-k8 | 0.6411 | 0.8804 | 0.6791 |
| Lh-rcnn-k8-NL | 0.6481 | 0.8759 | 0.6940 |
| Lh-rcnn-k4-4 | 0.6891 | 0.9111 | 0.7446 |
| Lh-rcnn-k4-4-RG (ours) | 0.7415 | 0.9446 | 0.7908 |

| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.7523 | 0.9297 | 0.7985 |
| Lh-rcnn-k8 | 0.7473 | 0.9154 | 0.7896 |
| Lh-rcnn-k8-NL | 0.7591 | 0.9213 | 0.8090 |
| Lh-rcnn-k4-4 | 0.7764 | 0.9346 | 0.8275 |
| Lh-rcnn-k4-4-RG (ours) | 0.8054 | 0.9473 | 0.8479 |

| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.6124 | 0.8448 | 0.6571 |
| Lh-rcnn-k8 | 0.6118 | 0.8492 | 0.6505 |
| Lh-rcnn-k8-NL | 0.6251 | 0.8509 | 0.6692 |
| Lh-rcnn-k4-4 | 0.6743 | 0.8987 | 0.7205 |
| Lh-rcnn-k4-4-RG (ours) | 0.7298 | 0.9176 | 0.7754 |

| Model | AP [0.5:0.95] | AP@0.5 | AP@0.75 |
|---|---|---|---|
| Lh-rcnn-k4 | 0.7084 | 0.8965 | 0.7631 |
| Lh-rcnn-k8 | 0.7023 | 0.9005 | 0.7592 |
| Lh-rcnn-k8-NL | 0.7119 | 0.9056 | 0.7687 |
| Lh-rcnn-k4-4 | 0.7686 | 0.9189 | 0.8049 |
| Lh-rcnn-k4-4-RG (ours) | 0.7869 | 0.9234 | 0.8195 |
| Method | Backbone | Input Size | Speed (fps) | AP [0.5:0.95] |
|---|---|---|---|---|
| CMU-Pose [59] | VGG-19 | 654 × 368 | 20 | 0.5815 |
| G-RMI [35] | Resnet50 | 1200 × 800 | 18 | 0.6446 |
| Mask-RCNN [42] | Resnet50-FPN | 1200 × 800 | 10 | 0.6672 |
| Lh-rcnn-k4 | xception* | 1200 × 800 | 90 | 0.6453 |
| Lh-rcnn-k8 | xception* | 1200 × 800 | 89 | 0.6411 |
| Lh-rcnn-k8-NL [41] | xception* | 1200 × 800 | 75 | 0.6481 |
| Lh-rcnn-k4-4 | xception* | 1200 × 800 | 85 | 0.6891 |
| Lh-rcnn-k4-4-RG (ours) | xception* | 1200 × 800 | 71 | 0.7415 |
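The AP columns follow the COCO-style keypoint protocol: predictions are matched to ground truth by object keypoint similarity (OKS), and AP is averaged over OKS thresholds from 0.5 to 0.95 in steps of 0.05. A simplified sketch of the metric is below; the per-keypoint constants `kappa` are assumptions (COCO fits them per keypoint type), visibility flags are ignored, and the full protocol additionally ranks detections by score to build a precision-recall curve.

```python
import numpy as np

def oks(pred, gt, area, kappa):
    """COCO-style object keypoint similarity. pred, gt: (K, 2) arrays of
    pixel coordinates; area: object scale s^2; kappa: (K,) per-keypoint
    constants. Visible-keypoint masking is omitted for brevity."""
    d2 = np.sum((pred - gt) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2.0 * area * kappa ** 2))))

def average_precision(matches, thresholds=np.arange(0.5, 1.0, 0.05)):
    """matches: best OKS per ground-truth instance. Returns the fraction of
    instances recovered above each threshold, averaged over thresholds; a
    simplified stand-in for the full COCO matching protocol."""
    m = np.asarray(matches)
    return float(np.mean([(m >= t).mean() for t in thresholds]))
```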
| Model | Position Mean Error (m) | Position Std. Dev. (m) | Velocity Mean Error (m/s) | Velocity Std. Dev. (m/s) |
|---|---|---|---|---|
| NCA | 0.0757 | 0.1130 | 0.1228 | 0.2182 |
| NCV | 0.0934 | 0.1384 | 0.2034 | 0.3221 |
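NCA and NCV denote the nearly-constant-acceleration and nearly-constant-velocity motion models from the tracking literature [60], here used to smooth the PnP position estimates and recover velocity. A minimal one-axis NCV Kalman filter sketch follows (run one per axis for 3D); the rate `dt` and the noise levels `q` and `r` are assumed values, not the paper's tuning.

```python
import numpy as np

class NCVFilter:
    """Nearly-constant-velocity Kalman filter over one axis, fed by the
    PnP position measurements. Process noise uses the standard discretized
    white-noise-acceleration model."""
    def __init__(self, dt=1/30, q=1.0, r=0.05):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # observe position only
        self.Q = q * np.array([[dt**4/4, dt**3/2],   # NCV process noise
                               [dt**3/2, dt**2]])
        self.R = np.array([[r**2]])
        self.x = np.zeros((2, 1))                    # state: [position, velocity]
        self.P = np.eye(2)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the PnP position measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([[z]]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0]), float(self.x[1])    # smoothed position, velocity
```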
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).