A Real-Time Registration Algorithm of UAV Aerial Images Based on Feature Matching
Figure 1. System functional architecture.
Figure 2. Overall design flow chart.
Figure 3. Orthographic projection map with geographic information.
Figure 4. Attentional graph neural network (adapted from ref. [8]).
Figure 5. A map with layers and blocks; the numbers in the figure represent the numbers of the map blocks: (a) plan diagram; (b) map pyramid.
Figure 6. Pan–tilt–zoom camera (mounted under the UAV).
Figure 7. Top view of the camera's field of view. A(n, e) represents the camera position and coordinates, P represents the center point of the camera image, h is the altitude of the UAV, and L represents the displacement of P when the camera rotates up and down. (a) Camera without rotation; (b) camera rotated up and down by α degrees; (c) camera rotated up and down by α degrees and left and right by β degrees.
Figure 8. Weighted fusion of inter-frame and global matching. In the figure, PreFrame represents the transformed previous frame and CurrFrame represents the current frame.
Figure 9. The multirotor X-type tethered UAV and pan–tilt–zoom camera.
Figure 10. Comparison of feature matching between the Orb and SuperGlue algorithms: (a) Orb feature matching (left: map; right: camera; the numbers in the figure represent incorrect matches); (b) SuperGlue feature matching (left: map; right: camera).
Figure 11. Registration effect: (a) Orb algorithm; (b) our proposed method.
Figure 12. Comparison of the matching effects prior to and following blocking: (a) matching prior to blocking (the map is on the left, the camera image is on the right, and the numbers in the figure represent incorrect matches); (b) matching following blocking (the map is on the left and the camera image is on the right).
Figure 13. Matching effects prior to and following image rotation: (a) matching effect prior to rotation; (b) matching effect following rotation.
Figure 14. Registration effect: (a) match with the map; (b) match with the transformed previous frame.
Figure 15. Comparison of the number of matching points with the transformed previous frame and the map.
Figure 16. Comparison of the inter-frame and global H_error values.
Figure 17. Registration difference image (without updating the map features).
Figure 18. Registration difference image (after updating the map features).
Figure 19. Registration image (without updating the map features).
Figure 20. Registration image (after updating the map features).
Figure 21. Comparison of the number of matching points and index values: (a) number of matches; (b) average pixel value of the registered difference image (index).
Figure 22. Registration difference image (without updating the map features).
Figure 23. Registration difference image (after updating the map features).
Figure 24. Registration image (without updating the map features).
Figure 25. Registration image (after updating the map features).
Figure 26. Comparison of the number of matching points and index values: (a) number of matches; (b) average pixel value of the registered difference image (index).
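The Figure 7 caption lends itself to a quick geometric sanity check. Assuming the simplest flat-ground model, where the camera initially points straight down from altitude h and then tilts by α degrees, the image-center point P moves along the ground by L = h·tan(α). The helper below is a hypothetical sketch of that relation inferred from the caption, not code from the paper, whose exact geometry may differ.

```python
import math

def center_displacement(h: float, alpha_deg: float) -> float:
    """Ground displacement L of the image-center point P when a camera
    at altitude h tilts alpha degrees away from nadir (flat-ground model).

    Hypothetical helper inferred from the Figure 7 caption symbols
    (h, alpha, L); the paper may use a different camera model.
    """
    return h * math.tan(math.radians(alpha_deg))

# With no rotation (case a), P stays directly below the camera.
print(center_displacement(100.0, 0.0))   # 0.0
# At h = 100 m and a 30-degree tilt (case b), P shifts roughly 57.7 m.
print(round(center_displacement(100.0, 30.0), 1))
```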
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Overall Design Framework
3.2. Hierarchical Blocking Strategy Combined with Prior UAV Data
3.2.1. SuperPoint and SuperGlue Feature-Matching Algorithms
3.2.2. Hierarchical and Block Strategy
3.2.3. Automatic Map Block Search Strategy Combined with Prior UAV Data
3.2.4. Rotation
3.3. Inter-Frame Information Fusion
3.3.1. Inter-Frame and Global Matching Fusion
3.3.2. Anomaly Matrix Detection and Removal
3.4. Map Feature Update
4. Experimental Results
4.1. Comparison of the Proposed Method and the Orb Algorithm
4.2. Blocking and Rotation Experiments
4.3. Comparison Conducted Prior to and Following the Addition of Inter-Frame-Matching Information
4.4. Comparison of the Registration Effects Prior to and Following the Real-Time Update of Map Features
4.4.1. Experiment 1 (Group 1)
4.4.2. Experiment 2 (Group 2)
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.S.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28.
2. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349.
3. Liu, P.; Chen, A.Y.; Huang, Y.-N.; Han, J.-Y.; Lai, J.-S.; Kang, S.-C.; Wu, T.-H.; Wen, M.-C.; Tsai, M.-H. A review of rotorcraft unmanned aerial vehicle (UAV) developments and applications in civil engineering. Smart Struct. Syst. 2014, 13, 1065–1094.
4. Li, S. A Review of Feature Detection and Match Algorithms for Localization and Mapping. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Tianjin, China, 2017; Volume 231, p. 012003.
5. Tsai, C.H.; Lin, Y.C. An accelerated image matching technique for UAV orthoimage registration. ISPRS J. Photogramm. Remote Sens. 2017, 128, 130–145.
6. Li, Q.; Wang, G.; Liu, J.; Chen, S. Robust scale-invariant feature matching for remote sensing image registration. IEEE Geosci. Remote Sens. Lett. 2009, 6, 287–291.
7. Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci. Remote Sens. Lett. 2016, 14, 3–7.
8. Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4938–4947.
9. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
10. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
11. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
12. Viswanathan, D.G. Features from accelerated segment test (FAST). In Proceedings of the 10th Workshop on Image Analysis for Multimedia Interactive Services, London, UK, 6–8 May 2009; pp. 6–8.
13. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792.
14. Wang, C.; Chen, J.; Chen, J.; Yue, A.; He, D.-X.; Huang, Q.; Zhang, Y. Unmanned aerial vehicle oblique image registration using an ASIFT-based matching method. J. Appl. Remote Sens. 2018, 12, 025002.
15. Liu, Y.; He, M.; Wang, Y.; Sun, Y.; Gao, X. Farmland aerial images fast-stitching method and application based on improved SIFT algorithm. IEEE Access 2022, 10, 95411–95424.
16. Wu, T.; Hung, I.; Xu, H.; Yang, L.; Wang, Y.; Fang, L.; Lou, X. An optimized SIFT-OCT algorithm for stitching aerial images of a loblolly pine plantation. Forests 2022, 13, 1475.
17. Goh, J.N.; Phang, S.K.; Chew, W.J. Real-time and automatic map stitching through aerial images from UAV. In Journal of Physics: Conference Series; IOP Publishing: Selangor, Malaysia, 2021; Volume 2120, p. 012025.
18. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
19. Xiong, P.; Liu, X.; Gao, C.; Zhou, Z.; Gao, C.; Liu, Q. A real-time stitching algorithm for UAV aerial images. In Proceedings of the 2nd International Conference on Computer Science and Electronics Engineering (ICCSEE 2013), Hangzhou, China, 22–23 March 2013; Atlantis Press: Beijing, China, 2013; pp. 1613–1616.
20. Zhang, G.; Qin, D.; Yang, J.; Yan, M.; Tang, H.; Bie, H.; Ma, L. UAV low-altitude aerial image stitching based on semantic segmentation and ORB algorithm for urban traffic. Remote Sens. 2022, 14, 6013.
21. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
22. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
23. Yuan, Y.; Huang, L.; Guo, J.; Zhang, C.; Chen, X.; Wang, J. OCNet: Object context for semantic segmentation. Int. J. Comput. Vis. 2021, 129, 2375–2398.
24. Yuan, Y.; Huang, W.; Wang, X.; Xu, H. Automated accurate registration method between UAV image and Google satellite map. Multimed. Tools Appl. 2020, 79, 16573–16591.
25. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
26. Zhuo, X.; Koch, T.; Kurz, F.; Fraundorfer, F. Automatic UAV image geo-registration by matching UAV images to georeferenced image data. Remote Sens. 2017, 9, 376.
27. Lin, Y.; Medioni, G. Map-enhanced UAV image sequence registration and synchronization of multiple image sequences. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7.
28. Nassar, A.; Amer, K.; ElHakim, R.; ElHelw, M. A deep CNN-based framework for enhanced aerial imagery registration with applications to UAV geolocalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1513–1523.
29. Zhang, F.; Liu, F. Parallax-tolerant image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3262–3269.
30. Wan, Q.; Chen, J.; Luo, L.; Gong, W.; Wei, L. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7027–7037.
31. Chen, J.; Li, Z.-C.; Peng, C.; Wang, Y.; Gong, W. UAV image stitching based on optimal seam and half-projective warp. Remote Sens. 2022, 14, 1068.
32. DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 224–236.
33. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80.
34. Mnih, V.; Heess, N.; Graves, A.; Kavukcuoglu, K. Recurrent models of visual attention. arXiv 2014, arXiv:1406.6247.
35. Tolstikhin, I.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. MLP-Mixer: An all-MLP architecture for vision. arXiv 2021, arXiv:2105.01601.
36. Jakubović, A.; Velagić, J. Image feature matching and object detection using brute-force matchers. In Proceedings of the International Symposium ELMAR, Zadar, Croatia, 16–19 September 2018; pp. 83–86.
37. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
| Experimental Group | Algorithm | Number of Matches | Accurate Number | Accuracy Rate |
|---|---|---|---|---|
| 1 | Orb | 190 | 10 | 0.05 |
| 1 | SuperGlue | 583 | 274 | 0.47 |
| 2 | Orb | 397 | 16 | 0.04 |
| 2 | SuperGlue | 1706 | 1283 | 0.75 |
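The Accuracy Rate column above is simply the number of accurate matches divided by the total number of matches, rounded to two decimals. A quick check reproduces the table's values:

```python
# (matches, accurate) pairs taken from the Orb vs. SuperGlue table above
results = {
    ("group 1", "Orb"): (190, 10),
    ("group 1", "SuperGlue"): (583, 274),
    ("group 2", "Orb"): (397, 16),
    ("group 2", "SuperGlue"): (1706, 1283),
}
for (group, algo), (matches, accurate) in results.items():
    # Accuracy rate = accurate matches / total matches, two decimals
    print(f"{group} {algo}: {round(accurate / matches, 2)}")
```

Running this prints 0.05, 0.47, 0.04, and 0.75, matching the table.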
| Strategy | Algorithm | Frame Rate (fps) |
|---|---|---|
| Non-blocking | SuperGlue | 9 |
| Blocking | SuperGlue | 12 |
| Sampling Frame | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Non-blocking | 7 | 55 | 14 | 26 | 41 | 25 | 4 | 5 | 96 | 64 |
| Blocking | 282 | 314 | 88 | 166 | 95 | 105 | 136 | 87 | 400 | 242 |
| Sampling Frame | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Before Rotation | 56 | 70 | 71 | 35 | 91 | 113 | 22 | 30 | 60 | 41 |
| After Rotation | 274 | 332 | 334 | 178 | 452 | 496 | 147 | 198 | 225 | 208 |
| Sampling Frame | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Inter-frame | 18 | 12 | 22 | 53 | 19 | 39 | 5 | 4 | 1 | 15 | 14 | 20 | 27 | 34 | 20 | 22.7 |
| Global | 821 | 79 | 199 | 594 | 87 | 352 | 328 | 150 | 249 | 663 | 38 | 39 | 55 | 19 | 42 | 226 |
| Frame Number | Update | 15 | 30 | 45 | 60 | 75 | 90 | 105 | 120 | 135 | 150 | 165 | 180 | 195 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of matches | Yes | 341 | 351 | 340 | 357 | 354 | 354 | 342 | 345 | 347 | 358 | 383 | 373 | 390 |
| Number of matches | No | 86 | 3 | 9 | 56 | 7 | 6 | 15 | 24 | 1 | 21 | 3 | 13 | 135 |
| Index | Yes | 36.5 | 36.2 | 36.6 | 36.7 | 36.9 | 36.6 | 37.1 | 37.1 | 37.3 | 38.5 | 43.5 | 44.0 | 44.5 |
| Index | No | 39.6 | 51.2 | 40.7 | 37.9 | 38.3 | 39.8 | 38.4 | 111.7 | 39.2 | 39.0 | 47.0 | 44.4 | 45.3 |
| Frame Number | Update | 15 | 30 | 45 | 60 | 75 | 90 | 105 | 120 | 135 | 150 | 165 | 180 | 195 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of matches | Yes | 328 | 323 | 282 | 292 | 294 | 304 | 270 | 293 | 277 | 281 | 285 | 293 | 294 |
| Number of matches | No | 320 | 309 | 283 | 239 | 305 | 173 | 217 | 239 | 262 | 242 | 280 | 258 | 260 |
| Index | Yes | 44.1 | 43.1 | 42.7 | 42.5 | 42.1 | 42.0 | 42.4 | 42.6 | 42.3 | 42.3 | 42.3 | 42.5 | 42.3 |
| Index | No | 46.6 | 43.1 | 47.5 | 43.3 | 43.7 | 42.2 | 42.2 | 42.8 | 43.5 | 42.9 | 44.8 | 46.5 | 44.3 |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Z.; Xu, G.; Xiao, J.; Yang, J.; Wang, Z.; Cheng, S. A Real-Time Registration Algorithm of UAV Aerial Images Based on Feature Matching. J. Imaging 2023, 9, 67. https://doi.org/10.3390/jimaging9030067