Accurate Matching of Invariant Features Derived from Irregular Curves
<p><b>Figure 1.</b> The output of our curve matching and feature matching on the <span class="html-italic">temple</span> scene.</p>
<p><b>Figure 2.</b> The pipeline of our method consists of two stages—curve matching and feature matching—that extract invariant features from curves. In the first stage, three sub-steps—<span class="html-italic">local region searching</span>, <span class="html-italic">segmented matching</span>, and <span class="html-italic">candidate filtering</span>—accurately match the curves detected in the image pair into curve segments. For convenient use of the curve features, the second stage extracts invariant features from the matched curves with a self-adaptive curve-fitting strategy under the joint optimization of <span class="html-italic">curve segmentation</span> and <span class="html-italic">“outlier” removal</span>.</p>
<p><b>Figure 3.</b> The guided curve matching.</p>
<p><b>Figure 4.</b> Local region searching. The red curve is <span class="html-italic">c</span> in <math display="inline"><semantics> <msub> <mi>I</mi> <mn>1</mn> </msub> </semantics></math>, while the others belong to <math display="inline"><semantics> <msubsup> <mi>I</mi> <mn>2</mn> <mo>′</mo> </msubsup> </semantics></math>. Suppose these curves cross two cells. The black points are keypoints, and the green curves <math display="inline"><semantics> <msubsup> <mfenced separators="" open="{" close="}"> <msubsup> <mi>c</mi> <mi>j</mi> <mo>′</mo> </msubsup> </mfenced> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>k</mi> </msubsup> </semantics></math> in the dotted region are the potential correspondences of the curve <span class="html-italic">c</span>.</p>
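As a rough illustration of this local region searching, curves can be bucketed into grid cells so that only curves sharing a cell with <span class="html-italic">c</span> survive as candidates. A minimal sketch (the cell size and function names are our assumptions, not the paper's implementation):

```python
def cell_of(point, cell_size):
    """Map an (x, y) point to the index of the grid cell containing it."""
    x, y = point
    return (int(x // cell_size), int(y // cell_size))

def local_region_candidates(curve, curves2, cell_size=64):
    """Return the curves from the second image that cross at least one
    grid cell crossed by `curve` (each curve is a list of (x, y) points)."""
    cells = {cell_of(p, cell_size) for p in curve}
    return [c for c in curves2
            if any(cell_of(p, cell_size) in cells for p in c)]
```

Only the candidates returned here would then be passed to the (much more expensive) descriptor-based matching.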
<p><b>Figure 5.</b> An example of segmented matching for one curve and its candidate. The blue dots are the first pair of fiducial points obtained by the MSCD description and NNDR matching. The blue curves are the matched curve segments, whose matching starts from the blue points and proceeds along the curve. Next, the same operations are executed on the remaining curve parts: the green dots are obtained and the green segments are matched, and then the red dots are obtained and the red segments are matched in the same way. After the whole matching step, multiple pairs of curve segments are accurately matched for one pair of curve correspondences.</p>
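The fiducial-point pairing above relies on NNDR (nearest-neighbor distance ratio) matching of descriptors. A generic sketch of the ratio test, with plain Euclidean descriptors standing in for MSCD (which is not reproduced here):

```python
import math

def nndr_match(desc1, desc2, ratio=0.8):
    """Match two descriptor lists by the nearest-neighbor distance ratio
    test: accept the nearest neighbor only if it is clearly closer than
    the second-nearest.  Returns (index_in_desc1, index_in_desc2) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for i, d1 in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        if len(ranked) < 2:
            continue
        best, second = ranked[0], ranked[1]
        if dist(d1, desc2[best]) < ratio * dist(d1, desc2[second]):
            matches.append((i, best))
    return matches
```

Ambiguous descriptors, whose two nearest neighbors are nearly equidistant, are rejected rather than matched, which is what makes the accepted fiducial points reliable anchors for segment growing.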
<p><b>Figure 6.</b> The illustration of GDD. One side <span class="html-italic">p</span> is along the direction <math display="inline"><semantics> <msub> <mi>d</mi> <mo>⊥</mo> </msub> </semantics></math>, while the other side <span class="html-italic">n</span> is along the negative direction.</p>
<p><b>Figure 7.</b> Explanation of the surrounding keypoints. For one curve segment, we narrow the keypoint-searching region with a minimal enclosing rectangle (the red one). The green points are considered the ideal surrounding keypoints for the curve segment, but the black ones can also be added when green points are lacking. The selection of the surrounding keypoints is based on their RMSE, whose threshold is denoted as <math display="inline"><semantics> <msub> <mi>ε</mi> <mrow> <mi>k</mi> <mi>p</mi> </mrow> </msub> </semantics></math>.</p>
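A simplified sketch of this keypoint selection, using an axis-aligned bounding box in place of the paper's minimal enclosing rectangle, and assuming each keypoint comes with a precomputed error value playing the role of its RMSE (the margin and fallback rule are our assumptions):

```python
def surrounding_keypoints(segment, keypoints, eps_kp, margin=20.0, min_count=4):
    """Pick keypoints surrounding a curve segment.

    `segment`  : list of (x, y) points on the curve segment.
    `keypoints`: list of ((x, y), error) pairs; `error` stands in for the
                 per-keypoint RMSE used in the paper.
    Keypoints inside an expanded bounding box with error <= eps_kp are the
    "green" points; if too few qualify, fall back to the lowest-error
    in-box points (the "black" ones)."""
    xs = [p[0] for p in segment]
    ys = [p[1] for p in segment]
    x0, x1 = min(xs) - margin, max(xs) + margin
    y0, y1 = min(ys) - margin, max(ys) + margin
    in_box = [(pt, e) for pt, e in keypoints
              if x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1]
    good = [pt for pt, e in in_box if e <= eps_kp]
    if len(good) >= min_count:
        return good
    # Not enough "green" points: relax to the smallest-error in-box points.
    in_box.sort(key=lambda pe: pe[1])
    return [pt for pt, _ in in_box[:min_count]]
```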
<p><b>Figure 8.</b> Demonstration of each step of curve matching in scenes with different textures. From top to bottom, the texture types are ordinary, low-texture, and repeated-texture. (<b>a</b>) In the first step, local region searching, we acquire multiple curves in <math display="inline"><semantics> <msubsup> <mi>I</mi> <mn>2</mn> <mo>′</mo> </msubsup> </semantics></math> as candidates for <span class="html-italic">c</span>. (<b>b</b>) After segmented matching, a curve may still have multiple accurately matched correspondences. (<b>c</b>) Lastly, candidate filtering retains the unique candidate of each curve.</p>
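The final filtering step, which leaves each curve with a unique correspondence, can be sketched as a greedy one-to-one assignment. Here each candidate pair carries a score, e.g., total matched segment length; the scoring criterion is our assumption, as the paper's exact filtering rule is not reproduced here:

```python
def filter_candidates(scores):
    """`scores` maps (curve_id, candidate_id) -> score (e.g., total matched
    segment length).  Greedily keep the highest-scoring pairs so that each
    curve and each candidate appears at most once."""
    chosen = {}
    used_candidates = set()
    for (curve, cand), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if curve not in chosen and cand not in used_candidates:
            chosen[curve] = cand
            used_candidates.add(cand)
    return chosen
```

A greedy pass is enough here because a strong correct match almost always outscores the mismatches it conflicts with; a full optimal assignment (e.g., Hungarian algorithm) would be overkill for this filtering role.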
<p><b>Figure 9.</b> The effect of each step of curve matching, compared with direct matching by MSCD [<a href="#B37-remotesensing-14-01198" class="html-bibr">37</a>]. The <b>left</b> column shows the mismatches without the corresponding step of our method, and the <b>right</b> column shows the result of adopting that step.</p>
<p><b>Figure 10.</b> Comparison of curve matching using MSCD [<a href="#B37-remotesensing-14-01198" class="html-bibr">37</a>] and our method.</p>
<p><b>Figure 11.</b> Comparison of the extracted features on scenes with different textures. From top to bottom, the groups are ordinary (<span class="html-italic">temple</span> and <span class="html-italic">railtrack</span>), low-texture (<span class="html-italic">playground</span> and <span class="html-italic">table</span>), and repeated-texture (<span class="html-italic">roof</span> and <span class="html-italic">pattern</span>). The matched features across images are connected by green lines.</p>
<p><b>Figure 12.</b> Mismatches for twisted curves.</p>
<p><b>Figure 13.</b> The maximal-curvature points on the detected curve pairs without curve fitting.</p>
<p><b>Figure 14.</b> Stitching results using SIFT keypoints and our features. Groups (<b>a</b>) and (<b>c</b>) are the results of keypoints; groups (<b>b</b>) and (<b>d</b>) are our results.</p>
<p><b>Figure 15.</b> Further comparison of the performance of SIFT and our method. From top to bottom, the scenes are ordinary, low-texture, and repeated-texture. (<b>a</b>) The matched keypoints, connected by green lines; (<b>b</b>) the matched features from our method; (<b>c</b>) the curve-matching results from our method; (<b>d</b>) the stitching results from SIFT keypoints; (<b>e</b>) the stitching results from our method.</p>
<p><b>Figure 16.</b> Matching features for relative pose estimation extracted by SIFT and our method. From top to bottom, the datasets are <span class="html-italic">picture</span>, <span class="html-italic">bear</span>, and <span class="html-italic">ball</span>.</p>
<p><b>Figure 17.</b> The result of feature-based image searching using our extracted features. The first column is the query image, while the following columns are the search results, ranked by the number of matched features (only the top eight frames are shown).</p>
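The ranking described in this caption reduces to sorting database frames by their match counts; a minimal sketch (frame identifiers and the tie-breaking rule are our assumptions):

```python
def rank_frames(match_counts, top_k=8):
    """Rank database frames by descending number of matched features.
    Ties are broken by frame id for determinism.  Returns the top_k ids."""
    ranked = sorted(match_counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [frame for frame, _ in ranked][:top_k]
```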
Abstract
1. Introduction
- (1) We propose an end-to-end method to match invariant features derived from irregular curves in images with different textures;
- (2) Compared to existing curve-matching approaches, we introduce a three-step matching strategy that matches the curves more accurately through elaborate searching and description;
- (3) We present a self-adaptive fitting approach for the matched features to eliminate the disturbance in pixel-level descriptions caused by perspective change;
- (4) Extensive experiments demonstrate the effectiveness of our method compared to state-of-the-art keypoint detection methods, and applications in several fields demonstrate the utility of the matched invariant features.
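The self-adaptive fitting of contribution (3) can be illustrated with a simplified sketch: fit a cubic polynomial to a curve's points, drop "outlier" points whose residuals exceed a threshold, and split the curve into segments when one polynomial still cannot meet the error bound. The thresholds, the one-at-a-time removal, and the midpoint split rule below are illustrative assumptions, not the paper's exact optimization:

```python
import numpy as np

def adaptive_fit(points, degree=3, resid_thresh=2.0, rmse_thresh=1.0,
                 min_pts=8, max_iter=50):
    """Fit y = poly(x) to a curve, removing 'outlier' points one at a time,
    and splitting the curve when the fit cannot meet the RMSE bound.
    Returns a list of polynomial coefficient arrays, one per segment."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < min_pts:
        return []
    x, y = pts[:, 0], pts[:, 1]
    for _ in range(max_iter):
        coeffs = np.polyfit(x, y, degree)
        resid = np.abs(np.polyval(coeffs, x) - y)
        worst = int(resid.argmax())
        if resid[worst] <= resid_thresh or len(x) <= min_pts:
            break
        # "Outlier" removal: drop the worst point and refit.
        x = np.delete(x, worst)
        y = np.delete(y, worst)
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    if rmse <= rmse_thresh:
        return [coeffs]
    # Curve segmentation: split in half and fit each part separately.
    mid = len(x) // 2
    return (adaptive_fit(np.column_stack([x[:mid], y[:mid]]), degree,
                         resid_thresh, rmse_thresh, min_pts, max_iter)
            + adaptive_fit(np.column_stack([x[mid:], y[mid:]]), degree,
                           resid_thresh, rmse_thresh, min_pts, max_iter))
```

The joint effect is the one the pipeline relies on: gross pixel-level disturbances are removed as outliers, while genuinely complex curves are segmented until each piece admits a smooth, comparable fit.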
2. Related Work
3. Proposed Approach
3.1. Curve Matching
Algorithm 1 The pseudo-code of the curve matching stage.
Algorithm 2 The pseudo-code of segmented matching.
3.2. Feature Matching
4. Experimental Results and Analysis
4.1. Parameters
4.2. Implementation Details
4.3. Datasets
4.4. Analysis of Curve Matching
4.5. Analysis of Feature Matching
4.6. Analysis of Processing Time
4.7. Applications
5. Discussions and Limitations
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
References
- Szeliski, R. Image alignment and stitching: A tutorial. Found. Trends Comput. Graph. Vis. 2007, 2, 1–104.
- Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73.
- Gao, J.; Kim, S.J.; Brown, M.S. Constructing image panoramas using dual-homography warping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 49–56.
- Zaragoza, J.; Chin, T.J.; Brown, M.S.; Suter, D. As-projective-as-possible image stitching with moving DLT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2339–2346.
- Zhang, F.; Liu, F. Parallax-tolerant image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3262–3269.
- Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S.M.; Szeliski, R. Building Rome in a day. Commun. ACM 2011, 54, 105–112.
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Ilg, E.; Saikia, T.; Keuper, M.; Brox, T. Occlusions, motion and depth boundaries with a generic network for disparity, optical flow or scene flow estimation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 614–630.
- Nistér, D. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 756–770.
- Li, B.; Heng, L.; Lee, G.H.; Pollefeys, M. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1595–1601.
- Elqursh, A.; Elgammal, A. Line-based relative pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 3049–3056.
- Hartley, R.I. A linear method for reconstruction from lines and points. In Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 882–887.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642.
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Zitová, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
- Harris, C.G.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417.
- Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 430–443.
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 778–792.
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Danielsson, P.E. Euclidean distance mapping. Comput. Graph. Image Process. 1980, 14, 227–248.
- Bookstein, A.; Kulyukin, V.A.; Raita, T. Generalized Hamming distance. Inf. Retr. 2002, 5, 353–375.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Zhao, J.; Ma, J.; Tian, J.; Zhang, D. A robust method for vector field learning with application to mismatch removing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2977–2984.
- Abdellali, H.; Frohlich, R.; Kato, Z. A direct least-squares solution to multi-view absolute and relative pose from 2D–3D perspective line pairs. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea, 27–28 October 2019; pp. 2119–2128.
- Xiang, T.Z.; Xia, G.S.; Bai, X.; Zhang, L. Image stitching by line-guided local warping with global similarity constraint. Pattern Recognit. 2018, 83, 481–497.
- Li, S.; Yuan, L.; Sun, J.; Quan, L. Dual-feature warping-based motion model estimation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4283–4291.
- Aggarwal, N.; Karl, W.C. Line detection in images through regularized Hough transform. IEEE Trans. Image Process. 2006, 15, 582–591.
- Galamhos, C.; Matas, J.; Kittler, J. Progressive probabilistic Hough transform for line detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; pp. 554–560.
- Fan, B.; Wu, F.; Hu, Z. Robust line matching through line–point invariants. Pattern Recognit. 2012, 45, 794–805.
- Bay, H.; Ferraris, V.; Van Gool, L. Wide-baseline stereo matching with line segments. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 329–336.
- Li, K.; Yao, J.; Lu, X.; Li, L.; Zhang, Z. Hierarchical line matching based on line–junction–line structure descriptor and local homography estimation. Neurocomputing 2016, 184, 207–220.
- Zhang, L.; Koch, R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 2013, 24, 794–805.
- Wang, Z.; Wu, F.; Hu, Z. MSLD: A robust descriptor for line matching. Pattern Recognit. 2009, 42, 941–953.
- Zouqi, M.; Samarabandu, J.; Zhou, Y. Multi-modal image registration using line features and mutual information. In Proceedings of the IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 129–132.
- Lee, J.H.; Zhang, G.; Lim, J.; Suh, I.H. Place recognition using straight lines for vision-based SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3799–3806.
- Belongie, S.; Malik, J.; Puzicha, J. Shape context: A new descriptor for shape matching and object recognition. In Proceedings of the 13th International Conference on Neural Information Processing Systems (NIPS), Denver, CO, USA, 2000; pp. 831–837.
- Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522.
- Liu, D.; Wang, Y.; Tang, Z.; Lu, X. A robust circle detection algorithm based on top-down least-square fitting analysis. Comput. Electr. Eng. 2014, 40, 1415–1428.
- Akinlar, C.; Topal, C. EDPF: A real-time parameter-free edge segment detector with a false detection control. Int. J. Pattern Recognit. Artif. Intell. 2012, 26, 1255002.
- Topal, C.; Akinlar, C. Edge drawing: A combined real-time edge and segment detector. J. Vis. Commun. Image Represent. 2012, 23, 862–872.
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 224–236.
- Liu, Y.; Xu, X.; Li, F. Image feature matching based on deep learning. In Proceedings of the IEEE 4th International Conference on Computer and Communications, Chengdu, China, 7–10 December 2018; pp. 1752–1756.
- Ono, Y.; Trulls, E.; Fua, P.; Yi, K.M. LF-Net: Learning local features from images. In Proceedings of the Thirty-Second Conference on Neural Information Processing Systems, Montréal, QC, Canada, 2–8 December 2018; pp. 6237–6247.
- Yi, K.M.; Trulls, E.; Lepetit, V.; Fua, P. LIFT: Learned invariant feature transform. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 467–483.
- Vedaldi, A.; Fulkerson, B. VLFeat: An open and portable library of computer vision algorithms. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 1469–1472. Available online: http://www.vlfeat.org/ (accessed on 15 March 2021).
- Lourakis, M.I.A. levmar: Levenberg–Marquardt nonlinear least squares algorithms in C/C++. 2004. Available online: http://users.ics.forth.gr/~lourakis/levmar/ (accessed on 12 January 2021).
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580.
Parameter | Value | Parameter | Value
---|---|---|---
 | 0.55 | | 0.6
 | 0.65 | | 0.55
 | 0.3 | | 1.0
 | 0.5 | | 1.0
Method | Number ↑ | RMSE ↓
---|---|---
W/o curve segmentation | 313 | 3.0941
W/o “outlier” removal | 199 | 2.9772
W/o surrounding keypoints | 286 | 2.8371
Ours | 330 | 2.6913
Texture | Scene | Cubic: Number | Cubic: Error | Quartic: Number | Quartic: Error | Quintic: Number | Quintic: Error
---|---|---|---|---|---|---|---
Ordinary | Temple | 314 | 2.837 | 409 | 2.917 | 461 | 2.885
Ordinary | Railtrack | 464 | 3.908 | 472 | 3.988 | 406 | 4.082
Low-texture | Playground | 329 | 2.009 | 506 | 2.542 | 453 | 2.365
Low-texture | Table | 76 | 2.034 | 81 | 2.149 | 78 | 2.114
Repeated-texture | Roof | 328 | 2.517 | 408 | 2.518 | 334 | 2.603
Repeated-texture | Pattern | 1618 | 4.256 | 1801 | 4.512 | 1842 | 4.163
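The degree comparison above can be reproduced in miniature with numpy: for nested least-squares fits, the in-sample RMSE necessarily shrinks (or stays equal) as the degree grows, which is why a selection such as the paper's must balance error against the number and stability of extracted features rather than minimizing fitting error alone. A small sketch on synthetic data (not the paper's curves):

```python
import numpy as np

def fit_rmse(points, degree):
    """RMSE of a least-squares polynomial fit y = poly(x) of given degree."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    resid = np.polyval(coeffs, pts[:, 0]) - pts[:, 1]
    return float(np.sqrt(np.mean(resid ** 2)))

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)  # a gently curved "edge"
errors = {d: fit_rmse(np.column_stack([x, y]), d) for d in (3, 4, 5)}
```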
Scenes are grouped as ordinary (Temple, Railtrack), low-texture (Playground, Table), and repeated-texture (Roof, Pattern).

Method | Temple | Railtrack | Playground | Table | Roof | Pattern
---|---|---|---|---|---|---
SIFT [13] | 279 | 353 | 143 | 64 | 25 | 40
SURF [20] | 291 | 381 | 135 | 68 | 33 | 73
ORB [14] | 264 | 372 | 121 | 49 | 20 | 57
BRISK [23] | 284 | 359 | 160 | 57 | 19 | 67
LIFT [48] | 298 | 377 | 190 | 67 | 134 | 245
SuperPoint [45] | 318 | 427 | 203 | 45 | 162 | 311
LF-Net [47] | 269 | 435 | 256 | 68 | 276 | 477
Ours | 314 | 464 | 329 | 76 | 328 | 1618
Texture | Scene | Curve-Matching Time (s) | Feature-Matching Time (s) | Image Size
---|---|---|---|---
Ordinary | Temple | 55 | 26 | 730 × 487
Ordinary | Railtrack | 69 | 49 | 720 × 540
Low-texture | Playground | 48 | 23 | 720 × 540
Low-texture | Table | 7 | 16 | 1440 × 1080
Repeated-texture | Roof | 63 | 21 | 720 × 540
Repeated-texture | Pattern | 96 | 27 | 600 × 450
Scene | SIFT | Ours | Scene | SIFT | Ours
---|---|---|---|---|---
Temple | 5.55 | 5.09 | Railtrack | 13.41 | 11.81
Playground | 8.39 | 5.81 | Table | 2.77 | 2.41
Roof | 4.10 | 3.84 | Pattern | 29.15 | 26.71
Data | Points | Avg. Error of R | Avg. Error of T
---|---|---|---
Picture | SIFT | 1.1589 | 3.2564
Picture | Ours | 1.2350 | 4.1220
Picture | Both | 0.5563 | 1.8952
Bear | SIFT | 1.3720 | 1.7954
Bear | Ours | 1.3596 | 1.7258
Bear | Both | 0.8962 | 0.8841
Ball | SIFT | 1.1080 | 5.2665
Ball | Ours | 1.8141 | 6.3214
Ball | Both | 1.0245 | 5.2533
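The rotation error in such tables is typically the geodesic angle between the estimated and ground-truth rotation matrices; since the paper's exact formula is not quoted here, the standard trace-based metric is sketched as an assumption:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (in degrees) between two 3x3 rotation matrices:
    the rotation angle of R_est^T @ R_gt, via the trace formula
    cos(theta) = (trace(R) - 1) / 2."""
    R = np.asarray(R_est, dtype=float).T @ np.asarray(R_gt, dtype=float)
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

The clipping guards against floating-point round-off pushing the cosine slightly outside [-1, 1] for near-identical rotations.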
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, H.; Yin, S.; Sui, H.; Yang, Q.; Lei, D.; Yang, W. Accurate Matching of Invariant Features Derived from Irregular Curves. Remote Sens. 2022, 14, 1198. https://doi.org/10.3390/rs14051198