
CN110414558B - Feature point matching method based on event camera - Google Patents

Info

Publication number: CN110414558B
Authority: CN (China)
Prior art keywords: point, feature point, descriptor, points, event
Legal status: Active
Application number: CN201910551377.9A
Other languages: Chinese (zh)
Other versions: CN110414558A (en)
Inventors: 余磊, 陈欣宇, 杨文, 杨公宇, 叶琪霖, 王碧杉, 周立凤
Current Assignee: Wuhan University WHU
Original Assignee: Wuhan University WHU
Application filed by: Wuhan University WHU
Priority/filing date: 2019-06-24
Publication of CN110414558A: 2019-11-05
Granted publication of CN110414558B: 2021-07-20

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method for extracting descriptors from detected feature points and matching the feature points using the generated descriptors. It aims to solve the problem that traditional feature point descriptor algorithms cannot be applied stably to an event camera. By exploiting the event camera's timestamp information when extracting descriptors, the advantages of the event camera are better utilized, the descriptor information is richer, and the matching result is more accurate.

Description

Feature point matching method based on event camera
Technical Field
The invention belongs to the field of image processing and is used to generate feature point descriptors based on an event camera and to perform feature point matching.
Background
Machine vision relies primarily on frame-based cameras, which acquire whole frames at a fixed exposure time and frame rate and store and process image information in matrix form. Such a simple storage format is not ideal for image processing and feature extraction, in large part because grayscale images contain a great deal of redundant information. Pixel intensity information is useful for human viewing and retrieval, but it increases the difficulty of machine-based image processing. Sequential image readout also burdens image processing hardware, because a large amount of unnecessary data must be processed before the desired features are obtained. Moreover, an ordinary camera is sensitive to illumination changes and easily underexposes or overexposes in high-dynamic-range scenes, and the motion blur and poor imaging that an ordinary optical image produces under high-speed motion seriously degrade image quality. The imaging of an ordinary optical camera under high-speed motion is shown in Fig. 1.
Event cameras have attracted increasing attention in the field of machine vision. They are a novel visual sensor that mimics the human retina: each pixel on the array triggers independently on a change in light intensity and outputs the pixel position, the time, and the polarity of the change, so the output of an event camera is not the video frame sequence of a standard camera but a stream of asynchronous events. More specifically, when the brightness increment at pixel position u_j = (x_j, y_j) at time t_j reaches a threshold ±c (c > 0), an event e_j = (x_j, y_j, t_j, p_j) is triggered, where the polarity p_j ∈ {+1, −1} is positive for an increase in brightness and negative for a decrease. Because events record only incremental changes, absolute brightness values are no longer directly observable. These cameras are not limited by a traditional exposure time or frame rate, their time coordinates reach microsecond precision, and they effectively filter out redundant background information while capturing motion information in concentrated form, saving transmission bandwidth and reducing storage pressure. With their high dynamic range, low latency, and low power consumption, they can provide reliable visual information during high-speed motion or in high-dynamic-range scenes. The imaging of an event camera under high-speed motion is shown in Fig. 2.
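For orientation only, the event model above maps directly onto a small data structure. The following Python sketch is illustrative and not part of the patent; the names Event and accumulate_frame are ours.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int      # pixel column
    y: int      # pixel row
    t: float    # timestamp (event cameras reach microsecond precision)
    p: int      # polarity: +1 for brightness increase, -1 for decrease

def accumulate_frame(events, width, height):
    """Sum event polarities per pixel to visualize a batch of events as a frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.p
    return frame
```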
In difficult scenes, such as high-speed motion or rapidly changing illumination, the motion blur and poor imaging of an ordinary optical image make feature point detection, description, and matching very difficult. An event camera, however, can capture the real-time state of a fast-moving object and has a high dynamic range.
Since an event camera outputs individually isolated points, the points that appear in event frames at different moments and under different motion states may not be stable for the same object, so conventional feature point descriptor algorithms cannot be applied stably. Moreover, unlike an ordinary optical image, each output event also carries its own time information, which conventional descriptor algorithms do not exploit. Extracting descriptors for event feature points from the perspective of timestamp information therefore makes better use of the advantages of an event camera.
Disclosure of Invention
The invention generates feature point descriptors based on an event camera and matches feature points according to these descriptors. Inspired by the shape context algorithm, a descriptor generation method tailored to the imaging characteristics of event images is proposed, and feature points are matched according to the generated descriptors. The descriptor generation method differs slightly from the traditional one: because the event camera provides the position, time, and polarity of each event point, the invention exploits the timestamp information, which makes better use of the event camera's advantages.
The technical scheme provided by the invention is a feature point matching method based on an event camera, comprising the following specific steps:
Step 1, taking feature point p_i as the center, establish N concentric circles at logarithmic distance intervals in a local area whose outermost circle radius is R1 and innermost circle radius is R2; that is, the interval (log10(R1), log10(R2)) is divided logarithmically (logspace) into N elements. The region is then divided equally into M sectors along the circumferential direction, generating a grid of bins, as shown in Fig. 4.
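As a concrete illustration of step 1, the log-polar grid can be built with a single logspace call. The sketch below is one possible realization, not the patent's code; the function names are ours, and the defaults anticipate the embodiment's values N = 8, M = 12, R1 = 12, R2 = 1.

```python
import numpy as np

def make_log_polar_bins(N=8, M=12, R1=12.0, R2=1.0):
    """N log-spaced radial rings between R2 (innermost) and R1 (outermost), M angular sectors."""
    r_edges = np.logspace(np.log10(R2), np.log10(R1), N + 1)
    theta_edges = np.linspace(-np.pi, np.pi, M + 1)
    return r_edges, theta_edges

def bin_index(dx, dy, r_edges, theta_edges):
    """Map an offset (dx, dy) from the feature point to a (ring, sector) bin, or None if outside."""
    r = np.hypot(dx, dy)
    if r < r_edges[0] or r > r_edges[-1]:
        return None
    ring = min(np.searchsorted(r_edges, r, side="right") - 1, len(r_edges) - 2)
    sector = min(np.searchsorted(theta_edges, np.arctan2(dy, dx), side="right") - 1,
                 len(theta_edges) - 2)
    return ring, sector
```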
Step 2, compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local area: if t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0".
Step 3, count the number of "1"s in each bin of the local area around feature point p_i, i.e., the distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i; the descriptor size is M × N.
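Steps 2 and 3 together amount to a timestamp-thresholded histogram over the grid. A minimal sketch, reusing bin_index from the previous block and assuming each local event carries (x, y, t) fields:

```python
import numpy as np

def compute_descriptor(px, py, t_p, local_events, r_edges, theta_edges):
    """Histogram, over the M x N bins, of local events newer than the feature point (steps 2-3)."""
    N = len(r_edges) - 1
    M = len(theta_edges) - 1
    h = np.zeros((N, M), dtype=np.int32)
    for (x, y, t) in local_events:
        if t <= t_p:                 # step 2: only events with t_pi < t_qi are coded "1"
            continue                 # events coded "0" contribute nothing to the histogram
        idx = bin_index(x - px, y - py, r_edges, theta_edges)
        if idx is not None:
            h[idx] += 1              # step 3: count the "1"s falling in each bin
    return h.ravel()                 # descriptor h_i(k) of size M * N
```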
Step 4, traverse all the feature points to obtain the descriptor corresponding to each feature point.
Step 5, according to h_i(k), calculate the similarity, i.e., the cost value, between every pair of feature points in the two point sets, and then use the Hungarian algorithm to find the correspondence of the point sets with the lowest overall cost, thereby obtaining the feature point matching relation. The cost value is calculated by the formula:
$$C_{ij} = \frac{1}{2} \sum_{k=1}^{M \times N} \frac{\left[ h_i(k) - h_j(k) \right]^2}{h_i(k) + h_j(k)}$$
wherein k is the kth bin in the descriptor, and h_i(k), h_j(k) denote the descriptors of feature points p_i and q_j, respectively.
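Step 5 can be realized with an off-the-shelf Hungarian solver. A minimal sketch, assuming descriptors are numpy arrays and using scipy.optimize.linear_sum_assignment; the small eps guards against empty bins:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_cost(hi, hj, eps=1e-9):
    """Chi-square cost between two descriptors; eps avoids division by zero in empty bins."""
    hi = hi.astype(np.float64)
    hj = hj.astype(np.float64)
    return 0.5 * np.sum((hi - hj) ** 2 / (hi + hj + eps))

def match_feature_points(descs_a, descs_b):
    """Hungarian assignment minimizing the total chi-square cost between two descriptor sets."""
    cost = np.array([[chi2_cost(a, b) for b in descs_b] for a in descs_a])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```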
Step 6, remove mismatches using a vector consistency method to obtain the best matches.
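The patent does not detail the vector consistency method. One plausible reading, offered here purely as an assumption, is that matches whose displacement vector deviates from the dominant displacement between the two point sets are discarded:

```python
import numpy as np

def filter_by_vector_consistency(pts_a, pts_b, matches, tol=5.0):
    """Keep matches whose displacement vector lies near the median displacement (our assumption).

    pts_a, pts_b: numpy arrays of shape (K, 2) holding (x, y) feature point coordinates.
    """
    disp = np.array([pts_b[j] - pts_a[i] for i, j in matches], dtype=np.float64)
    median = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - median, axis=1) < tol
    return [m for m, ok in zip(matches, keep) if ok]
```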
Compared with the prior art, the invention has the following advantages and beneficial effects: the event points output by an event camera carry their own timestamp information, which the descriptor extraction in existing matching algorithms does not use. Extracting descriptors from feature points using the event camera's timestamp information makes better use of the camera's advantages, so the descriptor information is richer and the matching result is more accurate.
Drawings
Fig. 1 shows the imaging of an ordinary optical camera.
Fig. 2 shows the imaging of an event camera.
Fig. 3 is a flowchart of the descriptor generation method and feature point matching.
Fig. 4 is a diagram of the feature point descriptor grid.
Fig. 5 shows a feature point matching result.
Detailed Description
The invention is mainly based on an event camera and processes event stream data. Considering the characteristics of the event camera, a method for extracting feature point descriptors is provided, and feature point matching is performed according to the descriptors. To make the objects, technical solutions, and advantages of the present invention more apparent, the invention is further described in detail with reference to the following embodiment. It should be understood that the specific examples described herein are intended to be illustrative only and are not intended to be limiting.
As shown in Fig. 3, the feature point matching method based on an event camera provided in the embodiment of the present invention comprises the following implementation steps:
Step 1, set N = 8, M = 12, R1 = 12, and R2 = 1. Taking feature point p_i as the center, establish N concentric circles at logarithmic distance intervals in a local area whose outermost circle radius is R1 and innermost circle radius is R2, and divide the area equally into M sectors along the circumferential direction to generate a grid.
Step 2, compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local area: if t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0". This encodes the event points within the grid.
Step 3, count the number of "1"s in each bin of the local area around feature point p_i, i.e., the distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i.
Step 4, traverse all the feature points to obtain the descriptor corresponding to each feature point.
Step 5, calculate the cost value between every pair of feature points in the two point sets, where the cost value is the χ² test statistic (the chi-square statistic, which measures the deviation between the observed values and the theoretically expected values of a statistical sample). Then use the Hungarian algorithm to find the correspondence of the point sets with the lowest overall cost, thereby obtaining the feature point matching relation.
Step 6, remove mismatches using the vector consistency method to obtain the best matches.
The result of removing mismatches with the vector consistency method is shown in Fig. 5: of 1642 initial match pairs, 1101 pairs remain after mismatch removal, a ratio of 0.6705. As Fig. 5 shows, the method of the invention matches many feature points, the matching lines between the two images run in essentially the same direction, and the matching result is accurate.
The specific embodiments described herein merely illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit of the invention or the scope defined in the appended claims.

Claims (1)

1. A feature point matching method based on an event camera, characterized by comprising the following steps:
step 1, taking feature point p_i as the center, establishing N concentric circles at logarithmic distance intervals in a local area whose outermost circle radius is R1 and innermost circle radius is R2, and then dividing the area equally into M sectors along the circumferential direction to generate a grid;
step 2, comparing the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local area: if t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0";
step 3, counting the number of "1"s in each bin of the local area around feature point p_i, i.e., the distribution histogram h_i(k) of these points over the bins, called the descriptor of feature point p_i;
step 4, traversing all the feature points to obtain the descriptor corresponding to each feature point;
step 5, according to h_i(k), calculating the similarity, i.e., the cost value, between every pair of feature points in the two point sets, and then using the Hungarian algorithm to find the correspondence of the point sets with the lowest overall cost, thereby obtaining the feature point matching relation;
the cost value in step 5 is calculated by the formula
$$C_{ij} = \frac{1}{2} \sum_{k=1}^{M \times N} \frac{\left[ h_i(k) - h_j(k) \right]^2}{h_i(k) + h_j(k)}$$
wherein k is the kth bin in the descriptor, and h_i(k), h_j(k) denote the descriptors of feature points p_i and q_j, respectively;
and step 6, removing mismatches using a vector consistency method to obtain the best matches.
CN201910551377.9A, priority date 2019-06-24, filing date 2019-06-24: Feature point matching method based on event camera. Active. Granted publication: CN110414558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910551377.9A CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910551377.9A CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Publications (2)

Publication Number Publication Date
CN110414558A CN110414558A (en) 2019-11-05
CN110414558B (en) 2021-07-20

Family

ID=68359703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910551377.9A Active CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Country Status (1)

Country Link
CN (1) CN110414558B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696143B (en) * 2020-06-16 2022-11-04 清华大学 Event data registration method and system
CN112367181B (en) * 2020-09-29 2022-10-18 歌尔科技有限公司 Camera network distribution method, device, equipment and medium
CN111931752B (en) * 2020-10-13 2021-01-01 中航金城无人系统有限公司 Dynamic target detection method based on event camera
CN114140365B (en) * 2022-01-27 2022-07-22 荣耀终端有限公司 Event frame-based feature point matching method and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10937239B2 (en) * 2012-02-23 2021-03-02 Charles D. Huston System and method for creating an environment and for sharing an event
CN103727930B (en) * 2013-12-30 2016-03-23 浙江大学 A kind of laser range finder based on edge matching and camera relative pose scaling method
US10702707B2 (en) * 2014-08-01 2020-07-07 CP Studios LLC Hand sanitizer station
CN106934465A (en) * 2017-03-08 2017-07-07 中国科学院上海高等研究院 For the removable calculating storage device and information processing method of civil aviaton's industry
CN109801314B (en) * 2019-01-17 2020-10-02 同济大学 Binocular dynamic vision sensor stereo matching method based on deep learning

Also Published As

Publication number Publication date
CN110414558A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414558B (en) Feature point matching method based on event camera
Zhu et al. A retina-inspired sampling method for visual texture reconstruction
CN106875437B (en) RGBD three-dimensional reconstruction-oriented key frame extraction method
CN111770290A (en) Noise reduction method for dynamic vision sensor output event stream
CN112116582B (en) Method for detecting and identifying cigarettes in inventory or display scene
CN111639580B (en) Gait recognition method combining feature separation model and visual angle conversion model
CN111815715B (en) Calibration method and device of zoom pan-tilt camera and storage medium
CN112232356A (en) Event camera denoising method based on cluster degree and boundary characteristics
WO2023236886A1 (en) Cloud occlusion prediction method based on dense optical flow method
CN109559353A (en) Camera module scaling method, device, electronic equipment and computer readable storage medium
CN114913239A (en) Event camera sensor and RGB camera combined calibration method and device
CN111696044B (en) Large-scene dynamic visual observation method and device
CN103578121B (en) Method for testing motion based on shared Gauss model under disturbed motion environment
CN109784215B (en) In-vivo detection method and system based on improved optical flow method
CN111160107A (en) Dynamic region detection method based on feature matching
CN112561949B (en) Rapid moving object detection algorithm based on RPCA and support vector machine
CN114612507A (en) High-speed target tracking method based on pulse sequence type image sensor
CN111696143B (en) Event data registration method and system
CN110210404B (en) Face recognition method and system
CN111161399B (en) Data processing method and assembly for generating three-dimensional model based on two-dimensional image
CN113411510A (en) Camera automatic exposure algorithm based on image quality evaluation and red hot forging
CN111881841A (en) Face detection and recognition method based on binocular vision
Wu et al. Video surveillance object recognition based on shape and color features
CN114582017A (en) Generation method and generation system of gesture data set and storage medium
CN110765991B (en) High-speed rotating electrical machine fuse real-time detection system based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant