
CN110414558B - Feature point matching method based on event camera - Google Patents

Feature point matching method based on event camera

Info

Publication number
CN110414558B
CN110414558B
Authority
CN
China
Prior art keywords
feature point
point
feature
event
descriptor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910551377.9A
Other languages
Chinese (zh)
Other versions
CN110414558A (en)
Inventor
余磊
陈欣宇
杨文
杨公宇
叶琪霖
王碧杉
周立凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910551377.9A priority Critical patent/CN110414558B/en
Publication of CN110414558A publication Critical patent/CN110414558A/en
Application granted granted Critical
Publication of CN110414558B publication Critical patent/CN110414558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting descriptors for detected feature points and for matching the feature points using the generated descriptors. The purpose of the invention is to solve the problem that traditional feature point descriptor algorithms may not be stably applicable to event cameras. By using the timestamp information of the event camera to extract descriptors for the feature points, the invention better exploits the advantages of the event camera, making the descriptor information richer and the matching results more accurate.

Description

Feature point matching method based on event camera
Technical Field
The invention belongs to the field of image processing, and is used for generating a feature point descriptor based on an event camera and performing feature point matching.
Background
Machine vision relies primarily on frame-based cameras, which acquire entire frames at a fixed exposure time and frame rate and store and process image information in matrix form. Such a simple image storage format may not be ideal for image processing and feature extraction, in large part because grayscale images contain much redundant information. Pixel intensity information is useful for human identification and retrieval, but adds to the difficulty of machine-based image processing. Sequential image readout also burdens image processing hardware, because a large amount of unnecessary data must be processed before the desired features are obtained. Moreover, a conventional camera is sensitive to illumination change: it easily underexposes or overexposes in high-dynamic-range scenes, and the motion blur and poor imaging produced by an ordinary optical camera during high-speed motion seriously degrade image quality. The imaging of an ordinary optical camera in a high-speed motion state is shown in Fig. 1.
Event cameras have gained increasing attention in machine vision. An event camera is a novel visual sensor that mimics the human retina: each pixel on its array independently triggers an output (pixel position, time, and polarity) when the local light intensity changes, so the output of an event camera is not the video frame sequence of a standard camera but a series of asynchronous event streams. More specifically, when at time t_j the brightness increment at pixel position u_j = (x_j, y_j) reaches a threshold ±c (c > 0), an event e_j = (x_j, y_j, t_j, p_j) is triggered, where the polarity p_j ∈ {+1, -1} indicates a brightness increase (+1) or decrease (-1). The event camera therefore outputs an asynchronous stream of events in which absolute brightness values are no longer directly visible, since events record only incremental changes. Such cameras are not limited by traditional exposure time and frame rate, achieve microsecond-level time resolution, effectively filter out redundant background information, capture motion information in a concentrated way, save data transmission bandwidth, and reduce data storage pressure. With their high dynamic range, low latency, and low power consumption, they can provide reliable visual information during high-speed motion or in scenes with a high dynamic range. The imaging of an event camera in a high-speed motion state is shown in Fig. 2.
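For concreteness, the following minimal Python sketch shows one way the asynchronous event stream described above might be represented and accumulated into a binary event frame. The tuple layout follows e_j = (x_j, y_j, t_j, p_j); the fixed-time-window accumulation scheme and the names (Event, accumulate_event_frame) are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Event:
    """One asynchronous event e_j = (x_j, y_j, t_j, p_j)."""
    x: int    # pixel column
    y: int    # pixel row
    t: float  # timestamp (microsecond-level resolution in practice)
    p: int    # polarity: +1 brightness increase, -1 decrease


def accumulate_event_frame(events: List[Event], t0: float, t1: float,
                           height: int, width: int) -> np.ndarray:
    """Accumulate events with t0 <= t < t1 into a binary event frame.

    The fixed-window scheme is an assumption for illustration; the patent
    only refers to event frames observed at different moments.
    """
    frame = np.zeros((height, width), dtype=np.uint8)
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y, e.x] = 1
    return frame
```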
In difficult scenes such as high-speed motion or severe changes in illumination, the motion blur and poor imaging of an ordinary optical camera cause great difficulty for feature point detection, feature point description, and feature point matching. An event camera, however, can capture the real-time state of a high-speed moving object and offers a high dynamic range.
Since the event camera outputs individually isolated points, the points appearing in event frames at different moments and under different motion states may not be stable for the same object, so conventional feature point descriptor algorithms may not apply stably. Moreover, unlike ordinary optical images, each output event point also carries its own time information, which a conventional descriptor algorithm applied directly would not exploit. Extracting descriptors for event feature points from the perspective of timestamp information therefore better exploits the advantages of an event camera.
Disclosure of Invention
The invention generates feature point descriptors based on an event camera and matches feature points according to the descriptors. Inspired by the shape context algorithm, a descriptor generation method tailored to the imaging characteristics of event images is proposed, and feature points are matched according to the generated descriptors. The descriptor generation method differs slightly from the traditional one: because the event camera provides the position, time, and polarity of each event point, the invention uses the timestamp information, which better exploits the advantages of the event camera.
The technical scheme provided by the invention is a feature point descriptor matching method based on an event camera, which comprises the following specific steps:
Step 1: with feature point p_i as the center, R1 as the outermost circle radius, and R2 as the innermost circle radius, establish N concentric circles at logarithmic distance intervals in the local area, i.e., divide (log10(R1), log10(R2)) logarithmically (logspace) into N equal elements; then divide the region into M equal parts along the circumferential direction to generate a grid of bins, as shown in Fig. 4 (a code sketch of steps 1-4 follows step 6).
Step 2: compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local area; if t_pi < t_qi, set that point to "1"; if t_pi > t_qi, set it to "0".
Step 3: count the number of "1"s in each bin within the local area of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i; the descriptor size is M × N.
Step 4: traverse all feature points to obtain the descriptors corresponding to all feature points.
Step 5: according to h_i(k), calculate the similarity between every two feature point sets, i.e., the cost value, then use the Hungarian algorithm to find the set of point correspondences with the lowest overall cost, thereby obtaining the feature point matching relationship. The cost value is calculated by the formula:
C_ij = (1/2) · Σ_k [h_i(k) - h_j(k)]² / [h_i(k) + h_j(k)]
where k is the k-th bin of the descriptor, and h_i(k), h_j(k) denote the descriptors of feature points p_i and q_j, respectively.
Step 6: remove mismatches using a vector consistency method to obtain the best matches.
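As a reading aid, here is a minimal sketch of steps 1-4 under stated assumptions: events in the local area are given as a (K, 2) coordinate array with matching timestamps, the ring edges are log-spaced between R2 and R1 so that the descriptor has M × N bins, and the helper names (extract_descriptor, extract_all) are hypothetical rather than from the patent.

```python
import numpy as np


def extract_descriptor(p, t_p, ev_xy, ev_t, R1=12.0, R2=1.0, N=8, M=12):
    """Steps 1-3: build the M*N log-polar histogram descriptor h_i(k).

    p     : (x, y) coordinates of feature point p_i
    t_p   : timestamp of p_i
    ev_xy : (K, 2) array of event point coordinates
    ev_t  : (K,) array of the corresponding timestamps
    """
    d = ev_xy - np.asarray(p, dtype=float)            # offsets from p_i
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2.0 * np.pi)

    # Step 1: N radial bins with log-spaced edges between R2 and R1,
    # and M equal angular sectors.
    ring_edges = np.logspace(np.log10(R2), np.log10(R1), N + 1)
    in_region = (r >= R2) & (r <= R1)

    # Step 2: an event point is coded "1" when t_pi < t_qi.
    ones = (t_p < ev_t) & in_region

    # Step 3: count the "1"s falling into each of the M*N bins.
    ring_idx = np.clip(np.searchsorted(ring_edges, r[ones], side="right") - 1,
                       0, N - 1)
    sect_idx = np.minimum((theta[ones] / (2.0 * np.pi / M)).astype(int), M - 1)
    h = np.zeros((M, N), dtype=int)
    np.add.at(h, (sect_idx, ring_idx), 1)
    return h.ravel()                                  # h_i(k), length M*N


def extract_all(points, point_ts, ev_xy, ev_t, **kw):
    """Step 4: traverse all feature points and collect their descriptors."""
    return np.stack([extract_descriptor(p, t, ev_xy, ev_t, **kw)
                     for p, t in zip(points, point_ts)])
```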
Compared with the prior art, the advantages and beneficial effects of the invention are as follows: the event points output by an event camera carry their own timestamp information, which descriptor extraction in existing matching algorithms does not use. Extracting descriptors from feature points using the event camera's timestamp information better exploits the advantages of the event camera, making the descriptor information richer and the matching results more accurate.
Drawings
Fig. 1 shows the imaging of an ordinary optical camera.
Fig. 2 shows the imaging of an event camera.
Fig. 3 is a flowchart of the descriptor generation method and feature point matching.
Fig. 4 is a diagram of the feature point descriptor grid (bins).
Fig. 5 shows the feature point matching results.
Detailed Description
The invention is mainly based on an event camera and processes event stream data. Considering the characteristics of the event camera, a method for extracting descriptors of feature points is provided, and feature point matching is performed according to the descriptors. To make the objects, technical solutions, and advantages of the present invention more apparent, the invention is further described in detail with the following examples. It should be understood that the specific examples described here are intended to be illustrative only and are not intended to limit the invention.
As shown in fig. 3, the feature point descriptor matching method based on the event camera provided in the embodiment of the present invention includes the following specific implementation steps:
Step 1: set N = 8, M = 12, R1 = 12, and R2 = 1. With feature point p_i as the center, R1 as the outermost circle radius, and R2 as the innermost circle radius, establish N concentric circles at logarithmic distance intervals in the local area, and divide the area into M equal parts along the circumferential direction to generate a grid.
Step 2: compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local area; if t_pi < t_qi, set that point to "1"; if t_pi > t_qi, set it to "0", i.e., code within the grid.
Step 3: count the number of "1"s in each bin within the local area of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i.
Step 4: traverse all feature points to obtain the descriptors corresponding to all feature points.
Step 5: calculate the cost value between every two feature point sets, where the cost value is the χ² test statistic (a chi-square statistic measuring the deviation between the observed values and the theoretically inferred values of a statistical sample). Use the Hungarian algorithm to find the set of point correspondences with the lowest overall cost, thereby obtaining the feature point matching relationship.
Step 6: remove mismatches using the vector consistency method to obtain the best matches (a code sketch of steps 5 and 6 follows).
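Under the same assumptions, the sketch below illustrates steps 5 and 6: the χ² cost from the disclosure, the Hungarian step via scipy.optimize.linear_sum_assignment, and a mismatch filter. The patent does not spell out its vector consistency criterion, so the median-displacement test here is an illustrative stand-in, not the patented method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def chi2_cost(H1, H2):
    """Step 5 cost: C_ij = 0.5 * sum_k (h_i(k)-h_j(k))^2 / (h_i(k)+h_j(k))."""
    A = H1[:, None, :].astype(float)  # (n1, 1, M*N)
    B = H2[None, :, :].astype(float)  # (1, n2, M*N)
    denom = A + B
    denom[denom == 0] = 1.0           # bins empty in both descriptors add 0
    return 0.5 * ((A - B) ** 2 / denom).sum(axis=-1)


def match(H1, H2, pts1, pts2):
    # Step 5: one-to-one assignment with minimum overall cost (Hungarian).
    rows, cols = linear_sum_assignment(chi2_cost(H1, H2))

    # Step 6 (illustrative stand-in): keep matches whose displacement
    # vector agrees with the median displacement of all matches.
    v = np.asarray(pts2)[cols] - np.asarray(pts1)[rows]
    resid = np.linalg.norm(v - np.median(v, axis=0), axis=1)
    keep = resid < 3.0 * (np.median(resid) + 1e-9)
    return rows[keep], cols[keep]
```

With the embodiment's parameters (N = 8, M = 12, R1 = 12, R2 = 1), H1 and H2 would be the outputs of extract_all from the sketch after step 6 of the disclosure.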
The result of removing mismatches with the vector consistency method is shown in Fig. 5: the initial matching yields 1642 pairs, and 1101 pairs remain after mismatch removal, a ratio of 0.6705. As Fig. 5 shows, the method of the invention matches many feature points, the matching lines between the two images run in essentially the same direction, and the matching result is accurate.
The specific embodiments described here merely illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit of the invention or the scope defined by the appended claims.

Claims (1)

1. A feature point matching method based on an event camera, characterized by comprising the following steps:

Step 1: with feature point p_i as the center, R1 as the outermost circle radius, and R2 as the innermost circle radius, establish N concentric circles at logarithmic distance intervals in the local area, then divide this area into M equal parts along the circumferential direction to generate a grid;

Step 2: compare the timestamp t_pi at feature point p_i with the timestamp t_qi of each event point in the local area; if t_pi < t_qi, set that point to "1"; if t_pi > t_qi, set it to "0";

Step 3: count the number of "1"s in each bin within the local area of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, called the descriptor of feature point p_i;

Step 4: traverse all feature points to obtain the descriptors corresponding to all feature points;

Step 5: according to h_i(k), calculate the similarity between every two feature point sets, i.e., the cost value, then use the Hungarian algorithm to find the set of point correspondences with the lowest overall cost, thereby obtaining the feature point matching relationship; the formula for calculating the cost value in step 5 is

C_ij = (1/2) · Σ_k [h_i(k) - h_j(k)]² / [h_i(k) + h_j(k)]

where k is the k-th bin of the descriptor, and h_i(k), h_j(k) denote the descriptors of feature points p_i and q_j, respectively;

Step 6: remove mismatches using the vector consistency method to obtain the best matches.
CN201910551377.9A 2019-06-24 2019-06-24 Feature point matching method based on event camera Active CN110414558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910551377.9A CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910551377.9A CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Publications (2)

Publication Number Publication Date
CN110414558A CN110414558A (en) 2019-11-05
CN110414558B (en) 2021-07-20

Family

ID=68359703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910551377.9A Active CN110414558B (en) 2019-06-24 2019-06-24 Feature point matching method based on event camera

Country Status (1)

Country Link
CN (1) CN110414558B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696143B (en) * 2020-06-16 2022-11-04 清华大学 A registration method and system for event data
CN112367181B (en) * 2020-09-29 2022-10-18 歌尔科技有限公司 Camera network distribution method, device, equipment and medium
CN111931752B (en) * 2020-10-13 2021-01-01 中航金城无人系统有限公司 Dynamic target detection method based on event camera
CN114140365B (en) * 2022-01-27 2022-07-22 荣耀终端有限公司 Event frame-based feature point matching method and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10937239B2 (en) * 2012-02-23 2021-03-02 Charles D. Huston System and method for creating an environment and for sharing an event
CN103727930B (en) * 2013-12-30 2016-03-23 浙江大学 A kind of laser range finder based on edge matching and camera relative pose scaling method
WO2016019367A2 (en) * 2014-08-01 2016-02-04 Hygenia, LLC Hand sanitizer station
CN106934465A (en) * 2017-03-08 2017-07-07 中国科学院上海高等研究院 For the removable calculating storage device and information processing method of civil aviaton's industry
CN109801314B (en) * 2019-01-17 2020-10-02 同济大学 Binocular dynamic vision sensor stereo matching method based on deep learning

Also Published As

Publication number Publication date
CN110414558A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414558B (en) Feature point matching method based on event camera
Zhu et al. A retina-inspired sampling method for visual texture reconstruction
Duan et al. EventZoom: Learning to denoise and super resolve neuromorphic events
CN108388885B (en) Multi-person close-up real-time identification and automatic screenshot method for large live broadcast scene
CN110223329A (en) A kind of multiple-camera multi-object tracking method
WO2020062393A1 (en) Initial data processing method and system based on machine learning
CN112232356B (en) Event camera denoising method based on group degree and boundary characteristics
CN104240229B (en) A kind of adaptive method for correcting polar line of infrared binocular camera
CN114913239B (en) A method and device for joint calibration of event camera sensor and RGB camera
CN111815715B (en) Calibration method and device of zoom pan-tilt camera and storage medium
TW201944291A (en) Face recognition method
CN112987026A (en) Event field synthetic aperture imaging algorithm based on hybrid neural network
CN110619652A (en) Image registration ghost elimination method based on optical flow mapping repeated area detection
CN111639580A (en) Gait recognition method combining feature separation model and visual angle conversion model
Planamente et al. Da4event: towards bridging the sim-to-real gap for event cameras using domain adaptation
CN114283103A (en) Multi-depth-of-field fusion technology for ultra-high-definition panoramic image in AIT process of manned spacecraft
CN111696044B (en) Large-scene dynamic visual observation method and device
CN103578121B (en) Method for testing motion based on shared Gauss model under disturbed motion environment
CN111696143B (en) A registration method and system for event data
CN109671044A (en) A kind of more exposure image fusion methods decomposed based on variable image
CN102519401B (en) On-line real-time sound film concentricity detection system based on field programmable gate array (FPGA) and detection method thereof
CN111161399B (en) Data processing method and assembly for generating three-dimensional model based on two-dimensional image
CN111160340B (en) A moving target detection method, device, storage medium and terminal equipment
WO2018036241A1 (en) Method and apparatus for classifying age group
CN110430400B (en) Ground plane area detection method of binocular movable camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant