
CN115471682A - Image matching method based on SIFT fusion ResNet50 - Google Patents

Image matching method based on SIFT fusion ResNet50

Info

Publication number
CN115471682A
Authority
CN
China
Prior art keywords
image
sift
feature
points
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211110416.XA
Other languages
Chinese (zh)
Inventor
乔干 (Qiao Gan)
姜显扬 (Jiang Xianyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202211110416.XA
Publication of CN115471682A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/52 Scale-space analysis, e.g. wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image matching method based on SIFT fusion ResNet50, which belongs to the field of digital image processing. The SIFT algorithm is adopted to construct a Gaussian difference pyramid for a reference image and an image to be matched and to determine a scale space; extreme points are located in the constructed scale space and taken as key points; the main direction and gradient value of each key point are calculated to determine the SIFT feature points; the SIFT feature points are described with the deep residual network ResNet50 to obtain feature descriptors; and the Euclidean distance between the feature descriptors of the reference image and of the image to be matched is calculated as their similarity and used to judge whether regions in the reference image and the image to be matched are matched. By adopting the deep residual network ResNet50, the method overcomes the accuracy degradation that appears as a network deepens, avoids the unstable feature description caused by computing neighborhood gradients in SIFT, and addresses the low efficiency of hand-crafted feature descriptors.

Description

Image matching method based on SIFT fusion ResNet50
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to an image matching method based on SIFT fusion ResNet50.
Background
Image matching refers to establishing the correspondence between images of the same scene taken at two different times. It is a fundamental problem in computer vision research and the starting point or basis of computer vision applications such as depth recovery, camera calibration, motion analysis and three-dimensional reconstruction.
Among feature matching methods, point features are the most widely used today. Common feature point extraction algorithms include the Harris operator, the Förstner operator, the SIFT algorithm and edge point extraction based on the wavelet transform. Owing to its unique advantages, the SIFT algorithm is currently the most stable of these. The Scale Invariant Feature Transform (SIFT) algorithm, proposed by David G. Lowe in 1999, is a local image feature description operator built on scale space that remains invariant to image scaling, rotation and even affine transformation. Generating a SIFT feature vector involves four steps: 1. detect extreme points in the scale space; 2. remove low-contrast extreme points and unstable edge extreme points to obtain the feature points; 3. compute the direction parameters of the feature points; 4. generate the SIFT feature point vectors.
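For reference, this classical pipeline is available off the shelf; the following is a minimal Python sketch using OpenCV (assuming OpenCV 4.4 or later, where SIFT ships in the main module, and placeholder file names). It shows only the hand-crafted baseline whose description stage the method below replaces.

```python
import cv2

# Load the reference image and the image to be matched (placeholder paths).
img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("to_match.png", cv2.IMREAD_GRAYSCALE)

# Classical SIFT: keypoint detection plus the hand-designed 128-D descriptor.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the nearest/second-nearest distance ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_pairs if m.distance < 0.6 * n.distance]
print(f"{len(good)} matches passed the ratio test")
```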
When the traditional SIFT algorithm is used to match some images, matching accuracy is low because of problems such as the low efficiency of its manually designed feature descriptor and the small number of matching points obtained.
Disclosure of Invention
The invention provides an image matching method based on SIFT fusion ResNet50, which aims to solve the problems of low feature descriptor efficiency, few matching points and the like that arise when some images are matched with the traditional SIFT algorithm.
In order to solve the above technical problems, the technical scheme provided by the invention is as follows:
the invention relates to an image matching method based on SIFT fusion ResNet50, which comprises the following steps:
S1, inputting two images of a display area as a reference image and an image to be matched;
S2, constructing a Gaussian difference pyramid for the reference image and the image to be matched by adopting the SIFT algorithm, and determining a scale space based on the Gaussian difference pyramid;
S3, locating extreme points in the constructed scale space and taking the extreme points as key points;
S4, calculating the main direction and the gradient value of each key point, and determining SIFT feature points according to the main direction and the gradient value;
S5, describing the SIFT feature points with the deep residual network ResNet50 to obtain feature descriptors;
S6, calculating the Euclidean distance between the feature descriptors of the reference image and of the image to be matched as the similarity of the feature descriptors, and judging, based on this similarity, whether regions in the reference image and the image to be matched belong to the same region.
Preferably, after the two images are input in step S1, the two images are denoised.
Preferably, the specific steps of establishing the image multi-scale space in step S2 are as follows:
for a reference image and an image to be matched, obtaining a scale space through convolution operation respectively, wherein the calculation formula is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y) (1)
in the formula, L (x, y, sigma) is a scale space, x and y are space coordinates, sigma is a scale factor, I (x, y) is a two-dimensional image, and G (x, y, sigma) is a Gaussian kernel function;
the expression of the gaussian kernel function G (x, y, σ) is:
G(x,y,σ) = (1/(2πσ²)) e^(-(x²+y²)/(2σ²))  (2)
the gaussian difference pyramid is obtained by subtracting adjacent gaussian pyramids, and the expression of the gaussian difference pyramid is as follows:
D(x,y,σ)=[G(x,y,kσ)-G(x,y,σ)]*I(x,y) (3)
in the formula, D(x,y,σ) is the Gaussian difference pyramid, G(x,y,kσ) and G(x,y,σ) are the Gaussian kernels at two adjacent scales, and k is the ratio of two adjacent scale factors;
a multi-scale image representation is constructed by changing the value of the scale factor σ, thereby establishing the image multi-scale space.
Preferably, the specific steps of locating extreme points in the image multi-scale space in step S3 and taking them as key points are as follows:
S3.1, comparing each pixel point in the Gaussian difference pyramid with its 8 neighbors in the same plane and the 2 × 9 pixel points in the layers immediately above and below, and taking the points that are the maximum or minimum among them as local extreme points;
and S3.2, removing extreme points with low contrast and unstable edge extreme points to obtain the key points.
Preferably, the step S4 of calculating the principal direction and the gradient value of each key point, and the specific step of determining the SIFT feature points according to the principal direction and the gradient value is as follows:
s4.1, calculating the direction and gradient value of each key point, wherein the calculation formula is as follows:
m(x,y) = √([L(x+1,y) - L(x-1,y)]² + [L(x,y+1) - L(x,y-1)]²)  (4)
θ(x,y) = arctan([L(x,y+1) - L(x,y-1)] / [L(x+1,y) - L(x-1,y)])  (5)
in the formula, m(x,y) is the gradient value and θ(x,y) is the direction of the key point;
S4.2, counting the pixels in the neighborhood of the key point and building a statistical histogram over 0-360 degrees divided into 8 direction bins, each group of directions 45 degrees apart; the horizontal axis of the histogram is the gradient direction and the vertical axis is the number of neighborhood pixels of the key point with that gradient direction; the peak of the histogram is taken as the main direction, and points that simultaneously possess a position, a scale and a direction are defined as SIFT feature points.
Preferably, in step S5, the SIFT feature points are described with the deep residual network ResNet50, and the feature descriptors are obtained in the following way: taking the SIFT feature points in the reference image and the image to be matched as end points, a gray image block is cropped from the surrounding area, down-sampled and input into the ResNet50 network for learning and feature description, yielding the feature descriptor.
Preferably, the calculation formula of the euclidean distance between the reference image and the feature descriptor of the image to be matched in step S6 is as follows:
d = √(Σ_{i=1}^{128} (f1_i - f2_i)²)  (6)
in the formula, f1 is a feature descriptor of the reference image, f2 is a feature descriptor of the image to be matched, 128 indicates that the descriptors are 128-dimensional, and i indexes the descriptor components, ranging from 1 to 128.
Preferably, in step S6, the Euclidean distance between each feature descriptor of the reference image and each feature descriptor of the image to be matched is calculated, and matching is performed according to the matching condition of formula (7):
d_min / d_min2 < e  (7)
where the Euclidean distances d_i between a feature descriptor of the image to be matched and the feature descriptors of the reference image are computed with formula (6), d_min and d_min2 are the nearest and second-nearest of these distances, and e is the matching threshold, taken as 0.6.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
according to the image matching method based on SIFT fusion ResNet50, the depth residual error network ResNet50 is adopted to carry out feature description on SIFT feature points and obtain feature descriptors, the depth residual error network ResNet50 solves the problem that the fitting performance of the network deepening accuracy rate is reduced, the defect that feature description is unstable in SIFT calculation neighborhood gradient is avoided, and the problem that the efficiency of the feature descriptors is low is solved.
Drawings
Fig. 1 is a flowchart of an image matching method based on SIFT fusion ResNet50 according to the present invention.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
Referring to the attached figure 1, the invention relates to an image matching method based on SIFT fusion ResNet50, which comprises the following steps:
s1, inputting two images of a display area as a reference image and an image to be matched, wherein the sizes of the two images are 640 x 480, and denoising the two images;
s2, constructing a Gaussian difference pyramid for the reference image and the image to be matched by adopting an SIFT algorithm, determining a scale space based on the Gaussian difference pyramid, and specifically establishing the image multi-scale space by the following steps:
for a reference image and an image to be matched, obtaining a scale space through convolution operation respectively, wherein the calculation formula is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y) (1)
in the formula, L (x, y, sigma) is a scale space, x and y are space coordinates, sigma is a scale factor of a Gaussian function, I (x, y) is a two-dimensional image, and G (x, y, sigma) is a Gaussian kernel function;
the expression of the gaussian kernel function G (x, y, σ) is:
G(x,y,σ) = (1/(2πσ²)) e^(-(x²+y²)/(2σ²))  (2)
the gaussian difference pyramid is obtained by subtracting adjacent gaussian pyramids, and the expression of the gaussian difference pyramid is as follows:
D(x,y,σ)=[G(x,y,kσ)-G(x,y,σ)]*I(x,y) (3)
in the formula, D (x, y, sigma) is a Gaussian difference pyramid, G (x, y, k sigma) and G (x, y, sigma) represent two adjacent Gaussian kernels, and k is the ratio of two adjacent scale factors;
constructing a multi-scale image representation form by changing the numerical value of the scale factor sigma, and establishing an image multi-scale space;
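A minimal sketch of one octave of this construction, using SciPy's Gaussian filter as a stand-in for the convolution of equation (1); the number of scales per octave and the base σ are assumptions, since the text does not fix them:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, sigma=1.6, num_scales=5, k=2 ** 0.5):
    """Build one octave of the scale space: Gaussian images L(x,y,sigma) and their differences D(x,y,sigma)."""
    image = image.astype(np.float32)
    # Equation (1): L(x,y,sigma) = G(x,y,sigma) * I(x,y), with sigma growing by the factor k.
    gaussians = [gaussian_filter(image, sigma * k ** i) for i in range(num_scales)]
    # Equation (3): adjacent Gaussian images are subtracted to form the difference-of-Gaussian pyramid.
    dogs = [gaussians[i + 1] - gaussians[i] for i in range(num_scales - 1)]
    return gaussians, dogs
```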
s3, positioning extreme points in the constructed scale space, taking the extreme points as key points, and specifically comprising the following steps:
s3.1, comparing each pixel point in the Gaussian pyramid with 8 points adjacent to the plane and 2 x 9 pixel points on the upper layer and the lower layer, and taking a maximum value or a minimum value point in the points as a local extreme value point;
and S3.2, removing extreme points with low contrast and unstable edges to obtain key points.
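A minimal sketch of the 26-neighbor comparison of step S3.1, operating on three adjacent layers of the difference-of-Gaussian pyramid produced above (the contrast and edge rejection of step S3.2 is omitted):

```python
import numpy as np

def local_extrema(dog_below, dog_center, dog_above):
    """Return (row, col) positions that are the maximum or minimum of their 3 x 3 x 3 DoG neighborhood."""
    points = []
    rows, cols = dog_center.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            value = dog_center[r, c]
            cube = np.stack([layer[r - 1:r + 2, c - 1:c + 2]
                             for layer in (dog_below, dog_center, dog_above)])
            # Keep the point if it dominates all 26 neighbors (flat regions are skipped).
            if (value == cube.max() or value == cube.min()) and not np.allclose(cube, value):
                points.append((r, c))
    return points
```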
S4, calculating the main direction and the gradient value of each key point, and determining SIFT feature points according to the main direction and the gradient value, wherein the method specifically comprises the following steps:
s4.1, calculating the direction and gradient value of each key point, wherein the calculation formula is as follows:
m(x,y) = √([L(x+1,y) - L(x-1,y)]² + [L(x,y+1) - L(x,y-1)]²)  (4)
θ(x,y) = arctan([L(x,y+1) - L(x,y-1)] / [L(x+1,y) - L(x-1,y)])  (5)
in the formula, m(x,y) is the gradient value and θ(x,y) is the direction of the key point;
S4.2, counting the pixels in the neighborhood of the key point and building a statistical histogram over 0-360 degrees divided into 8 direction bins, each group of directions 45 degrees apart; the horizontal axis of the histogram is the gradient direction and the vertical axis is the number of neighborhood pixels of the key point with that gradient direction; the peak of the histogram is taken as the main direction, and points that simultaneously possess a position, a scale and a direction are defined as SIFT feature points;
s5, carrying out feature description on the SIFT feature points by adopting a deep residual error network ResNet50 to obtain a feature descriptor, wherein the specific mode is as follows: taking SIFT feature points in a reference image and an image to be matched as end points, cutting gray image blocks with the size of 64 x 64 from the periphery, discarding the image blocks with the size of less than 64 x 64 after cutting, and then performing 32 x 32 down-sampling, namely inputting the image blocks into a ResNet50 network through down-sampling to perform learning and feature description to obtain a related 128-dimensional feature descriptor;
s6, calculating Euclidean distance between the reference image and the feature descriptors of the image to be matched as the similarity of the feature descriptors, judging whether the regions in the reference image and the image to be matched belong to the same region or not based on the similarity,
the calculation formula of the Euclidean distance of the feature descriptor is as follows:
d = √(Σ_{i=1}^{128} (f1_i - f2_i)²)  (6)
in the formula, f1 is a feature descriptor of the reference image, f2 is a feature descriptor of the image to be matched, 128 indicates that the descriptors are 128-dimensional, and i indexes the descriptor components, ranging from 1 to 128.
The Euclidean distances d_i (i = 1, ..., n) between each feature descriptor of the reference image and each feature descriptor of the image to be matched are traversed, and matching is performed according to the matching condition of formula (7),
d_min / d_min2 < e  (7)
where, for a given feature descriptor, d_min and d_min2 are the nearest and second-nearest of the Euclidean distances d_i, and e is the matching threshold, taken as 0.6; the criterion for judging the matching effect is whether the number of matching points increases while no mismatches occur. For each 128-dimensional feature descriptor of the image to be matched, formula (6) is used to compute its Euclidean distances to the feature descriptors of the reference image; the nearest and the second-nearest distances are found, and when the ratio of the nearest distance to the second-nearest distance is below the matching threshold e, the match is accepted.
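A minimal sketch of this matching rule, assuming the descriptors of the two images are stored as NumPy arrays with one 128-D row per SIFT feature point:

```python
import numpy as np

def match_descriptors(desc_ref, desc_tgt, e=0.6):
    """Match 128-D descriptors using the nearest/second-nearest Euclidean distance ratio test of formula (7)."""
    matches = []
    for j, d in enumerate(desc_tgt):
        dists = np.linalg.norm(desc_ref - d, axis=1)  # formula (6) against every reference descriptor
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < e:       # formula (7): accept when the ratio is below e = 0.6
            matches.append((int(order[0]), j))        # (index in reference image, index in image to be matched)
    return matches
```

Each descriptor of the image to be matched is compared against all descriptors of the reference image, mirroring the traversal described above.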
The present invention has been described in detail with reference to the embodiments, but the description is only for the preferred embodiments of the present invention and should not be construed as limiting the scope of the present invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the scope of the present invention.

Claims (8)

1. An image matching method based on SIFT fusion ResNet50, characterized in that it comprises the following steps:
S1, inputting two images of a display area as a reference image and an image to be matched;
S2, constructing a Gaussian difference pyramid for the reference image and the image to be matched by adopting the SIFT algorithm, and determining a scale space based on the Gaussian difference pyramid;
S3, locating extreme points in the constructed scale space and taking the extreme points as key points;
S4, calculating the main direction and the gradient value of each key point, and determining SIFT feature points according to the main direction and the gradient value;
S5, describing the SIFT feature points with the deep residual network ResNet50 to obtain feature descriptors;
and S6, calculating the Euclidean distance between the feature descriptors of the reference image and of the image to be matched as the similarity of the feature descriptors, and judging, based on this similarity, whether regions in the reference image and the image to be matched belong to the same region.
2. The SIFT fusion ResNet 50-based image matching method according to claim 1, wherein: and after the two images are input in the step S1, denoising the two images.
3. The SIFT fusion ResNet 50-based image matching method according to claim 1, wherein: the specific steps of establishing the image multi-scale space in the step S2 are as follows:
for a reference image and an image to be matched, obtaining a scale space through convolution operation respectively, wherein the calculation formula is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y) (1)
in the formula, L (x, y, sigma) is a scale space, x and y are space coordinates, sigma is a scale factor of a Gaussian function, I (x, y) is a two-dimensional image, and G (x, y, sigma) is a Gaussian kernel function;
the expression of the gaussian kernel function G (x, y, σ) is:
G(x,y,σ) = (1/(2πσ²)) e^(-(x²+y²)/(2σ²))  (2)
the gaussian difference pyramid is obtained by subtracting adjacent gaussian pyramids, and the expression of the gaussian difference pyramid is as follows:
D(x,y,σ)=[G(x,y,kσ)-G(x,y,σ)]*I(x,y) (3)
in the formula, D(x,y,σ) is the Gaussian difference pyramid, G(x,y,kσ) and G(x,y,σ) are the Gaussian kernels at two adjacent scales, and k is the ratio of two adjacent scale factors;
and constructing a multi-scale image expression form by changing the numerical value of the scale factor sigma, and establishing an image multi-scale space.
4. The SIFT fusion ResNet 50-based image matching method according to claim 3, wherein: the specific steps of locating an extreme point in the image multi-scale space in step S3 and taking the extreme point as a key point include:
S3.1, comparing each pixel point in the Gaussian difference pyramid with its 8 neighbors in the same plane and the 2 x 9 pixel points in the layers immediately above and below, and taking the points that are the maximum or minimum among them as local extreme points;
and S3.2, removing extreme points with low contrast and unstable edges to obtain key points.
5. The SIFT fusion ResNet 50-based image matching method according to claim 1, wherein: the step S4 of calculating the main direction and the gradient value of each key point and determining the SIFT feature points according to the main direction and the gradient value comprises the following specific steps:
s4.1, calculating the direction and gradient value of each key point, wherein the calculation formula is as follows:
m(x,y) = √([L(x+1,y) - L(x-1,y)]² + [L(x,y+1) - L(x,y-1)]²)  (4)
θ(x,y) = arctan([L(x,y+1) - L(x,y-1)] / [L(x+1,y) - L(x-1,y)])  (5)
in the formula, m(x,y) is the gradient value and θ(x,y) is the direction of the key point;
S4.2, counting the pixels in the neighborhood of the key point and building a statistical histogram over 0-360 degrees divided into 8 direction bins, each group of directions 45 degrees apart; the horizontal axis of the histogram is the gradient direction and the vertical axis is the number of neighborhood pixels of the key point with that gradient direction; the peak of the histogram is taken as the main direction, and points that simultaneously possess a position, a scale and a direction are defined as SIFT feature points.
6. The SIFT fusion ResNet 50-based image matching method according to claim 1, wherein: in step S5, the SIFT feature points are described with the deep residual network ResNet50, and the feature descriptors are obtained in the following way: taking the SIFT feature points in the reference image and the image to be matched as end points, gray image blocks are cropped from the surrounding area, down-sampled and input into the ResNet50 network for learning and feature description, yielding the feature descriptors.
7. The SIFT fusion ResNet 50-based image matching method according to claim 1, wherein: in the step S6, the calculation formula of the euclidean distance between the reference image and the feature descriptor of the image to be matched is as follows:
d = √(Σ_{i=1}^{128} (f1_i - f2_i)²)  (6)
in the formula, f1 is a feature descriptor of the reference image, f2 is a feature descriptor of the image to be matched, 128 indicates that the descriptors are 128-dimensional, and i indexes the descriptor components, ranging from 1 to 128.
8. The SIFT fusion ResNet 50-based image matching method according to claim 7, wherein: in step S6, the Euclidean distance between each feature descriptor of the reference image and each feature descriptor of the image to be matched is calculated, and matching is performed according to the matching condition of formula (7):
d_min / d_min2 < e  (7)
wherein, for a given feature descriptor, d_min and d_min2 are the nearest and second-nearest of the Euclidean distances d_i between the feature descriptors of the reference image and those of the image to be matched, and e is the matching threshold, taken as 0.6.
CN202211110416.XA 2022-09-13 2022-09-13 Image matching method based on SIFT fusion ResNet50 Pending CN115471682A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211110416.XA CN115471682A (en) 2022-09-13 2022-09-13 Image matching method based on SIFT fusion ResNet50

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211110416.XA CN115471682A (en) 2022-09-13 2022-09-13 Image matching method based on SIFT fusion ResNet50

Publications (1)

Publication Number Publication Date
CN115471682A true CN115471682A (en) 2022-12-13

Family

ID=84332858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211110416.XA Pending CN115471682A (en) 2022-09-13 2022-09-13 Image matching method based on SIFT fusion ResNet50

Country Status (1)

Country Link
CN (1) CN115471682A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953332A (en) * 2023-03-15 2023-04-11 四川新视创伟超高清科技有限公司 Dynamic image fusion brightness adjustment method and system and electronic equipment
CN115953332B (en) * 2023-03-15 2023-08-18 四川新视创伟超高清科技有限公司 Dynamic image fusion brightness adjustment method, system and electronic equipment
CN116433887A (en) * 2023-06-12 2023-07-14 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence
CN116433887B (en) * 2023-06-12 2023-08-15 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence
CN117132913A (en) * 2023-10-26 2023-11-28 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117132913B (en) * 2023-10-26 2024-01-26 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching

Similar Documents

Publication Publication Date Title
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN105574534B (en) Conspicuousness object detection method based on sparse subspace clustering and low-rank representation
CN109978839B (en) Method for detecting wafer low-texture defects
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN111583279A (en) Super-pixel image segmentation method based on PCBA
CN108932518B (en) Shoe print image feature extraction and retrieval method based on visual bag-of-words model
Singh et al. Self-organizing maps for the skeletonization of sparse shapes
CN111797744B (en) Multimode remote sensing image matching method based on co-occurrence filtering algorithm
CN113592923B (en) Batch image registration method based on depth local feature matching
CN114529925B (en) Method for identifying table structure of whole line table
CN113808180B (en) Heterologous image registration method, system and device
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN109741358B (en) Superpixel segmentation method based on adaptive hypergraph learning
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN114492619A (en) Point cloud data set construction method and device based on statistics and concave-convex property
CN115731257A (en) Leaf form information extraction method based on image
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN110458812A (en) A kind of similar round fruit defects detection method based on color description and sparse expression
CN113449784A (en) Image multi-classification method, device, equipment and medium based on prior attribute map
CN115937160A (en) Explosion fireball contour detection method based on convex hull algorithm
CN112101283A (en) Intelligent identification method and system for traffic signs
Kang et al. An adaptive fusion panoramic image mosaic algorithm based on circular LBP feature and HSV color system
CN106934395B (en) Rigid body target tracking method adopting combination of SURF (speeded Up robust features) and color features
CN109977892B (en) Ship detection method based on local saliency features and CNN-SVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Qiao Gan

Inventor after: Jiang Xianyang

Inventor after: Wei Bo

Inventor after: Liang Shangqing

Inventor before: Qiao Gan

Inventor before: Jiang Xianyang
