
CN106960442A - Three-dimensional construction method of a large field of view for nighttime robot vision based on monocular infrared - Google Patents

Three-dimensional construction method of a large field of view for nighttime robot vision based on monocular infrared

Info

Publication number
CN106960442A
Authority
CN
China
Prior art keywords
infrared
image
view
images
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710117065.8A
Other languages
Chinese (zh)
Inventor
孙韶媛
黄珍
叶国林
高凯珺
姚广顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201710117065.8A priority Critical patent/CN106960442A/en
Publication of CN106960442A publication Critical patent/CN106960442A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for three-dimensional construction of a large field of view for nighttime robot vision based on monocular infrared. The method comprises two steps: large-field-of-view stitching of infrared images based on point feature operators, and three-dimensional reconstruction of the stitched large-field-of-view infrared image. First, a group of infrared images acquired by horizontal scanning is projected into a cylindrical coordinate space; SIFT features are extracted in the overlapping regions of adjacent images, and the homography transformation matrix is estimated by a feature-point matching optimization algorithm for registration and alignment; a multi-resolution weighting algorithm then fuses the aligned images into a panoramic stitched image. Next, superpixel segmentation and multi-scale feature extraction are performed on the large-field-of-view infrared image, and the segmented image and the feature information are input into a trained plane-parameter Markov model for plane-parameter estimation. Finally, the depth of the large-field-of-view infrared image is estimated and a large-field-of-view infrared three-dimensional model is constructed, yielding a large-field-of-view infrared three-dimensional image.

Description

Three-dimensional construction method of a large field of view for nighttime robot vision based on monocular infrared

Technical Field

The invention relates to a method for three-dimensional construction of a large field of view for nighttime robot vision based on monocular infrared, and belongs to the field of night-vision infrared image processing.

Background Art

Owing to limitations such as the resolution of infrared imaging systems, infrared imaging can only capture image information from a relatively narrow scene ahead. The horizontal field of view of an infrared image is usually within 40°, leaving a large blind area and making it difficult to meet the needs of wide-range observation. Large-field-of-view panoramic visual reconstruction can compensate for this shortcoming: panoramic reconstruction uses image stitching to splice spatial scene images covering a 180° or 360° azimuth into a panoramic image.

The purpose of infrared image stitching is to merge two or more infrared images into a single large-field-of-view infrared image through image registration without reducing resolution. The method reduces redundant information between images while expanding the observation space. Large-field-of-view infrared stitching is generally applied in military fields such as infrared target tracking and recognition.

Large-field-of-view infrared images expand the scope of the scene and help operators grasp the scene information as a whole. Human visual perception, however, relies not only on the surface features of objects but also on spatial depth information to locate the position and size of a target. Studies have shown that the spatial sense and depth information of three-dimensional images reflect real scenes more faithfully, so three-dimensional display of infrared images, which lack depth perception, is of great significance.

Three-dimensional reconstruction of infrared images estimates the depth of the target objects in the scene, reflecting the relative distances of the objects in the infrared image and aiding the understanding of scene content. Extending the three-dimensional reconstruction of a single infrared image to a large infrared field of view displays the scene environment with both depth and a wide view, helping to grasp the entire scene and the depth relations of its targets. It is therefore well suited to visual navigation, mobile surveillance, remote command and similar applications.

At present, infrared image stitching at home and abroad usually adopts feature-based methods that exploit feature points in the two images, such as boundary points, inflection points and corner points. Commonly used feature operators include the Harris operator, the SIFT (Scale-Invariant Feature Transform) and SURF operators, and the ORB operator. The Harris corner detector selects the points with the most pronounced local gray-level changes as feature points, but is not multi-scale invariant. ORB is a local invariant feature descriptor that is invariant to translation, rotation and scaling of photographic images. The local features extracted by the SIFT and SURF operators are scale invariant and highly robust to changes in illumination, noise and small viewpoint changes. The SIFT operator can effectively extract feature points from infrared images, and the extracted points are evenly distributed.

In recent years, single-image modeling has become a research hotspot, but a single image cannot fully reconstruct the corresponding three-dimensional model, so many researchers have proposed exploiting known geometric information in the image for three-dimensional reconstruction. Debevec et al. used the geometric shape information of objects to realize interactive modeling of structured scenes. Zhang et al. proposed generating three-dimensional surfaces from constraints such as the position, contour and creases of an object's surface. Other scholars, starting from projective geometry, proposed applying the vector-operation properties of homogeneous coordinates to intersection fitting to achieve three-dimensional reconstruction from a single image. For infrared images, whose signal-to-noise ratio is low and whose surface texture is very unclear, Shen Zhenyi et al. proposed a three-dimensional reconstruction algorithm for monocular infrared images based on the plane-parameter Markov random field model, which adapts well to infrared images.

Summary of the Invention

The purpose of the invention is to combine superpixel segmentation with a Markov field to obtain the structural and depth information between the small superpixel planes in the scene, derive a depth estimation map of the large field of view, and realize large-field-of-view three-dimensional reconstruction of infrared images.

To achieve the above purpose, the technical solution of the invention provides a method for three-dimensional construction of a large field of view for nighttime robot vision based on monocular infrared, characterized by comprising the following steps:

Step 1. Rotate and capture a series of infrared images covering 180° in the same horizontal plane to obtain a sequence of images.

Step 2. Project the image sequence into a cylindrical coordinate space through the cylindrical projection transformation formula.

Step 3. In the cylindrical coordinate space, select the first infrared image of the sequence as the reference image, and use the LM algorithm to optimize the transformation matrices from the other infrared images of the sequence to the reference image so that the cumulative error of stitching multiple images is minimized. The transformation matrix between any two adjacent infrared images in the sequence is obtained by the following steps:

Step 3.1. Calculate the displacement between two adjacent infrared images by the phase-correlation method to determine their overlapping region, and extract SIFT feature points in the overlapping region; at the same time, generate a weight map from the horizontal image displacement using the fade-in/fade-out method.

Step 3.2. For the SIFT feature points extracted in step 3.1, first perform coarse feature-point matching with a similarity measure to find key matching pairs between adjacent infrared images; then eliminate false matches with the RANSAC (Random Sample Consensus) robust estimation algorithm, and estimate the homography matrix parameters from the correct matching pairs to determine the transformation matrix between adjacent infrared images.

Step 4. Register and align all infrared images of the sequence according to the transformation matrices obtained in step 3, then decompose the infrared images with a multi-resolution method and perform weighted fusion in the multi-scale space using the weight maps obtained in step 3.1, thereby obtaining the large-field-of-view infrared stitched image.

Step 5. Perform three-dimensional reconstruction on the large-field-of-view infrared stitched image and construct the large-field-of-view infrared three-dimensional model.

Preferably, step 5 comprises the following steps:

Step 5.1. Perform superpixel segmentation and multi-scale feature extraction on the large-field-of-view infrared stitched image.

Step 5.2. Input the segmented image obtained in step 5.1 and the feature information obtained by multi-scale feature extraction into the trained plane-parameter Markov model, and estimate the plane parameters.

Step 5.3. According to the plane-parameter Markov model, estimate the depth of the large-field-of-view infrared stitched image and construct the large-field-of-view infrared three-dimensional model.

In view of the shortcomings of existing research and the specific characteristics of infrared images, and building on research into three-dimensional reconstruction from a single image, the invention proposes a large-field-of-view three-dimensional construction method for nighttime robot vision suitable for monocular infrared, so as to extend the three-dimensional reconstruction of a single infrared image to a large infrared field of view and to grasp the entire scene and the depth relations of its targets.

The large-field-of-view infrared image obtained by the invention expands the scope of the scene and helps operators grasp the scene information as a whole; the large-field-of-view three-dimensional reconstruction displays the scene environment with depth and a wide view, helping to understand the entire scene and the depth relations of the targets within it.

Brief Description of the Drawings

Figure 1: flow chart of the large-field-of-view stitching algorithm for infrared images based on point feature operators;

Figure 2: flow chart of the three-dimensional reconstruction of the large-field-of-view infrared image;

Figure 3: the original infrared image sequence and the resulting large-field-of-view infrared image;

Figure 4: schematic diagram of the plane parameters;

Figure 5: results of the three-dimensional reconstruction of the large-field-of-view infrared image according to the invention, where Figure 5(a) is the large-field-of-view infrared image, Figure 5(b) is the depth map of the large-field-of-view infrared image, and Figures 5(c)-5(e) are the large-field-of-view three-dimensional reconstruction results from three different viewpoints.

Detailed Description

To make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

As shown in Figure 1, the invention uses the phase-correlation method to estimate the displacement of the images to be stitched and to determine the overlapping region. The SIFT feature operator then extracts features within the overlapping region, converting global feature matching into matching of local overlapping regions, which greatly reduces feature extraction and matching time while raising the utilization of matching points. A weight map generated from the image displacement is used to synthesize the image in multi-scale space, eliminating blur and stitching seams and thus achieving seamless stitching. For stitching multiple infrared images, the transformation matrix from each infrared image to the reference image is optimized to eliminate the cumulative error caused by cascaded transformations.

As shown in Figure 2, the invention is based on PP-MRF plane-parameter estimation: superpixel segmentation divides the large-field-of-view infrared image into a series of small regional planes that are similar in texture and brightness, and the plane-parameter Markov model is then estimated to obtain the structural and depth information between the small superpixel planes in the scene for large-field-of-view infrared three-dimensional reconstruction.

Each of the key technologies above is described in detail below.

1. SIFT feature point extraction from infrared images

The phase-correlation method computes the cross-power spectrum of two images in the frequency domain; transforming it back to the spatial domain by an inverse Fourier transform yields an impulse response whose peak location gives the displacement between the images, as in Eq. (1):

IFT[ F1(ε,η)F2*(ε,η) / |F1(ε,η)F2*(ε,η)| ] = δ(x-x0, y-y0)  (1)

In Eq. (1), F1(ε,η) denotes the frequency-domain transform of image 1 and F2(ε,η) that of image 2 (F2* is its complex conjugate), ε and η are the frequency variables, x0 is the vertical displacement, y0 is the horizontal displacement, IFT denotes the inverse Fourier transform, and δ(x-x0, y-y0) is the impulse response function, whose peak position (x0, y0) is the displacement between the images.
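As an illustration, Eq. (1) can be evaluated directly with FFTs. The sketch below (plain NumPy, written for this description rather than taken from the embodiment) recovers the cyclic shift between two images from the peak of the inverse-transformed, phase-normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the cyclic shift (rows, columns) of img2 relative to img1.

    Implements Eq. (1): normalize the cross-power spectrum to pure phase,
    inverse-transform it, and take the location of the impulse peak."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cps = np.conj(F1) * F2
    cps /= np.abs(cps) + 1e-12            # keep only the phase term
    corr = np.real(np.fft.ifft2(cps))     # approximates delta(x - x0, y - y0)
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic check: shift a random image by (5, 12) with wrap-around.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, (5, 12), axis=(0, 1))
shift = phase_correlation_shift(a, b)
```

Because the FFT assumes periodic images, the recovered shift is modulo the image size; in the stitching pipeline only the dominant horizontal displacement between adjacent frames is needed.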

The overlapping region can be determined from the displacement between the images, and extracting SIFT feature points only within this region greatly shortens registration time. SIFT (Scale-Invariant Feature Transform) is an algorithm that detects and describes local features in an image: a multi-scale space is built with Gaussian convolution kernels, extreme points are located in this scale space, and their position, scale and rotation-invariant features are extracted. SIFT feature points are therefore not only scale invariant but also highly robust to changes in illumination, noise and small viewpoint changes.

The specific steps of SIFT feature point extraction are: 1. generate the multi-scale space and detect its extreme points; 2. localize the extreme points precisely; 3. determine the dominant orientation of each keypoint; 4. generate the keypoint descriptors.

SIFT feature points are determined by the local extrema of the Difference-of-Gaussians (DoG) scale space, where a local extremum is a maximum (or minimum) element within a 3×3×3 neighborhood. After detection, a three-dimensional quadratic function is fitted to the local extrema to obtain the precise keypoint positions and low-contrast responses are filtered out; finally, the neighborhood window of each keypoint is sampled and the gradient orientations of the neighboring pixels are accumulated to determine the keypoint's dominant orientation.

The SIFT descriptor is a three-dimensional spatial histogram of image gradients that characterizes the appearance of a keypoint. The gradient of each pixel contributes one sample to the basic three-dimensional feature vector, composed of the pixel position and the gradient orientation. The weight of each sample is determined by the gradient norm and accumulated into the three-dimensional histogram h, forming the SIFT descriptor of the region. Orientation is quantized into eight bins. Gradients over eight directions are computed on each 4×4 block to form one seed point; in practice each keypoint is described by 4×4 = 16 seed points to strengthen matching robustness, so that one keypoint yields a 128-dimensional SIFT feature vector.

2. Infrared image registration based on the LM algorithm

For the feature points of the images to be stitched, coarse feature-point matching is first performed with a similarity measure. After the feature point pairs have been matched, the parameters of the inter-image transformation model can be computed from the coordinates of the point pairs. The RANSAC method is robust: it effectively rejects outliers, i.e. false matches, while using the inliers to estimate the parameters of the homography matrix.
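A common similarity measure for the coarse stage is nearest-neighbor search over descriptors with a distance-ratio test; the sketch below illustrates the idea that precedes RANSAC (the toy 2-D descriptors and the 0.8 ratio threshold are assumptions for illustration, not values from the embodiment).

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Coarse feature matching by a similarity measure.

    For each descriptor in desc1 find its nearest and second-nearest
    neighbors in desc2 (Euclidean distance) and accept the pair only if
    the nearest distance is below `ratio` times the second-nearest one,
    which suppresses ambiguous matches before RANSAC."""
    matches = []
    for k, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:
            matches.append((k, int(order[0])))
    return matches

# Toy descriptors: the first two rows of desc2 are clear counterparts of
# desc1; the third candidate matches nothing.
desc1 = np.array([[0.0, 0.0], [10.0, 10.0]])
desc2 = np.array([[0.1, 0.0], [10.0, 10.1], [50.0, 50.0]])
matches = ratio_match(desc1, desc2)
```

In the real pipeline the descriptors would be the 128-dimensional SIFT vectors, and the surviving pairs are handed to RANSAC for homography estimation.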

For stitching multiple infrared images, all infrared images to be stitched must be transformed to one reference image in the sequence. The transformation matrix from an infrared image far from the reference image is usually obtained by cascading, but concatenating the transformations in this way accumulates error, causing large deviations and ghosting in the final panoramic infrared image. The transformation matrix from each infrared image to the reference image therefore needs to be optimized during stitching.

In the experiment, the first infrared image is selected as the reference image. Let Hij denote the transformation matrix from infrared image Ii to the adjacent infrared image Ij, and Hj the transformation matrix from Ij to the reference image; the transformation matrix from Ii to the reference image is then Hi = HijHj. Optimizing the transformation matrix from each infrared image to the reference image minimizes the cumulative error of multi-image stitching.

Given a set of matching points u_k^i ↔ u_l^j, where u_k^i and u_l^j denote the k-th and l-th feature points of images Ii and Ij respectively, the residual r_kl^ij of the corresponding points after transformation to the reference image is computed by Eq. (2):

r_kl^ij = u_k^i - p_kl^ij  (2)

In Eq. (2), p_kl^ij denotes the projection of u_l^j from image Ij into image Ii. Eq. (2) therefore directly measures the distance between matching point pairs of the infrared image Ii and the adjacent image Ij after the reference-image transformation. Summing over all feature matching pairs gives Eq. (3):

e = Σ_i Σ_j Σ_{(k,l)∈S(i,j)} f(r_kl^ij)  (3)

In Eq. (3), e is the total error, S(i,j) is the set of feature matching pairs between infrared images Ii and Ij, and f(x) is an error function, which can be expressed by Eq. (4):

f(x) = |x|^2 for |x| < σ;  f(x) = σ(2|x| - σ) for |x| ≥ σ  (4)

In Eq. (4), σ is initialized with σ = ∞ and finally set to σ = 2 pixels. This is a nonlinear least-squares problem and is therefore solved with the LM (Levenberg-Marquardt) algorithm. Minimizing the error adjusts the homography matrix from each infrared image to the reference image, thereby realizing the adjustment of Hj.
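The σ schedule described here (initialized at ∞, finished at 2 pixels) is the one used with a Huber-style robust error in panorama bundle adjustment; the sketch below assumes that form, quadratic below σ and linear beyond it, as an illustration rather than a quotation of the patented function.

```python
def robust_error(x, sigma):
    """Robust error: |x|^2 for |x| < sigma, sigma*(2|x| - sigma) otherwise.

    The two branches join continuously at |x| = sigma (both give sigma^2),
    and the linear growth beyond sigma limits the influence of outlier
    matches on the LM optimization."""
    ax = abs(x)
    return ax * ax if ax < sigma else sigma * (2.0 * ax - sigma)

# With sigma = 2 pixels, the final value used in the text:
inlier = robust_error(1.0, 2.0)    # quadratic branch: 1.0
outlier = robust_error(3.0, 2.0)   # linear branch: 2*(6 - 2) = 8.0, not 9.0
```

With σ = ∞ every residual falls on the quadratic branch, so the first optimization pass is an ordinary least-squares fit; tightening σ afterwards down-weights the remaining outliers.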

3. Large-field-of-view stitching of infrared images

After registration and alignment, a multi-resolution method decomposes the images into multi-scale spaces, a weighted fusion operator fuses them at each scale, and image reconstruction then yields a complete, seamless composite image. Let the images to be stitched be I1 and I2, and let R denote their overlapping region; then:

C(x, y, σ) = W1(x, y, σ)·LI1(x, y, σ) + W2(x, y, σ)·LI2(x, y, σ)  (5)

LI(x, y, σ) = GI(x, y, kσ) - GI(x, y, σ)  (6)

GI(x, y, σ) = I(x, y) * G(x, y, σ)  (7)

In Eqs. (5)-(7), L and G denote the Laplacian and Gaussian pyramids respectively, σ is the corresponding scale space, and k is the product factor. C(x, y, σ) denotes the fused pyramid image obtained with the weighting operator at scale σ, and Wi denotes the weight map of image i. Reconstructing the image from C(x, y, σ) yields the fused image.
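The multi-resolution fusion of Eqs. (5)-(7) can be sketched as follows. This is a simplified, assumed implementation: Gaussian-filtered band-pass levels stand in for true sampled pyramids (SciPy's `gaussian_filter` replaces the pyramid construction), and the level count and scale factor are illustrative defaults, not values from the embodiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiband_blend(img1, img2, w1, levels=3, k=2.0):
    """Fuse two aligned images band by band, following Eqs. (5)-(7).

    Each band-pass level L_I = G_I(k*sigma) - G_I(sigma) of the two images
    is combined with the weight map (w1 for img1, 1 - w1 for img2), the
    weight map being smoothed more strongly at coarser bands; summing the
    fused bands plus the fused low-pass residual reconstructs the image."""
    img1 = img1.astype(float)
    img2 = img2.astype(float)
    w1 = w1.astype(float)
    out = np.zeros_like(img1)
    prev1, prev2, sigma = img1, img2, 1.0
    for _ in range(levels):
        g1 = gaussian_filter(img1, sigma)
        g2 = gaussian_filter(img2, sigma)
        w = gaussian_filter(w1, sigma)
        out += w * (prev1 - g1) + (1.0 - w) * (prev2 - g2)  # fused band, Eq. (5)
        prev1, prev2 = g1, g2
        sigma *= k
    w = gaussian_filter(w1, sigma / k)
    out += w * prev1 + (1.0 - w) * prev2                    # fused low-pass residual
    return out

# Sanity check: with identical inputs the fused bands telescope back to
# the input image regardless of the weight map.
rng = np.random.default_rng(1)
a = rng.standard_normal((16, 16))
w = rng.random((16, 16))
fused = multiband_blend(a, a, w)
```

Blending low frequencies with broadly smoothed weights while keeping sharp weights for fine detail is what removes the visible seam without ghosting the overlap region.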

To preserve the spatial constraints between objects in the real scene, this embodiment first maps the original infrared image sequence onto a unified cylindrical coordinate space to obtain a cylindrical image sequence, and then stitches it into the large-field-of-view infrared image following the procedures of sections 1-3 above; the result is shown in Figure 3.
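The cylindrical mapping can be sketched as a per-pixel coordinate transform; the formula below is a standard forward cylindrical warp, given here as an assumption for illustration since the embodiment does not spell out its projection formula (the focal length f in pixels and the image center are likewise made-up parameters).

```python
import numpy as np

def cylindrical_coords(x, y, xc, yc, f):
    """Map image coordinates (x, y) onto a cylinder of radius f.

    theta is the azimuth of the viewing ray and h its normalized height;
    scaling both by f keeps the result in pixel units around the image
    center (xc, yc)."""
    theta = np.arctan2(x - xc, f)
    h = (y - yc) / np.hypot(x - xc, f)
    return f * theta + xc, f * h + yc

# The image center is a fixed point of the warp, and columns far from the
# center are compressed toward it.
xc, yc, f = 160.0, 120.0, 300.0
cx, cy = cylindrical_coords(xc, yc, xc, yc, f)
```

Because every frame of a horizontally rotating camera lands on the same cylinder, adjacent warped frames are related by an almost pure horizontal translation, which is what makes the phase-correlation step effective.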

4. Superpixel segmentation and multi-scale feature extraction

The superpixel segmentation method describes regional information well and is robust to image noise, occlusion and shadows. The pixels within each superpixel region share similar attributes, and a Markov model built over these superpixel planes is used to estimate the image depth information. Superpixel segmentation divides the infrared image into small pixel blocks, and features are then extracted from each block to obtain the shape, brightness and texture within the region as well as the connection relations between the block and its surrounding superpixel blocks. This embodiment uses Laws' masks for multi-scale feature extraction on the superpixel blocks.
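Texture features of the kind Laws' masks produce can be sketched as follows: nine 3×3 masks are formed as outer products of the 1-D level/edge/spot kernels, and the texture energy of a patch is the aggregate absolute filter response. The 3×3 mask size and this energy definition are illustrative assumptions; the embodiment does not specify its exact mask bank.

```python
import numpy as np

# 1-D Laws kernels: L3 (level), E3 (edge), S3 (spot).
L3 = np.array([1.0, 2.0, 1.0])
E3 = np.array([-1.0, 0.0, 1.0])
S3 = np.array([-1.0, 2.0, -1.0])

def laws_masks():
    """Nine 3x3 2-D masks: outer products of the three 1-D kernels."""
    kernels = [L3, E3, S3]
    return [np.outer(a, b) for a in kernels for b in kernels]

def texture_energy(patch, mask):
    """Sum of absolute valid-convolution responses of `patch` under `mask`."""
    h, w = mask.shape
    H, W = patch.shape
    resp = [np.sum(patch[i:i + h, j:j + w] * mask)
            for i in range(H - h + 1) for j in range(W - w + 1)]
    return float(np.sum(np.abs(resp)))
```

Zero-sum masks (any product involving E3 or S3) respond only to structure, not to the mean gray level, so a flat region yields zero energy under them; applying the bank at several smoothing scales gives the multi-scale feature vector of each superpixel.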

The features extracted in this way capture the texture, gray level, orientation gradient, position and shape of the superpixel itself as well as the relations between adjacent superpixels, and thus reflect both the local and the global characteristics of the image.

5. Large-field-of-view three-dimensional reconstruction based on the plane-parameter Markov model

After feature extraction, the corresponding plane parameters are estimated to infer the position and orientation of each superpixel plane.

The plane parameter α is defined such that any point q on the superpixel plane satisfies α^T q = 1. A schematic of the plane parameters is shown in Figure 4, where 1/|α| is the shortest distance from the camera center to the plane and Ri is the unit direction vector from the camera center to pixel i on the plane; the distance from point i to the camera center is then di = 1/(Ri^T α).
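The depth relation implied by this plane parameterization can be sketched in a few lines; the fronto-parallel plane at depth 5 below is a made-up numeric example for illustration.

```python
import numpy as np

def depth_from_plane(alpha, rays):
    """Depth of each pixel from the plane parameter alpha.

    Any point q on the plane satisfies alpha^T q = 1; a point at depth d
    along the unit ray R is q = d * R, so alpha^T (d R) = 1 gives
    d = 1 / (R^T alpha)."""
    return 1.0 / (rays @ alpha)

# Fronto-parallel plane z = 5  =>  alpha = (0, 0, 1/5).
alpha = np.array([0.0, 0.0, 0.2])
rays = np.array([
    [0.0, 0.0, 1.0],                              # looking straight ahead
    [0.0, 1.0 / np.sqrt(2), 1.0 / np.sqrt(2)],    # 45 degrees upward
])
depths = depth_from_plane(alpha, rays)
```

One vector α per superpixel thus encodes both the orientation of the plane and the depth of every pixel on it, which is why the MRF below is posed over the α's rather than over per-pixel depths.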

To reconstruct the monocular infrared image in three dimensions, the structural characteristics between superpixel planes, such as adjacency, coplanarity and collinearity, must also be considered, and the following Markov model is established:

P(α | X, v, R; θ) = (1/Z) Π_i f1(αi | Xi, vi, Ri; θ) Π_{i,j} f2(αi, αj | yij, Ri, Rj)  (8)

In Eq. (8), αi is the plane parameter of superpixel i; assuming superpixel block i contains Si pixels in total, Xi denotes the features of the pixels si of superpixel i, Ri denotes the set of unit direction vectors from the camera center to each pixel si of superpixel i, v denotes the confidence with which the plane parameters can be inferred from the local features, and Z is a normalization constant.

The first term, the conditional probability density function f1(αi | Xi, vi, Ri; θ), models the relation between the plane parameter αi and the local pixel features Xi, where θ is the parameter to be learned. The second term, f2(αi, αj | yij, Ri, Rj), models the case where a closed boundary curve exists between superpixels i and j, for which the adjacency, coplanarity and collinearity of the planes must be considered. Let pixels si and sj come from superpixels i and j respectively; f2 is then defined through functions h(·) over the pixel pairs {si, sj}.

Different h(·) functions and pixel pairs {si, sj} are used to decide whether adjacency, coplanarity, and collinearity hold between superpixel blocks.
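One possible form of such a pairwise term can be sketched as follows; the specific penalty exp(−v·|Δd|) over shared boundary rays is an illustrative stand-in for the h(·) functions, not the exact definition used in the PP-MRF model:

```python
import numpy as np

def pairwise_compatibility(alpha_i, alpha_j, boundary_rays, v=1.0):
    """Illustrative h(.)-style term for two adjacent superpixel planes.

    For each boundary-pixel ray R, the two plane hypotheses predict depths
    1/(R.alpha_i) and 1/(R.alpha_j); the score decays with their disagreement,
    so it equals 1 when the planes meet exactly along the shared boundary.
    """
    R = np.asarray(boundary_rays, dtype=float)
    d_i = 1.0 / (R @ np.asarray(alpha_i, dtype=float))
    d_j = 1.0 / (R @ np.asarray(alpha_j, dtype=float))
    return float(np.exp(-v * np.mean(np.abs(d_i - d_j))))

rays = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
same = pairwise_compatibility([0, 0, 0.5], [0, 0, 0.5], rays)   # identical planes
diff = pairwise_compatibility([0, 0, 0.5], [0, 0, 0.25], rays)  # z = 2 vs z = 4
```

Identical planes score 1, while planes that disagree along the boundary score strictly lower, which is the behavior the adjacency/coplanarity terms are meant to encode.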

The PP-MRF model takes the form shown above. It is trained on the training images and corresponding depth images available from the official website of the computer science school of Cornell University, yielding the model parameters θ. In this embodiment, the trained PP-MRF model is applied directly to depth estimation and three-dimensional reconstruction of the infrared wide-field image.

Claims (3)

1. A monocular-infrared-based night robot vision wide-field three-dimensional construction method, characterized by comprising the following steps:
Step 1: rotate the camera through 180° in a fixed horizontal plane while capturing a series of infrared images, obtaining a sequence of images;
Step 2: project the image sequence into a cylindrical coordinate space using the cylindrical projection transformation formula;
Step 3: in the cylindrical coordinate space, select the first infrared image of the sequence as the reference image, and optimize, by the Levenberg-Marquardt (LM) algorithm, the transformation matrices from the other infrared images of the sequence to the reference image, wherein the transformation matrix between any two adjacent infrared images of the sequence is obtained by the following steps:
Step 3.1: compute the displacement between the two adjacent infrared images by the phase correlation method to determine their overlapping region, extract SIFT features within the overlapping region, and at the same time generate a weight map from the horizontal image displacement using the fade-in/fade-out blending method;
Step 3.2: coarsely match the SIFT features extracted in step 3.1 using a similarity measure to find the key matching pairs between the adjacent infrared images, then eliminate false matches with the RANSAC robust estimation algorithm, and estimate the homography matrix parameters from the correct matching pairs to determine the transformation matrix between the adjacent infrared images;
Step 4: register and align all infrared images of the sequence according to the transformation matrices obtained in step 3, then decompose the infrared images by a multiresolution method and perform weighted fusion in the multiscale space using the weight maps obtained in step 3.1, thereby obtaining a wide-field infrared mosaic image;
Step 5: perform three-dimensional reconstruction on the wide-field infrared mosaic image to build an infrared wide-field three-dimensional model.
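The displacement estimation of step 3.1 can be sketched with a standard phase-correlation computation (the synthetic 64×64 image and the helper name are illustrative):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two overlapping images
    (step 3.1) from the peak of the normalized cross-power spectrum."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap shifts larger than half the image size to negative values.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
base = rng.random((64, 64))
shifted = np.roll(base, (5, -9), axis=(0, 1))  # known displacement
dy, dx = phase_correlation_shift(shifted, base)
```

The recovered (dy, dx) equals the known (5, −9) displacement; in the method this displacement fixes the overlapping region in which SIFT features are then extracted.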
2. The monocular-infrared-based night robot vision wide-field three-dimensional construction method according to claim 1, characterized in that said step 5 comprises the following steps:
Step 5.1: perform superpixel segmentation and multiscale feature extraction on the wide-field infrared mosaic image;
Step 5.2: input the segmented image obtained in step 5.1 and the feature information obtained by the multiscale feature extraction into the trained panel-parameter Markov random field model and estimate the panel parameters;
Step 5.3: according to the panel-parameter Markov random field model, perform depth estimation on the wide-field infrared mosaic image and build the infrared wide-field three-dimensional model.
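Step 5.3 turns the estimated depths into a three-dimensional model; a minimal back-projection sketch under an assumed pinhole camera model (the intrinsics fx, fy, cx, cy and the toy depth map are illustrative, not values from the embodiment) is:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Turn a per-pixel depth map into a 3-D point cloud with a pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)

depth = np.full((4, 4), 2.0)   # toy fronto-parallel plane at Z = 2
pts = backproject_depth(depth, fx=100.0, fy=100.0, cx=1.5, cy=1.5)
```

Each pixel yields one 3-D point; a fronto-parallel depth map maps to a planar point cloud at the given depth, which is the per-superpixel geometry the panel parameters encode.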
3. The monocular-infrared-based night robot vision wide-field three-dimensional construction method according to claim 1, characterized in that, in said step 3, for an infrared image Ii and its adjacent infrared image Ij, the transformation matrix Hj from infrared image Ij to the reference image is optimized as follows:
Step 3-1: let Hij denote the transformation matrix from infrared image Ii to the adjacent infrared image Ij, and let the transformation matrix from infrared image Ij to the reference image be Hj; the transformation matrix from infrared image Ii to the reference image is then Hi = HijHj;
Step 3-2: given a set of matched points {uik, ujl}, i.e. feature matching pairs, where uik and ujl denote the k-th and l-th SIFT feature points in infrared images Ii and Ij respectively, the error rijk of the corresponding points after transformation to the reference image is computed as
rijk = ||uik − uijk|| = ||uik − Hi−1Hjujl||
where uijk = Hi−1Hjujl denotes the projection of ujl from image Ij to image Ii;
Step 3-3: summing over all feature matching pairs gives
e = Σk∈S(i,j) f(rijk)
where e is the total error, S(i, j) is the set of feature matching pairs between infrared image Ii and infrared image Ij, and f(x) is an error function expressed as
f(x) = |x|2 if |x| < σ, and f(x) = 2σ|x| − σ2 otherwise;
in this formula σ is initialized to σ = ∞, the LM algorithm is used to solve, and the homography matrix from each infrared image to the reference image is adjusted by minimizing the error value, thereby realizing the adjustment of Hj.
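The error accumulation of steps 3-2 and 3-3 can be sketched as follows; the Huber-style robust function used here is one common choice consistent with the σ = ∞ initialization, and all homography values and point coordinates in the example are illustrative:

```python
import numpy as np

def project(H, pt):
    """Apply a 3x3 homography to a 2-D point in homogeneous coordinates."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]

def alignment_error(Hi, Hj, matches, sigma=np.inf):
    """Total residual e = sum_k f(r_ij^k) for matches between images I_i, I_j.

    Each match is (u_i, u_j); u_j is mapped into I_i via Hi^-1 Hj and the
    reprojection distance is accumulated through a robust error f (quadratic
    below sigma, linear above; sigma = inf makes it purely quadratic,
    matching the sigma = inf initialization of step 3-3)."""
    Hi_inv = np.linalg.inv(Hi)
    e = 0.0
    for u_i, u_j in matches:
        r = np.linalg.norm(np.asarray(u_i) - project(Hi_inv @ Hj, u_j))
        e += r**2 if r < sigma else 2.0 * sigma * r - sigma**2
    return e

# Exact correspondences under a known translation homography give e = 0:
Hj = np.eye(3)
Hi = np.array([[1, 0, 5], [0, 1, 0], [0, 0, 1]], dtype=float)  # I_i shifted 5 px
matches = [((15.0, 7.0), (20.0, 7.0)), ((3.0, 4.0), (8.0, 4.0))]
e = alignment_error(Hi, Hj, matches)
```

With consistent homographies and exact matches the total error vanishes; the LM optimization of step 3 searches for the Hj that drives this quantity toward its minimum.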
CN201710117065.8A 2017-03-01 2017-03-01 Based on the infrared night robot vision wide view-field three-D construction method of monocular Pending CN106960442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710117065.8A CN106960442A (en) 2017-03-01 2017-03-01 Based on the infrared night robot vision wide view-field three-D construction method of monocular


Publications (1)

Publication Number Publication Date
CN106960442A true CN106960442A (en) 2017-07-18

Family

ID=59470787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710117065.8A Pending CN106960442A (en) 2017-03-01 2017-03-01 Based on the infrared night robot vision wide view-field three-D construction method of monocular

Country Status (1)

Country Link
CN (1) CN106960442A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767410A (en) * 2017-10-27 2018-03-06 中国电子科技集团公司第三研究所 The multi-band image method for registering of the multispectral system acquisition of polyphaser parallel optical axis
CN107833181A (en) * 2017-11-17 2018-03-23 沈阳理工大学 A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
CN109166077A (en) * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and device, readable storage medium and computer equipment
CN109598674A (en) * 2017-09-30 2019-04-09 杭州海康威视数字技术股份有限公司 A kind of image split-joint method and device
CN109685838A (en) * 2018-12-10 2019-04-26 上海航天控制技术研究所 Image elastic registrating method based on super-pixel segmentation
CN109978760A (en) * 2017-12-27 2019-07-05 杭州海康威视数字技术股份有限公司 A kind of image split-joint method and device
CN110443093A (en) * 2019-07-31 2019-11-12 安徽大学 One kind is towards intelligentized infrared digital panorama system and its warehouse management method
CN110533589A (en) * 2019-07-18 2019-12-03 上海大学 A kind of threedimensional model joining method based on zoom micro-image sequence
CN110544202A (en) * 2019-05-13 2019-12-06 燕山大学 A parallax image stitching method and system based on template matching and feature clustering
CN110750874A (en) * 2019-09-26 2020-02-04 长沙理工大学 A method for life prediction of retired power battery
CN110827392A (en) * 2018-08-31 2020-02-21 金钱猫科技股份有限公司 Monocular image three-dimensional reconstruction method, system and device with good scene usability
CN111223073A (en) * 2019-12-24 2020-06-02 乐软科技(北京)有限责任公司 A virtual detection system
CN111861866A (en) * 2020-06-30 2020-10-30 国网电力科学研究院武汉南瑞有限责任公司 A panorama reconstruction method of substation equipment inspection image
CN111862179A (en) * 2019-04-12 2020-10-30 北京城市网邻信息技术有限公司 Three-dimensional object modeling method and apparatus, image processing device, and medium
CN112102169A (en) * 2020-09-15 2020-12-18 合肥英睿系统技术有限公司 Infrared image splicing method and device and storage medium
CN112270755A (en) * 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 Three-dimensional scene construction method and device, storage medium and electronic equipment
CN112669355A (en) * 2021-01-05 2021-04-16 北京信息科技大学 Method and system for splicing and fusing focusing stack data based on RGB-D super-pixel segmentation
CN113066173A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Three-dimensional model construction method and device and electronic equipment
CN114037755A (en) * 2021-09-26 2022-02-11 国网江西省电力有限公司电力科学研究院 Instrument panel key data accurate positioning method based on scale invariant feature transformation
CN114072837A (en) * 2020-06-29 2022-02-18 深圳市大疆创新科技有限公司 Infrared image processing method, device, equipment and storage medium
CN115311293A (en) * 2022-10-12 2022-11-08 南通东鼎彩印包装厂 Rapid matching method for printed matter pattern
WO2023173572A1 (en) * 2022-03-17 2023-09-21 浙江大学 Real-time panoramic imaging method and device for underwater cleaning robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1975323A (en) * 2006-12-19 2007-06-06 南京航空航天大学 Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot
CN101068016A (en) * 2007-06-11 2007-11-07 浙江大学 A photoelectric system that realizes seamless splicing of multiple CCDs
CN101320048A (en) * 2008-06-30 2008-12-10 河海大学 Large-field-of-view vehicle speed measuring device with fan-shaped arrangement of multiple charge-coupled device image sensors
CN102708569A (en) * 2012-05-15 2012-10-03 东华大学 Monocular infrared image depth estimating method on basis of SVM (Support Vector Machine) model
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feather transform (SIFT) feature points
CN105678721A (en) * 2014-11-20 2016-06-15 深圳英飞拓科技股份有限公司 Method and device for smoothing seams of panoramic stitched image
CN105741375A (en) * 2016-01-20 2016-07-06 华中师范大学 Large-visual-field binocular vision infrared imagery checking method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
SAXENA A等: "Learning 3-D Scene Structure from a Single Still Image", 《IEEE TRANSACTION ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
兰红 等: "面向全景拼接的图像配准技术研究及应用", 《计算机工程与科学》 *
吴凡路 等: "嫦娥三号全景相机图像全景镶嵌方法的研究", 《光学学报》 *
沈振一 等: "基于PP-MRF模型的单目车载红外图像三维重建", 《东华大学学报(自然科学版)》 *
霍春宝 等: "SIFT特征匹配的显微全景图拼接", 《辽宁工程技术大学学报(自然科学版)》 *


Similar Documents

Publication Publication Date Title
CN106960442A (en) Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN107093205B (en) A kind of three-dimensional space building window detection method for reconstructing based on unmanned plane image
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
Zhu et al. Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features
CN105139412A (en) Hyperspectral image corner detection method and system
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
EP1016038A2 (en) Method and apparatus for performing global image alignment using any local match measure
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Wan et al. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN106204507B (en) Unmanned aerial vehicle image splicing method
Ruan et al. Image stitching algorithm based on SURF and wavelet transform
CN101661623B (en) Three-dimensional tracking method of deformable body based on linear programming
CN115272450A (en) Target positioning method based on panoramic segmentation
Chen et al. Image stitching algorithm research based on OpenCV
CN111047513B (en) Robust image alignment method and device for cylindrical panorama stitching
CN113223066A (en) Multi-source remote sensing image matching method and device based on characteristic point fine tuning
Liu et al. Grid: Guided refinement for detector-free multimodal image matching
WO2022253043A1 (en) Facial deformation compensation method for facial depth image, and imaging apparatus and storage medium
Ran et al. High-precision human body acquisition via multi-view binocular stereopsis
CN118918099A (en) Computer vision measuring method for vibration of label-free structure
CN109215122B (en) A street view three-dimensional reconstruction system and method, intelligent car
Shunzhi et al. Image feature detection algorithm based on the spread of hessian source
Feng et al. Registration of multitemporal GF-1 remote sensing images with weighting perspective transformation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20170718