
CN107392950A - A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection - Google Patents

A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection

Info

Publication number
CN107392950A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710631310.7A
Other languages
Chinese (zh)
Inventor
卢迪
张美玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201710631310.7A priority Critical patent/CN107392950A/en
Publication of CN107392950A publication Critical patent/CN107392950A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention, a cross-scale cost aggregation stereo matching method based on weak texture detection, belongs to the field of computer vision and in particular relates to stereo matching of weakly textured images. The method comprises the following steps: input two color images, a left image and a right image, and perform weak texture detection and segmentation on the images using the gradient information of the left image; compute the matching cost from the color information and gradient information of the left and right images; taking the above weak texture detection and segmentation result as a reference, perform intra-scale and cross-scale cost aggregation based on Gaussian filtering; compute the disparity using a winner-take-all strategy; refine the disparity using left-right consistency detection and an adaptive-weight-based method, and output the disparity image. The invention achieves the technical aim of improving the matching accuracy in weakly textured regions while preserving the matching accuracy in textured regions, thereby obtaining a better disparity map.

Description

A Cross-Scale Cost Aggregation Stereo Matching Method Based on Weak Texture Detection

Technical Field

The present invention, a cross-scale cost aggregation stereo matching method based on weak texture detection, belongs to the field of computer vision and in particular relates to a stereo matching method for weakly textured images.

Background Art

Binocular stereo vision is an important branch of computer vision. Based on the parallax principle, it uses imaging devices to capture two images of a measured object from different positions and recovers the three-dimensional geometry of the object by computing the positional offset between corresponding image points. The quality of the recovered 3D information depends mainly on the accuracy of the disparity map produced by stereo matching. Current difficulties in stereo matching include external factors such as uneven illumination and overexposure, as well as characteristics of the images themselves that are hard for a computer to distinguish, such as occlusion, weak texture and repetitive texture. Although stereo matching has been studied by many researchers for years, matching in weakly textured regions remains a difficult problem in image processing. How to improve the matching accuracy in weak texture regions while preserving the accuracy in textured regions, and thus obtain a better disparity map, is a major open problem.

Summary of the Invention

The present invention provides a cross-scale cost aggregation stereo matching method based on weak texture detection that improves the matching accuracy in weakly textured regions while preserving the matching accuracy in textured regions, yielding a better disparity map.

The object of the present invention is achieved as follows:

A cross-scale cost aggregation stereo matching method based on weak texture detection comprises the following steps:

Step a: input two color images, a left image and a right image, and perform weak texture detection and segmentation on the images using the gradient information of the left image;

Step b: compute the matching cost from the color information and gradient information of the left and right images;

Step c: taking the weak texture detection and segmentation result of step a as a reference, perform intra-scale and cross-scale cost aggregation based on Gaussian filtering;

Step d: compute the disparity using a winner-take-all strategy (a minimal sketch of this step follows the list);

Step e: refine the disparity using left-right consistency detection and an adaptive-weight-based method, and output the disparity image.
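Step d is not elaborated further below, so a winner-take-all sketch is given here. It is a minimal sketch in Python, assuming the aggregated costs are stored in a NumPy array of shape (H, W, D); the array layout and function name are illustrative assumptions, not part of the patent:

```python
import numpy as np

def winner_take_all(cost_volume: np.ndarray) -> np.ndarray:
    """Select, for each pixel, the disparity hypothesis with the lowest
    aggregated matching cost.

    cost_volume: float array of shape (H, W, D); entry (y, x, d) is the
    aggregated cost of pixel (x, y) at disparity d (an assumed layout).
    Returns an (H, W) integer disparity map.
    """
    return np.argmin(cost_volume, axis=2)
```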

In the described cross-scale cost aggregation stereo matching method based on weak texture detection, the weak texture detection and segmentation of step a is specifically as follows:

Compute the gradient value g(x,y) of the pixel at coordinate (x,y) of the left image and compare it with the gradient threshold g_T to judge whether the pixel lies in a weak texture region, using the formulas:

g(x,y) < g_T

g(x,y) = (1/M) · Σ_{(u,v)∈N(x,y)} ( |I(u,v) − I(u+1,v)| + |I(u,v) − I(u,v+1)| )

where N(x,y) is the window centered on pixel (x,y), M is the number of pixels in the window, and I(x,y) is the gray value of the pixel.
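A minimal sketch of this detection step in Python, assuming a grayscale left image stored as a NumPy float array; the window size and threshold defaults are illustrative assumptions, since the patent does not fix numeric values here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weak_texture_mask(gray: np.ndarray, win: int = 9, g_t: float = 4.0) -> np.ndarray:
    """Per-pixel texture measure g(x, y): windowed mean of the absolute
    horizontal and vertical forward differences, thresholded by g_t.

    gray: (H, W) float array (left-image gray values I(x, y)).
    Returns a boolean (H, W) mask that is True in weak texture regions.
    """
    dx = np.abs(gray - np.roll(gray, -1, axis=1))  # |I(u,v) - I(u+1,v)|
    dy = np.abs(gray - np.roll(gray, -1, axis=0))  # |I(u,v) - I(u,v+1)|
    dx[:, -1] = 0.0  # np.roll wraps around, so zero the wrapped column/row
    dy[-1, :] = 0.0
    g = uniform_filter(dx + dy, size=win)  # windowed mean = (1/M) * sum over N(x,y)
    return g < g_t
```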

In the described method, the matching cost computation of step b is specifically as follows:

Compute the matching cost C(p,d) between the left image I_L and the right image I_R of the stereoscopic color image pair, using the formulas:

C(p,d) = (1−α)·C_AD(p,d) + α·(C_grad_x(p,d) + C_grad_y(p,d))

C_AD(p,d) = min( (1/3) · Σ_{i=R,G,B} |I_L^i(p) − I_R^i(p,d)| , T_AD )

C_grad_x(p,d) = min( |∇_x I_L(p) − ∇_x I_R(p,d)| , T_grad )

C_grad_y(p,d) = min( |∇_y I_L(p) − ∇_y I_R(p,d)| , T_grad )

where p is a point in the left image; i = R, G, B index the three channels of the color image; T_AD and T_grad are the truncation thresholds for color and gradient, respectively; ∇_x and ∇_y are the gradient operators in the x and y directions; and α is the balance factor between the color difference and the gradient difference.
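A minimal sketch of this cost in Python for a single disparity hypothesis, assuming rectified (H, W, 3) float RGB images; the default values of alpha, T_AD and T_grad are illustrative assumptions:

```python
import numpy as np

def matching_cost(left: np.ndarray, right: np.ndarray, d: int,
                  alpha: float = 0.9, t_ad: float = 7.0,
                  t_grad: float = 2.0) -> np.ndarray:
    """One (H, W) slice of the cost volume: truncated absolute color
    difference blended with truncated x/y gradient differences.

    left, right: (H, W, 3) float RGB images; d: disparity hypothesis.
    """
    shifted = np.roll(right, d, axis=1)  # right image shifted by d pixels

    # C_AD: mean absolute difference over the three channels, truncated at T_AD.
    c_ad = np.minimum(np.abs(left - shifted).mean(axis=2), t_ad)

    def fwd_grad(img: np.ndarray, axis: int) -> np.ndarray:
        """Forward-difference gradient of the gray version of img."""
        gray = img.mean(axis=2)
        g = np.zeros_like(gray)
        if axis == 0:
            g[:-1, :] = np.diff(gray, axis=0)
        else:
            g[:, :-1] = np.diff(gray, axis=1)
        return g

    c_gx = np.minimum(np.abs(fwd_grad(left, 1) - fwd_grad(shifted, 1)), t_grad)
    c_gy = np.minimum(np.abs(fwd_grad(left, 0) - fwd_grad(shifted, 0)), t_grad)
    return (1.0 - alpha) * c_ad + alpha * (c_gx + c_gy)
```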

In the described method, the cost aggregation of step c is specifically as follows:

C̃(p,d) = argmin_z Σ_{q∈N} W(p,q) · ‖z − C(q,d)‖²

C̃(p,d) = Σ_{q∈N} W(p,q) · C(q,d)

ṽ = argmin_{{z^s}, s=0,...,S} Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖²

where C̃(p,d) is the aggregated matching cost, z is the desired optimization target value, W is the Gaussian filter kernel, N is the neighborhood window of pixel p, and q is a neighborhood pixel of p; s ∈ {0,1,...,S} is the scale parameter, and for s = 0, C^0 is the matching cost at the original image scale; ṽ collects the aggregated costs of the S+1 scales of the image;

C̃^s(p^s,d^s) = Σ_{q^s∈N_s} W(p^s,q^s) · C^s(q^s,d^s)

ṽ = argmin_{{z^s}, s=0,...,S} ( Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖² + λ · Σ_{s=1}^{S} ‖z^s − z^{s−1}‖² )

where λ is the regularization factor; writing C̃^s(p^s,d^s) for the optimization objective above and setting its derivative with respect to z^s to zero yields a tridiagonal linear system A·v̂ = ṽ, whose first and last diagonal entries are 1+λ, interior diagonal entries 1+2λ and off-diagonal entries −λ, so that v̂ = A⁻¹·ṽ;

C̃_final(p,d) = T_high · Σ_{q∈N} W_1(p,q) · C_1(q,d) + T_low · Σ_{q∈N} W_2(p,q) · C_{1/2}(q,d)

where T_high and T_low denote the texture region and the weak texture region detected above, respectively; C_1 and C_{1/2} denote the matching costs at the original image scale and at half scale; Gaussian filtering is carried out with windows of different sizes, and the final matching cost is obtained after fusion.
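A simplified sketch of this step in Python, keeping only the two scales and the region-dependent fusion of the final formula; the full regularized multi-scale optimization (the tridiagonal system above) is omitted, and the sigma values and function name are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def aggregate_costs(cost: np.ndarray, weak_mask: np.ndarray,
                    sigma_tex: float = 1.0, sigma_weak: float = 3.0) -> np.ndarray:
    """Region-dependent fusion of a full-scale and a half-scale
    Gaussian aggregation of the cost volume.

    cost: (H, W, D) matching-cost volume; weak_mask: (H, W) bool mask
    from the weak texture detection step.
    """
    h, w, _ = cost.shape

    # Intra-scale aggregation at the original scale (the C_1 term).
    c_full = gaussian_filter(cost, sigma=(sigma_tex, sigma_tex, 0))

    # Half-scale aggregation (the C_1/2 term): downsample, smooth with a
    # larger effective support, then upsample back to full resolution.
    half = zoom(cost, (0.5, 0.5, 1.0), order=1)
    half = gaussian_filter(half, sigma=(sigma_weak, sigma_weak, 0))
    c_half = zoom(half, (h / half.shape[0], w / half.shape[1], 1.0), order=1)

    # T_high / T_low act as complementary region indicators: textured
    # pixels keep the full-scale cost, weak-texture pixels the half-scale one.
    return np.where(weak_mask[..., None], c_half, c_full)
```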

In the described method, the disparity refinement of step e is specifically as follows:

|D′_L(p) − D′_R(p − D′_L(p))| < δ

D_LRC(p) = min(D′(p_L), D′(p_R))

D_w(p) = Σ_q WB_pq(I_L) · D_LRC(q)

WB_pq = exp( −( Δc_pq/σ_c² + Δs_pq/σ_s² ) )

where D′_L(p) and D′_R(p − D′_L(p)) are the left-image and right-image disparity values of a point p in the disparity map, and δ is the LRC threshold; D′(p_L) is the disparity value of the first non-occluded point to the left, and D′(p_R) is the disparity value of the first non-occluded point to the right; WB_pq(I_L) is a weighting function over the left image; Δc_pq and Δs_pq are the color difference and the spatial Euclidean distance between points p and q in the left image; σ_c and σ_s are the adjustment parameters for the color difference and the distance difference, respectively; and D_w(p) is the filtered disparity map.
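A minimal sketch in Python of the left-right consistency check and the min-based filling of occluded points, assuming integer-valued NumPy disparity maps; the adaptive-weight filtering D_w(p) is omitted, and the threshold default is an illustrative assumption:

```python
import numpy as np

def lrc_refine(d_left: np.ndarray, d_right: np.ndarray, delta: float = 1.0) -> np.ndarray:
    """Left-right consistency check followed by scanline hole filling.

    d_left, d_right: (H, W) integer disparity maps of the two views.
    A pixel fails the check when |D_L(p) - D_R(p - D_L(p))| >= delta;
    failed pixels take min(D'(p_L), D'(p_R)), the smaller disparity of
    the nearest consistent pixels to their left and right.
    """
    h, w = d_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    x_match = np.clip(xs - d_left.astype(int), 0, w - 1)
    d_r_at_match = np.take_along_axis(d_right, x_match, axis=1)
    valid = np.abs(d_left - d_r_at_match) < delta

    filled = d_left.astype(float).copy()
    for y in range(h):                       # scanline hole filling
        row, ok = filled[y], valid[y]
        for x in np.flatnonzero(~ok):
            left_ok = np.flatnonzero(ok[:x])
            right_ok = np.flatnonzero(ok[x + 1:])
            cands = []
            if left_ok.size:
                cands.append(row[left_ok[-1]])          # D'(p_L)
            if right_ok.size:
                cands.append(row[x + 1 + right_ok[0]])  # D'(p_R)
            if cands:
                row[x] = min(cands)                     # D_LRC(p)
    return filled
```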

Beneficial Effects

The present invention, a cross-scale cost aggregation stereo matching method based on weak texture detection, proposes a new stereo matching algorithm that selects an appropriate matching strategy according to whether an image region is weakly textured, thereby raising the stereo matching accuracy and producing a better disparity map.

Stereo image pairs processed with the algorithm of this embodiment achieve good results in both textured and weakly textured regions, and the error matching rate is reduced (5% lower than without the weak texture segmentation step). This shows that the algorithm of this embodiment can improve the matching accuracy in weak texture regions while preserving the matching accuracy in textured regions, obtaining a better disparity map.

Brief Description of the Drawings

Figure 1 is a flowchart of the cross-scale cost aggregation stereo matching method based on weak texture detection.

Figure 2 shows the Bowling1 disparity maps.

Figure 3 shows the Lampshade1 disparity maps.

Figure 4 shows the Monopoly disparity maps.

Figure 5 shows the Plastic disparity maps.

Detailed Description of the Embodiments

Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

Embodiment 1

A cross-scale cost aggregation stereo matching method based on weak texture detection, as shown in Figure 1, comprises the following steps:

Step a: input two color images, a left image and a right image, and perform weak texture detection and segmentation on the images using the gradient information of the left image;

Step b: compute the matching cost from the color information and gradient information of the left and right images;

Step c: taking the weak texture detection and segmentation result of step a as a reference, perform intra-scale and cross-scale cost aggregation based on Gaussian filtering;

Step d: compute the disparity using a winner-take-all strategy;

Step e: refine the disparity using left-right consistency detection and an adaptive-weight-based method, and output the disparity image.

Following the above steps, four image pairs were selected for comparison, as shown in Figures 2, 3, 4 and 5.

In Figure 2, Figure 2(a) is the left view of the original Bowling1 image; Figure 2(b) is the Bowling1 ground-truth disparity map; Figure 2(c) is the Bowling1 weak texture detection result; Figure 2(d) is the final Bowling1 disparity map; and Figure 2(e) is the Bowling1 disparity map obtained without weak texture detection.

In Figure 3, Figure 3(a) is the left view of the original Lampshade1 image; Figure 3(b) is the Lampshade1 ground-truth disparity map; Figure 3(c) is the Lampshade1 weak texture detection result; Figure 3(d) is the final Lampshade1 disparity map; and Figure 3(e) is the Lampshade1 disparity map obtained without weak texture detection.

In Figure 4, Figure 4(a) is the left view of the original Monopoly image; Figure 4(b) is the Monopoly ground-truth disparity map; Figure 4(c) is the Monopoly weak texture detection result; Figure 4(d) is the final Monopoly disparity map; and Figure 4(e) is the Monopoly disparity map obtained without weak texture detection.

In Figure 5, Figure 5(a) is the left view of the original Plastic image; Figure 5(b) is the Plastic ground-truth disparity map; Figure 5(c) is the Plastic weak texture detection result; Figure 5(d) is the final Plastic disparity map; and Figure 5(e) is the Plastic disparity map obtained without weak texture detection.

The disparity maps in Figures 2(a)–2(e), 3(a)–3(e), 4(a)–4(e) and 5(a)–5(e) are evaluated subjectively in terms of visual quality. In panel (c) of Figures 2–5, black marks the detected weak texture regions and white marks the textured regions. Comparing the disparity maps shows that, in weak texture regions, the disparity maps obtained with the algorithm of this embodiment are much better than those obtained by the algorithm without weak texture detection.

The method of the present invention is also evaluated with objective metrics.

Table 1 gives the error matching rates of the two algorithms on four image pairs from the Middlebury data set with prominent weak texture regions.

Table 1

As can be seen from Table 1, in the test results of the two algorithms on the stereo matching image pairs, the error matching rate of the image pairs processed with the algorithm of this embodiment is 5% lower than that of the algorithm without weak texture detection and segmentation. This shows that the algorithm of this embodiment can improve the matching accuracy in weak texture regions while preserving the matching accuracy in textured regions, obtaining a better disparity map.

Embodiment 2

In the described cross-scale cost aggregation stereo matching method based on weak texture detection, the weak texture detection and segmentation of step a is specifically as follows:

Compute the gradient value g(x,y) of the pixel at coordinate (x,y) of the left image and compare it with the gradient threshold g_T to judge whether the pixel lies in a weak texture region, using the formulas:

g(x,y) < g_T

g(x,y) = (1/M) · Σ_{(u,v)∈N(x,y)} ( |I(u,v) − I(u+1,v)| + |I(u,v) − I(u,v+1)| )

where N(x,y) is the window centered on pixel (x,y), M is the number of pixels in the window, and I(x,y) is the gray value of the pixel.

In the described method, the matching cost computation of step b is specifically as follows:

Compute the matching cost C(p,d) between the left image I_L and the right image I_R of the stereoscopic color image pair, using the formulas:

C(p,d) = (1−α)·C_AD(p,d) + α·(C_grad_x(p,d) + C_grad_y(p,d))

C_AD(p,d) = min( (1/3) · Σ_{i=R,G,B} |I_L^i(p) − I_R^i(p,d)| , T_AD )

C_grad_x(p,d) = min( |∇_x I_L(p) − ∇_x I_R(p,d)| , T_grad )

C_grad_y(p,d) = min( |∇_y I_L(p) − ∇_y I_R(p,d)| , T_grad )

where p is a point in the left image; i = R, G, B index the three channels of the color image; T_AD and T_grad are the truncation thresholds for color and gradient, respectively; ∇_x and ∇_y are the gradient operators in the x and y directions; and α is the balance factor between the color difference and the gradient difference.

In the described method, the cost aggregation of step c is specifically as follows:

C̃(p,d) = argmin_z Σ_{q∈N} W(p,q) · ‖z − C(q,d)‖²

C̃(p,d) = Σ_{q∈N} W(p,q) · C(q,d)

ṽ = argmin_{{z^s}, s=0,...,S} Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖²

where C̃(p,d) is the aggregated matching cost, z is the desired optimization target value, W is the Gaussian filter kernel, N is the neighborhood window of pixel p, and q is a neighborhood pixel of p; s ∈ {0,1,...,S} is the scale parameter, and for s = 0, C^0 is the matching cost at the original image scale; ṽ collects the aggregated costs of the S+1 scales of the image;

C̃^s(p^s,d^s) = Σ_{q^s∈N_s} W(p^s,q^s) · C^s(q^s,d^s)

ṽ = argmin_{{z^s}, s=0,...,S} ( Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖² + λ · Σ_{s=1}^{S} ‖z^s − z^{s−1}‖² )

where λ is the regularization factor; writing C̃^s(p^s,d^s) for the optimization objective above and setting its derivative with respect to z^s to zero yields a tridiagonal linear system A·v̂ = ṽ, whose first and last diagonal entries are 1+λ, interior diagonal entries 1+2λ and off-diagonal entries −λ, so that v̂ = A⁻¹·ṽ;

C̃_final(p,d) = T_high · Σ_{q∈N} W_1(p,q) · C_1(q,d) + T_low · Σ_{q∈N} W_2(p,q) · C_{1/2}(q,d)

where T_high and T_low denote the texture region and the weak texture region detected above, respectively; C_1 and C_{1/2} denote the matching costs at the original image scale and at half scale; Gaussian filtering is carried out with windows of different sizes, and the final matching cost is obtained after fusion.

In the described method, the disparity refinement of step e is specifically as follows:

|D′_L(p) − D′_R(p − D′_L(p))| < δ

D_LRC(p) = min(D′(p_L), D′(p_R))

D_w(p) = Σ_q WB_pq(I_L) · D_LRC(q)

WB_pq = exp( −( Δc_pq/σ_c² + Δs_pq/σ_s² ) )

where D′_L(p) and D′_R(p − D′_L(p)) are the left-image and right-image disparity values of a point p in the disparity map, and δ is the LRC threshold; D′(p_L) is the disparity value of the first non-occluded point to the left, and D′(p_R) is the disparity value of the first non-occluded point to the right; WB_pq(I_L) is a weighting function over the left image; Δc_pq and Δs_pq are the color difference and the spatial Euclidean distance between points p and q in the left image; σ_c and σ_s are the adjustment parameters for the color difference and the distance difference, respectively; and D_w(p) is the filtered disparity map.

Claims (5)

1. A cross-scale cost aggregation stereo matching method based on weak texture detection is characterized by comprising the following steps:
step a, inputting two color images, wherein the two color images are a left image and a right image respectively, and performing weak texture detection and segmentation on the images by using gradient information of the left image;
step b, calculating the matching cost according to the color information and the gradient information of the left image and the right image;
step c, taking the weak texture detection and segmentation result of step a as a reference, and carrying out intra-scale and cross-scale cost aggregation based on Gaussian filtering;
step d, calculating the disparity by adopting a winner-take-all strategy;
and step e, refining the disparity by adopting left-right consistency detection and an adaptive-weight-based method, and outputting a disparity image.
2. The method for cross-scale cost aggregation stereo matching based on weak texture detection as claimed in claim 1, wherein the weak texture detection and segmentation of the picture in the step a specifically comprises:
calculating the gradient value g(x,y) of the pixel at coordinate (x,y) of the left image, comparing it with the gradient threshold g_T, and judging whether the pixel belongs to a weak texture region, wherein the calculation formulas are:

g(x,y) < g_T

g(x,y) = (1/M) · Σ_{(u,v)∈N(x,y)} ( |I(u,v) − I(u+1,v)| + |I(u,v) − I(u,v+1)| )

in the formulas, N(x,y) represents the window centered on pixel (x,y), M represents the number of pixels in the window, and I(x,y) represents the gray value of the pixel.
3. The method for cross-scale cost aggregation stereo matching based on weak texture detection as claimed in claim 1, wherein the calculating the matching cost in the step b specifically comprises:
calculating the matching cost C(p,d) between the left image I_L and the right image I_R of the stereoscopic color image pair by the formulas:

C(p,d) = (1−α)·C_AD(p,d) + α·(C_grad_x(p,d) + C_grad_y(p,d))

C_AD(p,d) = min( (1/3) · Σ_{i=R,G,B} |I_L^i(p) − I_R^i(p,d)| , T_AD )

C_grad_x(p,d) = min( |∇_x I_L(p) − ∇_x I_R(p,d)| , T_grad )

C_grad_y(p,d) = min( |∇_y I_L(p) − ∇_y I_R(p,d)| , T_grad )

in the formulas, p is a point in the left image; i = R, G, B respectively represent the three channels of the color image; T_AD and T_grad respectively represent the truncation thresholds for color and gradient; ∇_x and ∇_y respectively represent the gradient operators of the image in the x and y directions; and α is the balance factor between the color difference and the gradient difference.
4. The method for cross-scale cost aggregation stereo matching based on weak texture detection as claimed in claim 1, wherein the cost aggregation in the step c is specifically:
C̃(p,d) = argmin_z Σ_{q∈N} W(p,q) · ‖z − C(q,d)‖²

C̃(p,d) = Σ_{q∈N} W(p,q) · C(q,d)

ṽ = argmin_{{z^s}, s=0,...,S} Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖²

wherein C̃(p,d) represents the aggregated matching cost, z is the desired optimization target value, W is the Gaussian filter kernel, N is the neighborhood window of pixel p, q is a neighborhood pixel of p, s ∈ {0,1,...,S} is the scale parameter, C^0 for s = 0 represents the matching cost at the original image scale, and ṽ represents the aggregated costs of the S+1 scales of the image;

C̃^s(p^s,d^s) = Σ_{q^s∈N_s} W(p^s,q^s) · C^s(q^s,d^s)

ṽ = argmin_{{z^s}, s=0,...,S} ( Σ_{s=0}^{S} Σ_{q^s∈N_s} W(p^s,q^s) · ‖z^s − C^s(q^s,d^s)‖² + λ · Σ_{s=1}^{S} ‖z^s − z^{s−1}‖² )

in the formula, λ is a regularization factor; denoting the optimization objective above by C̃^s(p^s,d^s) and setting its derivative with respect to z^s to zero yields:

(1+λ)·z^0 − λ·z^1 = C̃^0(p^0,d^0),  s = 0
−λ·z^{s−1} + (1+2λ)·z^s − λ·z^{s+1} = C̃^s(p^s,d^s),  s = 1,2,...,S−1
−λ·z^{S−1} + (1+λ)·z^S = C̃^S(p^S,d^S),  s = S

A·v̂ = ṽ

A =
[ 1+λ   −λ                      ]
[ −λ    1+2λ   −λ               ]
[        ⋱      ⋱      ⋱        ]
[              −λ    1+2λ   −λ  ]
[                     −λ    1+λ ]

v̂ = A⁻¹·ṽ

C̃_final(p,d) = T_high · Σ_{q∈N} W_1(p,q) · C_1(q,d) + T_low · Σ_{q∈N} W_2(p,q) · C_{1/2}(q,d)

wherein T_high and T_low respectively represent the texture region and the weak texture region detected above; C_1 and C_{1/2} respectively represent the matching costs at the original image scale and at half scale; Gaussian filtering is carried out with windows of different sizes, and the final matching cost is obtained after fusion.
5. The method for cross-scale cost aggregation stereo matching based on weak texture detection as claimed in claim 1, wherein the disparity refinement in step e specifically comprises:
|D′_L(p) − D′_R(p − D′_L(p))| < δ

D_LRC(p) = min(D′(p_L), D′(p_R))

D_w(p) = Σ_q WB_pq(I_L) · D_LRC(q)

WB_pq = exp( −( Δc_pq/σ_c² + Δs_pq/σ_s² ) )

wherein D′_L(p) and D′_R(p − D′_L(p)) are the left-image and right-image disparity values of a point p in the disparity map, and δ is the LRC threshold; D′(p_L) is the disparity value of the first non-occluded point to the left, and D′(p_R) is the disparity value of the first non-occluded point to the right; WB_pq(I_L) is a weighting function on the left image; Δc_pq and Δs_pq are respectively the color difference and the spatial Euclidean distance between points p and q in the left image; σ_c and σ_s are respectively the adjustment parameters for the color difference and the distance difference; and D_w(p) is the filtered disparity map.
CN201710631310.7A 2017-07-28 2017-07-28 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection Pending CN107392950A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710631310.7A CN107392950A (en) 2017-07-28 2017-07-28 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710631310.7A CN107392950A (en) 2017-07-28 2017-07-28 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection

Publications (1)

Publication Number Publication Date
CN107392950A 2017-11-24

Family

ID: 60342086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710631310.7A Pending CN107392950A (en) 2017-07-28 2017-07-28 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection

Country Status (1)

Country Link
CN (1) CN107392950A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551035A (en) * 2015-12-09 2016-05-04 深圳市华和瑞智科技有限公司 Stereoscopic vision matching method based on weak edge and texture classification
CN106340036A (en) * 2016-08-08 2017-01-18 东南大学 Binocular stereoscopic vision-based stereo matching method
CN106530336A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Stereo matching algorithm based on color information and graph-cut theory

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG Hua et al., "Fast stereo matching based on cross-scale cost aggregation with variable windows", Computer Engineering and Applications *
CAO Xiaoqian et al., "Stereo matching based on weak texture detection and disparity map fusion", Chinese Journal of Scientific Instrument *
LIN Xue, "Research on stereo matching technology in binocular stereo vision", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108181319A (en) * 2017-12-12 2018-06-19 陕西三星洁净工程有限公司 A kind of laying dust detecting device and method based on stereoscopic vision
CN108181319B (en) * 2017-12-12 2020-09-11 陕西三星洁净工程有限公司 Accumulated dust detection device and method based on stereoscopic vision
CN107945222A (en) * 2017-12-15 2018-04-20 东南大学 A kind of new Stereo matching cost calculates and parallax post-processing approach
CN109961417A (en) * 2017-12-26 2019-07-02 广州极飞科技有限公司 Image processing method, device and mobile device control method
CN108510529A (en) * 2018-03-14 2018-09-07 昆明理工大学 A kind of figure based on adaptive weight cuts solid matching method
CN108682026A (en) * 2018-03-22 2018-10-19 辽宁工业大学 A kind of binocular vision solid matching method based on the fusion of more Matching units
CN108682026B (en) * 2018-03-22 2021-08-06 江大白 Binocular vision stereo matching method based on multi-matching element fusion
CN108596975A (en) * 2018-04-25 2018-09-28 华南理工大学 A kind of Stereo Matching Algorithm for weak texture region
CN108596975B (en) * 2018-04-25 2022-03-29 华南理工大学 Stereo matching algorithm for weak texture region
CN108765486A (en) * 2018-05-17 2018-11-06 长春理工大学 Based on sparse piece of aggregation strategy method of relevant Stereo matching in color
CN109887021A (en) * 2019-01-19 2019-06-14 天津大学 Cross-scale random walk stereo matching method
CN109887021B (en) * 2019-01-19 2023-06-06 天津大学 Stereo matching method based on cross-scale random walk
CN109816782A (en) * 2019-02-03 2019-05-28 哈尔滨理工大学 A 3D reconstruction method of indoor scene based on binocular vision
WO2021018093A1 (en) * 2019-07-31 2021-02-04 深圳市道通智能航空技术有限公司 Stereo matching method, image processing chip, and moving carrier
CN111191694A (en) * 2019-12-19 2020-05-22 浙江科技学院 Image stereo matching method
CN111508013B (en) * 2020-04-21 2022-09-06 中国科学技术大学 Stereo matching method
CN111508013A (en) * 2020-04-21 2020-08-07 中国科学技术大学 Stereo matching method
CN112070694A (en) * 2020-09-03 2020-12-11 深兰人工智能芯片研究院(江苏)有限公司 Binocular stereo vision disparity map post-processing method and device
CN114565647A (en) * 2022-02-24 2022-05-31 南京理工大学 Stereo matching algorithm based on texture region self-adaptive dynamic cost calculation and aggregation

Similar Documents

Publication Publication Date Title
CN107392950A (en) A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection
CN107578404B (en) Objective evaluation method of full-reference stereo image quality based on visual salient feature extraction
CN110473217B (en) A Binocular Stereo Matching Method Based on Census Transformation
CN105744256B (en) Based on the significant objective evaluation method for quality of stereo images of collection of illustrative plates vision
CN106504276B (en) Nonlocal Stereo Matching Methods
CN105517677B (en) The post-processing approach and device of depth map/disparity map
CN108596975B (en) Stereo matching algorithm for weak texture region
CN106355570A (en) Binocular stereoscopic vision matching method combining depth characteristics
CN106991693B (en) Binocular Stereo Matching Method Based on Fuzzy Support Weight
CN106651853B (en) Establishment method of 3D saliency model based on prior knowledge and depth weight
CN105976351B (en) Stereo image quality evaluation method based on central offset
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN105898278B (en) A kind of three-dimensional video-frequency conspicuousness detection method based on binocular Multidimensional Awareness characteristic
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN107578430A (en) A Stereo Matching Method Based on Adaptive Weight and Local Entropy
CN103955945A (en) Self-adaption color image segmentation method based on binocular parallax and movable outline
CN106780476A (en) A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic
CN106971153A (en) A kind of facial image illumination compensation method
CN110246111A (en) Based on blending image with reinforcing image without reference stereo image quality evaluation method
CN103747240A (en) Fusion color and motion information vision saliency filtering method
CN108010075A (en) A kind of sectional perspective matching process based on multiple features combining
CN105447825A (en) Image defogging method and system
CN104732217A (en) Self-adaptive template size fingerprint direction field calculating method
CN106530336A (en) Stereo matching algorithm based on color information and graph-cut theory
CN104200453A (en) Parallax image correcting method based on image segmentation and credibility

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2017-11-24)