CN103106664A - Image matching method for sheltered region based on pixel block - Google Patents
Image matching method for sheltered region based on pixel block
- Publication number: CN103106664A (application CN201310060762)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption, not a legal conclusion)
- Classification landscape: Image Processing (AREA)
Abstract
Disclosed is an image matching method for sheltered (occluded) regions based on pixel blocks, comprising the following steps: (1) an occluded pixel O(k, j) is selected as the point to be matched in the base image and a rectangular occluded pixel block is constructed; a first sample pixel block is built centered on the successfully matched pixel x(n, j) adjacent to the left end of O(k, j), whose disparity value is Dx, and a second sample pixel block is built centered on the successfully matched pixel y(l, j) adjacent to the right end of O(k, j), whose disparity value is Dy; (2) the sum of squared pixel differences SSD1 between the occluded pixel block and the first sample pixel block, and the sum of squared pixel differences SSD2 between the occluded pixel block and the second sample pixel block, are computed; (3) the disparity value of the occluded pixel O(k, j) is determined: when SSD1 is larger than SSD2, d(k, j) = Dy; when SSD1 is smaller than SSD2, d(k, j) = Dx.
Description
Technical field
The present invention relates to an occlusion-region image matching method based on pixel blocks, and belongs to the field of computer stereo vision.
Background technology
Computer stereo vision has been applied in many fields, such as visual navigation for mobile robots, face recognition, and workpiece modeling. Image matching is a key step in computer stereo vision; owing to the limits of current sensor performance and computing methods, many problems remain open, and occlusion is one of the difficult ones. In the image matching process, occlusion regions form because of the difference between the left and right viewing angles: the left-boundary region of an object in the left image often has no corresponding points in the right image, while the right-boundary region of the right image has no corresponding points in the left image; image noise and smooth regions can also cause matching to fail. Occluded pixels cannot obtain valid disparity values or depth information, and will form one or more unknown regions after three-dimensional modeling.
A fairly simple current treatment is to set the disparity value of an unmatched point directly equal to the disparity value of a neighboring pixel, but this easily produces distortion. A further approach is to compare the similarity of the unmatched point with the successfully matched pixels on both sides, and set its disparity value equal to that of the more similar pixel. This algorithm is theoretically feasible, but because of noise or texture, even pixels with the same disparity value can differ greatly in similarity; the algorithm is therefore not very practical and easily produces false judgments.
Summary of the invention
The objective of the invention is to solve the above disparity computation problem by proposing an occlusion-region image matching method based on pixel blocks. The algorithm is based on the similarity principle: an occluded pixel block is formed around the unmatched point and compared for similarity with the sample pixel blocks on both sides, and the disparity value of the point is set equal to that of the most similar block.
The technical solution adopted by the present invention to solve the technical problem is as follows:
In the occlusion-region image matching method based on pixel blocks, the pixels of the base image f_B(i, j) serve as the reference, matching points are sought in the reference image f_C(i, j), and the disparity map d(i, j) is formed. Pixels that cannot be matched successfully are classified as occluded pixels O(k, j), where 0 ≤ i < M, 0 ≤ j < N, and 0 < k < M; the coefficient M is the pixel width of the image and the coefficient N is the pixel height of the image. The method comprises the following steps:
(1) In the base image f_B(i, j), select the occluded pixel O(k, j) as the point to be matched and construct the rectangular occluded pixel block O(k+1, j+1), O(k-1, j-1), O(k+1, j-1), O(k-1, j+1), O(k+1, j), O(k-1, j), O(k, j-1), O(k, j+1), O(k, j). Centered on the successfully matched pixel x(n, j) adjacent to the left end of the occluded pixel O(k, j), build the first sample pixel block x(n+1, j+1), x(n-1, j-1), x(n+1, j-1), x(n-1, j+1), x(n+1, j), x(n-1, j), x(n, j-1), x(n, j+1), x(n, j), where n < k; the disparity value of the pixel x(n, j) is Dx. Centered on the successfully matched pixel y(l, j) adjacent to the right end of the occluded pixel O(k, j), build the second sample pixel block y(l+1, j+1), y(l-1, j-1), y(l+1, j-1), y(l-1, j+1), y(l+1, j), y(l-1, j), y(l, j-1), y(l, j+1), y(l, j), where k < l < M; the disparity value of the pixel y(l, j) is Dy;
(2) Compute the sum of squared pixel differences SSD1 between the occluded pixel block and the first sample pixel block, and the sum of squared pixel differences SSD2 between the occluded pixel block and the second sample pixel block;
(3) According to the magnitudes of the sums of squared pixel differences SSD1 and SSD2, determine the disparity value of the occluded pixel O(k, j): when SSD1 > SSD2, d(k, j) = Dy; when SSD1 < SSD2, d(k, j) = Dx.
The sum of squared pixel differences SSD1 = Σ (O(k+ξ, j+η) - x(n+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
The sum of squared pixel differences SSD2 = Σ (O(k+ξ, j+η) - y(l+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
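The two sums above can be sketched in Python (a minimal illustration under our own conventions: the base image is a 2-D integer array indexed as [row, column], j is the row index, and k and n are column indices; the function name `ssd_block` is ours, not the patent's):

```python
import numpy as np

def ssd_block(f_b, k, n, j):
    """Sum of squared pixel differences between the 3x3 block centered at
    column k and the 3x3 block centered at column n, both in row j of f_b:
    SSD = sum over xi, eta in {-1, 0, 1} of (O(k+xi, j+eta) - x(n+xi, j+eta))^2."""
    o = f_b[j-1:j+2, k-1:k+2].astype(np.int64)  # occluded pixel block
    s = f_b[j-1:j+2, n-1:n+2].astype(np.int64)  # sample pixel block
    return int(((o - s) ** 2).sum())
```

SSD2 is the same computation with the right-side center l in place of n.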
Advantageous effects of the present invention: 1. using similarity comparison as the basis for disparity computation, the method has high accuracy; 2. using pixel blocks as the similarity comparison unit, it has strong resistance to noise.
Description of drawings
Fig. 1 is a schematic diagram of the occlusion regions;
Fig. 2 is a flowchart of the occluded-pixel block matching method.
Embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1 and 2, in the occlusion-region image matching method based on pixel blocks, the pixels of the base image f_B(i, j) serve as the reference, matching points are sought in the reference image f_C(i, j), and the disparity map d(i, j) is formed. Pixels that cannot be matched successfully are classified as occluded pixels O(k, j), where 0 ≤ i < M, 0 ≤ j < N, and 0 < k < M; the coefficient M is the pixel width of the image and the coefficient N is the pixel height of the image.
The occlusion-region image matching method comprises the following steps:
(1) In the base image f_B(i, j), select the occluded pixel O(k, j) as the point to be matched and construct the rectangular occluded pixel block O(k+1, j+1), O(k-1, j-1), O(k+1, j-1), O(k-1, j+1), O(k+1, j), O(k-1, j), O(k, j-1), O(k, j+1), O(k, j). Centered on the successfully matched pixel x(n, j) adjacent to the left end of the occluded pixel O(k, j), build the first sample pixel block x(n+1, j+1), x(n-1, j-1), x(n+1, j-1), x(n-1, j+1), x(n+1, j), x(n-1, j), x(n, j-1), x(n, j+1), x(n, j), where n < k; the disparity value of the pixel x(n, j) is Dx. Centered on the successfully matched pixel y(l, j) adjacent to the right end of the occluded pixel O(k, j), build the second sample pixel block y(l+1, j+1), y(l-1, j-1), y(l+1, j-1), y(l-1, j+1), y(l+1, j), y(l-1, j), y(l, j-1), y(l, j+1), y(l, j), where k < l < M; the disparity value of the pixel y(l, j) is Dy;
In step (1), rectangular pixel blocks centered respectively on the occluded pixel O(k, j), the pixel x(n, j), and the pixel y(l, j) are established, where the pixels x(n, j) and y(l, j) are successfully matched pixels whose disparity values are Dx and Dy respectively.
(2) Compute the sum of squared pixel differences SSD1 between the occluded pixel block and the first sample pixel block, and the sum of squared pixel differences SSD2 between the occluded pixel block and the second sample pixel block;
In step (2), the similarity between pixel blocks is computed. The sum of squared pixel differences is adopted as the basis for the similarity judgment: corresponding pixel values of the two blocks are subtracted, each difference is squared, and all the squared values are summed. That is:
The sum of squared pixel differences SSD1 = Σ (O(k+ξ, j+η) - x(n+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
The sum of squared pixel differences SSD2 = Σ (O(k+ξ, j+η) - y(l+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
The more similar two pixel blocks are, the smaller the resulting sum of squared pixel differences.
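As a toy numeric check of this property, with all pixel values invented purely for illustration:

```python
import numpy as np

# Occluded block and two candidate sample blocks (invented values).
o = np.array([[10, 10, 10], [10, 12, 10], [10, 10, 10]], dtype=np.int64)  # occluded block
x = np.array([[10, 11, 10], [10, 12, 10], [10, 10,  9]], dtype=np.int64)  # similar block
y = np.array([[50, 50, 50], [50, 52, 50], [50, 50, 50]], dtype=np.int64)  # dissimilar block

ssd1 = int(((o - x) ** 2).sum())  # small: the blocks differ in only two pixels
ssd2 = int(((o - y) ** 2).sum())  # large: every pixel differs by 40
```

Here ssd1 comes out far smaller than ssd2, so the left-side disparity Dx would be chosen.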
(3) According to the magnitudes of the sums of squared pixel differences SSD1 and SSD2, determine the disparity value of the occluded pixel O(k, j): when SSD1 > SSD2, d(k, j) = Dy; when SSD1 < SSD2, d(k, j) = Dx.
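Steps (1) to (3) can be combined into one sketch (an illustrative implementation of ours, not the patented embodiment itself; we assume a grayscale base image as a 2-D array indexed [row, column], a disparity map of the same shape, and -1 as the marker for pixels where matching failed — all assumptions of this sketch; on the tie SSD1 = SSD2, which the text leaves undefined, it takes Dx):

```python
import numpy as np

INVALID = -1  # assumed marker for pixels where matching failed

def fill_occluded_row(f_b, disparity, j):
    """For each occluded pixel O(k, j) in interior row j, compare its 3x3
    block with the blocks around the nearest matched pixels x(n, j) on the
    left and y(l, j) on the right, then copy the disparity of the more
    similar side: d(k, j) = Dy if SSD1 > SSD2, else d(k, j) = Dx."""
    h, w = f_b.shape
    d = disparity.copy()
    for k in range(1, w - 1):
        if d[j, k] != INVALID:
            continue
        # nearest successfully matched neighbors on each side (n < k < l)
        left = [n for n in range(1, k) if d[j, n] != INVALID]
        right = [l for l in range(k + 1, w - 1) if d[j, l] != INVALID]
        if not left or not right:
            continue  # no matched neighbor on one side; leave the pixel
        n, l = left[-1], right[0]
        o = f_b[j-1:j+2, k-1:k+2].astype(np.int64)
        ssd1 = int(((o - f_b[j-1:j+2, n-1:n+2].astype(np.int64)) ** 2).sum())
        ssd2 = int(((o - f_b[j-1:j+2, l-1:l+2].astype(np.int64)) ** 2).sum())
        d[j, k] = d[j, l] if ssd1 > ssd2 else d[j, n]
    return d
```

A design note on this sketch: pixels filled earlier in the row are treated as matched for later k, which the patent text neither requires nor forbids.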
In summary, the present invention adopts an occlusion-region image matching method based on pixel blocks. According to the similarity principle, an occluded pixel block is formed around the unmatched point and compared for similarity with the sample pixel blocks on both sides; the disparity value of the point is set equal to that of the most similar block. Its advantage is that, with pixel blocks as the similarity comparison unit, the method has strong noise resistance and a low false-judgment rate.
Claims (3)
1. An occlusion-region image matching method based on pixel blocks, in which the pixels of the base image f_B(i, j) serve as the reference, matching points are sought in the reference image f_C(i, j), and the disparity map d(i, j) is formed, pixels that cannot be matched successfully being classified as occluded pixels O(k, j), where 0 ≤ i < M, 0 ≤ j < N, and 0 < k < M, the coefficient M being the pixel width of the image and the coefficient N being the pixel height of the image, characterized in that the method comprises the following steps:
(1) In the base image f_B(i, j), select the occluded pixel O(k, j) as the point to be matched and construct the rectangular occluded pixel block O(k+1, j+1), O(k-1, j-1), O(k+1, j-1), O(k-1, j+1), O(k+1, j), O(k-1, j), O(k, j-1), O(k, j+1), O(k, j). Centered on the successfully matched pixel x(n, j) adjacent to the left end of the occluded pixel O(k, j), build the first sample pixel block x(n+1, j+1), x(n-1, j-1), x(n+1, j-1), x(n-1, j+1), x(n+1, j), x(n-1, j), x(n, j-1), x(n, j+1), x(n, j), where n < k; the disparity value of the pixel x(n, j) is Dx. Centered on the successfully matched pixel y(l, j) adjacent to the right end of the occluded pixel O(k, j), build the second sample pixel block y(l+1, j+1), y(l-1, j-1), y(l+1, j-1), y(l-1, j+1), y(l+1, j), y(l-1, j), y(l, j-1), y(l, j+1), y(l, j), where k < l < M; the disparity value of the pixel y(l, j) is Dy;
(2) Compute the sum of squared pixel differences SSD1 between the occluded pixel block and the first sample pixel block, and the sum of squared pixel differences SSD2 between the occluded pixel block and the second sample pixel block;
(3) According to the magnitudes of the sums of squared pixel differences SSD1 and SSD2, determine the disparity value of the occluded pixel O(k, j): when SSD1 > SSD2, d(k, j) = Dy; when SSD1 < SSD2, d(k, j) = Dx.
2. The occlusion-region image matching method based on pixel blocks as claimed in claim 1, characterized in that the sum of squared pixel differences SSD1 = Σ (O(k+ξ, j+η) - x(n+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
3. The occlusion-region image matching method based on pixel blocks as claimed in claim 1, characterized in that the sum of squared pixel differences SSD2 = Σ (O(k+ξ, j+η) - y(l+ξ, j+η))², where ξ ∈ {-1, 0, 1} and η ∈ {-1, 0, 1}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013100607626A CN103106664A (en) | 2013-02-27 | 2013-02-27 | Image matching method for sheltered region based on pixel block |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103106664A (en) | 2013-05-15 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8189031B2 (en) * | 2006-01-09 | 2012-05-29 | Samsung Electronics Co., Ltd. | Method and apparatus for providing panoramic view with high speed image matching and mild mixed color blending |
CN102567992A (en) * | 2011-11-21 | 2012-07-11 | 刘瑜 | Image matching method of occluded area |
CN102708379A (en) * | 2012-05-09 | 2012-10-03 | 慈溪思达电子科技有限公司 | Stereoscopic vision shielding pixel classification algorithm |
Non-Patent Citations (1)
Title |
---|
CHEN Weibing: "Matching performance comparison of several image similarity measures" (几种图像相似性度量的匹配性能比较), Journal of Computer Applications (《计算机应用》) * |
Legal Events
Date | Code | Title |
---|---|---|
2013-05-15 | C06 / PB01 | Publication |
| C10 / SE01 | Entry into force of request for substantive examination |
| C02 / WD01 | Invention patent application deemed withdrawn after publication (patent law 2001) |