CN109840922B - Depth acquisition method and system based on binocular light field camera - Google Patents
- Publication number
- CN109840922B (application CN201810097816.9A)
- Authority
- CN
- China
- Prior art keywords
- light field
- depth
- camera
- depth map
- scene
- Prior art date
- 2018-01-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
The invention relates to a depth acquisition method and system based on a binocular light field camera, comprising the following steps: capturing a scene with a light field camera to obtain a view and a light field depth map of the scene; capturing the same scene with another camera to obtain another view, and computing a binocular depth map of the scene from the parallax between the views; capturing a calibration scene containing a depth ruler with the light field camera and normalizing the light field depth map to the real spatial scale to obtain a first real depth map; capturing the calibration scene with the light field camera and normalizing the binocular depth map to the real spatial scale to obtain a second real depth map; deriving the confidence of each pixel in the light field depth map from the gradient of the light field depth; and fusing the first and second real depth maps according to the confidence and a Markov random field to obtain a fused depth map. By fusing the light field depth with the binocular depth, the invention realizes a solution that accurately computes scene depth from near to far.
Description
Technical Field
The invention relates to the field of image processing, in particular to a depth acquisition method and system based on a binocular light field camera.
Background
The prior art can perform dense light field depth estimation for near scenes using a single light field camera. For example, light field refocusing focuses the light field image at different depths and the scene depth is then computed from the depth-of-field relation; alternatively, the depth can be computed from the slope of the line segments in the light field's epipolar plane images. Some algorithms combine these two basic methods to compute scene depth more accurately. However, no light field depth algorithm escapes the limitation of the light field camera's small angular (view) resolution: the depth computed from a single light field picture is accurate only at short distances from the camera. In particular, existing microlens-array light field cameras typically have very short baselines; according to the parameters of Lytro's second-generation light field camera, it can theoretically resolve scene depth only within about 6 meters.
Depth estimation using two or more ordinary cameras, known as stereo matching, is a common technique for recovering scene depth. Many mature algorithms exist for rough scene depth estimation, such as SGM (Semi-Global Matching). The range of distances an algorithm can estimate depends on the poses of the two cameras: they must lie on the same horizontal line with identical lens orientations. If the two cameras are far apart, distant depth can be estimated effectively but nearby depth information is lost. Limited by camera resolution and shooting accuracy, estimating depth over a wide range from near to far requires placing multiple cameras at different baselines so that the algorithm's working range covers every depth in the scene. Such a solution carries a high equipment cost.
Disclosure of Invention
Given that the ordinary camera and the light field camera are each strong where the other is weak, the invention aims to combine the depth obtained from a camera pair with the depth obtained from a light field camera, exploiting the advantages of both and overcoming their respective shortcomings. Only two light field cameras, or one light field camera plus one ordinary camera, are needed, which effectively reduces cost, fully exploits the available image information, and finally yields a depth map with better quality over a larger range. In addition, the method avoids interference from obstacles in front of a single lens when the binocular depth algorithm solves the scene, so the algorithm recovers the true scene depth more accurately.
Specifically, the invention discloses a depth acquisition method based on a binocular light field camera, which comprises the following steps:
step 1, shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected;
step 2, shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view;
step 3, shooting a calibration scene with a depth scale by using the first light field camera, and normalizing the light field depth map to a real spatial scale to obtain a first real depth map;
step 4, shooting the calibration scene by using the first light field camera, acquiring a camera view angle of the first light field camera, and normalizing the binocular depth map to a real spatial scale according to the camera view angle to obtain a second real depth map;
step 5, obtaining the confidence of each pixel point in the light field depth map by using the gradient value of the light field depth change;
and step 6, fusing the first real depth map and the second real depth map according to the confidence and the Markov random field to obtain a fused depth map.
The depth acquisition method based on the binocular light field camera further comprises:
step 7, using the color gradient of the fused depth map as the weighting factor of a smoothing term, and smoothing and denoising the fused depth map to obtain the final depth map.
In the depth acquisition method based on the binocular light field camera, step 4 comprises:
erecting the first light field camera on a guide rail with its moving direction perpendicular to the ruler; denoting the lengths of the ruler captured in the two shots as l₁ and l₂ and the camera displacement between the two shots as l, the camera view angle β is obtained from the triangular relation:
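A plausible reconstruction of this relation from the similar-triangles geometry described (the original formula image is not preserved in this text):

$$\beta = 2\arctan\!\left(\frac{l_2 - l_1}{2\,l}\right)$$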
the depth acquisition method based on the binocular light field camera is characterized in that the field angle of the first light field camera is equal to that of the second light field camera or the traditional digital camera.
In the depth acquisition method based on the binocular light field camera, the confidence is computed as:

$$c = 1 - \tan^{-1}(k \cdot dy)$$

where dy is the gradient value of the light field depth variation and k is a scaling factor.
The invention also provides a depth acquisition system based on the binocular light field camera, which comprises the following components:
the light field depth acquisition module is used for shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected;
the binocular depth acquisition module is used for shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view;
the light field depth calibration module is used for shooting a calibration scene with a depth scale by using the first light field camera and normalizing the light field depth map to a real space scale to obtain a first real depth map;
the binocular depth calibration module is used for shooting the calibration scene by using the first light field camera, acquiring the camera view angle of the first light field camera, and normalizing the binocular depth map to a real space scale according to the camera view angle to obtain a second real depth map;
and the depth map fusion module is used for acquiring the confidence of each pixel point in the light field depth map by using the gradient value of the light field depth change, and fusing the first real depth map and the second real depth map according to the confidence and the Markov random field to obtain a fused depth map.
The depth acquisition system based on the binocular light field camera further comprises:
a noise suppression module, which uses the color gradient of the fused depth map as the weighting factor of a smoothing term to smooth and denoise the fused depth map, obtaining the final depth map.
In this depth acquisition system based on the binocular light field camera, the binocular depth calibration module is configured such that:
the first light field camera is erected on a guide rail with its moving direction perpendicular to the ruler; the lengths of the ruler captured in the two shots are denoted l₁ and l₂, the camera displacement between the two shots is l, and the camera view angle β is obtained from the triangular relation described above.
the binocular light field camera based depth acquisition system wherein the field of view of the first light field camera is equal to the field of view of the second light field camera or the conventional digital camera.
In the depth acquisition system based on the binocular light field camera, the confidence is computed as:

$$c = 1 - \tan^{-1}(k \cdot dy)$$

where dy is the gradient value of the light field depth variation and k is a scaling factor.
The method fully exploits the light field camera's strength in computing near-scene depth and, combined with the traditional binocular vision algorithm's ability to recover far-scene depth, realizes a solution that accurately computes scene depth from near to far within a specified depth range. The scheme reduces the cost and volume of the physical system while improving the accuracy of the result.
Drawings
FIG. 1 is a block flow diagram of the steps of the present invention;
FIG. 2 is a schematic diagram of a fusion depth effect according to an embodiment of the present invention;
FIG. 3 is a schematic view of the placement of the camera rails;
FIG. 4 is a characteristic diagram of equivalent imaging of a light field camera;
FIG. 5 is a schematic diagram of a binocular camera for depth determination;
FIG. 6 is a schematic diagram of the camera view angle determination.
Detailed Description
In order to make the aforementioned features and effects of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Extracting a light field depth image: shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected.
A light field image can be captured with a light field camera. The light field image is four-dimensional data: compared with an ordinary image it additionally records the incident direction of light rays, which is equivalent to an ordinary camera photographing the same scene from different viewing angles. The light field can be represented with 4D coordinates (s, t, x, y), where (s, t) is the ray's angular (incidence) dimension and (x, y) is the ray's spatial (position) dimension. An ordinary image is the two-dimensional (x, y) slice obtained by holding (s, t) constant, and viewing the (x, y) image from different (s, t) shifts the viewing angle by a few degrees. If instead two coordinates are fixed and the ray pattern is observed in the (x, s) or (y, t) slice, an image composed of straight lines appears, called the epipolar plane image, from which the conversion formula between parallax and depth can be derived:
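The formula image is not preserved in this text; the standard epipolar-plane relation it refers to, in the usual notation, is

$$\frac{\Delta x}{\Delta s} = -\frac{f}{D}$$

so that the slope of a line in the epipolar plane image is inversely proportional to the scene depth,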
where f is the distance of the microlens array from the imaging plane in the light field camera and D is the distance of the object from the camera. According to the formula, the depth of the scene point can be deduced from the corresponding point relation in the epipolar line plane.
The invention extracts depth using the structure tensor method. To obtain the depth information, the slope of the epipolar plane image's lines at a fixed (y*, t*) is required, where y* and t* are given values of y and t; from it, the slope of a line at y* is obtained. First, the structure tensor J of the epipolar plane image must be computed:
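The formula image is again not preserved; assuming the standard definition, the structure tensor of the epipolar plane image is

$$J = G_\sigma * \begin{pmatrix} I_x I_x & I_x I_s \\ I_x I_s & I_s I_s \end{pmatrix} = \begin{pmatrix} J_{xx} & J_{xs} \\ J_{xs} & J_{ss} \end{pmatrix}$$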
where G_σ is a Gaussian kernel with variance σ used for smoothing and noise removal, and I_x and I_s are the gradient components of the epipolar plane image at (y*, t*) in the x and s directions, respectively. The direction of a line in the epipolar plane image can then be represented by a vector:
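One standard reconstruction of this direction vector, from the structure tensor's orientation (conventions vary by sign and axis order, so this is an assumption consistent with the definitions above):

$$n \propto \begin{pmatrix} \sin\varphi \\ \cos\varphi \end{pmatrix}, \qquad \varphi = \frac{1}{2}\arctan\!\left(\frac{2\,J_{xs}}{J_{ss} - J_{xx}}\right), \qquad \frac{\Delta x}{\Delta s} = \tan\varphi$$

The slope tan φ then maps to depth through the parallax-depth relation above.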
obtaining binocular visual depth: and shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view. The field angle of the first light field camera is equal to the field angle of the second light field camera or the conventional digital camera.
Two cameras placed at different positions produce parallax, and the magnitude of the parallax reflects scene depth, which makes binocular vision a common depth estimation approach. The camera placement affects both the accuracy and the range of the recovered depth, and the placement is used when computing the real depth values. Normally the shooting directions of the two cameras must be consistent; inconsistent directions cause errors in the real depth computation. The general binocular pipeline computes the local matching cost between the left and right views, aggregates it into a pixel-level cost, optimizes the cost, computes the predicted disparity, and refines the disparity map to obtain the binocular depth. Note that the cameras used in this step may be two light field cameras, or one light field camera plus one traditional digital camera, as in the sketch below.
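As a minimal sketch of this pipeline (an illustration, not the patent's implementation), OpenCV's semi-global matcher, a variant of the SGM algorithm mentioned in the background, can stand in for the stereo stage; the file names and all parameter values are assumptions:

```python
# Sketch of the binocular stage: local cost, aggregation, and disparity
# optimization are handled internally by OpenCV's semi-global matcher.
import cv2
import numpy as np

left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)    # view from the first (light field) camera
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)  # view from the second camera

block = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # search range; must be divisible by 16
    blockSize=block,
    P1=8 * block * block,      # SGM smoothness penalty for small disparity changes
    P2=32 * block * block,     # SGM smoothness penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```

The raw disparity is then converted to metric depth using the calibrated camera view angle, as described in the calibration steps below.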
Calculating the corresponding relation among the light field depth, the binocular depth and the real depth: shooting a calibration scene with a depth scale by using the first light field camera, and normalizing the light field depth map to a real spatial scale to obtain a first real depth map; and shooting the calibration scene by using the first light field camera, acquiring a camera view angle of the first light field camera, and normalizing the binocular depth map to a real spatial scale according to the camera view angle to obtain a second real depth map.
Light field depth calibration. The imaging formula of a convex lens is as follows:
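This is the standard thin-lens equation (the formula image is not preserved, but the definitions below pin it down):

$$\frac{1}{f} = \frac{1}{u} + \frac{1}{v}$$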
where f is the lens focal length, u is the distance from the scene point to the lens, i.e. the depth of field, and v is the distance from the imaging plane to the lens. By the equivalent-imaging property of the light field camera, with v the imaging-plane distance, refocusing changes that distance to av, where a is the refocusing factor, as shown in FIG. 4.
Combining the two points above, v = af, and the formula for the actual depth of field u follows:
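Substituting v = af into the thin-lens equation gives (a reconstruction of the omitted formula):

$$\frac{1}{f} = \frac{1}{u} + \frac{1}{af} \;\Longrightarrow\; u = \frac{a f}{a - 1}$$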
however, for a light field camera, since many key parameters are not provided by the manufacturer, and when the user uses the camera, some parameters of the camera are changed. It is therefore necessary to calibrate the camera with the actual depth before solving for the true depth.
Calibration consists of photographing a calibration checkerboard at fixed intervals of camera-to-board distance, for example every 10 cm, and using the ruler spacing in the checkerboard to read off the actual scene depth at each interval. Recording these depth values shows that they are not linearly related to the computed parallax: the relation is in fact a power-law transformation, so a linearly changing true depth becomes a nonlinearly changing parallax depth.
It is therefore necessary to fit the computed parallax with a suitable function. Inspired by the derivation of the actual depth of field and by observation of the fitting data, the invention uses a two-parameter fitting function:
the finally obtained y is the actual scene point depth, x is the calculated parallax, and a and b are the fitting parameters obtained in the fitting process.
The fitting function's parameters are solved by gradient descent with nonlinear least squares. The fitted parameters oscillate where x and y leave the calibrated domain (the domain refers to distances up to the farthest target placed between the camera and the calibration board), so the domain of the fitted function can only be chosen within the calibrated range. That is, the mapping to true depth obtained this way can be considered accurate within the calibration range, while outside it the reliability of the result drops sharply. First, a short-baseline camera produces almost no parallax for distant scenes; second, the limited calibration distance prevents accurately capturing the relation between far depth and computed parallax. Both effects limit a single light field camera's ability to recover true depth. A sketch of such a fit follows.
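The sketch below illustrates such a calibration fit with scipy's nonlinear least squares; the power-law model y = a·x^b is an assumed form chosen to match the power-law observation above, not the patent's exact function, and the calibration samples are made up for illustration:

```python
# Nonlinear least-squares fit mapping computed parallax to real depth.
import numpy as np
from scipy.optimize import curve_fit

def depth_model(x, a, b):
    """Assumed power-law mapping from parallax x to real depth y."""
    return a * np.power(x, b)

# Hypothetical calibration samples: parallax values and measured depths (cm).
parallax = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.3])
depth_cm = np.array([120.0, 90.0, 70.0, 55.0, 45.0, 38.0])

params, _ = curve_fit(depth_model, parallax, depth_cm, p0=(100.0, -1.0))
a_fit, b_fit = params

# The fit is only trustworthy inside the calibrated range, as noted above.
print(f"y = {a_fit:.2f} * x^{b_fit:.2f}")
```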
Acquiring binocular depth. A light field camera differs from an ordinary camera in structure: it images through an internal microlens array, and the output image must be processed by algorithms to obtain the final result. Although the final output image of a light field camera can be modeled with a pinhole camera model, the pinhole parameters cannot be deduced from the light field camera's own parameters: the raw parameters pass through processing steps such as extracting views from the microlenses and super-resolution enhancement, so the output image's parameters are not a simple transformation of the physical ones. The output image of the light field camera must therefore be calibrated again; this calibration only requires knowing the output image's view angle.
The image view angle is obtained as follows: as shown in FIG. 3, the camera 3 is mounted on the guide rail 2 and, moving perpendicular to the scale 1, takes a picture containing the scale 1 at regular intervals. From the output image, the length of the scale 1 that falls within the view can be measured. Denote the scale lengths in the two shots as l₁ and l₂ and the camera displacement between the shots as l; then, as shown in FIG. 6, the camera view angle β along the scale 1 direction follows from the triangular relation:
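The formula image is not preserved here; from the similar-triangles geometry just described (the visible scale length grows linearly with distance from the ruler), one consistent reconstruction is:

$$\beta = 2\arctan\!\left(\frac{l_2 - l_1}{2\,l}\right)$$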
the camera view is used here to indirectly calculate the actual depth in binocular vision.
The two cameras are placed apart with a distance L between them; as shown in FIG. 5, the binocular disparity is converted into the true depth:
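The conversion formula's image is not preserved; one reconstruction consistent with the quantities defined below is that the horizontal focal length in pixels is r / (2 tan(β/2)), so triangulation over the baseline L gives:

$$d_{sm} = \frac{L \cdot r}{2\, d \, \tan(\beta/2)}$$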
where r is the horizontal resolution and d_sm is the binocular depth obtained from the binocular disparity d after calibration against the real depth.
Depth fusion across algorithms: the confidence of each pixel in the light field depth map is obtained from the gradient of the light field depth, and the first and second real depth maps are fused according to the confidence and a Markov random field to obtain the fused depth map.
To fuse the depth maps produced by the different algorithms into a single result while preserving depth accuracy and continuity, the invention provides a globally optimized near/far-view algorithm that uses a Markov random field (MRF) to solve for an appropriate fused depth value at every pixel.
First, a confidence value must be defined for each pixel of the light field depth. For depths recovered by a single light field camera, as noted above, depth values beyond the calibration range (which refers to the farthest distance between the camera and the calibration board) and beyond the microlens baseline are unreliable. The farther the scene, the closer its disparity across the microlenses is to zero, so after algorithmic processing the depth values near zero-disparity pixels are also unreliable, and the resolution of far depth flattens out toward a limit. Based on this observation, the gradient of the light field depth is used as the basis for computing the light field confidence c. The initially obtained light field depth is used rather than the actual depth from the previous step, because the intermediate transformation introduces its own gradient, which exists only in the calibration process. The confidence function is defined as:
$$c = 1 - \tan^{-1}(k \cdot dy)$$
where dy is the gradient value of the light field depth variation and k is a scaling factor.
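A minimal sketch of this confidence map, assuming dy is taken as the per-row gradient of the raw light field depth map (the gradient operator and the clamping to [0, 1] are assumptions):

```python
import numpy as np

def lightfield_confidence(depth_lf: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Per-pixel confidence c = 1 - arctan(k * dy) of a light field depth map."""
    dy = np.abs(np.gradient(depth_lf, axis=0))  # gradient of the depth variation
    c = 1.0 - np.arctan(k * dy)
    return np.clip(c, 0.0, 1.0)                 # keep confidence in [0, 1]
```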
There are now two depths: the light field depth, suitable only for near scenes, and the binocular depth, more reliable for far scenes. Because of the relative placement of the two cameras, the binocular depth cannot be measured below a certain range; the camera positions must therefore be controlled so that this unmeasurable near range falls inside the range where the light field depth is computed accurately.
A Markov random field is used to select between and optimize the two depths; the following formulas constitute the MRF optimization model.
Since the confidence of the light field depth depends on distance, the data term is defined using the light field confidence:
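One common form of such a confidence-weighted data term (a reconstruction under assumptions; D_lf and D_st denote the calibrated light field and binocular depths, and x(p) the fused depth label at pixel p):

$$E_{data}(p) = c(p)\,\lvert x(p) - D_{lf}(p)\rvert + \bigl(1 - c(p)\bigr)\,\lvert x(p) - D_{st}(p)\rvert$$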
the invention also comprises the step of adopting the color gradient of the fusion depth map as a weighting factor of a smoothing item to carry out smoothing and noise suppression treatment on the fusion depth map to obtain a final depth map. The smoothing term may be used to limit the propagation of depth values to regions with large color differences while smoothing the depth within the object, reducing noise conditions. The weighting factor E of the smoothing term is here the color gradient of the imagesmooth(p, p '), where p' is the neighborhood of p:
$$E_{smooth}(p, p') = \lVert I(p) - I(p') \rVert^2$$
The total optimization cost E(x) is:
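A plausible overall MRF energy combining these terms (the exact combination is not preserved in this text; here the color term modulates the pairwise penalty and λ is a balancing weight):

$$E(x) = \sum_p E_{data}(p) + \lambda \sum_{(p,p')} e^{-E_{smooth}(p,p')}\,\lvert x(p) - x(p')\rvert$$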
Once the total cost is formed, minimizing it yields the per-pixel selection of the depth map, i.e. whether the light field depth or the binocular depth is chosen at each pixel. A simplified sketch of this selection follows.
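A deliberately simplified sketch of the selection, keeping only the data term so that each pixel independently takes whichever depth the confidence favors; the full method additionally minimizes the pairwise smoothness term over the Markov random field:

```python
import numpy as np

def fuse_depths(depth_lf: np.ndarray, depth_st: np.ndarray,
                confidence: np.ndarray) -> np.ndarray:
    """Per-pixel choice between light field and binocular depth.

    Keeps the light field depth where its confidence is high (near scenes)
    and falls back to the binocular depth elsewhere (far scenes).
    """
    return np.where(confidence >= 0.5, depth_lf, depth_st)
```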
In summary: the depth values obtained by the light field algorithm and the binocular vision algorithm are normalized to the real spatial scale by calibrated distance measurement and function fitting, which improves accuracy; because depth results from different algorithms have different reliabilities, the confidence criterion is designed from the algorithms' characteristics so that the light field depth has higher confidence near the scene and the binocular depth has higher confidence far away, and the higher-confidence depth value is taken as the current pixel's depth when the maps are fused; finally, combining each scene point's depth value and confidence, the per-pixel depths from the different algorithms are smoothly fused.
The following are system embodiments corresponding to the method embodiments above; this embodiment can be implemented in cooperation with the embodiments above. The technical details mentioned above remain valid here and are not repeated.
The invention also provides a depth acquisition system based on the binocular light field camera, which comprises the following components:
the light field depth acquisition module is used for shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected;
the binocular depth acquisition module is used for shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view;
the light field depth calibration module is used for shooting a calibration scene with a depth scale by using the first light field camera and normalizing the light field depth map to a real space scale to obtain a first real depth map;
the binocular depth calibration module is used for shooting the calibration scene by using the first light field camera, acquiring the camera view angle of the first light field camera, and normalizing the binocular depth map to a real space scale according to the camera view angle to obtain a second real depth map;
and the depth map fusion module is used for acquiring the confidence of each pixel point in the light field depth map by using the gradient value of the light field depth change, and fusing the first real depth map and the second real depth map according to the confidence and the Markov random field to obtain a fused depth map.
The depth acquisition system based on the binocular light field camera further comprises:
a noise suppression module, which uses the color gradient of the fused depth map as the weighting factor of a smoothing term to smooth and denoise the fused depth map, obtaining the final depth map.
In this depth acquisition system based on the binocular light field camera, the binocular depth calibration module is configured such that:
the first light field camera is erected on a guide rail with its moving direction perpendicular to the ruler; the lengths of the ruler captured in the two shots are denoted l₁ and l₂, the camera displacement between the two shots is l, and the camera view angle β is obtained from the triangular relation described above.
the binocular light field camera based depth acquisition system wherein the field of view of the first light field camera is equal to the field of view of the second light field camera or the conventional digital camera.
In the depth acquisition system based on the binocular light field camera, the confidence is computed as:

$$c = 1 - \tan^{-1}(k \cdot dy)$$

where dy is the gradient value of the light field depth variation and k is a scaling factor.
Claims (8)
1. A depth acquisition method based on a binocular light field camera is characterized by comprising the following steps:
step 1, shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected;
step 2, shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view;
step 3, shooting a calibration scene with a depth scale by using the first light field camera, and normalizing the light field depth map to a real space scale to obtain a first real depth map;
step 4, shooting the calibration scene by using the first light field camera, acquiring a camera view angle of the first light field camera, and normalizing the binocular depth map to a real spatial scale according to the camera view angle to obtain a second real depth map;
step 5, obtaining the confidence of each pixel point in the light field depth map by using the gradient value of the light field depth change:

$$c = 1 - \tan^{-1}(k \cdot dy)$$
where dy is the gradient value of the light field depth variation and k is the scaling factor;
and step 6, fusing the first real depth map and the second real depth map according to the confidence and the Markov random field to obtain a fused depth map.
2. The binocular light field camera based depth acquisition method of claim 1, comprising:
step 7, using the color gradient of the fused depth map as the weighting factor of a smoothing term, and smoothing and denoising the fused depth map to obtain a final depth map.
3. The binocular light field camera based depth acquisition method of claim 1 or 2, wherein the step 4 comprises:
the first light field camera is erected on a guide rail with its moving direction perpendicular to the ruler; the lengths of the ruler captured in the two shots are denoted l₁ and l₂, the camera displacement between the two shots is l, and the camera view angle β is obtained according to a triangular relation.
4. The binocular light field camera based depth acquisition method of claim 1, wherein the field angle of the first light field camera is equal to the field angle of the second light field camera or the traditional digital camera.
5. A depth acquisition system based on a binocular light field camera, comprising:
the light field depth acquisition module is used for shooting a scene to be detected by using a first light field camera to obtain a first view and a light field depth map of the scene to be detected;
the binocular depth acquisition module is used for shooting the scene to be detected by using a second light field camera or a traditional digital camera to obtain a second view of the scene to be detected, and obtaining a binocular depth map of the scene to be detected according to the parallax between the first view and the second view;
the light field depth calibration module is used for shooting a calibration scene with a depth scale by using the first light field camera and normalizing the light field depth map to a real space scale to obtain a first real depth map;
the binocular depth calibration module is used for shooting the calibration scene by using the first light field camera, acquiring the camera view angle of the first light field camera, and normalizing the binocular depth map to a real space scale according to the camera view angle to obtain a second real depth map;
the depth map fusion module is used for acquiring the confidence of each pixel point in the light field depth map by using the gradient value of the light field depth change, and fusing the first real depth map and the second real depth map according to the confidence and the Markov random field to obtain a fused depth map;
wherein the confidence is calculated as:

$$c = 1 - \tan^{-1}(k \cdot dy)$$
where dy is the gradient value of the light field depth variation and k is the scaling factor.
6. The binocular light field camera based depth acquisition system of claim 5, comprising:
a noise suppression module, which uses the color gradient of the fused depth map as the weighting factor of a smoothing term to smooth and denoise the fused depth map, obtaining a final depth map.
7. The binocular light field camera based depth acquisition system of claim 5 or 6, wherein the binocular depth calibration module comprises:
the first light field camera is erected on a guide rail with its moving direction perpendicular to the ruler; the lengths of the ruler captured in the two shots are denoted l₁ and l₂, the camera displacement between the two shots is l, and the camera view angle β is obtained according to a triangular relation.
8. The binocular light field camera based depth acquisition system of claim 5, wherein the field of view of the first light field camera is equal to the field of view of the second light field camera or the conventional digital camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097816.9A CN109840922B (en) | 2018-01-31 | 2018-01-31 | Depth acquisition method and system based on binocular light field camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097816.9A CN109840922B (en) | 2018-01-31 | 2018-01-31 | Depth acquisition method and system based on binocular light field camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109840922A CN109840922A (en) | 2019-06-04 |
CN109840922B (en) | 2021-03-02
Family
ID=66882930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810097816.9A Active CN109840922B (en) | 2018-01-31 | 2018-01-31 | Depth acquisition method and system based on binocular light field camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109840922B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028281B (en) * | 2019-10-22 | 2022-10-18 | 清华大学 | Depth information calculation method and device based on light field binocular system |
CN111292367B (en) * | 2020-02-18 | 2023-04-07 | 青岛联合创智科技有限公司 | Binocular camera depth map generation method with variable baseline |
CN111479075B (en) * | 2020-04-02 | 2022-07-19 | 青岛海信移动通信技术股份有限公司 | Photographing terminal and image processing method thereof |
CN112747692A (en) * | 2020-05-15 | 2021-05-04 | 奕目(上海)科技有限公司 | Three-dimensional measurement method and device for precise small hole |
CN111862184B (en) * | 2020-07-03 | 2022-10-11 | 北京航空航天大学 | Light field camera depth estimation system and method based on polar image color difference |
CN112258591B (en) * | 2020-12-08 | 2021-03-30 | 之江实验室 | Method for obtaining high-precision depth map by combining multiple depth cameras |
CN114173106B (en) * | 2021-12-01 | 2022-08-05 | 北京拙河科技有限公司 | Real-time video stream fusion processing method and system based on light field camera |
- 2018-01-31: application CN201810097816.9A filed in China; granted as CN109840922B (active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104662589A (en) * | 2012-08-21 | 2015-05-27 | 派力肯影像公司 | Systems and methods for parallax detection and correction in images captured using array cameras |
CN107431800A (en) * | 2015-02-12 | 2017-12-01 | 奈克斯特Vr股份有限公司 | For carrying out environment measurement and/or the method and apparatus using such measurement |
CN105654484A (en) * | 2015-12-30 | 2016-06-08 | 西北工业大学 | Light field camera external parameter calibration device and method |
CN106921824A (en) * | 2017-05-03 | 2017-07-04 | 丁志宇 | Circulating type mixes light field imaging device and method |
CN107403447A (en) * | 2017-07-14 | 2017-11-28 | 梅卡曼德(北京)机器人科技有限公司 | Depth image acquisition method |
Non-Patent Citations (3)
Title |
---|
A Comparative Study of Light Field Depth Estimation Methods; Gao Jun et al.; Pattern Recognition and Artificial Intelligence; 2016-09-30; Vol. 29, No. 9; full text *
Research on Depth Acquisition and Applications Based on Light Field Cameras; Pan Lei; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15 (No. 1); abstract, Chapter 3, Fig. 3.1 *
Depth Acquisition Based on Focus Detection with a Plenoptic Camera; Ji Changdong; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (No. 2); full text *
Also Published As
Publication number | Publication date |
---|---|
CN109840922A (en) | 2019-06-04 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |