
CN108564620A - Scene depth estimation method for light field array camera - Google Patents

Scene depth estimation method for light field array camera

Info

Publication number
CN108564620A
CN108564620A (application CN201810256154.5A)
Authority
CN
China
Prior art keywords
depth
estimation
scene
light field
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810256154.5A
Other languages
Chinese (zh)
Other versions
CN108564620B (en)
Inventor
杨俊刚
王应谦
肖超
李骏
安玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201810256154.5A priority Critical patent/CN108564620B/en
Publication of CN108564620A publication Critical patent/CN108564620A/en
Application granted granted Critical
Publication of CN108564620B publication Critical patent/CN108564620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a scene depth estimation method for a light field array camera. Because objects at different depths in a three-dimensional scene correspond to different parallaxes, the method uses the sub-images acquired by the light field array camera to obtain an initial depth map of the current scene, together with a corresponding confidence distribution map, through variance analysis along the angular direction. Subsequently, a "confidence-guided depth propagation" algorithm denoises and filters the initial depth map while preserving its edges. With the method of the invention, the depth of the current scene can be estimated effectively, and good results are obtained even in weakly textured regions where depth estimation is difficult.

Description

A scene depth estimation method for a light field array camera
Technical field
The present invention relates to image processing, computer vision, and light field computational imaging, and in particular to a scene depth estimation method for light field array cameras.
Background art
In recent years, light field cameras based on light field theory and computational imaging have become a research hotspot. By capturing the light field of the real world, such a camera obtains three-dimensional information about the current scene in a single exposure; by processing the collected data it can realize functions that most conventional cameras cannot, such as super-resolution computational imaging and three-dimensional scene reconstruction. Most of these functions require an accurate estimate of the depth of the current scene.
As an important branch of computer vision, depth estimation has been studied extensively over the past decade and more. However, most of this research targets binocular cameras; if only two sub-cameras of an array camera are used for depth estimation, the effective information captured about the current scene is not fully exploited. In recent years, some researchers have proposed depth estimation methods for microlens-type light field cameras and achieved good results. But the equivalent baseline of a microlens-type light field camera is narrow, its light field sampling is dense in the angular direction, and its angular resolution is relatively high, so depth estimation algorithms based on microlens arrays also place high demands on angular resolution. Array cameras, by contrast, tend to have wider baselines and sparser angular sampling, and this lower angular resolution frequently leaves the depth estimate with heavy noise and depth mismatches. Applying a microlens-based depth estimation algorithm directly to an array camera therefore degrades its performance. What is needed is to make full use of the scene information captured by the array camera, suppressing noise and depth mismatches through full exploitation of this information, so that the depth of the current scene can be estimated well under sparse angular sampling.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the shortcomings of the prior art, to provide a scene depth estimation method for a light field array camera that makes full use of the scene information captured by the array camera, suppresses noise and depth mismatches by fully exploiting this information, and thereby estimates the depth of the current scene well under sparse angular sampling.
To solve the above technical problem, the technical solution adopted by the present invention is a scene depth estimation method for a light field array camera, characterized by comprising the following steps:
1) during refocusing, obtain an initial depth estimate of the scene by comparing variances along the angular direction;
2) during refocusing, compute the confidence distribution of the scene depth by analyzing the second-order deviation along the angular direction;
3) use the confidence distribution to filter noise and depth mismatches out of the initial depth estimation map;
4) reinforce the edges of the depth estimation map processed in step 3) to obtain an accurate depth estimation map of the current scene.
In step 1), the estimated value of the scene depth is

$$D(x) = \arg\min_{s_i \in S} \frac{1}{|W_D|} \sum_{x' \in W_D} V(x', s_i),$$

where $x$ is the abscissa of a pixel in the scene depth map; $W_D$ is a neighborhood around $x$ and $|W_D|$ is the number of pixels in the window; $U = \{u_1, u_2, \ldots, u_U\}$ are the camera positions in the array; $N$ is the depth resolution; $U$ is the number of cameras in the $u$ direction; $S = \{s_1, s_2, \ldots, s_N\}$ are the focusing factors; $L(u, x - s_i u)$ denotes the gray value, at abscissa $x - s_i u$, of the image obtained by the camera at coordinate $u$; and $V(x, s_i)$ is the variance of $L(u, x - s_i u)$ over the cameras $u$.
In step 2), the confidence distribution $R(x)$ is computed as

$$R(x) = \frac{1}{1 + \exp\!\big( -(\eta(x) - b)/a \big)},$$

where $a$ is an attenuation coefficient and $b$ is a translation coefficient; $\eta(x) = L_W(x) / \max\{L_W(x)\}$, with $L_W(x)$ a logarithmic compression of the second-order deviation

$$W(x) = \frac{1}{N} \sum_{i=1}^{N} \big( V(x, s_i) - \bar{V}(x) \big)^2, \qquad \bar{V}(x) = \frac{1}{N} \sum_{i=1}^{N} V(x, s_i),$$

in which $\varepsilon$ is a small quantity preventing the denominator from being zero, and $\bar{V}(x)$ is the mean of $V(x, s_i)$ over the focusing factors.

The value of $a$ is 0.3; the value of $b$ is 0.5.
The specific implementation of step 3) comprises the following steps:
1) extract from the initial depth estimation map $X$ a block $P_X$ of size $\rho \times \rho$ centered at $(i, j)$, and extract the corresponding block $P_R$ from the confidence distribution; $(i, j)$ is initialized to $(1, 1)$;
2) generate a mask $M$ through the normalization $M(x, y) = P_R(x, y) / \sum_{x, y} P_R(x, y)$, where $P_R(x, y)$ is the confidence value at row $x$, column $y$ of block $P_R$;
3) write the inner product of $P_X$ and $M$ into the filtered depth map $X_f$, i.e. $X_f(i, j) = \langle P_X, M \rangle$;
4) judge whether all pixels of $X$ have been traversed; if so, output the filtered depth map $X_f$; otherwise, return to step 1).
The specific implementation of step 4) comprises:
1) extract from the filtered depth map $X_f$ a block $P_X$ of size $\rho \times \rho$ centered at $(i, j)$, and extract the corresponding block $P_R$ from the dilated confidence distribution $R_e$;
2) generate a mask $M_b$ through confidence flipping and energy normalization, $M_b(x, y) = (1 - P_R(x, y)) / \sum_{x, y} (1 - P_R(x, y))$;
3) write the inner product of $P_X$ and $M_b$ into the accurate depth estimation map $X_b$, i.e. $X_b(i, j) = \langle P_X, M_b \rangle$;
4) if all pixels of $X_f$ have been traversed, output the accurate depth estimation map $X_b$; otherwise, return to step 1).
Compared with the prior art, the advantage of the present invention is that it can accurately estimate the depth distribution of the current scene from a light field array camera, enabling analysis of the scene's three-dimensional structure and improving the precision of functions such as scene three-dimensional reconstruction and super-resolution computational imaging based on light field array cameras. With the continued advancement and popularization of light field cameras, the method of the present invention has considerable significance and practical value.
Description of the drawings
Fig. 1 is a block diagram of the scene depth estimation algorithm for the light field array camera;
Fig. 2 is a schematic diagram of the two-plane model of the light field. Panel (a) shows the two-plane three-dimensional model: a ray passes through the image plane $\Pi = \{(x, y)\}$ and the camera plane $\Omega = \{(u, v)\}$, so its position and direction can be expressed by coordinates on these two planes. Here $(u, v)$ is the position of a camera in the array, and $(x, y)$ is a pixel of the two-dimensional image acquired by that camera; the four coordinates $u, v, x, y$ therefore index all data captured by the array. We write $L(u, v, x, y)$ for the gray value (range 0 to 255) at pixel $(x, y)$ of the image obtained by the camera at $(u, v)$, as determined by the captured scene; $L$ can be understood as a mapping from the four-dimensional light field coordinates (two camera coordinates, two image coordinates) to the gray values acquired by the camera array, so that $L(u, v, x, y)$ represents the current light field captured by the array. Panel (b) shows the projection of the light field onto the $xu$ plane: since the four-dimensional light field model is symmetric in $u$ versus $v$ and in $x$ versus $y$, we fix $y = y^*$ and $v = v^*$ without loss of generality and analyze the projection onto the $xu$ space. From the two-dimensional projection in Fig. 2(b), the scene depth $\gamma$ and the displacement difference $d = L_1 - L_2$ of corresponding pixels at that depth satisfy $d = fB/\gamma$ (with $f$ the focal length and $B$ the camera baseline), so depth estimation reduces to estimating the displacement difference between corresponding pixels of the array sub-images.
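As an illustration of this parameterization, the following minimal NumPy sketch indexes a synthetic 4-D light field and evaluates the disparity-depth relation $d = fB/\gamma$; the array size, focal length $f$, baseline $B$, and depth $\gamma$ are illustrative values, not taken from the patent.

```python
import numpy as np

# L(u, v, x, y): a 5x5 camera array, each camera delivering a 256x256
# gray image with values in 0..255 (all zeros here, as a placeholder).
U, V, X, Y = 5, 5, 256, 256
L = np.zeros((U, V, X, Y), dtype=np.uint8)

f, B = 0.05, 0.02            # focal length and baseline in meters (assumed)
gamma = 2.0                  # depth of a scene point in meters (assumed)
d = f * B / gamma            # displacement difference between adjacent views
print(f"disparity d = {d:.4e} image-plane units")
```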
Fig. 3 shows results obtained with the algorithm of the invention: (a) the scene image used in the experiment; (b) the scene confidence distribution map computed by the disclosed method; (c) the scene depth map obtained with the disclosed method.
Detailed description of the embodiments
Depth estimation is realized mainly by estimating the displacement difference, along the angular direction, of pixels at different positions, and displacement difference corresponds one-to-one to depth; in the present invention we therefore refer to displacement-difference estimation simply as depth estimation. Without loss of generality, the four-dimensional light field model $L(u, v, x, y)$ is reduced to the two-dimensional model $L(u, x)$ in the description of the steps, to ease the presentation of the algorithm. The invention analyzes the sub-images obtained by the array camera along the angular direction, performs a preliminary depth estimation by comparing variances, and estimates the confidence of the initial depth by analyzing the second-order deviation. It then uses a "confidence-guided depth propagation" algorithm to filter noise and depth mismatches out of the initial depth estimate. In this algorithm, the initial depth first flows forward under the guidance of the confidence map, so that noise and depth mismatches in low-confidence regions are replaced from the surrounding high-confidence regions; the depth then flows backward under the guidance of the dilated confidence map, which strengthens the edges of the depth map while filtering further. Through this confidence-guided depth propagation, an accurate depth profile of the current scene is obtained. The flow of the algorithm is shown in Fig. 1, and the steps are as follows:
1. An initial estimate of the scene depth is obtained by comparing variances along the angular direction during refocusing. The refocusing process can be expressed as

$$\bar{L}(x, s_i) = \frac{1}{U} \sum_{u} L(u, x - s_i u),$$

where $u = \{u_1, u_2, \ldots, u_U\}$ are the camera positions in the array (the camera at the central position is generally set as the reference camera); $S = \{s_1, s_2, \ldots, s_N\}$ are the focusing factors; $N$ is the depth resolution (the total number of depth layers in the depth direction); and $U$ is the number of cameras in the $u$ direction. The variance of the array sub-images along the angular direction can then be expressed as

$$V(x, s_i) = \frac{1}{U} \sum_{u} \big( L(u, x - s_i u) - \bar{L}(x, s_i) \big)^2.$$
Because in-focus regions tend to correspond to a small variance along the angular direction while out-of-focus regions tend to correspond to a large one, the variance can be computed under each focusing factor and compared, and the focusing factor with the minimum variance selected as the one corresponding to the depth of the pixel. To increase the robustness of the algorithm, the initial depth estimate is computed with the window-averaged variance:

$$D(x) = \arg\min_{s_i \in S} \frac{1}{|W_D|} \sum_{x' \in W_D} V(x', s_i).$$

Here $W_D$ is a neighborhood around $x$, usually set as a $7 \times 7$ window; $|W_D|$ is the number of pixels in the window; and $D(x)$ is the initial displacement estimate.
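The following minimal NumPy sketch illustrates this step under the simplified two-dimensional model $L(u, x)$; the integer-pixel shifts (the patent does not specify an interpolation scheme) and the helper name `initial_depth` are assumptions for illustration.

```python
import numpy as np

def initial_depth(L, u_pos, s_list, win=7):
    """L: (U, H, W) stack of sub-images; u_pos: (U,) camera coordinates on u;
    s_list: candidate focusing factors s_i. Returns per-pixel argmin index."""
    U, H, W = L.shape
    pad = win // 2
    cost = np.empty((len(s_list), H, W))
    for i, s in enumerate(s_list):
        # refocus at factor s: shift each sub-image by s*u along x
        # (integer shifts; np.roll wraps around, a simplification)
        shifted = np.stack([np.roll(L[k].astype(float),
                                    int(round(s * u_pos[k])), axis=1)
                            for k in range(U)])
        V = shifted.var(axis=0)                 # angular variance V(x, s)
        # average V over the |W_D| pixels of a win x win neighborhood
        Vp = np.pad(V, pad, mode='edge')
        boxes = np.lib.stride_tricks.sliding_window_view(Vp, (win, win))
        cost[i] = boxes.mean(axis=(-2, -1))
    return cost.argmin(axis=0)                  # best focusing factor per pixel
```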
2. The confidence of the scene depth is computed by analyzing the second-order deviation along the angular direction during refocusing. The second-order deviation of the angular variance of the light field array sub-images is obtained as

$$W(x) = \frac{1}{N} \sum_{i=1}^{N} \big( V(x, s_i) - \bar{V}(x) \big)^2, \qquad \bar{V}(x) = \frac{1}{N} \sum_{i=1}^{N} V(x, s_i),$$

where $\bar{V}(x)$ is the mean of the variances $V(x, s_i)$. $W(x)$ measures how strongly the variance curve $V(x, \cdot)$ fluctuates and thus gauges the confidence of the depth value. However, the scale of $W(x)$ is too large to apply directly, so it is processed further. A logarithmic compression is applied first, yielding $L_W(x)$; in this formula $\varepsilon$ is a small quantity that prevents the denominator from being zero. The result is then normalized:

$$\eta(x) = L_W(x) / \max\{L_W(x)\}.$$

Through this normalization the range of $\eta$ is limited to between 0 and 1. Finally, in order to divide $\eta$ into high-confidence and low-confidence regions, it is mapped through a sigmoid function:

$$R(x) = \frac{1}{1 + \exp\!\big( -(\eta(x) - b)/a \big)}.$$

Here $a$ is the attenuation coefficient, which controls the sensitivity of the curve, with value 0.3; $b$ is the translation coefficient, which controls the threshold, with value 0.5. The above computation yields the confidence distribution $R$ of the current scene's depth estimate.
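A hedged NumPy sketch of this confidence computation follows; the exact logarithmic-compression formula is not legible in the source, so the form $\ln(1 + W/\varepsilon)$ is an assumption, while $a = 0.3$ and $b = 0.5$ follow the patent.

```python
import numpy as np

def confidence(V_stack, a=0.3, b=0.5, eps=1e-8):
    """V_stack: (N, H, W) angular variances V(x, s_i), one slice per s_i."""
    Vbar = V_stack.mean(axis=0)                    # mean of V over the s_i
    Wx = ((V_stack - Vbar) ** 2).mean(axis=0)      # second-order deviation W(x)
    LW = np.log1p(Wx / eps)                        # assumed log compression
    eta = LW / max(LW.max(), 1e-12)                # normalize eta into [0, 1]
    return 1.0 / (1.0 + np.exp(-(eta - b) / a))    # sigmoid split of confidence
```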
3. The "confidence-guided depth propagation" algorithm is used to filter noise and depth mismatches out of the initial depth estimation map. With the computed confidence distribution $R$, a global optimization is realized by minimizing the objective function

$$\hat{X} = \arg\min_{X} \; E_R(X) + \lambda J_R(X),$$

where $X_0$ is the vectorized initial displacement estimate, $X$ is the optimization variable, $R$ represents the confidence distribution, and $\mathbf{1}$ is the all-ones vector; $X$, $X_0$, $R$ and $\mathbf{1}$ have the same dimension. The optimized depth estimation map $\hat{X}$ is obtained by minimizing this objective, which consists of the fidelity term $E_R(X)$ and the regularization term $J_R(X)$; $\lambda$ is the regularization coefficient, which controls the strength of the filtering. The matrix $H$ is the operator that propagates depth values from high-confidence regions into low-confidence regions, and the product $HX$ is realized by Algorithm 1 under confidence guidance.
Note: boundaries are handled by padding with the edge values.
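The patch-wise filtering of Algorithm 1 (steps 1) to 4) of claim 5) can be sketched as follows; the patch size $\rho = 7$ is an assumed default, the edge-value padding follows the note above, and the direct double loop is written for clarity rather than speed.

```python
import numpy as np

def propagate(X, R, rho=7):
    """Forward pass: inner product of each depth patch with its normalized
    confidence patch, so depth flows from high- to low-confidence pixels."""
    pad = rho // 2
    Xp = np.pad(X.astype(float), pad, mode='edge')  # pad boundary with edges
    Rp = np.pad(R, pad, mode='edge')
    Xf = np.empty(X.shape, dtype=float)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            PX = Xp[i:i + rho, j:j + rho]           # depth block P_X
            PR = Rp[i:i + rho, j:j + rho]           # confidence block P_R
            M = PR / (PR.sum() + 1e-12)             # normalized confidence mask
            Xf[i, j] = (PX * M).sum()               # inner product <P_X, M>
    return Xf
```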
4. The "confidence-guided depth backflow" algorithm is used to reinforce the edges of the depth map. The specific implementation is as follows.

Although minimizing the objective function above effectively suppresses the noise and depth mismatches in weakly textured regions, it also lets displacement values diffuse around the edges of high-confidence regions. To preserve edge strength, an edge-strengthening measure is introduced here, consisting of confidence-region dilation and depth backflow.
The confidence-region dilation is realized by a maximum filter. We define $R_e$ as the dilated confidence distribution map, any pixel of which is computed as

$$R_e(i, j) = \max_{(x, y) \in P_{i,j}} R(x, y),$$

where $P_{i,j}$ is the block centered on $R(i, j)$: the maximum filter extracts the maximum value within the neighborhood $P_{i,j}$ of $R(i, j)$ and outputs it as the filtered result. Under this operation, regions of the confidence distribution map that originally had high confidence expand, while regions of low confidence shrink.
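A minimal sketch of this maximum filtering, assuming a square $\rho \times \rho$ block $P_{i,j}$ and edge-value padding:

```python
import numpy as np

def dilate_confidence(R, rho=7):
    """R_e(i, j): maximum confidence inside the rho x rho block P_{i,j}."""
    pad = rho // 2
    Rp = np.pad(R, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(Rp, (rho, rho))
    return windows.max(axis=(-2, -1))   # high-confidence regions expand
```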
Since the blurring of an edge concentrates mainly on its low-confidence side, while the other side is protected by the fidelity term and suffers no great loss during optimization, a confidence-guided depth backflow strategy is adopted here for edge enhancement. Concretely, it is realized by minimizing an objective function of the same form as above, in which $\lambda_b$ is the regularization weight and the matrix $H_b$ is a space-variant filtering operator that assigns larger weights in low-confidence regions. The filtering is given in Algorithm 2 below.
Note: boundaries are handled by padding with the edge values.
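Algorithm 2 (steps 1) to 4) of claim 6) differs from the forward pass only in its mask, which flips the dilated confidence before normalizing; a sketch under the same assumptions:

```python
import numpy as np

def backflow(Xf, Re, rho=7):
    """Backward pass over the dilated confidence R_e: the flipped mask makes
    depth flow back toward high-confidence pixels, sharpening edges."""
    pad = rho // 2
    Xp = np.pad(Xf, pad, mode='edge')
    Rp = np.pad(Re, pad, mode='edge')
    Xb = np.empty(Xf.shape, dtype=float)
    for i in range(Xf.shape[0]):
        for j in range(Xf.shape[1]):
            PX = Xp[i:i + rho, j:j + rho]
            PR = Rp[i:i + rho, j:j + rho]
            Mb = (1.0 - PR) / ((1.0 - PR).sum() + 1e-12)  # flipped, normalized
            Xb[i, j] = (PX * Mb).sum()                    # inner product
    return Xb
```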

Claims (6)

1. A scene depth estimation method for a light field array camera, characterized by comprising the following steps:
1) during refocusing, obtain an initial depth estimate of the scene by comparing variances along the angular direction;
2) during refocusing, compute the confidence distribution of the scene depth by analyzing the second-order deviation along the angular direction;
3) use the confidence distribution to filter noise and depth mismatches out of the initial depth estimation map;
4) reinforce the edges of the depth estimation map processed in step 3) to obtain an accurate depth estimation map of the current scene.
2. The scene depth estimation method for a light field array camera according to claim 1, characterized in that, in step 1), the estimated value of the scene depth is $D(x) = \arg\min_{s_i \in S} \frac{1}{|W_D|} \sum_{x' \in W_D} V(x', s_i)$, where $x$ is the abscissa of a pixel in the scene depth map; $W_D$ is a neighborhood around $x$ and $|W_D|$ is the number of pixels in the window; $U = \{u_1, u_2, \ldots, u_U\}$ are the camera positions in the array; $N$ is the depth resolution; $U$ is the number of cameras in the $u$ direction; $S = \{s_1, s_2, \ldots, s_N\}$ are the focusing factors; $L(u, x - s_i u)$ denotes the gray value, at abscissa $x - s_i u$, of the image obtained by the camera at coordinate $u$; and $V(x, s_i)$ is the variance of $L(u, x - s_i u)$ over the cameras.
3. The scene depth estimation method for a light field array camera according to claim 2, characterized in that, in step 2), the confidence distribution $R(x)$ is computed as

$$R(x) = \frac{1}{1 + \exp\!\big( -(\eta(x) - b)/a \big)},$$

where $a$ is an attenuation coefficient; $b$ is a translation coefficient; $\eta(x) = L_W(x)/\max\{L_W(x)\}$, with $L_W(x)$ the logarithmic compression of the second-order deviation $W(x)$ of the angular variance; $\varepsilon$ is a small quantity preventing the denominator from being zero; and $\bar{V}(x)$ is the mean of $V(x, s_i)$ over the focusing factors.
4. The scene depth estimation method for a light field array camera according to claim 3, characterized in that the value of $a$ is 0.3 and the value of $b$ is 0.5.
5. The scene depth estimation method for a light field array camera according to claim 1, characterized in that the specific implementation of step 3) comprises the following steps:
1) extract from the initial depth estimation map $X$ a block $P_X$ of size $\rho \times \rho$ centered at $(i, j)$, and extract the corresponding block $P_R$ from the confidence distribution; $(i, j)$ is initialized to $(1, 1)$;
2) generate a mask $M$ through the normalization $M(x, y) = P_R(x, y) / \sum_{x, y} P_R(x, y)$, where $P_R(x, y)$ is the confidence value at row $x$, column $y$ of block $P_R$;
3) write the inner product of $P_X$ and $M$ into the filtered depth map $X_f$, i.e. $X_f(i, j) = \langle P_X, M \rangle$;
4) judge whether all pixels of $X$ have been traversed; if so, output the filtered depth map $X_f$; otherwise, return to step 1).
6. The scene depth estimation method for a light field array camera according to claim 5, characterized in that the specific implementation of step 4) comprises:
1) extract from the filtered depth map $X_f$ a block $P_X$ of size $\rho \times \rho$ centered at $(i, j)$, and extract the corresponding block $P_R$ from the dilated confidence distribution $R_e$;
2) generate a mask $M_b$ through confidence flipping and energy normalization, $M_b(x, y) = (1 - P_R(x, y)) / \sum_{x, y} (1 - P_R(x, y))$;
3) write the inner product of $P_X$ and $M_b$ into the accurate depth estimation map $X_b$, i.e. $X_b(i, j) = \langle P_X, M_b \rangle$;
4) if all pixels of $X_f$ have been traversed, output the accurate depth estimation map $X_b$; otherwise, return to step 1).
CN201810256154.5A 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera Active CN108564620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810256154.5A CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810256154.5A CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Publications (2)

Publication Number Publication Date
CN108564620A true CN108564620A (en) 2018-09-21
CN108564620B CN108564620B (en) 2020-09-04

Family

ID=63533407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810256154.5A Active CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Country Status (1)

Country Link
CN (1) CN108564620B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 A kind of interacting depth estimation method based on light field data
CN110197506A (en) * 2019-05-30 2019-09-03 大连理工大学 A kind of light field depth estimation method based on variable height rotating parallel quadrangle
CN110276371A (en) * 2019-05-05 2019-09-24 杭州电子科技大学 A kind of container angle recognition methods based on deep learning
CN110400342A (en) * 2019-07-11 2019-11-01 Oppo广东移动通信有限公司 Parameter regulation means, device and the electronic equipment of depth transducer
CN111028281A (en) * 2019-10-22 2020-04-17 清华大学 Depth information calculation method and device based on light field binocular system
CN111091601A (en) * 2019-12-17 2020-05-01 香港中文大学深圳研究院 PM2.5 index estimation method for outdoor mobile phone image in real time in daytime
US11205278B2 (en) 2019-07-11 2021-12-21 Shenzhen Heytap Technology Corp., Ltd. Depth image processing method and apparatus, and electronic device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279961A (en) * 2013-05-22 2013-09-04 浙江大学 Video segmentation method based on depth recovery and motion estimation
CN104966289A (en) * 2015-06-12 2015-10-07 北京工业大学 Depth estimation method based on 4D light field
CN105023249A (en) * 2015-06-26 2015-11-04 清华大学深圳研究生院 Highlight image restoration method and device based on optical field
CN105139401A (en) * 2015-08-31 2015-12-09 山东中金融仕文化科技股份有限公司 Depth credibility assessment method for depth map
CN105184808A (en) * 2015-10-13 2015-12-23 中国科学院计算技术研究所 Automatic segmentation method for foreground and background of optical field image
US9414048B2 (en) * 2011-12-09 2016-08-09 Microsoft Technology Licensing, Llc Automatic 2D-to-stereoscopic video conversion
CN106340041A (en) * 2016-09-18 2017-01-18 杭州电子科技大学 Light field camera depth estimation method based on cascade shielding filtering filter
US20170213070A1 (en) * 2016-01-22 2017-07-27 Qualcomm Incorporated Object-focused active three-dimensional reconstruction
CN107038719A (en) * 2017-03-22 2017-08-11 清华大学深圳研究生院 Depth estimation method and system based on light field image angle domain pixel

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9414048B2 (en) * 2011-12-09 2016-08-09 Microsoft Technology Licensing, Llc Automatic 2D-to-stereoscopic video conversion
CN103279961A (en) * 2013-05-22 2013-09-04 浙江大学 Video segmentation method based on depth recovery and motion estimation
CN104966289A (en) * 2015-06-12 2015-10-07 北京工业大学 Depth estimation method based on 4D light field
CN105023249A (en) * 2015-06-26 2015-11-04 清华大学深圳研究生院 Highlight image restoration method and device based on optical field
CN105139401A (en) * 2015-08-31 2015-12-09 山东中金融仕文化科技股份有限公司 Depth credibility assessment method for depth map
CN105184808A (en) * 2015-10-13 2015-12-23 中国科学院计算技术研究所 Automatic segmentation method for foreground and background of optical field image
US20170213070A1 (en) * 2016-01-22 2017-07-27 Qualcomm Incorporated Object-focused active three-dimensional reconstruction
CN106340041A (en) * 2016-09-18 2017-01-18 杭州电子科技大学 Light field camera depth estimation method based on cascade shielding filtering filter
CN107038719A (en) * 2017-03-22 2017-08-11 清华大学深圳研究生院 Depth estimation method and system based on light field image angle domain pixel

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIE CHEN et al.: "Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions", arXiv *
MICHAEL W. TAO et al.: "Depth from Combining Defocus and Correspondence Using Light-Field Cameras", 2013 IEEE International Conference on Computer Vision *
XIAO Zhaolin (肖照林): "Research on Light Field Imaging and Depth Estimation Methods Based on Camera Arrays", China Doctoral Dissertations Full-text Database *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 A kind of interacting depth estimation method based on light field data
CN110276371A (en) * 2019-05-05 2019-09-24 杭州电子科技大学 A kind of container angle recognition methods based on deep learning
CN110276371B (en) * 2019-05-05 2021-05-07 杭州电子科技大学 Container corner fitting identification method based on deep learning
CN110197506A (en) * 2019-05-30 2019-09-03 大连理工大学 A kind of light field depth estimation method based on variable height rotating parallel quadrangle
CN110197506B (en) * 2019-05-30 2023-02-17 大连理工大学 Light field depth estimation method based on variable-height rotating parallelogram
CN110400342A (en) * 2019-07-11 2019-11-01 Oppo广东移动通信有限公司 Parameter regulation means, device and the electronic equipment of depth transducer
CN110400342B (en) * 2019-07-11 2021-07-06 Oppo广东移动通信有限公司 Parameter adjusting method and device of depth sensor and electronic equipment
US11205278B2 (en) 2019-07-11 2021-12-21 Shenzhen Heytap Technology Corp., Ltd. Depth image processing method and apparatus, and electronic device
CN111028281A (en) * 2019-10-22 2020-04-17 清华大学 Depth information calculation method and device based on light field binocular system
CN111028281B (en) * 2019-10-22 2022-10-18 清华大学 Depth information calculation method and device based on light field binocular system
CN111091601A (en) * 2019-12-17 2020-05-01 香港中文大学深圳研究院 PM2.5 index estimation method for outdoor mobile phone image in real time in daytime
CN111091601B (en) * 2019-12-17 2023-06-23 香港中文大学深圳研究院 PM2.5 index estimation method for real-time daytime outdoor mobile phone image

Also Published As

Publication number Publication date
CN108564620B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN108564620A (en) Scene depth estimation method for light field array camera
CN108470370B (en) Method for jointly acquiring three-dimensional color point cloud by external camera of three-dimensional laser scanner
US10353271B2 (en) Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
US10217293B2 (en) Depth camera-based human-body model acquisition method and network virtual fitting system
CN111462206B (en) Monocular structure light depth imaging method based on convolutional neural network
CN103426200B (en) Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial photo sequence image
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN110120071A (en) A kind of depth estimation method towards light field image
CN112991420A (en) Stereo matching feature extraction and post-processing method for disparity map
CN113989758B (en) Anchor guide 3D target detection method and device for automatic driving
CN106897986A (en) A kind of visible images based on multiscale analysis and far infrared image interfusion method
CN115147709B (en) Underwater target three-dimensional reconstruction method based on deep learning
Lee et al. Improving focus measurement via variable window shape on surface radiance distribution for 3D shape reconstruction
CN110189347A (en) A kind of method and terminal measuring object volume
CN114299405A (en) Unmanned aerial vehicle image real-time target detection method
CN114549669A (en) Color three-dimensional point cloud obtaining method based on image fusion technology
Chen et al. Scene segmentation of remotely sensed images with data augmentation using U-net++
CN115471749A (en) Multi-view multi-scale target identification method and system for extraterrestrial detection unsupervised learning
CN113034371A (en) Infrared and visible light image fusion method based on feature embedding
Mo et al. Soft-aligned gradient-chaining network for height estimation from single aerial images
He et al. A novel way to organize 3D LiDAR point cloud as 2D depth map height map and surface normal map
CN113670268B (en) Binocular vision-based unmanned aerial vehicle and electric power tower distance measurement method
Zhang et al. A Robust Multi‐View System for High‐Fidelity Human Body Shape Reconstruction
CN108564594A (en) A kind of target object three-dimensional space motion distance calculating method
CN114972276A (en) Automatic driving distance judgment algorithm for vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant