
CN107657635B - Depth camera temperature error correction method and system - Google Patents

Depth camera temperature error correction method and system

Info

Publication number
CN107657635B
Authority
CN
China
Prior art keywords
depth
depth camera
image
temperature
deviation value
Prior art date
Legal status
Active
Application number
CN201710966716.0A
Other languages
Chinese (zh)
Other versions
CN107657635A (en)
Inventor
刘贤焯
许星
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN201710966716.0A priority Critical patent/CN107657635B/en
Publication of CN107657635A publication Critical patent/CN107657635A/en
Application granted granted Critical
Publication of CN107657635B publication Critical patent/CN107657635B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/026: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method and a system for correcting a temperature error of a depth camera. The method comprises: acquiring a spot image of a current target with the depth camera and calculating a deviation value between corresponding pixels of the spot image and a reference spot image; modeling the measurement error of the depth camera caused by temperature change; and correcting the current measured deviation value with the modeled measurement error and calculating a depth image from the corrected deviation value. A relation model between temperature change and depth measurement error is established, and by obtaining the change, caused by the temperature change, in the distance between the lens of the acquisition module and the image sensor, a depth value close to the true value can be calculated directly, so the measurement accuracy of the depth camera is improved.

Description

Depth camera temperature error correction method and system
Technical Field
The invention relates to the technical field of optics and electronics, in particular to a method and a system for correcting a temperature error of a depth camera.
Background
In some applications based on depth images, such as 3D modeling and dimension measurement, the depth camera is required to acquire depth images with high accuracy and precision. However, because a depth camera is composed of components such as a laser light source, optical elements, and an image sensor, it is inevitably affected by its own temperature and by the ambient temperature. Temperature can make the performance of the optical elements unstable and can also cause thermal deformation of the depth camera body; these factors degrade the quality of the depth image and thereby reduce the precision of the depth camera.
The invention aims at solving the problem and provides a method and a system for correcting temperature errors of a depth camera.
Disclosure of Invention
The invention provides a method and a system for correcting a temperature error of a depth camera, and aims to solve the problem that the temperature in the prior art reduces the accuracy of the depth camera.
In order to solve the above problems, the technical solution adopted by the present invention is as follows:
a method of depth camera temperature error correction, comprising: s1: acquiring a spot image of a current target by using a depth camera, and calculating a deviation value between corresponding pixels of the spot image and a reference spot image; s2: modeling a measurement error of the depth camera caused by a temperature change; s3: and correcting the current measurement deviation value by using the modeled measurement error, and calculating a depth image according to the corrected deviation value.
The step S2 includes: the temperature change ΔT causes the distance l between the lens of the acquisition module in the depth camera and the image sensor to change by Δl; and the distance change Δl causes a deviation Δd between the measured deviation value d′ and the true deviation value d, with Δd = Δl·tanθ, where θ is the component, along the baseline direction, of the angle between the line connecting the target to the optical center of the lens and the optical axis of the lens, and the baseline refers to the line connecting the acquisition module and the projection module in the depth camera.
Correcting the current measured deviation value with the modeled measurement error means correcting with the formula d′ = d − Δd; calculating the depth image from the corrected deviation value means calculating the depth image with the following formula:
Z = B·l·Z0 / (B·l + Z0·(d′ + Δl·tanθ))
the change in pitch Δ l satisfies the relationship: Δ l ═ k Δ T, where k is the coefficient of temperature change; the calculating of the depth image according to the corrected deviation value means calculating the depth image according to the following formula:
Z = B·l·Z0 / (B·l + Z0·(d′ + k·ΔT·tanθ))
the invention also provides a system for correcting the temperature error of the depth camera, which comprises the following components: the projection module is used for projecting the spot image to the target; the acquisition module is used for acquiring the spot image; a processor configured to perform the steps of: t1: calculating a deviation value between corresponding pixels of the speckle image and the reference speckle image; t2: modeling a measurement error of the depth camera caused by a temperature change; t3: and correcting the current measurement deviation value by using the modeled measurement error, and calculating a depth image according to the corrected deviation value.
The step T2 includes: the temperature change ΔT causes the distance l between the lens of the acquisition module in the depth camera and the image sensor to change by Δl, with Δl = k·ΔT, where k is the temperature change coefficient; and the distance change Δl causes a deviation Δd between the measured deviation value d′ and the true deviation value d, with Δd = Δl·tanθ, where θ is the component, along the baseline direction, of the angle between the line connecting the target to the optical center of the lens and the optical axis of the lens, and the baseline refers to the line connecting the acquisition module and the projection module in the depth camera. Correcting the current measured deviation value with the modeled measurement error means correcting with the formula d′ = d − Δd; calculating the depth image from the corrected deviation value means calculating the depth image with the following formula:
Z = B·l·Z0 / (B·l + Z0·(d′ + Δl·tanθ))
the invention has the beneficial effects that: the method and the system for correcting the depth camera error are provided, a relation model between temperature change and depth measurement error is established, a relatively real depth value can be directly calculated by obtaining the distance change between a lens of an acquisition module and an image sensor caused by the temperature change, and the measurement precision of the depth camera can be improved.
Drawings
FIG. 1 is a schematic diagram of the imaging principle and error formation of a depth camera according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a method for correcting a temperature error of a depth camera according to an embodiment of the invention.
FIG. 3 is a schematic diagram of temperature correction of a depth camera according to an embodiment of the invention.
FIG. 4 is a schematic diagram of temperature error correction using a plurality of depth cameras according to an embodiment of the invention.
FIG. 5 is a schematic diagram of another method for correcting a temperature error of a depth camera according to an embodiment of the invention.
FIG. 6 is a schematic diagram of a method for correcting the depth image by using the modeled measurement error in the embodiment of the present invention.
Among these, 11-projection module, 12-acquisition module, 13-plane, 14-target object, 15-beam, 16-ray, 17-ray, 121-image sensor position, 122-image sensor position, 21-depth camera, 22-real plane, 23-measured plane, 211-projection module, 212-acquisition module, 31-depth camera, 32-depth camera, 33-target, 34-acquired depth image, 35-acquired depth image.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience in describing the embodiments of the present invention and to simplify the description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
In a depth image acquired by a depth camera, the value at each pixel represents the distance between the corresponding spatial point and the depth camera. The quality of the depth image comprises precision and accuracy. Precision refers to the difference between multiple depth images acquired while the position between the depth camera and the target is fixed; the smaller the difference, the higher the precision, i.e., the better the measurement consistency and stability of the depth camera. Accuracy refers to the difference between the measured value and the true value; the smaller the difference, the higher the accuracy. The measured value is the value displayed in the depth image, and the true value is the value represented by the real distance between the target and the depth camera.
When a single depth camera is used for measurement, its temperature rises gradually after start-up, and until the temperature stabilizes the quality of the acquired depth images is poor; in particular the precision is poor, because the acquired depth images are very unstable, so depth images are usually taken for further use only after the temperature has stabilized. Accuracy, however, is ultimately determined by systematic errors, so the acquired depth image still deviates from its true value; for a structured light depth camera, the influence of temperature and similar factors causes the acquired depth image to be both offset from the true value and deflected.
In some applications, several depth cameras are used together to obtain a depth image with a larger field of view: each depth camera acquires a depth image independently, and the multiple depth images are then stitched and fused. When the accuracy of the acquired depth images is poor, stitching errors occur. For example, if two depth cameras simultaneously acquire depth images of different areas of a plane (the two depth images generally share a partially overlapping area), the fused depth image may no longer be planar, but curved, broken, and so on.
Therefore, correcting the temperature-induced errors of the depth camera is beneficial to improving the accuracy of the depth camera. In the following, referring to fig. 1-3, a method and system for temperature error correction are provided by taking a structured light depth camera as an example.
Structured light depth camera measurement principle
FIG. 1 is a schematic diagram illustrating the imaging principle of a depth camera according to an embodiment of the present invention. A structured light depth camera generally includes a projection module 11, an acquisition module 12, and a depth calculation processor (not shown). In one embodiment, the projection module 11 is a near-infrared spot projector that projects a near-infrared spot pattern into the target space; when projected onto a plane, the resulting spot image is random and uncorrelated. The acquisition module 12 is a near-infrared camera that acquires the spot image, which is then transmitted to the depth calculation processor for depth calculation to output a depth image. In FIG. 1 a single spot is taken as an example: the projection module 11 projects a light beam 15 to a point Z on the target object 14, which is imaged, through the lens B of the acquisition module 12, on the image sensor at position 121; ray 17 only represents this imaging relationship.
The depth camera also typically includes a memory that stores a reference spot image: the projection module 11 projects onto a plane 13 at a known distance Z0, and the acquisition module 12 acquires the resulting spot image; ray 16 represents this imaging relation. The spot on the plane 13 is imaged by the lens onto the image sensor at position 121; as can be seen from the figure, because the target object 14 is at a different depth than the reference plane 13, the position at which the same spot is imaged on the image sensor changes. During calculation, the depth calculation processor performs a matching calculation between the current target spot image and the reference spot image, finds the deviation value d between a spot (i.e., a pixel) in the current spot image and the corresponding spot (pixel) in the reference spot image, and calculates the depth value of the target according to the following formula (the depth values of all pixels finally constitute the depth image):
Z = B·l·Z0 / (B·l + Z0·d)    (1)
where Z is the depth value of the target; Z0 is the depth value of the plane 13; l is the distance between the lens of the acquisition module 12 and the image sensor, which is generally equal to the focal length of the lens (the lens here may also be a lens group); and B is the baseline length between the projection module 11 and the acquisition module 12.
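For illustration only, a minimal Python sketch of the depth calculation of formula (1), assuming the reconstructed form Z = B·l·Z0 / (B·l + Z0·d); the parameter values and units are illustrative assumptions, not taken from the patent:

    import numpy as np

    def depth_from_deviation(d, Z0, B, l):
        # Formula (1): Z = B*l*Z0 / (B*l + Z0*d)
        # d  : deviation between current and reference spot positions (sensor units)
        # Z0 : depth of the reference plane 13
        # B  : baseline length between projection module 11 and acquisition module 12
        # l  : lens-to-image-sensor distance (approximately the focal length)
        return (B * l * Z0) / (B * l + Z0 * d)

    # Toy values: reference plane at 0.5 m, 50 mm baseline, 2.5 mm focal length,
    # deviations expressed in metres on the sensor
    d = np.array([[0.0, 1.0e-5], [-1.0e-5, 2.0e-5]])
    Z = depth_from_deviation(d, Z0=0.5, B=0.05, l=0.0025)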
Temperature induced measurement error
While acquiring spot images, the acquisition module in the depth camera generates considerable heat, so the module temperature rises, and this has several effects. Temperature changes the behaviour of the lens in the acquisition module, for example shifting the focal length and lowering the contrast of the spot image; this is usually addressed by using a glass lens, since glass tolerates temperature changes better. In addition, temperature can thermally deform the acquisition module, for example shifting the lens and the image sensor away from their original positions, i.e., changing the value of l. In the embodiment shown in FIG. 1 it is assumed that the temperature change causes the distance l between the lens and the image sensor to change by Δl; for simplicity the lens position B is assumed to be unchanged and only the image sensor moves from position 121 to position 122. It should be understood that this is merely a simplification for ease of description, and other changes may occur in other embodiments.
Generally, at the time of reference spot image acquisition, the temperature of the acquisition module is T1 and the distance between the lens and the image sensor is l; when the target object is measured, the temperature of the acquisition module is T2, with T2 ≠ T1. This means that a point on the current target which should be imaged on the image sensor at position 121 is, because of the temperature change, finally imaged at position 122; see the intersection of the extension of ray 17 with the image sensor at position 122 in FIG. 1. When the matching calculation is performed, the measured deviation value d′ and the true deviation value d satisfy the following relationship:
d′=d-Δd (2)
In the above formula, Δd is the change in the deviation value caused by the change in position; as can be seen from FIG. 1, Δd = Δl·tanθ, where θ is the component, along the baseline direction (x), of the angle between the line connecting the target to the optical center B of the lens and the optical axis of the lens.
From the above analysis, the following two correction methods can be derived:
Internal correction method: substituting equation (2) into equation (1) gives an expression for the true depth value:
Z = B·l·Z0 / (B·l + Z0·(d′ + Δl·tanθ))    (3)
equation (3) illustrates the relationship between the true depth value Z and the measurement deviation value d' and the temperature influence Δ l, and thus it can be known that the true depth value Z can be directly calculated as long as the distance change Δ l caused by the temperature is obtained.
External correction method: taking the partial derivative of formula (1) gives the relationship between the deviation change Δd and the depth change ΔZ (i.e., the difference between the true depth value and the measured value):
ΔZ = −(Z²/(B·l))·Δd    (4)
as can be seen from the formula (4), the depth value change Δ Z can be calculated after Δ l is obtained, and then the measurement value can be corrected according to the depth value change Δ Z.
The difference between the two correction methods is as follows: the internal correction method models the relationship between the temperature change and the true depth value directly, and by acquiring the temperature change value it can output a true depth image directly. The external correction method models the relationship between the temperature change and the change in depth value, and computes the true depth value from the acquired temperature change value and the measured depth value. The two correction methods are described below with reference to specific embodiments.
Temperature error correction using internal correction
As shown in FIG. 2, the depth camera measurement error caused by temperature change can be corrected as follows:
firstly, a depth calculation processor in a depth camera is utilized to obtain a measurement deviation value of a spot image of a current target relative to a reference spot image;
secondly, modeling the measurement error of the depth camera caused by temperature change;
and finally, correcting the current measurement deviation value by using the modeled measurement error, and calculating a depth image according to the corrected deviation value.
The temperature modeling of measurement errors and the depth image correction will be explained below with emphasis on embodiments.
FIG. 3 is a schematic diagram of depth camera temperature correction according to an embodiment of the invention. If the real object is a plane 22, the plane reflected in the depth image measured by the depth camera 21 is, because of temperature changes, the plane 23. The plane 23 is deflected with respect to the plane 22, and there is a difference ΔZ between the measured depth value Z′ and the true depth value Z.
In the measurement-error modeling stage, based on the error analysis above and on formula (3), the deformation Δl of the acquisition module is assumed to be linear in the temperature change, i.e., Δl = k·ΔT, where k is the temperature change coefficient and ΔT is the difference between the current temperature of the acquisition module and its temperature at the time the reference spot image was acquired; the deformation Δl can therefore be obtained by measuring k and ΔT. In the error correction stage, the measured Δl is substituted into formula (3) to obtain the true depth image:
Z = B·l·Z0 / (B·l + Z0·(d′ + k·ΔT·tanθ))    (5)
The temperature change coefficient k can be obtained by running temperature-variation experiments on the acquisition module, or from the module's materials together with empirical formulas. ΔT can be obtained by placing a temperature sensor in the acquisition module; the sensor monitors the module temperature in real time and thus provides the measurement of ΔT.
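Putting formula (5) together with Δl = k·ΔT, a hedged sketch of the internal correction pipeline; the temperature readings, the value of k, and all other constants below are illustrative assumptions:

    import numpy as np

    def internal_temperature_correction(d_meas, tan_theta, T_now, T_ref, k, Z0, B, l):
        # Formula (5): Z = B*l*Z0 / (B*l + Z0*(d' + k*dT*tan(theta)))
        delta_T = T_now - T_ref          # difference vs. temperature at reference capture
        delta_l = k * delta_T            # linear thermal-deformation model
        d_true = d_meas + delta_l * tan_theta
        return (B * l * Z0) / (B * l + Z0 * d_true)

    # Illustrative values only: k from a thermal experiment, temperatures in deg C
    Z = internal_temperature_correction(
            d_meas=np.full((2, 2), 1.0e-5), tan_theta=np.full((2, 2), 0.1),
            T_now=45.0, T_ref=25.0, k=1.0e-7, Z0=0.5, B=0.05, l=0.0025)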
The correction method described above may be executed by the depth calculation processor. The corresponding depth camera temperature error correction system comprises a projection module, an acquisition module, a temperature sensor, a memory, and a depth calculation processor. The temperature sensor measures the temperature of the acquisition module in real time and transmits it to the processor; the memory stores the reference spot image, the temperature at which the reference spot image was acquired, and the temperature change coefficient k of the acquisition module. The processor receives the current spot image from the acquisition module, performs the matching calculation with the reference spot image to obtain the measured deviation value, and then, combining the temperature difference ΔT and the coefficient k, uses formula (5) to calculate the true depth value, thereby correcting the temperature error. The depth calculation processor may be integrated in the depth camera or may reside in another computing device independent of the depth camera.
Because the correction engine of this measurement-error correction system is located in the processor, the processor does not need to output a depth image and then correct it afterwards, so this approach can be called an internal correction method. However, the internal correction method requires an additional temperature sensor, which increases the cost and size of the depth camera, and it is usually based on a specific error-model theory that only accounts for errors from a particular cause, so it is difficult for it to capture the depth camera's temperature errors comprehensively. For this reason, some depth cameras provide no internal correction: the depth camera outputs the depth image directly without error correction, and the original depth image is then corrected by an additional correction engine according to the specific application scenario.
Temperature error correction using external correction
FIG. 4 is a schematic diagram of temperature error correction using multiple depth cameras according to another embodiment of the present invention. The depth cameras 31 and 32 measure the scene target 33 together and each acquire a partial depth image 34, 35 of the target 33; because of temperature errors, the acquired depth images 34 and 35 do not agree in depth with the real target 33. Generally, to fuse the depth images 34 and 35 into a single depth image, the depth cameras 31 and 32 share a partial common field of view. The depth image obtained by fusing images 34 and 35 can be expected to deviate from the real target, and the data in the two depth images will differ even within the common-view portion. In this embodiment the temperature error correction method and system are described with two depth cameras as an example, but they also apply when more than two depth cameras work together.
As shown in FIG. 5, following the idea that the depth images acquired by different depth cameras in the common field of view should be as consistent as possible, i.e., the idea of minimizing their difference, the depth camera temperature error correction method of this embodiment is as follows:
firstly, acquiring a depth image of a target by at least two depth cameras, wherein two adjacent depth cameras in the plurality of depth cameras have partial common view areas;
secondly, modeling the measurement error of each depth camera according to the temperature change;
and finally, correcting the acquired depth images by using the modeled measurement error so that the difference between the corrected depth images in the common field of view area is minimized.
The temperature error modeling and correction will be explained in detail below.
The relationship between temperature variation and the causes of error in the temperature error model was established in detail in the foregoing section. In addition, temperature error modeling here also includes establishing the error between the depth images of multiple depth cameras caused by temperature changes, which is described below.
As shown in FIG. 4, each of the depth images 34 and 35 acquired by depth cameras 31 and 32 deviates from the real target 33. In their common field of view region, the difference ΔZ between the depth images 34 and 35 can be expressed as
ΔZ = Z1 − Z2 = (Z1 − Z) − (Z2 − Z) = ΔZ1 − ΔZ2    (5)
If the difference is caused by temperature change, then equation (4) can be applied to the depth cameras 31 and 32, respectively, and equation (5) translates into:
ΔZ = (Z1²/(B·l1))·Δd1 − (Z2²/(B·l2))·Δd2    (6)
from Δ d ═ Δ l · tan θ, the above formula becomes:
Figure BDA0001436578510000092
in the above Z1、Z2The true depth values of the depth cameras 31 and 32 are referred to, but are often replaced with measured values in view of the depth values being unknown. It should be noted that the above formulas are only described with respect to a single point in the target, and in practical applications, the depth value and the angle value in the formulas generally represent two-dimensional arrays.
Error correction using the model of temperature change and depth error obtained above is essentially the process of solving for Δl1 and Δl2 given the known ΔZ of each pixel in the common region.
In one embodiment, m pixels are selected in the common-view region, where m is smaller than the total number of pixels in that region, and the error correction comprises the following steps (a code sketch of these steps is given after the list):
(1) Calculate the difference of the depth values of the selected pixels of the common region in the two depth images: ΔZ^(i) = Z1^(i) − Z2^(i), i ∈ [1, m].
(2) The difference is caused jointly by the errors of the two adjacent depth cameras; assuming the two errors are equal, ΔZ1^(i) = −ΔZ2^(i) = ΔZ^(i)/2.
(3) Substitute ΔZ1^(i) and ΔZ2^(i) into formula (4) to calculate Δl1^(i) and Δl2^(i) for each pixel, and average Δl1^(i) and Δl2^(i) over the pixels of the common region to obtain the final Δl1 and Δl2.
(4) Use the final Δl1 and Δl2, together with formula (4), to correct all pixel depth values of both depth cameras.
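A Python sketch of steps (1) to (4), under the sign convention assumed for the reconstructed formula (4); the array names, the boolean mask marking the common region, and the per-pixel tanθ maps are illustrative assumptions:

    import numpy as np

    def solve_dl(dZ_cam, Z, tan_theta, B, l):
        # Invert the per-camera error model of formula (4) for one camera:
        # dZ_cam ~ (Z**2 / (B*l)) * dl * tan(theta)  (measured minus true depth).
        # Pixels with tan(theta) close to zero should be excluded from the selection.
        return dZ_cam * B * l / (Z**2 * tan_theta)

    def correct_pair(Z1, Z2, tan1, tan2, common, B, l1, l2):
        # (1) depth differences of the selected common-region pixels
        dZ = Z1[common] - Z2[common]
        # (2) attribute half of the difference to each camera
        dZ1, dZ2 = dZ / 2.0, -dZ / 2.0
        # (3) per-pixel dl estimates, averaged over the common region
        dl1 = np.mean(solve_dl(dZ1, Z1[common], tan1[common], B, l1))
        dl2 = np.mean(solve_dl(dZ2, Z2[common], tan2[common], B, l2))
        # (4) correct every pixel of both depth images with the same error model
        Z1_corr = Z1 - (Z1**2 / (B * l1)) * dl1 * tan1
        Z2_corr = Z2 - (Z2**2 / (B * l2)) * dl2 * tan2
        return Z1_corr, Z2_corr, dl1, dl2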
The method above assumes that the error in the common region is contributed by both depth cameras, that their errors are identical, and that averaging over the common pixel region is adequate. In practice the two errors are generally not the same, so the accuracy of this correction method is limited.
In some embodiments, the error correction can be made more accurate with least squares. The mathematical model of this correction method is as follows: given m pixels in the common field of view of the two depth cameras and the corresponding depth values {Z1^(i), Z2^(i)}, i = 1, …, m,
determine the distance errors Δl1 and Δl2 caused by the temperature changes of the two depth cameras such that the cost function
E(Δl1, Δl2) = Σ_{i=1..m} [ ΔZ^(i) − (Z1^(i) − Z2^(i)) ]²
is minimized, where ΔZ^(i) is determined using equation (7) and k is a coefficient (in one embodiment, k is 0.5). Solution methods include gradient descent, Newton iteration, the normal equations method, and the like; the specific solution procedure is not described here.
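A minimal sketch of such a least-squares solve, assuming the cost is simply the sum of squared residuals between the measured per-pixel differences and the prediction of equation (7); the coefficient k and any weighting are omitted, and the data below are synthetic and illustrative:

    import numpy as np

    def solve_dl_least_squares(Z1, Z2, tan1, tan2, B, l1, l2):
        # Fit (dl1, dl2) so that equation (7) best explains the measured per-pixel
        # differences Z1 - Z2 over the m common-view pixels. The model is linear in
        # (dl1, dl2), so ordinary linear least squares (normal equations) suffices.
        A = np.column_stack([(Z1**2 / (B * l1)) * tan1,
                             -(Z2**2 / (B * l2)) * tan2])
        b = Z1 - Z2
        (dl1, dl2), *_ = np.linalg.lstsq(A, b, rcond=None)
        return dl1, dl2

    # Synthetic data for m = 100 common-view pixels
    rng = np.random.default_rng(0)
    m = 100
    Z1 = 0.5 + 0.01 * rng.random(m)
    Z2 = 0.5 + 0.01 * rng.random(m)
    tan1 = 0.05 + 0.20 * rng.random(m)
    tan2 = -0.05 - 0.20 * rng.random(m)
    dl1, dl2 = solve_dl_least_squares(Z1, Z2, tan1, tan2, B=0.05, l1=0.0025, l2=0.0025)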
A system for temperature error correction by the external correction method generally includes two or more depth cameras. The correction is usually performed by an additional correction engine, which may be a dedicated processor or a software module running in a processor; the correction engine is connected to each of the depth cameras and executes the correction method by receiving the depth data of the depth cameras and the relative positional relationship between them.
The specific error used to construct the mathematical model in the method above is simplified; the corresponding errors in practical applications are more complex. When the method disclosed by the invention is applied to a specific complex scene, it can be applied directly or reasonably adapted on the basis of the idea of the invention, and the accuracy of the depth camera can be improved to a certain extent. Reasonable variations for specific application scenarios based on the idea of the present invention shall be considered to fall within the protection scope of the present invention.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, all of which shall be considered to fall within the protection scope of the invention.

Claims (7)

1. A method of depth camera temperature error correction, comprising:
s1: acquiring a spot image of a current target by using a depth camera, and calculating a deviation value between corresponding pixels of the spot image and a reference spot image;
s2: modeling a measurement error caused by thermal deformation of an acquisition module of the depth camera due to temperature change, wherein the thermal deformation of the acquisition module is in a linear relation with the temperature change, and the temperature change refers to the difference between the current temperature of the acquisition module and the temperature of a reference spot image during acquisition;
in the process of acquiring the spot image by the acquisition module of the depth camera, the temperature change ΔT causes the distance l between the lens of the acquisition module in the depth camera and the image sensor to change by Δl while the position of the lens remains unchanged; and
the distance change Δl causes a deviation Δd of the measured deviation value d′ from the true deviation value d, with Δd = Δl·tanθ;
S3: correcting the current measurement deviation value by using the modeled measurement error, and calculating the true depth value of the depth image according to the corrected deviation value, wherein the true depth value of the depth image calculated according to the corrected deviation value refers to calculating the depth image according to the following formula:
Z = B·l·Z0 / (B·l + Z0·(d′ + Δl·tanθ))
where Z0 is the depth value of the reference plane at which the reference spot image was acquired; B is the length of the baseline between the projection module and the acquisition module, the baseline being the line connecting the acquisition module and the projection module in the depth camera; l is the distance between the lens of the acquisition module in the depth camera and the image sensor; Δl is the change, caused by the temperature change, in the distance between the lens of the acquisition module in the depth camera and the image sensor; θ is the angle between the line connecting the target to the optical center of the lens and the optical axis of the lens; and d′ is the measured deviation value.
2. The method of depth camera temperature error correction of claim 1, wherein the correcting a current measurement deviation value using the modeled measurement error refers to correcting using: d′ = d − Δd.
3. The method of depth camera temperature error correction of claim 2, wherein the change in distance Δl satisfies the relationship: Δl = k·ΔT, where k is the temperature change coefficient.
4. The method of claim 3, wherein said calculating the depth image based on the corrected deviation value means calculating the depth image based on the following equation:
Z = B·l·Z0 / (B·l + Z0·(d′ + k·ΔT·tanθ))
5. a system for depth camera temperature error correction, comprising:
the projection module is used for projecting the spot image to the target;
the acquisition module is used for acquiring the spot image;
a processor configured to perform the steps of:
t1: calculating a deviation value between corresponding pixels of the speckle image and the reference speckle image;
t2: modeling a measurement error of the depth camera caused by a temperature change; the temperature change delta T enables the distance l between a lens of an acquisition module in the depth camera and an image sensor to change delta l, the position of the lens is unchanged, and delta l is k delta T, wherein k is a temperature change coefficient; and the number of the first and second groups,
the change in distance Δ l causes a deviation Δ d of the measured deviation value d' from the true deviation value d, and Δ d ═ Δ l · tan θ;
t3: correcting the current measurement deviation value by using the modeled measurement error, and calculating a depth image according to the corrected deviation value, wherein the calculation of the depth image according to the corrected deviation value refers to the calculation of the depth image according to the following formula:
Z = B·l·Z0 / (B·l + Z0·(d′ + Δl·tanθ))
where Z0 is the depth value of the reference plane at which the reference spot image was acquired; B is the length of the baseline between the projection module and the acquisition module, the baseline being the line connecting the acquisition module and the projection module in the depth camera; l is the distance between the lens of the acquisition module in the depth camera and the image sensor; Δl is the change, caused by the temperature change, in the distance between the lens of the acquisition module in the depth camera and the image sensor; θ is the angle between the line connecting the target to the optical center of the lens and the optical axis of the lens; and d′ is the measured deviation value.
6. The system for depth camera temperature error correction according to claim 5, wherein the correcting the current measurement deviation value using the modeled measurement error refers to correcting using: d′ = d − Δd.
7. The system for depth camera temperature error correction according to claim 6, wherein said calculating a depth image from the corrected offset value is calculating a depth image according to:
Z = B·l·Z0 / (B·l + Z0·(d′ + k·ΔT·tanθ))
CN201710966716.0A 2017-10-17 2017-10-17 Depth camera temperature error correction method and system Active CN107657635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710966716.0A CN107657635B (en) 2017-10-17 2017-10-17 Depth camera temperature error correction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710966716.0A CN107657635B (en) 2017-10-17 2017-10-17 Depth camera temperature error correction method and system

Publications (2)

Publication Number Publication Date
CN107657635A CN107657635A (en) 2018-02-02
CN107657635B true CN107657635B (en) 2022-03-29

Family

ID=61118429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710966716.0A Active CN107657635B (en) 2017-10-17 2017-10-17 Depth camera temperature error correction method and system

Country Status (1)

Country Link
CN (1) CN107657635B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921903B (en) * 2018-06-28 2021-01-15 Oppo广东移动通信有限公司 Camera calibration method, device, computer readable storage medium and electronic equipment
CN108668078B (en) * 2018-04-28 2019-07-30 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
WO2019205890A1 (en) 2018-04-28 2019-10-31 Oppo广东移动通信有限公司 Image processing method, apparatus, computer-readable storage medium, and electronic device
CN108711167A (en) * 2018-05-15 2018-10-26 深圳奥比中光科技有限公司 Depth imaging system and temperature error correction method thereof
CN108917639A (en) * 2018-05-15 2018-11-30 深圳奥比中光科技有限公司 Depth imaging system and temperature error correction method thereof
CN108833884B (en) * 2018-07-17 2020-04-03 Oppo广东移动通信有限公司 Depth calibration method and device, terminal, readable storage medium and computer equipment
CN109213231B (en) * 2018-08-17 2022-01-14 奥比中光科技集团股份有限公司 Temperature control system
CN111381222B (en) * 2018-12-27 2024-08-06 浙江舜宇智能光学技术有限公司 Temperature calibration equipment and method for TOF (time of flight) camera module
CN109903241B (en) * 2019-01-31 2021-06-15 武汉市聚芯微电子有限责任公司 Depth image calibration method and system of TOF camera system
CN110400331B (en) * 2019-07-11 2021-04-30 Oppo广东移动通信有限公司 Depth map processing method and device
EP3859395A4 (en) 2019-12-06 2021-08-04 Shenzhen Goodix Technology Co., Ltd. Three-dimensional image sensing system and related electronic device, and time-of-flight ranging method
CN111426393B (en) * 2020-04-07 2021-11-16 北京迈格威科技有限公司 Temperature correction method, device and system
CN111473755B (en) * 2020-04-21 2022-05-17 福建汇川物联网技术科技股份有限公司 Remote distance measurement method and device
CN113096189B (en) * 2021-03-22 2024-04-05 西安交通大学 ITOF depth camera calibration and depth optimization method
CN114945091B (en) * 2022-07-19 2022-10-25 星猿哲科技(深圳)有限公司 Temperature compensation method, device and equipment of depth camera and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572505A (en) * 2010-11-03 2012-07-11 微软公司 In-home depth camera calibration
US20140340771A1 (en) * 2013-05-14 2014-11-20 JCD (Guang Zhou) Optical Corporation Limited Lens Barrel
CN105432079A (en) * 2013-07-17 2016-03-23 微软技术许可有限责任公司 Real-time registration of a stereo depth camera array
CN107133984A (en) * 2017-03-24 2017-09-05 深圳奥比中光科技有限公司 The scaling method and system of depth camera and main equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7397035B2 (en) * 2005-10-14 2008-07-08 Siemens Medical Solutions Usa, Inc. Scatter correction for time-of-flight positron emission tomography data
CN105323454B (en) * 2014-07-30 2019-04-05 光宝电子(广州)有限公司 Polyphaser image capture system and image reorganization compensation method
US9823352B2 (en) * 2014-10-31 2017-11-21 Rockwell Automation Safety Ag Absolute distance measurement for time-of-flight sensors
CN106959075B (en) * 2017-02-10 2019-12-13 深圳奥比中光科技有限公司 Method and system for accurate measurement using a depth camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572505A (en) * 2010-11-03 2012-07-11 微软公司 In-home depth camera calibration
US20140340771A1 (en) * 2013-05-14 2014-11-20 JCD (Guang Zhou) Optical Corporation Limited Lens Barrel
CN105432079A (en) * 2013-07-17 2016-03-23 微软技术许可有限责任公司 Real-time registration of a stereo depth camera array
CN107133984A (en) * 2017-03-24 2017-09-05 深圳奥比中光科技有限公司 The scaling method and system of depth camera and main equipment

Also Published As

Publication number Publication date
CN107657635A (en) 2018-02-02

Similar Documents

Publication Publication Date Title
CN107657635B (en) Depth camera temperature error correction method and system
CN107730561B (en) Depth camera temperature error correction method and system
US8718326B2 (en) System and method for extracting three-dimensional coordinates
US10764487B2 (en) Distance image acquisition apparatus and application thereof
US9330324B2 (en) Error compensation in three-dimensional mapping
US7479982B2 (en) Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
US9348111B2 (en) Automatic detection of lens deviations
EP2751521B1 (en) Method and system for alignment of a pattern on a spatial coded slide image
JP6209833B2 (en) Inspection tool, inspection method, stereo camera production method and system
JP2012088114A (en) Optical information processing device, optical information processing method, optical information processing system and optical information processing program
KR20150112362A (en) Imaging processing method and apparatus for calibrating depth of depth sensor
CN208863003U (en) A kind of double patterning optics 3D size marking component and its system
JPWO2018042954A1 (en) In-vehicle camera, adjustment method of in-vehicle camera, in-vehicle camera system
JP5487946B2 (en) Camera image correction method, camera apparatus, and coordinate transformation parameter determination apparatus
JPH11355813A (en) Device for deciding internal parameters of camera
Luhmann 3D imaging: how to achieve highest accuracy
CN109813278B (en) Ranging model correction method, ranging method and device and automatic driving system
CN109813277B (en) Construction method of ranging model, ranging method and device and automatic driving system
WO2015159835A1 (en) Image processing device, image processing method, and program
WO2019087253A1 (en) Stereo camera calibration method
CN105423942B (en) The modification method of biprism defect error in BSL 3D DIC systems
KR102167847B1 (en) System and Method for Calibration of Mobile Mapping System Using Laser Observation Equipment
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
KR101991512B1 (en) Height Measuring Method Using Laser Displacement Measuring Apparatus
JP4651550B2 (en) Three-dimensional coordinate measuring apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co., Ltd

Address before: A808, Zhongdi building, industry university research base, China University of Geosciences, No.8, Yuexing Third Road, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant