CN113470096A - Depth measurement method and device and terminal equipment - Google Patents

Depth measurement method and device and terminal equipment

Info

Publication number
CN113470096A
Authority
CN
China
Prior art keywords
depth
pixel point
fusion
frequency
relative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010244769.3A
Other languages
Chinese (zh)
Inventor
向显嵩
江超
刘昆
王世通
鲍文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010244769.3A
Publication of CN113470096A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/4808 Evaluating distance, position or velocity data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of measurement and provides a depth measurement method, a depth measurement device and terminal equipment. The method comprises the following steps: respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points; determining a plurality of candidate fusion depths of each pixel point, and determining a relative depth of each pixel point; identifying the fusion depth of a target pixel point according to the plurality of candidate fusion depths and the relative depth of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points; and generating a target depth image of the current scene based on the fusion depths of the target pixel points. Because the method takes the change of the front-back relationship between pixel points into account in the fusion calculation, it can effectively preserve the original depth logic of the scene and improve the accuracy of depth measurement.

Description

Depth measurement method and device and terminal equipment
Technical Field
The application belongs to the technical field of measurement, and particularly relates to a depth measurement method and device and terminal equipment.
Background
A Time of Flight (TOF) camera is a detector that measures the depth of an object (i.e., its distance relative to the TOF camera) by measuring the time of flight of a probe wave. A TOF camera can detect the depth of objects in a scene and outputs a depth map, in which the value of each pixel point is the depth of that point.
The measurement range of a TOF camera is determined by the modulation frequency of its probe wave and is equal to half the speed of light divided by the modulation frequency. For example, the range is 7.5 meters in the measurement mode using a 20 megahertz (MHz) modulation frequency, and 2.5 meters in the measurement mode using a 60 MHz modulation frequency. When the depth of an object exceeds the range corresponding to a certain frequency, the depth output in that measurement mode is a 'folded' depth, i.e., the true depth modulo the measurement range. For example, an object at 3.5 meters produces 1 fold in the 60 MHz measurement mode because its depth exceeds the 2.5-meter range, so a depth measurement of 1 meter is output; conversely, if the output depth measurement is 1 meter, the true depth of the object may be 1 meter, 3.5 meters (folded 1 time), 6 meters (folded 2 times), and so on. Therefore, when the true depth of an object is recovered from the depth measurement output by the TOF camera, an incorrectly judged number of folds makes the finally determined true depth value wrong as well, which seriously affects the accuracy of depth measurement.
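For illustration, a minimal sketch of this folding arithmetic, using the example frequencies above and approximating the speed of light as 3 x 10^8 m/s to match the example figures (the function names are illustrative):

```python
# Sketch of the folding arithmetic described above (example values only).
C = 3.0e8  # speed of light in m/s, approximated as in the examples above


def measurement_range(mod_freq_hz: float) -> float:
    """Unambiguous range of a TOF measurement mode:
    half the speed of light divided by the modulation frequency."""
    return (C / 2.0) / mod_freq_hz


def folded_measurement(true_depth_m: float, mod_freq_hz: float) -> float:
    """Depth reported when the true depth exceeds the range:
    the true depth modulo the measurement range."""
    return true_depth_m % measurement_range(mod_freq_hz)


print(measurement_range(20e6), measurement_range(60e6))  # 7.5 2.5
print(folded_measurement(3.5, 60e6))                      # 1.0 (one fold at 60 MHz)
```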
Disclosure of Invention
The embodiment of the application provides a depth measurement method, a depth measurement device and terminal equipment, and the accuracy of depth measurement can be improved.
In a first aspect, an embodiment of the present application provides a depth measurement method, including:
the method comprises the steps of respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and, correspondingly, the measurement range of the first frequency is smaller than that of the second frequency. For each pixel point in the depth image, a plurality of candidate fusion depths of the pixel point and the relative depth of the pixel point can be determined, so that the fusion depth of a target pixel point can be identified according to the candidate fusion depths and the relative depth of the target pixel point, where the target pixel point is any one of the pixel points. After the fusion depth of each target pixel point is identified, a target depth image of the current scene can be generated based on the fusion depths of the target pixel points.
Because the depth measurement method and device combine the relative depth information of each pixel point and take the change of the front-back relationship between pixel points into account in the fusion calculation, the original depth logic of the scene can be effectively preserved and the accuracy of depth measurement improved.
In a possible implementation manner of the first aspect, the multiple candidate fusion depths of each pixel point may be obtained from the possible fold counts of the high-frequency measurement value. Therefore, the first depth value of each pixel point at the first frequency, i.e. the high-frequency measurement value, can be obtained respectively, and then the multiple candidate fusion depths of each pixel point are calculated according to the first depth value.
In a possible implementation manner of the first aspect, when calculating the multiple candidate fusion depths of each pixel point according to the first depth value, the measurement range corresponding to the first frequency, that is, the measurement range corresponding to the high-frequency modulation frequency, may be determined first, then the sum of the first depth value of each pixel point and multiple non-negative integer multiple values of the measurement range corresponding to the first frequency is calculated respectively, and the sum of the first depth value and the multiple non-negative integer multiple values is used as the multiple candidate fusion depths of each pixel point. The plurality of non-negative integers may include 0, 1, 2, etc.
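A minimal sketch of this candidate-depth enumeration (the names first_depth, high_freq_range and max_folds are illustrative and not taken from the application):

```python
def candidate_fusion_depths(first_depth: float, high_freq_range: float,
                            max_folds: int = 2) -> list[float]:
    """Candidate fusion depths of a pixel point: the first (high-frequency) depth
    value plus non-negative integer multiples (0, 1, 2, ...) of the measurement
    range corresponding to the first frequency."""
    return [first_depth + k * high_freq_range for k in range(max_folds + 1)]


# A 1 m high-frequency reading with a 2.5 m range gives candidates 1 m, 3.5 m, 6 m.
print(candidate_fusion_depths(1.0, 2.5))  # [1.0, 3.5, 6.0]
```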
In a possible implementation manner of the first aspect, the relative depth of each pixel point may be determined from its low-frequency measurement value. Therefore, the second depth value of each pixel point at the second frequency, i.e. the low-frequency modulation frequency, can be obtained respectively, then the pixel points are sorted in ascending order of the second depth values, and the initial sorting serial number of each pixel point after sorting is determined, so that the initial sorting serial number can be used as the relative depth of the corresponding pixel point.
In a possible implementation manner of the first aspect, a relative position variation between each candidate fusion depth of the target pixel point and the relative depth of the target pixel point may be determined, and then the candidate fusion depth corresponding to the minimum value of the relative position variation may be identified as the fusion depth of the target pixel point.
In a possible implementation manner of the first aspect, a relative position variation between each candidate fusion depth of the target pixel and the relative depth of the target pixel may be determined by inserting each candidate fusion depth of the target pixel into the sequence sorted according to the second depth value, and by calculating a difference between sequence numbers of the candidate fusion depths in the sequence and an initial sorting sequence number of the target pixel, the difference between the sequence numbers may be used as the relative position variation.
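The sorting-based relative depth and the insertion-based relative position variation described in the preceding paragraphs could look roughly like the following sketch; this is one possible reading of the text, and the helper names are illustrative:

```python
import bisect


def relative_depths(second_depths: list[float]) -> list[int]:
    """Relative depth of each pixel point: its serial number (starting at 1)
    after sorting the second (low-frequency) depth values in ascending order."""
    order = sorted(range(len(second_depths)), key=lambda i: second_depths[i])
    ranks = [0] * len(second_depths)
    for serial, pixel_index in enumerate(order, start=1):
        ranks[pixel_index] = serial
    return ranks


def relative_position_change(candidate: float, initial_rank: int,
                             sorted_second_depths: list[float]) -> int:
    """Insert a candidate fusion depth into the sorted low-frequency sequence and
    return the difference between its sequence number and the pixel's initial rank."""
    candidate_rank = bisect.bisect_left(sorted_second_depths, candidate) + 1
    return abs(candidate_rank - initial_rank)


def choose_fusion_depth(candidates: list[float], initial_rank: int,
                        sorted_second_depths: list[float]) -> float:
    """Identify the fusion depth: the candidate with the smallest relative position change."""
    return min(candidates, key=lambda c: relative_position_change(
        c, initial_rank, sorted_second_depths))
```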
In a possible implementation manner of the first aspect, the relative depth of each pixel may also be determined by a difference between distances between candidate fusion depths corresponding to the low-frequency measurement value and the high-frequency measurement value. Therefore, the second depth value of each pixel point under the second frequency can be respectively obtained, the distance difference between each candidate fusion depth of each pixel point and the second depth value is calculated, the minimum value of the distance difference is extracted to serve as the minimum fusion distance of the corresponding pixel point, and then the relative depth of each pixel point is determined according to the minimum fusion distance.
In a possible implementation manner of the first aspect, when the relative depth of each pixel point is determined according to the minimum fusion distance, all the pixel points can first be divided into reference pixel points and non-reference pixel points according to the minimum fusion distance. The pixel points whose minimum fusion distance is smaller than a preset threshold can be identified as reference pixel points, and the pixel points whose minimum fusion distance is greater than or equal to the preset threshold are identified as non-reference pixel points. The reference pixel points are then sorted in ascending order of the second depth values, and the initial sorting serial number of each reference pixel point after sorting is determined and used as the relative depth of that reference pixel point. For a non-reference pixel point, its second depth value can be inserted into the sorted sequence of the reference pixel points, and the sorting serial number of the second depth value of the non-reference pixel point in that sequence is determined, so that this sorting serial number can be used as the relative depth of the non-reference pixel point.
In a possible implementation manner of the first aspect, the target pixel point may include a reference pixel point and a non-reference pixel point, and therefore, when identifying the fusion depth of the target pixel point, different determination methods may be adopted according to whether the target pixel point is the reference pixel point. Specifically, for a reference pixel point, a candidate fusion depth corresponding to the minimum fusion distance of the reference pixel point may be identified as the fusion depth thereof; for the non-reference pixel point, the relative position variation between each candidate fusion depth of the non-reference pixel point and the relative depth of the non-reference pixel point can be respectively determined, and then the candidate fusion depth corresponding to the minimum value of the relative position variation is identified as the fusion depth of the non-reference pixel point.
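As a rough sketch of this per-pixel decision, assuming the choose_fusion_depth helper from the previous sketch (the threshold value and the other names are illustrative assumptions):

```python
def classify_pixel(candidates: list[float], second_depth: float,
                   threshold: float) -> tuple[bool, float]:
    """Minimum fusion distance = smallest |candidate - second depth value|.
    A pixel point is a reference pixel point if that distance is below the threshold."""
    nearest = min(candidates, key=lambda c: abs(c - second_depth))
    min_fusion_distance = abs(nearest - second_depth)
    return min_fusion_distance < threshold, nearest


def fuse_pixel(candidates: list[float], second_depth: float, initial_rank: int,
               sorted_reference_depths: list[float], threshold: float) -> float:
    """Reference pixel point: keep the candidate with the minimum fusion distance.
    Non-reference pixel point: keep the candidate whose relative position change,
    measured against the sorted reference sequence, is smallest."""
    is_reference, nearest = classify_pixel(candidates, second_depth, threshold)
    if is_reference:
        return nearest
    return choose_fusion_depth(candidates, initial_rank, sorted_reference_depths)
```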
In a second aspect, an embodiment of the present application provides a depth measurement device, including:
the depth image acquisition module is used for respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
the candidate fusion depth determining module is used for determining a plurality of candidate fusion depths of each pixel point; and
the relative depth determining module is used for determining the relative depth of each pixel point;
the fusion depth identification module is used for identifying the fusion depth of a target pixel point according to a plurality of candidate fusion depths and relative depths of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points;
and the target depth image generation module is used for generating a target depth image of the current scene based on the fusion depth of the target pixel points.
In a possible implementation manner of the second aspect, the candidate fusion depth determining module may specifically include the following sub-modules:
the first depth value acquisition submodule is used for respectively acquiring a first depth value of each pixel point under a first frequency;
and the candidate fusion depth calculation submodule is used for calculating a plurality of candidate fusion depths of each pixel point according to the first depth value.
In a possible implementation manner of the second aspect, the candidate fusion depth calculation submodule may specifically include the following units:
the measuring range determining unit is used for determining the measuring range corresponding to the first frequency;
and the candidate fusion depth calculating unit is used for calculating the sum of a first depth value of each pixel point and a plurality of non-negative integer times of the measuring range corresponding to the first frequency respectively, and taking the sum of the first depth value and the plurality of non-negative integer times as a plurality of candidate fusion depths of each pixel point.
In a possible implementation manner of the second aspect, the relative depth determining module may specifically include the following sub-modules:
the second depth value obtaining submodule is used for respectively obtaining a second depth value of each pixel point under a second frequency;
the pixel point sequencing submodule is used for sequencing each pixel point according to the sequence of the second depth value from small to large;
and the first relative depth determining submodule is used for determining an initial sequencing serial number of each pixel point after sequencing and taking the initial sequencing serial number as the relative depth of the pixel point.
In a possible implementation manner of the second aspect, the fusion depth identification module may specifically include the following sub-modules:
the relative position variation determining submodule is used for respectively determining the relative position variation between each candidate fusion depth of the target pixel point and the relative depth of the target pixel point;
and the fusion depth identification submodule is used for identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the target pixel point.
In a possible implementation manner of the second aspect, the relative position change amount determining submodule may specifically include the following units:
a candidate fusion depth insertion unit, configured to insert each candidate fusion depth of the target pixel point into a sequence ordered according to a second depth value;
and the relative position variation calculating unit is used for calculating the difference of the sequence numbers of the candidate fusion depths in the sequence and the initial sequence number of the target pixel point, and taking the difference of the sequence numbers as the relative position variation.
In a possible implementation manner of the second aspect, the relative depth determining module may further include the following sub-modules:
a distance difference value calculating submodule for calculating a distance difference value between each candidate fusion depth of each pixel point and the second depth value;
the minimum fusion distance extraction submodule is used for extracting the minimum value of the distance difference value and taking the minimum value as the minimum fusion distance of the corresponding pixel point;
and the second relative depth determining submodule is used for determining the relative depth of each pixel point according to the minimum fusion distance.
In a possible implementation manner of the second aspect, the second relative depth determination submodule may specifically include the following units:
a reference pixel point identification unit, configured to identify a pixel point with the minimum fusion distance smaller than a preset threshold as a reference pixel point;
the reference pixel point sequencing unit is used for sequencing the reference pixel points according to the sequence of the second depth values from small to large;
the reference pixel point relative depth determining unit is used for determining an initial sequencing serial number of each reference pixel point after sequencing and taking the initial sequencing serial number as the relative depth of the reference pixel point;
a non-reference pixel point inserting unit, configured to insert, for any non-reference pixel point, a second depth value of the non-reference pixel point into the sorting sequence of the reference pixel point;
and the non-reference pixel point relative depth determining unit is used for determining a sorting sequence number of the second depth value of the non-reference pixel point in the sorting sequence of the reference pixel points, and taking the sorting sequence number as the relative depth of the non-reference pixel point.
In a possible implementation manner of the second aspect, the target pixel point may include the reference pixel point and the non-reference pixel point, and the fusion depth identifying module may further include the following sub-modules:
a reference pixel point fusion depth identification submodule, configured to identify, for any reference pixel point, a candidate fusion depth corresponding to the minimum fusion distance of the reference pixel point as a fusion depth of the reference pixel point;
and the non-reference pixel point fusion depth identification submodule is used for respectively determining the relative position variation between each candidate fusion depth of the non-reference pixel points and the relative depth of the non-reference pixel points aiming at any non-reference pixel point, and identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the non-reference pixel points.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the depth measurement method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor of a terminal device, implements the depth measurement method according to any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the depth measurement method of any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
according to the depth image generation method and device, the depth images of the current scene under at least two measuring frequencies are collected respectively, and after a plurality of candidate fusion depths of each pixel point and the relative depth of each pixel point are determined, the final fusion depth of each pixel point can be determined according to the candidate fusion depths and the relative depth of each pixel point, and then the target depth image of the current scene can be generated. According to the depth measurement method and device, the folding number of the high-frequency measurement value is obtained through the relative depth information provided by the low-frequency measurement value of the pixel point, the change of the front-back relation between the pixel points is considered in the fusion calculation, the original depth logic of a scene can be effectively protected, the fusion result which best accords with the original depth logic of the scene can be given, and the accuracy of the depth measurement is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIGS. 1(a)-1(b) are schematic diagrams of a depth measurement method in the prior art;
FIG. 2 is a schematic diagram illustrating an algorithm flow of a depth measurement method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a hardware structure of a mobile phone to which the depth measurement method according to an embodiment of the present disclosure is applied;
fig. 4 is a schematic diagram of a software structure of a mobile phone to which the depth measurement method according to an embodiment of the present disclosure is applied;
FIG. 5 is a flow chart illustrating exemplary steps of a depth measurement method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of candidate fusion depths provided by an embodiment of the present application;
FIG. 7 is a flow chart illustrating exemplary steps of a depth measurement method according to another embodiment of the present application;
FIG. 8 is a schematic diagram of candidate fusion depths provided by another embodiment of the present application;
FIG. 9 is a block diagram of a depth measuring device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
For ease of understanding, a depth measurement method commonly used in the prior art will first be described by way of a specific example.
In general, the resolution with which a TOF camera measures depth is proportional to the modulation frequency of its probe wave: the higher the frequency, the higher the resolution. At the same time, noise generally interferes less with high-frequency measurements than with low-frequency measurements. To obtain both a large range and high precision, a TOF camera generally performs combined measurements in several modulation-frequency modes and applies a fusion algorithm to the measurement values at the multiple frequencies to calculate a fusion depth as the final measurement result. The core of the fusion algorithm is to judge the number of folds of a certain frequency's measurement value and then recover the real measurement value.
The fusion algorithm in the prior art generally determines the number of folds on the basis of the nearest-neighbor principle, using the measured values of the individual frequencies and their possible values. The "nearest-neighbor principle" means that the set of fold counts is chosen so as to minimize the distance difference between the possible values (i.e. the unfolded positions) of the corresponding frequencies. Algorithms that determine the combination of fold counts according to the nearest-neighbor principle include a traversal method, a fast algorithm based on the Chinese remainder theorem, and the like.
Fig. 1(a) is a schematic diagram of a depth measurement method in the prior art. In Fig. 1 it is assumed that the 20 MHz low-frequency measurement is unfolded and that the 60 MHz high-frequency measurement may be folded 0, 1 or 2 times. The distance difference between the high-frequency possible value obtained with 1 fold and the low-frequency measurement value is the smallest (i.e. distance difference 1 in Fig. 1), so the number of folds of the high-frequency measurement result can be determined to be 1. Finally, the final fusion depth is given according to the possible value obtained by unfolding the high-frequency measurement result 1 time, or after a weighted average of that possible value and the low-frequency measurement result.
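A sketch of this prior-art nearest-neighbor fusion for the two-frequency case (the numbers in the usage line are illustrative, not values read from the figure):

```python
def nearest_neighbor_fold(high_measurement: float, low_measurement: float,
                          high_range: float, max_folds: int = 2) -> int:
    """Prior-art rule: choose the fold count whose unfolded high-frequency value
    lies closest to the (assumed unfolded) low-frequency measurement value."""
    return min(range(max_folds + 1),
               key=lambda k: abs(high_measurement + k * high_range - low_measurement))


# E.g. a 1 m high-frequency reading (2.5 m range) and a ~3.4 m low-frequency reading
# select 1 fold, i.e. an unfolded depth of 3.5 m.
print(nearest_neighbor_fold(1.0, 3.4, 2.5))  # 1
```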
However, due to the complexity of the real scene, the measurement signal of the TOF camera is easily interfered by factors such as crosstalk at the transmitting end, ambient light noise, and lens stray light, and the measured value of each frequency has a certain deviation from the true distance. Especially in a scene with holes or gaps, signals in the holes or the gaps are small, and are very easy to be interfered, so that a measurement result is deviated, and especially, the low-frequency measurement result is deviated greatly.
As shown in fig. 1(b), when the low frequency measurement value of a certain pixel in the depth map is smaller and the distance difference from the unfolded high frequency measurement value is smaller (i.e. the distance difference is 0 in fig. 1 (b)), the wrong high frequency folding number is calculated according to the existing fusion algorithm based on the proximity principle. The calculated fusion depth will now be much smaller than the actual depth of the object, and this error will result in a serious depth logical error in which the depth within the hole in the depth map is shallower than the opening.
That is to say, in the fusion process in the prior art, only the measured values of the pixel points are utilized, and the front-back logical relationship between the pixel points is not considered, so that the error between the obtained fusion depth and the true depth of the object is large, and even completely wrong.
In order to solve the above problems, the core concept of the embodiments of the present application is as follows: first, the measurement result of the TOF camera in a non-folding low-frequency measurement mode is used to define the relative depth (i.e., the front-back logical relationship of each pixel point) between the pixel points on the depth map; then, when the fusion depth is calculated, the relative depth change of the pixel points is taken into account to maintain the original depth logic of the scene, thereby correcting the depth logic errors of the fusion results in the prior art.
As shown in fig. 2, which is a schematic diagram of an algorithm flow of the depth measurement method provided in the embodiment of the present application, according to the algorithm flow shown in fig. 2, the depth measurement method in the embodiment of the present application mainly includes the following steps:
s201, calculating possible fusion positions and corresponding distance differences according to high-frequency and low-frequency measurement results;
in this step, the high and low frequency modes of the TOF camera can be used to make depth measurements of the same scene.
It should be noted that, in order to ensure the validity of the low-frequency measurement result, it should be ensured that the measurement result of the lowest frequency is not folded in the current scene, i.e. the low-frequency range is enough to cover the entire scene.
Typically, the ranges for the 20MHz measurement mode and the 60MHz measurement mode are about 7.5 meters and 2.5 meters, respectively. For most indoor scenes, 7.5 meters is sufficient to cover all depths within the scene. Therefore, the low frequency mode in this step may select a modulation frequency of 20MHz, and the high frequency mode may select a modulation frequency of 60 MHz.
After the measurement is finished, for each pixel point, the fusion positions of the depth values corresponding to the different fold counts in the high-frequency mode with the measurement value of the low-frequency mode are calculated.
For example, for a certain pixel point, if the high-frequency measurement value obtained after the measurement is 1 meter, the depth values corresponding to the pixel point being folded 0 times, 1 time or 2 times in the high-frequency mode are 1 meter, 3.5 meters (folded 1 time) and 6 meters (folded 2 times), respectively. The fusion positions of these three depth values with the low-frequency mode measurement value are then calculated, giving three fusion positions for the pixel point: fusion position 0, where the low-frequency mode measurement value is fused with the depth value of 1 meter; fusion position 1, where it is fused with the depth value of 3.5 meters; and fusion position 2, where it is fused with the depth value of 6 meters.
At the same time, the distance difference between the high- and low-frequency measurement values corresponding to each fusion position should be calculated. That is, for fusion position 0, the distance difference between the high-frequency value (1 meter) and the low-frequency measurement value of the pixel point is calculated; for fusion position 1, the distance difference between the high-frequency value (3.5 meters) and the low-frequency measurement value of the pixel point is calculated; and for fusion position 2, the distance difference between the high-frequency value (6 meters) and the low-frequency measurement value of the pixel point is calculated.
S202, defining the relative depth of each pixel according to the low-frequency measurement value;
in this step, the low-frequency measurement results of each pixel point in the field of view of the TOF camera (i.e., on the corresponding depth map) may be sorted, and the relative depth of each pixel point is defined as the sequence number of the pixel point in the sorting sequence, where the sequence number may represent the number of the pixel points sorted in front of the sequence number (including the pixel point itself).
For example, suppose that there are 50000 pixel points on the depth map, and that in the low-frequency measurement mode a low-frequency measurement value is obtained for each of these 50000 pixel points. The 50000 pixel points can then be sorted in ascending order of the low-frequency measurement values, with the sorted serial numbers being 1, 2, 3, ..., 50000; for any two pixel points, the pixel point with the larger serial number also has a larger low-frequency measurement value than the pixel point with the smaller serial number.
Of course, the low-frequency measurement results participating in the sorting may be obtained from all the pixel points, or may be obtained only from the pixel points with small difference between the high and low frequencies after being fused in the previous step.
For example, in the previous step, for the fusion position 0, if the distance difference between the calculated high-frequency measurement value (1 meter) and the calculated low-frequency measurement value of the pixel point is smaller than a certain set threshold, the fusion position 0 can be directly determined as the final fusion position of the pixel point without reordering the pixel points. The threshold may be determined according to actual needs, and is not limited in this embodiment of the application.
S203, calculating the relative depth change of each fusion position relative to the low-frequency measurement value and the pixel point;
in this step, computing the relative depth change of a pixel point means that, for a pixel point whose depth logic is to be evaluated, the rank of the pixel point's low-frequency measurement value in the depth ordering sequence is first found, then the rank of a certain fusion position of the pixel point in the depth ordering sequence is found, and the difference between the two ranks is calculated.
For example, for a pixel point to be evaluated for a certain depth logic, it is assumed that a low-frequency measurement value of the pixel point is found to be 537.1 millimeters (mm), a ranking serial number of the measurement value in a depth ranking sequence is 6, and then positions of three fusion positions in the depth ranking sequence are respectively determined according to the three fusion positions corresponding to the pixel point calculated in the first step.
It should be noted that, to determine the positions of the three fusion positions in the depth ordering sequence, the depth value of each fusion position may be compared with the low-frequency measurement values of the pixel points in the depth ordering sequence to find the two low-frequency measurement values closest to the fusion position, where one of the two low-frequency measurement values should be larger than the depth value of the fusion position and the other should be smaller than it. Then, the sequence number obtained by inserting the depth value of the fusion position between these two low-frequency measurement values is used as the ranking sequence number of the fusion position in the depth ranking sequence.
For example, if the depth value of a certain fusion position is 530.03mm, the two closest low-frequency measurement values are 528.9mm (No. 4) and 534.2mm (No. 5), respectively, and after inserting the depth value between the two closest low-frequency measurement values, the fusion position of 530.03mm with the ranking number of 5 can be obtained.
The relative depth change is then the difference between this ranking number (5) and the ranking number (6) of the pixel point's low-frequency measurement value of 537.1 mm, i.e. 1.
If the depth rank of a certain fusion position changes greatly compared with the depth rank before fusion (that of the low-frequency measurement value), for example if the relative depth change determined from the rank numbers is 1000, this indicates that after fusion the pixel point has exchanged its front-back relationship with many other pixel points.
It should be noted that, similarly to step S202, the pixel points whose depth logic is to be evaluated may be all the pixel points, or may be only the pixel points whose distance difference between the high and low frequencies after fusion in S201 is greater than the set threshold.
And S204, comprehensively determining the final fusion depth according to the fusion distance difference and the relative depth change.
In this step, for the pixel point to be evaluated for the depth logic, a reasonable fusion position can be comprehensively determined and a final depth map can be given according to the distance difference calculated in S201 and the pixel relative depth change after fusion calculated in S203.
The depth measurement method provided by the present application is described below with reference to specific embodiments.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of this application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of the associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The depth measurement method provided by the embodiment of the application can be applied to terminal devices such as a mobile phone, a tablet personal computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the application does not limit the specific type of the terminal device at all.
Take the terminal device as a mobile phone as an example. Fig. 3 is a block diagram illustrating a partial structure of a mobile phone according to an embodiment of the present disclosure. Referring to fig. 3, the cellular phone includes: radio Frequency (RF) circuit 310, memory 320, input unit 330, display unit 340, sensor 350, audio circuit 360, wireless fidelity (Wi-Fi) module 370, processor 380, and power supply 390. Those skilled in the art will appreciate that the handset configuration shown in fig. 3 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 3:
the RF circuit 310 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 380; in addition, the data for designing uplink is transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE)), e-mail, Short Messaging Service (SMS), and the like.
The memory 320 may be used to store software programs and modules, and the processor 380 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 320. The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 320 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 300. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on the touch panel 331 or near the touch panel 331 using any suitable object or accessory such as a finger, a stylus, etc.) on or near the touch panel 331, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 331 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch panel 331 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 330 may include other input devices 332 in addition to the touch panel 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 331 can cover the display panel 341, and when the touch panel 331 detects a touch operation on or near the touch panel 331, the touch panel is transmitted to the processor 380 to determine the type of the touch event, and then the processor 380 provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in fig. 3, the touch panel 331 and the display panel 341 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 331 and the display panel 341 may be integrated to implement the input and output functions of the mobile phone.
The handset 300 may also include at least one sensor 350, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 360, speaker 361, microphone 362 may provide an audio interface between the user and the handset. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, and the audio signal is converted by the speaker 361 and output; on the other hand, the microphone 362 converts the collected sound signals into electrical signals, which are received by the audio circuit 360 and converted into audio data, which are then processed by the audio data output processor 380 and then transmitted to, for example, another cellular phone via the RF circuit 310, or output to the memory 320 for further processing.
Wi-Fi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 370, and provides wireless broadband internet access for the user. Although fig. 3 shows the Wi-Fi module 370, it is understood that it does not belong to the essential constitution of the handset 300, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing units; preferably, the processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
The handset 300 may also include a camera 390, which camera 390 may be a TOF camera. Optionally, the position of the camera 390 on the mobile phone 300 may be front-located or rear-located, which is not limited in this embodiment of the application.
Optionally, the mobile phone 300 may include a single camera, a dual camera, or a triple camera, which is not limited in this embodiment.
For example, the cell phone 300 may include three cameras, one being a main camera, one being a wide camera, and one being a tele camera.
Optionally, when the mobile phone 300 includes a plurality of cameras, the plurality of cameras may be all front-mounted, all rear-mounted, or a part of the cameras front-mounted and another part of the cameras rear-mounted, which is not limited in this embodiment of the present application.
Although not shown, the handset 300 also includes a power source (e.g., a battery) to power the various components. The power supply may be logically coupled to the processor 380 through a power management system to manage charging, discharging, and power consumption management functions through the power management system.
In addition, although not shown, the mobile phone 300 may further include a bluetooth module, etc., which will not be described herein.
Fig. 4 is a schematic diagram of a software structure of a mobile phone 300 according to an embodiment of the present application. Taking the operating system of the mobile phone 300 as an Android system as an example, in some embodiments, the Android system is divided into four layers, which are an application layer, an application Framework (FWK) layer, a system layer and a hardware abstraction layer, and the layers communicate with each other through a software interface.
As shown in fig. 4, the application layer may include a series of application packages, which may include short message, calendar, camera, video, navigation, gallery, call, and other applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer may include some predefined functions, such as functions for receiving events sent by the application framework layer.
As shown in fig. 4, the application framework layer may include a window manager, a resource manager, and a notification manager, among others.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar and can be used to convey notification-type messages, which can disappear automatically after a short stay without requiring user interaction, for example notifications of download completion, message alerts, and the like. The notification manager may also present notifications that appear at the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of background running applications, or notifications that appear on the screen in the form of a dialog window, for example prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, and the like.
The application framework layer may further include:
a viewing system that includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the handset 300. Such as management of call status (including on, off, etc.).
The system layer may include a plurality of functional modules. For example: a sensor service module, a physical state identification module, a three-dimensional graphics processing library (such as OpenGL ES), and the like.
The sensor service module is used for monitoring sensor data uploaded by various sensors in a hardware layer and determining the physical state of the mobile phone 300;
the physical state recognition module is used for analyzing and recognizing user gestures, human faces and the like;
the three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The system layer may further include:
the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The hardware abstraction layer is a layer between hardware and software. The hardware abstraction layer may include a display driver, a camera driver, a sensor driver, etc. for driving the relevant hardware of the hardware layer, such as a display screen, a camera, a sensor, etc.
The following embodiments may be implemented on the cellular phone 300 having the above-described hardware structure/software structure. The following embodiment will take the mobile phone 300 as an example to illustrate the depth measurement method provided in the embodiment of the present application.
Referring to fig. 5, a schematic step flow chart of a depth measurement method provided in an embodiment of the present application is shown, and by way of example and not limitation, the method may be applied to the mobile phone 300, where the mobile phone 300 is configured with a TOF camera that measures depth using a multi-frequency mode, and the method specifically may include the following steps:
s501, respectively collecting depth images of a current scene under at least two measuring frequencies, wherein the at least two measuring frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
in the embodiment of the present application, the first frequency may be a high frequency modulation frequency, and the second frequency may be a low frequency modulation frequency. In order to ensure that the measuring range under the low-frequency modulation frequency can sufficiently cover all depths of the current scene, the specific numerical value of the high-frequency modulation frequency and the low-frequency modulation frequency can be determined according to the actual scene depth.
As a first example of this embodiment, the first frequency, i.e., the high-frequency modulation frequency, may be configured as 60MHz, with a measurement range of about 2.5 meters in this mode, and the second frequency, i.e., the low-frequency modulation frequency, may be configured as 20MHz, with a measurement range of about 7.5 meters. For most indoor scenes, a 7.5-meter range is sufficient to cover all depths within the scene. If the scene depth is too large, an even lower modulation frequency can be used as the low frequency according to actual needs.
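As a rough cross-check of the 2.5-meter and 7.5-meter figures above, the following minimal sketch (illustrative only, not part of the original application) computes the unambiguous measurement range of a TOF modulation frequency from d_max = c / (2f):

    # Minimal sketch: the unambiguous range of a TOF modulation frequency.
    C = 299_792_458.0  # speed of light, m/s

    def unambiguous_range(mod_freq_hz: float) -> float:
        """Maximum depth measurable without folding (phase wrapping)."""
        return C / (2.0 * mod_freq_hz)

    print(unambiguous_range(60e6))  # ~2.50 m, the high-frequency range
    print(unambiguous_range(20e6))  # ~7.49 m, the low-frequency range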
In the embodiment of the application, a TOF camera of a mobile phone can be controlled to acquire the depth image of the current scene at two measuring frequencies of 20MHz and 60MHz respectively.
S502, determining a plurality of candidate fusion depths of each pixel point, and determining the relative depth of each pixel point;
for the high-frequency modulation frequency, the measurement range is short; when the scene depth exceeds this range, the obtained high-frequency measurement value is no longer the actual depth of the scene but a value that has been folded back a certain number of times.
Therefore, for each pixel point in the depth image, a high-frequency measurement value can be taken, and a plurality of possible candidate fusion depths are calculated by combining the specific range of the high-frequency modulation frequency.
In the embodiment of the present application, a first depth value of each pixel point under a high-frequency modulation frequency may be obtained, and then a plurality of candidate fusion depths of each pixel point may be calculated according to the first depth value.
In a specific implementation, the measurement range corresponding to the high-frequency modulation frequency may be determined first; then the sums of the first depth value of each pixel point and a plurality of non-negative integer multiples of that measurement range are calculated respectively, and these sums are used as the plurality of candidate fusion depths of the pixel point.
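The following sketch illustrates this calculation; the function name and the choice of three candidates (enough for 3 x 2.5 m to cover the roughly 7.5-meter low-frequency range) are assumptions made for illustration:

    # Sketch of the candidate fusion depths: the first (high-frequency) depth
    # value plus non-negative integer multiples of the high-frequency range.
    # Considering three folds is an assumption; the text does not fix this number.
    def candidate_fusion_depths(first_depth_m, high_range_m=2.5, num_candidates=3):
        return [first_depth_m + k * high_range_m for k in range(num_candidates)]

    print(candidate_fusion_depths(0.6))  # [0.6, 3.1, 5.6] -> candidate positions 0, 1, 2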
Fig. 6 is a schematic diagram of candidate fusion depths according to an embodiment of the present application. In fig. 6, the high-frequency range is about 2.5 meters for the 60MHz high-frequency modulation frequency. For the measured first depth value of a certain pixel point, the sums of the first depth value and 0, 1, or 2 times the high-frequency range may be calculated respectively and used as the three candidate fusion depths of the pixel point, namely candidate fusion position 0, candidate fusion position 1, and candidate fusion position 2 in fig. 6.
In the embodiment of the present application, the relative depth of each pixel point may be determined according to a measurement value corresponding to the low-frequency modulation frequency.
In specific implementation, the second depth value of each pixel point under the low-frequency modulation frequency can be respectively obtained, then each pixel point is sequenced according to the sequence of the second depth values from small to large, and the initial sequencing serial number of each pixel point after sequencing is determined, so that the initial sequencing serial number can be used as the relative depth of the corresponding pixel point.
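A minimal sketch of this sorting step, with illustrative variable names, may look as follows:

    # Sketch of the relative-depth assignment: sort the pixel points in ascending
    # order of their second (low-frequency) depth values and use each point's
    # initial sorting sequence number as its relative depth.
    def relative_depths(second_depths):
        order = sorted(range(len(second_depths)), key=lambda i: second_depths[i])
        ranks = [0] * len(second_depths)
        for rank, pixel_index in enumerate(order):
            ranks[pixel_index] = rank  # initial sorting sequence number
        return ranks

    print(relative_depths([0.9, 0.2, 0.5]))  # [2, 0, 1]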
Table one is an example table of relative depths of pixel points according to an embodiment of the present application. For each pixel point, its low-frequency measurement value, i.e., the second depth value, is used as the reference, and the pixel points are sorted from small to large by these values to obtain the relative depths shown in Table one.
Table one:
(Table one appears as an image in the original filing; it lists pixel points in ascending order of their second depth values, i.e., the low-frequency measurement values, each paired with its relative depth, i.e., its initial sorting sequence number.)
for example, if the depth value measured by a certain pixel point under the low frequency modulation frequency is 537.1mm, the relative depth is 6 according to the table one.
S503, identifying the fusion depth of a target pixel point according to a plurality of candidate fusion depths and relative depths of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points;
in the embodiment of the present application, the fusion depth closest to the true depth value of a pixel point may be identified from the plurality of candidate fusion depths of the pixel point by combining the relative depth of each pixel point. In this embodiment, the folded high-frequency (60MHz) depth value is restored, and the restored value is used as the final fusion depth.
In specific implementation, for each candidate fusion depth of each pixel point, the final fusion depth of the pixel point can be determined according to the position change condition of the candidate fusion depth in the relative depth sequence table.
That is, the relative position variation between each candidate fusion depth of the target pixel point and the relative depth of the target pixel point may be determined, and the candidate fusion depth corresponding to the minimum value of the relative position variation is then identified as the fusion depth of the target pixel point. The relative position variation represents how much the relative depth order between the pixel points would change if the corresponding fusion position were adopted.
In a specific implementation, each candidate fusion depth of the target pixel may be inserted into the sequence sorted according to the second depth value, and then a difference (absolute value) between sequence numbers of the candidate fusion depths in the sequence and an initial sorting sequence number of the target pixel is calculated, and the difference between the sequence numbers is used as a relative position variation.
For example, if three candidate fusion depths of a certain pixel are T0, T1, and T2, respectively, according to the values of T0, T1, and T2, the three candidate fusion depths may be inserted into the relative depth sequence shown in table one, and then the change between T0, T1, and T2 and the initial sorting sequence number of the pixel is determined one by one, so that the candidate fusion depth corresponding to the minimum difference between the sequence numbers may be identified as the final fusion depth of the pixel.
Assume that the sequence number of a certain pixel point in the relative depth sequence shown in Table one is 6. If the candidate fusion depth T0, after being inserted into the sequence, falls between sequence number 3 and sequence number 4, the sequence number of T0 can be determined to be 4; if the candidate fusion depth T1 falls between sequence number 15 and sequence number 16 after insertion, its sequence number can be determined to be 16; similarly, if the candidate fusion depth T2 falls between sequence number 37 and sequence number 38, its sequence number can be determined to be 38. The difference between the sequence number of each candidate fusion depth and the sequence number of the pixel point in the relative depth sequence is then calculated one by one: the difference for T0 is 2 (6-4), the difference for T1 is 10 (16-6), and the difference for T2 is 32 (38-6). Since the difference for T0 is the smallest, T0 can be used as the fusion depth of the pixel point.
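A compact sketch of this selection rule is given below; representing the sorted second depth values as a Python list and locating insertion positions with bisect are implementation assumptions rather than details stated in the original text:

    import bisect

    # Sketch of the fusion-depth selection: insert each candidate fusion depth
    # into the sequence sorted by second depth values, compute the absolute
    # difference between its insertion sequence number and the pixel point's
    # initial sorting sequence number, and keep the candidate with the smallest
    # difference.
    def select_fusion_depth(candidates, sorted_second_depths, initial_rank):
        best_depth, best_change = None, None
        for depth in candidates:
            position = bisect.bisect_left(sorted_second_depths, depth)
            change = abs(position - initial_rank)  # relative position variation
            if best_change is None or change < best_change:
                best_depth, best_change = depth, change
        return best_depth

With the worked example above (initial sorting sequence number 6 and candidates landing at sequence numbers 4, 16, and 38), this routine would return T0.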
S504, generating a target depth image of the current scene based on the fusion depth of the target pixel points.
After the final fusion depth of each pixel point is determined according to the steps, a target depth image of the current scene can be output based on the fusion depths of all the pixel points.
In the embodiment of the application, depth images of the current scene are respectively collected at at least two measurement frequencies. After a plurality of candidate fusion depths and the relative depth of each pixel point are determined, the final fusion depth of each pixel point can be determined according to its candidate fusion depths and relative depth, and the target depth image of the current scene can then be generated. In this depth measurement method and device, the number of foldings of the high-frequency measurement value is obtained from the relative depth information provided by the low-frequency measurement value of the pixel point, and the change of the front-back depth relation between pixel points is taken into account in the fusion calculation. The original depth logic of the scene can thus be effectively preserved, the fusion result that best accords with this depth logic can be given, and the accuracy of depth measurement is improved.
Referring to fig. 7, a flowchart illustrating schematic steps of a depth measurement method according to another embodiment of the present application is shown, where the method may specifically include the following steps:
S701, respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
it should be noted that the method may be applied to a terminal device configured with a TOF camera in a multi-frequency mode, such as a mobile phone, a tablet computer, a camera, a video camera, and the like, and the specific type of the terminal device is not limited in this embodiment.
In an embodiment of the application, the first frequency may be a 60MHz frequency of the TOF camera, which corresponds to a range of about 2.5 meters, and the second frequency may be 20MHz, which corresponds to a range of about 7.5 meters.
In the embodiment of the application, the TOF camera can be controlled to acquire the depth image of the current scene at two measurement frequencies, namely 20MHz and 60MHz respectively.
S702, determining a plurality of candidate fusion depths of each pixel point;
in the embodiment of the present application, the folded depth value obtained by measurement at the high-frequency modulation frequency may be restored, and a weighted average of the restored value and the depth value obtained by measurement at the low-frequency modulation frequency may be used as the final fusion depth.
Therefore, for each pixel point in the depth image, a high-frequency measurement value can be taken, and a plurality of possible candidate fusion depths are calculated by combining the specific range of the high-frequency modulation frequency.
Fig. 8 is a schematic diagram of candidate fusion depths according to another embodiment of the present application. In fig. 8, for each pixel point, its high-frequency measurement value may be taken as the candidate fusion depth corresponding to folding 0 times (candidate fusion position 0); then the high-frequency measurement value plus 1 high-frequency range is taken as the candidate fusion depth corresponding to folding 1 time (candidate fusion position 1); similarly, the high-frequency measurement value plus 2 high-frequency ranges is taken as the candidate fusion depth corresponding to folding 2 times (candidate fusion position 2).
S703, respectively obtaining a second depth value of each pixel point under a second frequency;
the second depth value of each pixel point at the second frequency is the depth value of each pixel point in the depth image corresponding to the low-frequency modulation frequency, i.e., the low-frequency measurement value in fig. 8.
S704, calculating a distance difference value between each candidate fusion depth of each pixel point and the second depth value;
for the three candidate fusion depths in fig. 8, the distance differences between them and the low frequency measurement values can be calculated separately. That is, for the candidate fusion position 0, the distance difference (distance difference 0) from the low-frequency measurement value is calculated; for the candidate fusion location 1 and the candidate fusion location 2, distance differences between them and the low-frequency measurement values, i.e., the distance difference 1 and the distance difference 2, can also be calculated, respectively, in the manner described above.
The distance difference between each candidate fusion depth and its low frequency measurement value can be used to determine the relative depth of the pixel.
S705, extracting the minimum value of the distance difference value to serve as the minimum fusion distance of the corresponding pixel points, and determining the relative depth of each pixel point according to the minimum fusion distance;
in this embodiment, for each pixel point, the smallest difference value, i.e., the minimum fusion distance, can be found among the distance differences between the candidate fusion depths of the pixel point and its low-frequency measurement value. All the pixel points are then divided into reference pixel points and non-reference pixel points according to the minimum fusion distance.
In a specific implementation, a threshold may be set; pixel points whose minimum fusion distance is smaller than the preset threshold are identified as reference pixel points, and pixel points whose minimum fusion distance is greater than or equal to the threshold are identified as non-reference pixel points (other points).
For the reference pixel points, a low-frequency measurement value, namely a second depth value, can be taken, then all the reference pixel points are sequenced according to the sequence of the second depth value from small to large, and the initial sequencing serial number of each reference pixel point after sequencing is determined, so that the initial sequencing serial number can be used as the relative depth of the reference pixel point.
For the non-reference pixel point, the second depth value of the non-reference pixel point can be inserted into the sorting sequence of the reference pixel point, and the sorting sequence number of the second depth value of the non-reference pixel point in the sorting sequence of the reference pixel point is determined, so that the sorting sequence number of the non-reference pixel point can be used as the relative depth of the non-reference pixel point.
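Steps S703 to S705 can be sketched as follows; the threshold value and the variable names are assumptions for illustration only:

    # Sketch of steps S703-S705: for each pixel point, compute the distance
    # between every candidate fusion depth and the second (low-frequency) depth
    # value, take the minimum as the minimum fusion distance, and classify the
    # point as a reference point if that distance is below a preset threshold.
    def classify_pixels(candidates_per_pixel, second_depths, threshold_m=0.15):
        reference, non_reference = [], []
        for idx, candidates in enumerate(candidates_per_pixel):
            diffs = [abs(d - second_depths[idx]) for d in candidates]
            min_fusion_distance = min(diffs)
            if min_fusion_distance < threshold_m:
                # remember which candidate achieved the minimum fusion distance
                reference.append((idx, diffs.index(min_fusion_distance)))
            else:
                non_reference.append(idx)
        return reference, non_reference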
After the relative depth of each pixel point is determined, the final fusion depth of each pixel point can be given according to the change condition of the relative depth.
S706, aiming at any reference pixel point, identifying the candidate fusion depth corresponding to the minimum fusion distance of the reference pixel point as the fusion depth of the reference pixel point;
for a reference pixel point, the candidate fusion depth corresponding to the minimum fusion distance of the pixel point can be directly identified as the final fusion depth of the pixel point.
For example, in fig. 8, if the minimum fusion distance is the distance difference 0 and the distance difference is smaller than the threshold, the candidate fusion depth 0 corresponding to the distance difference 0 can be identified as the final fusion depth of the pixel.
S707, respectively determining a relative position variation between each candidate fusion depth of the non-reference pixel point and the relative depth of the non-reference pixel point aiming at any non-reference pixel point, and identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the non-reference pixel point;
and for the non-reference pixel points, the non-reference pixel points can be used as points to be evaluated, and then the final fusion depth of each point to be evaluated is given according to the change situation of the relative depth.
In this embodiment of the present application, a relative position variation between each candidate fusion depth of a non-reference pixel and a relative depth of the pixel may be determined first, and then the candidate fusion depth corresponding to the minimum value of the relative position variation may be identified as the fusion depth of the pixel.
In a specific implementation, each candidate fusion depth of the point to be evaluated may be inserted into the reference pixel point sequence sorted according to the second depth value, and then an absolute value of a difference between a sorting order of each candidate fusion depth in the sequence and a relative depth of the point to be evaluated (the sorting order of the point to be evaluated after being inserted into the reference pixel point sequence) is calculated, and the absolute value is used as the relative position variation.
Then, the candidate fusion depth corresponding to the minimum value of the relative position variation is found and used as the final fusion depth of the point to be evaluated.
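A sketch of this selection for a point to be evaluated is given below; as before, the sorted list of reference second depth values and the use of bisect are implementation assumptions:

    import bisect

    # Sketch of step S707 for a non-reference point (point to be evaluated):
    # insert each candidate fusion depth into the reference sequence sorted by
    # second depth values, measure how far its insertion position moves away
    # from the point's own sorted position, and keep the candidate that moves
    # least.
    def fuse_non_reference_point(candidates, second_depth, sorted_reference_depths):
        own_rank = bisect.bisect_left(sorted_reference_depths, second_depth)
        best_depth, best_change = None, None
        for depth in candidates:
            change = abs(bisect.bisect_left(sorted_reference_depths, depth) - own_rank)
            if best_change is None or change < best_change:
                best_depth, best_change = depth, change
        return best_depth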
S708, generating a target depth image of the current scene based on the fusion depth of the reference pixel points and the non-reference pixel points.
After the fusion depth of all the reference pixel points and the non-reference pixel points is calculated, the depth image of the current scene can be output according to the fusion depth of each pixel point.
In the embodiment of the application, the number of foldings of the high-frequency measurement value is obtained from the relative depth information provided by the low-frequency measurement value of the pixel point. On the basis of the fusion result of the prior art, the areas where the accuracy of the fusion result is lower can be found, and the depth values of these areas are recalculated according to the original depth logic of the scene, which improves the accuracy of depth measurement.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 9 shows a block diagram of a depth measuring device provided in an embodiment of the present application, corresponding to the depth measuring method described in the above embodiment, and only the relevant parts of the embodiment of the present application are shown for convenience of description.
Referring to fig. 9, the apparatus may be applied to a terminal device, and specifically may include the following modules:
a depth image collecting module 901, configured to collect depth images of a current scene at least two measurement frequencies respectively, where the at least two measurement frequencies at least include a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images include a plurality of pixel points;
a candidate fusion depth determining module 902, configured to determine a plurality of candidate fusion depths of each pixel point; and
a relative depth determining module 903, configured to determine a relative depth of each pixel point;
a fusion depth identification module 904, configured to identify a fusion depth of a target pixel according to a plurality of candidate fusion depths and relative depths of the target pixel, where the target pixel is any one of the plurality of pixels;
and a target depth image generating module 905, configured to generate a target depth image of the current scene based on the fusion depth of the multiple target pixel points.
In this embodiment of the present application, the candidate fusion depth determining module 902 may specifically include the following sub-modules:
the first depth value acquisition submodule is used for respectively acquiring a first depth value of each pixel point under a first frequency;
and the candidate fusion depth calculation operator module is used for calculating a plurality of candidate fusion depths of each pixel point according to the first depth value.
In this embodiment, the candidate fusion depth calculator module may specifically include the following units:
the measuring range determining unit is used for determining the measuring range corresponding to the first frequency;
and the candidate fusion depth calculating unit is used for respectively calculating the sums of the first depth value of each pixel point and a plurality of non-negative integer multiples of the measuring range corresponding to the first frequency, and taking these sums as the plurality of candidate fusion depths of each pixel point.
In this embodiment of the application, the relative depth determining module 903 may specifically include the following sub-modules:
the second depth value obtaining submodule is used for respectively obtaining a second depth value of each pixel point under a second frequency;
the pixel point sequencing submodule is used for sequencing each pixel point according to the sequence of the second depth value from small to large;
and the first relative depth determining submodule is used for determining an initial sequencing serial number of each pixel point after sequencing and taking the initial sequencing serial number as the relative depth of the pixel point.
In this embodiment of the application, the fusion depth recognition module 904 may specifically include the following sub-modules:
the relative position variation determining submodule is used for respectively determining the relative position variation between each candidate fusion depth of the target pixel point and the relative depth of the target pixel point;
and the fusion depth identification submodule is used for identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the target pixel point.
In this embodiment of the present application, the relative position variation determining submodule may specifically include the following units:
a candidate fusion depth insertion unit, configured to insert each candidate fusion depth of the target pixel point into a sequence ordered according to a second depth value;
and the relative position variation calculating unit is used for calculating the difference of the sequence numbers of the candidate fusion depths in the sequence and the initial sequence number of the target pixel point, and taking the difference of the sequence numbers as the relative position variation.
In this embodiment of the application, the relative depth determining module 903 may further include the following sub-modules:
a distance difference value calculating submodule for calculating a distance difference value between each candidate fusion depth of each pixel point and the second depth value;
the minimum fusion distance extraction submodule is used for extracting the minimum value of the distance difference value and taking the minimum value as the minimum fusion distance of the corresponding pixel point;
and the second relative depth determining submodule is used for determining the relative depth of each pixel point according to the minimum fusion distance.
In this embodiment of the present application, the second relative depth determination sub-module may specifically include the following units:
a reference pixel point identification unit, configured to identify a pixel point with the minimum fusion distance smaller than a preset threshold as a reference pixel point;
the reference pixel point sequencing unit is used for sequencing the reference pixel points according to the sequence of the second depth values from small to large;
the reference pixel point relative depth determining unit is used for determining an initial sequencing serial number of each reference pixel point after sequencing and taking the initial sequencing serial number as the relative depth of the reference pixel point;
a non-reference pixel point inserting unit, configured to insert, for any non-reference pixel point, a second depth value of the non-reference pixel point into the sorting sequence of the reference pixel point;
and the non-reference pixel point relative depth determining unit is used for determining a sorting sequence number of the second depth value of the non-reference pixel point in the sorting sequence of the reference pixel points, and taking the sorting sequence number as the relative depth of the non-reference pixel point.
In this embodiment of the application, the target pixel point may include the reference pixel point and the non-reference pixel point, and the fusion depth identifying module 904 may further include the following sub-modules:
a reference pixel point fusion depth identification submodule, configured to identify, for any reference pixel point, a candidate fusion depth corresponding to the minimum fusion distance of the reference pixel point as a fusion depth of the reference pixel point;
and the non-reference pixel point fusion depth identification submodule is used for respectively determining the relative position variation between each candidate fusion depth of the non-reference pixel points and the relative depth of the non-reference pixel points aiming at any non-reference pixel point, and identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the non-reference pixel points.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to the description of the method embodiment section for relevant points.
Referring to fig. 10, a schematic diagram of a terminal device according to an embodiment of the present application is shown. As shown in fig. 10, the terminal device 1000 of the present embodiment includes: a processor 1010, a memory 1020, and a computer program 1021 stored in the memory 1020 and operable on the processor 1010. The processor 1010, when executing the computer program 1021, implements the steps in the various embodiments of the depth measurement method described above, such as the steps S501 to S504 shown in fig. 5. Alternatively, the processor 1010, when executing the computer program 1021, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 901 to 905 shown in fig. 9.
Illustratively, the computer program 1021 may be partitioned into one or more modules/units that are stored in the memory 1020 and executed by the processor 1010 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which may be used to describe the execution process of the computer program 1021 in the terminal device 1000. For example, the computer program 1021 may be segmented into a depth image acquisition module, a candidate fusion depth determination module, a relative depth determination module, a fusion depth recognition module, and a target depth image generation module, each of which functions specifically as follows:
the depth image acquisition module is used for respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
the candidate fusion depth determining module is used for determining a plurality of candidate fusion depths of each pixel point; and
the relative depth determining module is used for determining the relative depth of each pixel point;
the fusion depth identification module is used for identifying the fusion depth of a target pixel point according to a plurality of candidate fusion depths and relative depths of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points;
and the target depth image generation module is used for generating a target depth image of the current scene based on the fusion depth of the target pixel points.
The terminal device 1000 may be a mobile phone, a tablet computer, a camera, a video camera, or another device equipped with a multi-frequency TOF camera. The terminal device 1000 may include, but is not limited to, the processor 1010 and the memory 1020. Those skilled in the art will appreciate that fig. 10 is only one example of the terminal device 1000 and does not constitute a limitation on the terminal device 1000, which may include more or fewer components than those shown, or combine some components, or have different components; for example, the terminal device 1000 may further include an input and output device, a network access device, a bus, and the like.
The processor 1010 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1020 may be an internal storage unit of the terminal device 1000, such as a hard disk or a memory of the terminal device 1000. The memory 1020 may also be an external storage device of the terminal device 1000, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the terminal device 1000. Further, the memory 1020 may also include both an internal storage unit and an external storage device of the terminal device 1000. The memory 1020 is used for storing the computer program 1021 and other programs and data required by the terminal device 1000. The memory 1020 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also discloses a computer readable storage medium, which stores a computer program, and the computer program can realize the depth measurement method of the foregoing embodiments when being executed by a processor.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed depth measurement method, apparatus and terminal device may be implemented in other ways. For example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, can implement the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the depth measuring device and the terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A depth measurement method, comprising:
respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
determining a plurality of candidate fusion depths of each pixel point, and determining a relative depth of each pixel point;
identifying the fusion depth of a target pixel point according to a plurality of candidate fusion depths and relative depths of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points;
and generating a target depth image of the current scene based on the fusion depth of the target pixel points.
2. The method of claim 1, wherein determining the candidate fusion depths for each pixel point comprises:
respectively acquiring a first depth value of each pixel point under a first frequency;
and calculating a plurality of candidate fusion depths of each pixel point according to the first depth value.
3. The method of claim 2, wherein said calculating a plurality of candidate fusion depths for each pixel point according to the first depth value comprises:
determining a measuring range corresponding to the first frequency;
and respectively calculating the sums of the first depth value of each pixel point and a plurality of non-negative integer multiples of the measuring range corresponding to the first frequency, and taking these sums as the plurality of candidate fusion depths of each pixel point.
4. The method of any one of claims 1-3, wherein said determining the relative depth of each pixel point comprises:
respectively acquiring a second depth value of each pixel point under a second frequency;
sequencing each pixel point according to the sequence of the second depth values from small to large;
and determining an initial sequencing serial number of each pixel point after sequencing, and taking the initial sequencing serial number as the relative depth of the pixel point.
5. The method of claim 4, wherein identifying the fusion depth of the target pixel according to the plurality of candidate fusion depths and relative depths of the target pixel comprises:
respectively determining the relative position variation between each candidate fusion depth of a target pixel point and the relative depth of the target pixel point;
and identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the target pixel point.
6. The method of claim 5, wherein the separately determining a relative position change between each candidate fusion depth of the target pixel and the relative depth of the target pixel comprises:
inserting each candidate fusion depth of the target pixel point into a sequence which is sequenced according to a second depth value;
and calculating the difference of the sequence number value between the sequence number of each candidate fusion depth in the sequence and the initial sequence number of the target pixel point, and taking the difference of the sequence number values as the relative position variation.
7. The method of any one of claims 1-3, wherein said determining the relative depth of each pixel point comprises:
respectively acquiring a second depth value of each pixel point under a second frequency;
calculating a distance difference value between each candidate fusion depth of each pixel point and the second depth value;
extracting the minimum value of the distance difference value as the minimum fusion distance of the corresponding pixel point;
and determining the relative depth of each pixel point according to the minimum fusion distance.
8. The method of claim 7, wherein said determining a relative depth of each pixel point according to the minimum blend distance comprises:
identifying the pixel point with the minimum fusion distance smaller than a preset threshold value as a reference pixel point;
sequencing the reference pixel points according to the sequence of the second depth values from small to large;
determining an initial sorting sequence number of each reference pixel point after sorting, and taking the initial sorting sequence number as the relative depth of the reference pixel point;
for any non-reference pixel point, inserting a second depth value of the non-reference pixel point into a sequencing sequence of the reference pixel points;
and determining a sorting sequence number of the second depth value of the non-reference pixel point in the sorting sequence of the reference pixel point, and taking the sorting sequence number as the relative depth of the non-reference pixel point.
9. The method of claim 8, wherein the target pixel includes the reference pixel and the non-reference pixel, and wherein identifying the fusion depth of the target pixel according to a plurality of candidate fusion depths and relative depths of the target pixel comprises:
aiming at any reference pixel point, identifying the candidate fusion depth corresponding to the minimum fusion distance of the reference pixel point as the fusion depth of the reference pixel point;
and aiming at any non-reference pixel point, respectively determining the relative position variation between each candidate fusion depth of the non-reference pixel point and the relative depth of the non-reference pixel point, and identifying the candidate fusion depth corresponding to the minimum value of the relative position variation as the fusion depth of the non-reference pixel point.
10. A depth measurement device, comprising:
the depth image acquisition module is used for respectively acquiring depth images of a current scene under at least two measurement frequencies, wherein the at least two measurement frequencies at least comprise a first frequency and a second frequency, the first frequency is greater than the second frequency, and the depth images comprise a plurality of pixel points;
the candidate fusion depth determining module is used for determining a plurality of candidate fusion depths of each pixel point; and
the relative depth determining module is used for determining the relative depth of each pixel point;
the fusion depth identification module is used for identifying the fusion depth of a target pixel point according to a plurality of candidate fusion depths and relative depths of the target pixel point, wherein the target pixel point is any one of the plurality of pixel points;
and the target depth image generation module is used for generating a target depth image of the current scene based on the fusion depth of the target pixel points.
11. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the depth measurement method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a depth measurement method according to any one of claims 1 to 9.
CN202010244769.3A 2020-03-31 2020-03-31 Depth measurement method and device and terminal equipment Pending CN113470096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244769.3A CN113470096A (en) 2020-03-31 2020-03-31 Depth measurement method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244769.3A CN113470096A (en) 2020-03-31 2020-03-31 Depth measurement method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN113470096A true CN113470096A (en) 2021-10-01

Family

ID=77865511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244769.3A Pending CN113470096A (en) 2020-03-31 2020-03-31 Depth measurement method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN113470096A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881908A (en) * 2022-07-07 2022-08-09 武汉市聚芯微电子有限责任公司 Abnormal pixel identification method, device and equipment and computer storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305370A1 (en) * 2010-06-14 2011-12-15 Samsung Electronics Co., Ltd. Apparatus and method for depth unfolding based on multiple depth images
US20120033045A1 (en) * 2010-07-23 2012-02-09 Mesa Imaging Ag Multi-Path Compensation Using Multiple Modulation Frequencies in Time of Flight Sensor
CN103473794A (en) * 2012-06-05 2013-12-25 三星电子株式会社 Depth image generating method and apparatus and depth image processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHANGPENG TI ET AL.: "Single-shot Time-of-Flight Phase Unwrapping Using Two Modulation Frequencies", 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION, 31 December 2016 (2016-12-31), pages 668-675 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination