CN112532858A - Image processing method, image acquisition method and related device - Google Patents
- Publication number
- CN112532858A (application CN201910883236.7A)
- Authority
- CN
- China
- Prior art keywords
- depth image
- image
- depth
- calibrating
- multipath interference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application disclose an image processing method, an image acquisition method, and a related apparatus. The image processing method includes the following steps: acquiring a first depth image by a TOF camera; and optimizing multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image. With the technical solution provided by the embodiments of the present application, the multipath interference in the first depth image is optimized by the deep learning neural network, which helps reduce image distortion caused by multipath interference.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image acquisition method, and a related apparatus for an image acquired by a TOF camera.
Background
A time-of-flight (TOF) camera is a depth imaging camera based on TOF technology. When an image is captured with a TOF camera, as shown in fig. 1, light emitted by a TOF light source 101 is reflected by a subject 102 and captured by a TOF camera 103, and the distance between the subject 102 and the TOF camera 103 can be calculated from the time difference or phase difference between the emitted and captured light.
It should be noted that the reflected light captured by the TOF camera 103 includes not only light that the TOF light source 101 shines directly onto the subject 102 and that the subject reflects back, but also light that the light source shines onto the background and the background reflects to the camera, as well as light that the background reflects onto the subject and the subject in turn reflects to the camera. This produces multipath interference, and the presence of multipath interference makes the depth measurement inaccurate.
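To make the effect concrete, here is a minimal sketch of how an extra, longer reflection path biases the phase-based distance estimate. The modulation frequency, amplitudes, and path lengths below are assumed example values, and the phasor-sum model is a standard simplification of continuous-wave TOF imaging, not the specific method of this application.

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # assumed modulation frequency, Hz

def phase_for_distance(d):
    """Round-trip phase delay for a single return at distance d."""
    return (4 * math.pi * F_MOD * d / C) % (2 * math.pi)

def measured_distance(returns):
    """Distance recovered from the phase of the summed return phasors.

    `returns` is a list of (amplitude, path_distance) pairs; more than
    one pair models multipath interference.
    """
    re = sum(a * math.cos(phase_for_distance(d)) for a, d in returns)
    im = sum(a * math.sin(phase_for_distance(d)) for a, d in returns)
    phi = math.atan2(im, re) % (2 * math.pi)
    return C * phi / (4 * math.pi * F_MOD)

direct_only = measured_distance([(1.0, 2.0)])                 # subject at 2 m
with_multipath = measured_distance([(1.0, 2.0), (0.3, 3.5)])  # + background bounce
```

With these example values the direct-only estimate is exactly 2 m, while the weaker 3.5 m background bounce pulls the estimate to roughly 2.3 m; this kind of distortion is what the deep learning neural network is trained to remove.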
Errors in TOF camera imaging also arise from several other sources:
(1) Periodic errors caused by odd harmonics. The light wave emitted by the TOF light source 101 reaches the subject 102, is reflected by it, and is received by the TOF camera 103. Distance and amplitude calculations assume that the transmitted and received signals are standard waveforms, but the actual reference and received signals are not: they also contain direct-current components, higher harmonics, non-harmonic signals, and the like. These factors introduce unavoidable interference into the distance measurement.
(2) Errors caused by device temperature variation (temperature drift). To ensure the accuracy of four-phase sampling, the modulation signals at the transmitting end and the receiving end of the TOF camera must be strictly synchronized. Because of the distance between the two ends, a small global phase difference arises as the modulation signal propagates to them, and this phase difference depends on the module temperature. Temperature-drift correction compensates for the change in this global phase difference at different temperatures.
(3) Gradient errors caused by individual pixel differences. A large body of research shows that errors differ with integration time, and an integration time that is too long or too short strongly affects the measurement standard deviation. In addition, many other factors affect TOF cameras; for example, manufacturing-process limitations mean that pixels cannot be made exactly identical, so per-pixel offsets can also introduce errors.
(4) Calibration of the lens intrinsic parameters.
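The four-phase sampling mentioned in item (2) recovers the modulation phase from four correlation samples taken 90° apart. The sketch below uses one common sign convention for ideal, noise-free samples; it is illustrative only, since real samples also carry the DC components and harmonics described in item (1).

```python
import math

def four_samples(phi, amplitude=1.0, offset=2.0):
    """Ideal correlation samples at 0/90/180/270 degrees for a return
    whose round-trip phase delay is `phi` (offset = ambient/DC level)."""
    return [offset + amplitude * math.cos(phi - k * math.pi / 2)
            for k in range(4)]

def phase_from_four_samples(a0, a1, a2, a3):
    """Standard 4-phase demodulation: the pairwise differences cancel
    the DC offset, leaving atan2 of the quadrature components."""
    return math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
```

Because the phase enters only through differences, the constant offset drops out; this is also why an uncompensated global phase shift between transmitter and receiver (the temperature drift above) translates directly into a distance error.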
At present, correction of images acquired by a TOF camera mainly calibrates the above four aspects. As shown in fig. 1, a calibration unit in the correction system 104 calibrates the periodic errors caused by odd harmonics, the errors caused by device temperature changes, the gradient errors caused by individual pixel differences, the lens intrinsic parameters, and so on.
It should be noted that although the prior art calibrates the periodic errors caused by odd harmonics, the errors caused by device temperature changes, the gradient errors caused by individual pixel differences, and the lens intrinsic parameters, it does not optimize multipath interference, because the signal returned directly by the measured object is difficult to separate from the interference signals returned by the environment.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image acquisition method and a related device, which can reduce image distortion caused by multipath interference.
In a first aspect, an embodiment of the present application discloses an image processing method, including the following steps:
acquiring a first depth image by a TOF camera;
and optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
According to this technical solution, the deep learning neural network is used to optimize the multipath interference in the first depth image, which helps reduce image distortion caused by multipath interference.
In some possible embodiments, before the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, the method further includes:
calibrating the first depth image by using a preset processing module to obtain a third depth image, wherein the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise;
the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image comprises: and optimizing the multipath interference in the third depth image by using a deep learning neural network to obtain a second depth image.
In this embodiment, the first depth image is processed sequentially by the preset processing module and then by the deep learning neural network, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, after the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image, the method further includes:
calibrating the second depth image by using a preset processing module to obtain a fourth depth image, wherein the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise.
In this embodiment, the first depth image is processed sequentially by the deep learning neural network and then by the preset processing module, which facilitates further optimization of the image and reduces image distortion.
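The two orderings described in these embodiments (calibration before the network, yielding the third depth image, or after it, yielding the fourth) can be sketched as a simple pipeline. Both `calibrate` and `optimize_multipath` below are toy placeholders standing in for the preset processing module and the deep learning neural network, respectively, not the actual implementations.

```python
def calibrate(depth):
    """Toy stand-in for the preset processing module (harmonic,
    temperature-drift, pixel-gradient and lens-intrinsic calibration,
    plus noise filtering); here just a fixed offset correction."""
    return [d - 0.01 for d in depth]

def optimize_multipath(depth):
    """Toy stand-in for the deep learning neural network; multipath
    biases depths upward, so this 'network' shrinks each value slightly."""
    return [d * 0.95 for d in depth]

def process(first_depth, pre=True, post=False):
    """First depth image in, optimized depth image out.
    pre=True produces the 'third depth image' before the network;
    post=True produces the 'fourth depth image' after it."""
    img = calibrate(first_depth) if pre else first_depth  # third depth image
    img = optimize_multipath(img)                         # second depth image
    return calibrate(img) if post else img                # fourth depth image
```

Either stage can be switched off, matching the claim language in which calibration before or after the network is optional.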
In some possible embodiments, the input used to train the deep learning neural network includes a preset data set. The preset data set includes a plurality of data pairs, and the two items in any data pair are captured in the same scene and from the same angle: one is a depth map that includes multipath interference, and the other is a depth map that does not, where the depth map without multipath interference is obtained by simulating the TOF depth calculation process.
Because a depth map free of multipath interference is difficult to obtain when depth data are actually captured, in this embodiment the depth map without multipath interference used to build the preset data set is obtained by simulating the TOF depth calculation process. The simulation covers the whole pipeline: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual scene, setting up a virtual light source and a virtual camera, and then performing ray tracing to simulate light that is emitted by the source, reflected by each surface in the scene, and received by the camera. Only one reflection is computed when a depth map without multipath interference is needed, and multiple reflections are computed when a depth map with multipath interference is needed. Verification shows that the simulated data are basically consistent in trend with real captured data, which reduces the difficulty of obtaining depth maps free of multipath interference.
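The pairing scheme described above can be sketched schematically. `render_depth` below is a stub rather than a real ray tracer, and its multipath bias model (a fixed fraction of the indirect path excess) is purely illustrative, but the pair structure follows the text: one rendering computes a single reflection (the clean reference), the other computes multiple reflections (the input with multipath interference).

```python
import random

def render_depth(scene, max_bounces):
    """Stub renderer. `scene` is a list of (true_depth, indirect_excess)
    per pixel. With max_bounces == 1 only the direct return contributes;
    with more bounces, longer indirect paths bias each pixel upward."""
    if max_bounces == 1:
        return [true_d for true_d, _ in scene]
    # illustrative bias: a fraction of the extra indirect path length
    return [true_d + 0.2 * extra for true_d, extra in scene]

def make_data_pair(scene):
    """One training pair: (depth map with multipath, clean depth map)."""
    return (render_depth(scene, max_bounces=3),
            render_depth(scene, max_bounces=1))

scene = [(random.uniform(0.5, 4.0), random.uniform(0.0, 1.0))
         for _ in range(8)]
noisy, clean = make_data_pair(scene)
```

Generating many such pairs over varied virtual scenes gives the network matched noisy/clean supervision that real capture cannot provide.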
In a second aspect, an embodiment of the present application discloses an image acquisition method, which is applied to an electronic device including a TOF camera, and the image acquisition method includes the following steps:
acquiring an image acquisition instruction;
triggering the TOF camera to acquire a first depth image according to the image acquisition instruction;
and optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
It should be noted that the image acquisition instruction may be an instruction that, after a certain application is triggered, instructs the electronic device to acquire an image through the TOF camera. The application may be, for example, photographing, 3D try-on, augmented reality (AR) decoration, AR games, motion-sensing games, or holographic image interaction. The image acquisition instruction may be generated when a user clicks a button on the display screen of the electronic device, presses a control of the electronic device, slides on the display screen of the electronic device, and so on. The electronic device can detect the generated image acquisition instruction.
According to this technical solution, during image acquisition the deep learning neural network is used to optimize the multipath interference in the first depth image acquired by the TOF camera, which helps reduce image distortion caused by multipath interference.
In some possible embodiments, before the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, the method further includes:
calibrating the first depth image by using a preset processing module to obtain a third depth image, wherein the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise;
the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image comprises: and optimizing the multipath interference in the third depth image by using a deep learning neural network to obtain a second depth image.
In this embodiment, the first depth image is processed sequentially by the preset processing module and then by the deep learning neural network, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, after the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image, the method further includes:
calibrating the second depth image by using a preset processing module to obtain a fourth depth image, wherein the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise.
In this embodiment, the first depth image is processed sequentially by the deep learning neural network and then by the preset processing module, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, the input used to train the deep learning neural network includes a preset data set. The preset data set includes a plurality of data pairs, and the two items in any data pair are captured in the same scene and from the same angle: one is a depth map that includes multipath interference, and the other is a depth map that does not, where the depth map without multipath interference is obtained by simulating the TOF depth calculation process.
Because a depth map free of multipath interference is difficult to obtain when depth data are actually captured, in this embodiment the depth map without multipath interference used to build the preset data set is obtained by simulating the TOF depth calculation process. The simulation covers the whole pipeline: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual scene, setting up a virtual light source and a virtual camera, and then performing ray tracing to simulate light that is emitted by the source, reflected by each surface in the scene, and received by the camera. Only one reflection is computed when a depth map without multipath interference is needed, and multiple reflections are computed when a depth map with multipath interference is needed. Verification shows that the simulated data are basically consistent in trend with real captured data, which reduces the difficulty of obtaining depth maps free of multipath interference.
In a third aspect, an embodiment of the present application discloses an image processing apparatus, including:
a first acquisition unit for acquiring a first depth image by a TOF camera;
and the first processing unit is used for optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
According to this technical solution, the deep learning neural network is used to optimize the multipath interference in the first depth image, which helps reduce image distortion caused by multipath interference.
In some possible embodiments, the first processing unit is further configured to, before optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise;
the first processing unit is specifically configured to optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image in the aspect that the multipath interference in the first depth image is optimized by using the deep learning neural network to obtain the second depth image.
In this embodiment, the first depth image is processed sequentially by the preset processing module and then by the deep learning neural network, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, the first processing unit is further configured to, after optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise.
In this embodiment, the first depth image is processed sequentially by the deep learning neural network and then by the preset processing module, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, the input used to train the deep learning neural network includes a preset data set. The preset data set includes a plurality of data pairs, and the two items in any data pair are captured in the same scene and from the same angle: one is a depth map that includes multipath interference, and the other is a depth map that does not, where the depth map without multipath interference is obtained by simulating the TOF depth calculation process.
Because a depth map free of multipath interference is difficult to obtain when depth data are actually captured, in this embodiment the depth map without multipath interference used to build the preset data set is obtained by simulating the TOF depth calculation process. The simulation covers the whole pipeline: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual scene, setting up a virtual light source and a virtual camera, and then performing ray tracing to simulate light that is emitted by the source, reflected by each surface in the scene, and received by the camera. Only one reflection is computed when a depth map without multipath interference is needed, and multiple reflections are computed when a depth map with multipath interference is needed. Verification shows that the simulated data are basically consistent in trend with real captured data, which reduces the difficulty of obtaining depth maps free of multipath interference.
In a fourth aspect, an embodiment of the present application discloses an image capturing apparatus, where the image capturing apparatus is applied in an electronic device including a TOF camera, the image capturing apparatus includes:
the second acquisition unit is used for acquiring an image acquisition instruction;
the trigger unit is used for triggering the TOF camera to acquire a first depth image according to the image acquisition instruction;
and the second processing unit is used for optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
According to this technical solution, during image acquisition the deep learning neural network is used to optimize the multipath interference in the first depth image acquired by the TOF camera, which helps reduce image distortion caused by multipath interference.
In some possible embodiments, the second processing unit is further configured to, before optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise;
the second processing unit is specifically configured to optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image in the aspect that the multipath interference in the first depth image is optimized by using the deep learning neural network to obtain the second depth image.
In this embodiment, the first depth image is processed sequentially by the preset processing module and then by the deep learning neural network, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, the second processing unit is further configured to, after optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating harmonic errors, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens intrinsic parameters, and filtering noise.
In this embodiment, the first depth image is processed sequentially by the deep learning neural network and then by the preset processing module, which facilitates further optimization of the image and reduces image distortion.
In some possible embodiments, the input used to train the deep learning neural network includes a preset data set. The preset data set includes a plurality of data pairs, and the two items in any data pair are captured in the same scene and from the same angle: one is a depth map that includes multipath interference, and the other is a depth map that does not, where the depth map without multipath interference is obtained by simulating the TOF depth calculation process.
Because a depth map free of multipath interference is difficult to obtain when depth data are actually captured, in this embodiment the depth map without multipath interference used to build the preset data set is obtained by simulating the TOF depth calculation process. The simulation covers the whole pipeline: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual scene, setting up a virtual light source and a virtual camera, and then performing ray tracing to simulate light that is emitted by the source, reflected by each surface in the scene, and received by the camera. Only one reflection is computed when a depth map without multipath interference is needed, and multiple reflections are computed when a depth map with multipath interference is needed. Verification shows that the simulated data are basically consistent in trend with real captured data, which reduces the difficulty of obtaining depth maps free of multipath interference.
In a fifth aspect, an embodiment of the present application discloses an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform part or all of the steps of the image processing method according to the first aspect or any possible embodiment of the first aspect.
In a sixth aspect, an embodiment of the present application discloses a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements part or all of the steps of the image processing method as described in the first aspect or any possible embodiment of the first aspect.
In a seventh aspect, the present application provides a computer program product, where the computer program product includes a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute some or all of the steps of the image processing method according to the first aspect or any possible embodiment of the first aspect.
In an eighth aspect, an embodiment of the present application discloses an electronic device, including: a TOF camera for acquiring a first depth image, a memory, and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform part or all of the steps of the image acquisition method described in the second aspect or any possible embodiment of the second aspect.
In a ninth aspect, the present application discloses a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements part or all of the steps of the image capturing method as described in the second aspect or any possible embodiment of the second aspect.
In a tenth aspect, the present application provides a computer program product, where the computer program product includes a computer-readable storage medium storing a computer program, where the computer program makes a computer execute some or all of the steps of the image acquisition method according to the second aspect or any possible embodiment of the second aspect.
With the technical solution provided by the embodiments of the present application, the multipath interference in the first depth image is optimized by using a deep learning neural network, which helps reduce image distortion caused by multipath interference.
Drawings
Fig. 1 is a schematic view of an application scenario of an image processing method in the prior art.
Fig. 2A is a schematic view of an application scenario of an image processing method according to an embodiment of the present application.
Fig. 2B is a schematic view of an application scenario of an image processing method according to another embodiment of the present application.
Fig. 3A is a flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 3B is a flowchart illustrating an image processing method according to another embodiment of the present application.
Fig. 3C is a flowchart illustrating an image processing method according to another embodiment of the present application.
Fig. 3D is a schematic diagram of a photographed subject in an embodiment of the present application.
Fig. 3E is a schematic diagram obtained by processing an image of the subject in fig. 3D using the prior art.
Fig. 3F is a schematic diagram obtained by processing an image of the subject in fig. 3D according to the present application.
Fig. 3G is a schematic flowchart of an image acquisition method according to an embodiment of the present application.
Fig. 3H is a schematic flowchart of an image acquisition method according to another embodiment of the present application.
Fig. 3I is a schematic flowchart of an image acquisition method according to another embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an image capturing device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
The image processing method provided by the embodiment of the present application, as shown in fig. 3A, includes the following steps: 301. A first depth image is acquired by a TOF camera. 302. The multipath interference in the first depth image is optimized by using a deep learning neural network to obtain a second depth image.
As shown in fig. 2A, incident light emitted by a TOF light source 201 is reflected by a subject 202 and captured by a TOF camera 203. The distance between the subject 202 and the TOF camera 203 can be calculated from the time difference or phase difference between emission and capture of the light, so as to obtain a first depth image. The first depth image is then optimized by a multipath optimization unit in a correction system 204; specifically, the multipath optimization unit optimizes the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image. It should be noted that the functions of the correction system 204 may be implemented in hardware. To save cost, the functions of the correction system 204 may also be implemented purely in software. Of course, the functions of the correction system 204 may also be implemented partly in hardware and partly in software.
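The phase-difference distance calculation mentioned above can be sketched as follows for a continuous-wave TOF camera. This is a minimal illustration only; the modulation frequency and the single-path assumption are hypothetical values, not taken from the embodiment:

```python
import math

C = 299_792_458.0  # speed of light, m/s


def tof_distance(phase_rad, mod_freq_hz):
    """Distance from the measured phase shift of a continuous-wave
    TOF signal: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)


# Example: a pi/2 phase shift at an assumed 20 MHz modulation frequency
d = tof_distance(math.pi / 2.0, 20e6)  # about 1.87 m
```

Applying this per pixel over the sensor array yields the first depth image described above.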
According to the above technical solution, the deep learning neural network is used to optimize the multipath interference in the first depth image, which helps to reduce the image distortion caused by multipath interference.
In some possible embodiments of the present application, the input of the training model of the deep learning neural network comprises a preset data set. The preset data set comprises a plurality of data pairs, and the two data in any data pair correspond to the same scene and the same angle: one datum is a depth map including multipath interference, and the other datum is a depth map not including multipath interference, where the depth map not including multipath interference is obtained by simulating the TOF depth calculation process.
Because it is difficult to obtain a depth map without multipath interference when depth data are actually acquired, in this embodiment the depth map without multipath interference is obtained by simulating the TOF depth calculation process when the preset data set is constructed. The simulation covers the whole process: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual three-dimensional scene, establishing a virtual light source, establishing a virtual camera, and the like; ray tracing is then performed to simulate light rays that are emitted by the light source, reflected by each entity surface in the scene, and received by the camera. Only one reflection is computed when a depth map that does not include multipath interference is needed, and multiple reflections are computed when a depth map that includes multipath interference is needed. Verification shows that the trend of the data obtained through simulation is basically consistent with that of real shooting data, so the difficulty of obtaining a depth map without multipath interference is reduced.
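The effect that this single-reflection versus multiple-reflection simulation captures can be illustrated with a toy phasor model of one continuous-wave TOF pixel: summing one return path recovers the true depth, while adding an indirect bounce biases the recovered phase. All distances, amplitudes, and the modulation frequency below are illustrative assumptions, not values from the embodiment:

```python
import cmath
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency (assumed value)


def measured_depth(paths):
    """Each path is (true_distance_m, amplitude). A continuous-wave
    TOF pixel sums the returning signals as phasors; the depth is
    recovered from the phase of the sum, which is biased as soon as
    more than one path contributes."""
    total = sum(a * cmath.exp(1j * 4.0 * math.pi * F_MOD * d / C)
                for d, a in paths)
    phase = cmath.phase(total) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * F_MOD)


direct = [(2.0, 1.0)]              # single reflection: clean datum
multi = [(2.0, 1.0), (2.6, 0.4)]   # extra indirect bounce: corrupted datum

clean = measured_depth(direct)   # recovers the true 2.0 m
noisy = measured_depth(multi)    # reads longer than 2.0 m
```

Computing `clean` and `noisy` for every pixel of a ray-traced virtual scene would yield one data pair of the kind described above.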
As a further improvement, as shown in fig. 2B, in addition to the multi-path optimization, the calibration operation may be performed by a preset processing unit, and the calibration operation performed by the preset processing unit may include any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
It should be noted that, the order of the calibration operation and the multipath optimization operation is not limited, and the calibration operation may be performed first, and then the multipath optimization operation may be performed; it is also possible to perform the multipath optimization operation first and then perform the calibration operation.
As shown in fig. 3B, in this embodiment, the calibration operation is performed first, and then the multipath optimization operation is performed. The image processing method corresponding to fig. 3B may include the following steps:
311. A first depth image is acquired by a TOF camera.
312. The first depth image is calibrated by using a preset processing module to obtain a third depth image. The preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
313. The multipath interference in the third depth image is optimized by using a deep learning neural network to obtain a second depth image.
As shown in fig. 3C, in this embodiment, the multipath optimization operation is performed first, and then the calibration operation is performed. The image processing method corresponding to fig. 3C may include the following steps:
321. A first depth image is acquired by a TOF camera.
322. The multipath interference in the first depth image is optimized by using a deep learning neural network to obtain a second depth image.
323. The second depth image is calibrated by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
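The two processing orders (steps 311–313 and steps 321–323) can be sketched as composable stages. The bodies of `calibrate` and `optimize_multipath` below are illustrative placeholders only, not the actual preset processing module or the deep learning neural network:

```python
def calibrate(depth):
    """Stand-in for the preset processing module (multiple harmonics,
    temperature drift, pixel gradient error, lens internal parameters,
    noise filtering); here just an illustrative offset correction."""
    return [d - 0.01 for d in depth]


def optimize_multipath(depth):
    """Stand-in for the deep learning multipath optimizer; here just
    an illustrative scale correction."""
    return [d * 0.98 for d in depth]


first = [2.05, 2.10, 2.00]  # hypothetical first depth image (one row)

# Steps 311-313: calibrate first (third depth image), then optimize.
second_via_calibrate_first = optimize_multipath(calibrate(first))

# Steps 321-323: optimize first (second depth image), then calibrate
# (fourth depth image).
fourth_via_optimize_first = calibrate(optimize_multipath(first))
```

As the embodiment notes, neither order is mandated; the two compositions produce slightly different outputs here only because the placeholder stages are not commutative.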
Fig. 3F is a schematic view of a depth image of the detected object, a teddy bear, obtained by using the technical solution provided in this embodiment. Specifically, the TOF camera captures an image of the teddy bear shown in fig. 3D to obtain the first depth image shown in fig. 3E. As can be seen from fig. 3E, the first depth image has a relatively rough surface, much noise, thin limbs, and missing ear patterns. After the first depth image shown in fig. 3E is processed by using the flow shown in fig. 3C, the optimized second depth image shown in fig. 3F is obtained. As can be seen from fig. 3F, compared with the first depth image, the limbs of the teddy bear are obviously thickened, the lost ears reappear, and the second depth image is closer to the real object, which further proves that the technical solution provided by the present application can reduce the image distortion caused by multipath interference.
The technical solution provided by the present application can be applied to application software (APP) of a terminal device. When the terminal device uses a TOF camera to collect images, the image processing method of any one of the preceding embodiments can be applied to the collected images. The terminal device may be an electronic device with a TOF camera, such as: a mobile phone (or "cellular" phone), a smart phone, a portable wearable device (such as a smart watch), a tablet computer, a Personal Computer (PC), a PDA (Personal Digital Assistant), and the like. The APP may be an application such as photographing, 3D fitting, AR decoration, an AR game, a somatosensory game, or holographic image interaction. The following takes a photographing APP in a smart phone (hereinafter referred to as a mobile phone) as an example.
Referring to fig. 3G, fig. 3G is a schematic flowchart of an image capturing method according to an embodiment of the present disclosure, in which the image capturing method includes the following steps:
331. An image acquisition instruction is acquired.
For example, the image acquisition instruction may be generated by a user clicking a photographing button on the display screen of a mobile phone, and the mobile phone can detect the generated image acquisition instruction.
332. A first depth image is acquired by a TOF camera.
333. The multipath interference in the first depth image is optimized by using a deep learning neural network to obtain a second depth image.
According to the above technical solution, the deep learning neural network is used to optimize the multipath interference in the first depth image, which helps to reduce the image distortion caused by multipath interference.
In some possible embodiments of the present application, the input of the training model of the deep learning neural network comprises a preset data set. The preset data set comprises a plurality of data pairs, and the two data in any data pair correspond to the same scene and the same angle: one datum is a depth map including multipath interference, and the other datum is a depth map not including multipath interference, where the depth map not including multipath interference is obtained by simulating the TOF depth calculation process.
Because it is difficult to obtain a depth map without multipath interference when depth data are actually acquired, in this embodiment the depth map without multipath interference is obtained by simulating the TOF depth calculation process when the preset data set is constructed. The simulation covers the whole process: constructing virtual three-dimensional scene data, assigning material parameters such as surface reflectivity to each entity in the virtual three-dimensional scene, establishing a virtual light source, establishing a virtual camera, and the like; ray tracing is then performed to simulate light rays that are emitted by the light source, reflected by each entity surface in the scene, and received by the camera. Only one reflection is computed when a depth map that does not include multipath interference is needed, and multiple reflections are computed when a depth map that includes multipath interference is needed. Verification shows that the trend of the data obtained through simulation is basically consistent with that of real shooting data, so the difficulty of obtaining a depth map without multipath interference is reduced.
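A minimal sketch of supervised training on such simulated pairs follows. The single affine "network", the synthetic distortion model, and every constant below are illustrative assumptions standing in for the deep learning neural network of the embodiment:

```python
import random

random.seed(0)

# Hypothetical simulated data pairs: assume multipath inflates depth
# by a roughly constant scale plus an offset (illustrative only).
clean = [random.uniform(0.5, 4.0) for _ in range(400)]   # no multipath
noisy = [1.05 * c + 0.03 + random.gauss(0.0, 0.002) for c in clean]
pairs = list(zip(noisy, clean))

# Minimal stand-in for the network: one learned affine correction,
# fitted to the pairs by mean-squared-error gradient descent.
w, b, lr, n = 1.0, 0.0, 0.01, len(pairs)
for _ in range(3000):
    grad_w = sum((w * x + b - y) * x for x, y in pairs) / n
    grad_b = sum((w * x + b - y) for x, y in pairs) / n
    w -= lr * grad_w
    b -= lr * grad_b

baseline = sum(abs(x - y) for x, y in pairs) / n          # error before
residual = sum(abs(w * x + b - y) for x, y in pairs) / n  # error after
```

In the embodiment the correction would be learned per pixel by a deep network rather than by a single global affine map, but the training signal, pairs of multipath-corrupted and multipath-free depth values, is the same.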
As a further improvement, in addition to the multipath optimization, the first depth image may be subjected to a calibration operation performed by a preset processing unit, and the calibration operation performed by the preset processing unit may include any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
It should be noted that, the order of the calibration operation and the multipath optimization operation is not limited, and the calibration operation may be performed first, and then the multipath optimization operation may be performed; it is also possible to perform the multipath optimization operation first and then perform the calibration operation.
As shown in fig. 3H, in this embodiment, the calibration operation is performed first, and then the multipath optimization operation is performed. The image acquisition method corresponding to fig. 3H may include the following steps:
341. An image acquisition instruction is acquired.
342. A first depth image is acquired by a TOF camera.
343. The first depth image is calibrated by using a preset processing module to obtain a third depth image. The preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
344. The multipath interference in the third depth image is optimized by using a deep learning neural network to obtain a second depth image.
As shown in fig. 3I, in this embodiment, the multipath optimization operation is performed first, and then the calibration operation is performed. The image acquisition method corresponding to fig. 3I may include the following steps:
351. An image acquisition instruction is acquired.
352. A first depth image is acquired by a TOF camera.
353. The multipath interference in the first depth image is optimized by using a deep learning neural network to obtain a second depth image.
354. The second depth image is calibrated by using a preset processing module to obtain a fourth depth image. The preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
Referring to fig. 4, an embodiment of the present application further provides an image processing apparatus. As shown in fig. 4, an image processing apparatus 400 provided in this embodiment includes a first acquisition unit 401 and a first processing unit 402. The first acquisition unit 401 is configured to acquire a first depth image through a TOF camera. The first processing unit 402 is configured to optimize the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
In some possible embodiments, the first processing unit 402 is further configured to, before the multipath interference in the first depth image is optimized by using the deep learning neural network to obtain the second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise. In the aspect of optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, the first processing unit 402 is specifically configured to optimize the multipath interference in the third depth image to obtain the second depth image.
In some possible embodiments, the first processing unit 402 is further configured to, after optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
In some possible embodiments, the input of the training model of the deep learning neural network comprises a preset data set. The preset data set comprises a plurality of data pairs, and the two data in any data pair correspond to the same scene and the same angle: one datum is a depth map including multipath interference, and the other datum is a depth map not including multipath interference, where the depth map not including multipath interference is obtained by simulating the TOF depth calculation process.
Referring to fig. 5, an embodiment of the present application further provides an image capturing apparatus. As shown in fig. 5, an image capturing apparatus 500 provided in this embodiment includes a second acquisition unit 501, a trigger unit 502, and a second processing unit 503. The second acquisition unit 501 is configured to acquire an image acquisition instruction; the trigger unit 502 is configured to trigger the TOF camera to acquire a first depth image according to the image acquisition instruction; and the second processing unit 503 is configured to optimize the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
In some possible embodiments, the second processing unit 503 is further configured to, before the multipath interference in the first depth image is optimized by using the deep learning neural network to obtain the second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise. In the aspect of optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, the second processing unit 503 is specifically configured to optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image.
In some possible embodiments, the second processing unit 503 is further configured to, after optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
In some possible embodiments, the input of the training model of the deep neural network comprises a preset data set, the preset data set comprises a plurality of data pairs, and two data in any one of the data pairs comprise: in the same scene and at the same angle, one datum is a depth map including multipath interference, and the other datum is a depth map not including the multipath interference, wherein the depth map not including the multipath interference is obtained by simulating a TOF depth calculation process.
Referring to fig. 6, an embodiment of the present application further provides an electronic device 600, where the electronic device 600 includes: a TOF camera 603, a memory 601 and a processor 602, wherein the TOF camera 603 is configured to acquire a first depth image, the memory 601 stores a computer program, and the computer program, when executed by the processor 602, causes the processor 602 to perform some or all of the steps of the image acquisition method according to any of the embodiments.
An embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to execute part or all of the steps of the image processing method according to any of the foregoing embodiments.
The embodiment of the application discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the computer readable storage medium realizes part or all of the steps of the image processing method in any one of the previous embodiments.
The present application provides a computer program product, which includes a computer-readable storage medium storing a computer program, where the computer program makes a computer execute some or all of the steps of the image processing method described in any one of the foregoing embodiments.
The embodiment of the application discloses a computer-readable storage medium, on which a computer program is stored, wherein the computer program is executed by a processor, and part or all of the steps of the image acquisition method in any one of the previous embodiments are executed.
The present application provides a computer program product, which includes a computer readable storage medium storing a computer program, where the computer program makes a computer execute part or all of the steps of the image acquisition method according to any one of the foregoing embodiments.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device 700 includes: a radio frequency unit 710, a memory 720, an input unit 730, a TOF camera 740, an audio circuit 750, a processor 760, an external interface 770, and a power supply 780. The input unit 730 includes a touch screen 731 and other input devices 732, and the audio circuit 750 includes a speaker 751, a microphone 752, and a headphone jack 753. The touch screen 731 may be a display screen having a touch function. In this embodiment, after the photographing APP is started, on the photographing APP interface, a user may trigger generation of an image acquisition instruction by clicking a photographing button displayed on the touch screen 731; the TOF camera 740 then acquires a first depth image, and the processor 760 optimizes the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image. In some possible embodiments, the processor 760 may further calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise. It should be noted that the processor 760 may also first calibrate the first depth image by using the preset processing module to obtain a third depth image, and then optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image.
The explanations and expressions of the technical features, and the extensions to various implementation forms, in the above method embodiments are also applicable to the method execution in the apparatuses, and are not repeated in the apparatus embodiments.
It should be understood that the division of the modules in the above apparatus is only a logical division, and in actual implementation the modules may be wholly or partially integrated into one physical entity, or may be physically separated. For example, each of the above modules may be a separately established processing element, or may be implemented by being integrated in a certain chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a certain processing element of the processor calling and executing the functions of each of the above modules. In addition, the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit chip having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or by an instruction in the form of software. The processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or may be one or more integrated circuits configured to implement the above methods, such as: one or more application-specific integrated circuits (ASICs), or one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), among others.
It is to be understood that the terms "first," "second," and the like in the description and in the claims, and in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (20)
1. An image processing method, characterized in that it comprises the steps of:
acquiring a first depth image by a TOF camera;
and optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
2. The image processing method of claim 1, wherein before the optimizing the multipath interference in the first depth image using the deep learning neural network to obtain the second depth image, the method further comprises:
calibrating the first depth image by using a preset processing module to obtain a third depth image, wherein the preset processing module is used for executing any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters and filtering noise;
the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image comprises: and optimizing the multipath interference in the third depth image by using a deep learning neural network to obtain a second depth image.
3. The image processing method of claim 1, wherein after the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image, the method further comprises:
calibrating the second depth image by using a preset processing module to obtain a fourth depth image, wherein the preset processing module is used for executing any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
4. The image processing method according to any one of claims 1 to 3,
the input of the training model of the deep neural network comprises a preset data set, the preset data set comprises a plurality of data pairs, and two data in any data pair comprise: in the same scene and at the same angle, one datum is a depth map including multipath interference, and the other datum is a depth map not including the multipath interference, wherein the depth map not including the multipath interference is obtained by simulating a TOF depth calculation process.
5. An image acquisition method applied to an electronic device including a TOF camera, the image acquisition method comprising the steps of:
acquiring an image acquisition instruction;
triggering the TOF camera to acquire a first depth image according to the image acquisition instruction;
and optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
6. The image capturing method of claim 5, wherein before the optimizing the multipath interference in the first depth image using the deep learning neural network to obtain the second depth image, the method further comprises:
calibrating the first depth image by using a preset processing module to obtain a third depth image, wherein the preset processing module is used for executing any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters and filtering noise;
the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image comprises: and optimizing the multipath interference in the third depth image by using a deep learning neural network to obtain a second depth image.
7. The image capturing method of claim 5, wherein after the optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain a second depth image, the method further comprises:
calibrating the second depth image by using a preset processing module to obtain a fourth depth image, wherein the preset processing module is used for executing any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
8. The image acquisition method according to any one of claims 5 to 7,
the input of the training model of the deep neural network comprises a preset data set, the preset data set comprises a plurality of data pairs, and two data in any data pair comprise: in the same scene and at the same angle, one datum is a depth map including multipath interference, and the other datum is a depth map not including the multipath interference, wherein the depth map not including the multipath interference is obtained by simulating a TOF depth calculation process.
9. An image processing apparatus characterized by comprising:
a first acquisition unit for acquiring a first depth image by a TOF camera;
and the first processing unit is used for optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
10. The image processing apparatus according to claim 9,
the first processing unit is further configured to, before optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters and filtering noise;
the first processing unit is specifically configured to optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image in the aspect that the multipath interference in the first depth image is optimized by using the deep learning neural network to obtain the second depth image.
11. The image processing apparatus according to claim 9,
the first processing unit is further configured to, after optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
12. The image processing apparatus according to any one of claims 9 to 11,
the input of the training model of the deep neural network comprises a preset data set, the preset data set comprises a plurality of data pairs, and each data pair comprises two depth maps of the same scene captured from the same angle: one depth map containing multipath interference and one depth map free of multipath interference, wherein the depth map free of multipath interference is obtained by simulating a TOF depth calculation process.
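The training pairs described above can be generated by running a TOF depth-calculation simulation twice: once with only the direct light path (giving the MPI-free ground truth) and once with an added secondary-reflection phase term (giving the corrupted input). A hedged numpy sketch, where the modulation frequency and the toy phase perturbation are assumptions for illustration:

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
F_MOD = 20e6     # assumed modulation frequency, Hz (hypothetical)

def tof_depth_from_phase(phase):
    """Simulated TOF depth calculation: depth = c * phase / (4 * pi * f_mod)."""
    return C * phase / (4 * np.pi * F_MOD)

def make_data_pair(rng, shape=(4, 4)):
    """One training pair: (MPI-corrupted depth map, MPI-free depth map),
    both of the same scene from the same angle."""
    true_depth = 1.0 + 2.0 * rng.random(shape)          # scene depth in metres
    direct_phase = 4 * np.pi * F_MOD * true_depth / C   # direct path only
    clean = tof_depth_from_phase(direct_phase)          # MPI-free ground truth
    mpi_phase = direct_phase + 0.05 * rng.random(shape) # toy secondary reflection
    corrupted = tof_depth_from_phase(mpi_phase)         # MPI-corrupted sample
    return corrupted, clean

rng = np.random.default_rng(1)
dataset = [make_data_pair(rng) for _ in range(8)]       # preset data set of pairs
```

Because the simulator controls the light paths explicitly, the direct-path-only run yields ground truth that a real camera cannot capture, which is the point of simulating the TOF depth calculation process.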
13. An image acquisition apparatus, characterized in that the image acquisition apparatus is applied in an electronic device comprising a TOF camera, the image acquisition apparatus comprising:
a second acquisition unit for acquiring an image acquisition instruction;
a trigger unit for triggering the TOF camera to acquire a first depth image according to the image acquisition instruction;
and a second processing unit for optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image.
14. The image capturing device of claim 13,
the second processing unit is further configured to, before optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image, calibrate the first depth image by using a preset processing module to obtain a third depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise;
wherein, in the aspect of optimizing the multipath interference in the first depth image by using the deep learning neural network to obtain the second depth image, the second processing unit is specifically configured to optimize the multipath interference in the third depth image by using the deep learning neural network to obtain the second depth image.
15. The image capturing device of claim 13,
the second processing unit is further configured to, after optimizing the multipath interference in the first depth image by using a deep learning neural network to obtain a second depth image, calibrate the second depth image by using a preset processing module to obtain a fourth depth image, where the preset processing module is configured to perform any one or more of the following operations: calibrating multiple harmonics, calibrating temperature drift, calibrating gradient errors caused by individual pixel differences, calibrating lens internal parameters, and filtering noise.
16. The image capturing device according to any one of claims 13 to 15,
the input of the training model of the deep neural network comprises a preset data set, the preset data set comprises a plurality of data pairs, and each data pair comprises two depth maps of the same scene captured from the same angle: one depth map containing multipath interference and one depth map free of multipath interference, wherein the depth map free of multipath interference is obtained by simulating a TOF depth calculation process.
17. An electronic device, comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 4.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
19. An electronic device, comprising: a TOF camera for acquiring a first depth image, a processor, and a memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image acquisition method according to any one of claims 5 to 8.
20. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image acquisition method according to any one of claims 5 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910883236.7A CN112532858A (en) | 2019-09-18 | 2019-09-18 | Image processing method, image acquisition method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112532858A (en) | 2021-03-19 |
Family
ID=74975144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910883236.7A Pending CN112532858A (en) | 2019-09-18 | 2019-09-18 | Image processing method, image acquisition method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112532858A (en) |
2019-09-18: Application CN201910883236.7A filed in China (publication CN112532858A), status: Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130148102A1 (en) * | 2011-12-12 | 2013-06-13 | Mesa Imaging Ag | Method to Compensate for Errors in Time-of-Flight Range Cameras Caused by Multiple Reflections |
US20150193938A1 (en) * | 2014-01-06 | 2015-07-09 | Microsoft Corporation | Fast general multipath correction in time-of-flight imaging |
US9760837B1 (en) * | 2016-03-13 | 2017-09-12 | Microsoft Technology Licensing, Llc | Depth from time-of-flight using machine learning |
US20180129973A1 (en) * | 2016-03-13 | 2018-05-10 | Microsoft Technology Licensing, Llc | Depth from time-of-flight using machine learning |
CN108885701A (en) * | 2016-03-13 | 2018-11-23 | 微软技术许可有限责任公司 | Depth from time-of-flight using machine learning |
US20190132573A1 (en) * | 2017-10-31 | 2019-05-02 | Sony Corporation | Generating 3d depth map using parallax |
CN108961184A (en) * | 2018-06-28 | 2018-12-07 | 北京邮电大学 | Depth image correction method, device and equipment |
CN109658352A (en) * | 2018-12-14 | 2019-04-19 | 深圳市商汤科技有限公司 | Image information optimization method and device, electronic device, and storage medium |
CN109903241A (en) * | 2019-01-31 | 2019-06-18 | 武汉市聚芯微电子有限责任公司 | Depth image calibration method and system for a TOF camera system |
CN109991584A (en) * | 2019-03-14 | 2019-07-09 | 深圳奥比中光科技有限公司 | Anti-interference distance measurement method and depth camera |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116362977A (en) * | 2021-12-23 | 2023-06-30 | 荣耀终端有限公司 | Method and device for eliminating interference patterns in image |
CN116362977B (en) * | 2021-12-23 | 2023-12-22 | 荣耀终端有限公司 | Method and device for eliminating interference patterns in image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7127149B2 (en) | IMAGE CORRECTION METHOD, IMAGE CORRECTION APPARATUS, AND ELECTRONIC DEVICE | |
CN107409206B | Real-time calibration for multi-camera wireless device | |
WO2022037285A1 (en) | Camera extrinsic calibration method and apparatus | |
CN108989678B (en) | Image processing method and mobile terminal | |
CN109247068A (en) | Method and apparatus for rolling shutter compensation | |
TWI752594B (en) | An information processing method, electronic equipment, storage medium and program | |
CN104978077B | Interaction method and system | |
CN110245607B (en) | Eyeball tracking method and related product | |
CN109859216B (en) | Distance measurement method, device and equipment based on deep learning and storage medium | |
CN114170324A (en) | Calibration method and device, electronic equipment and storage medium | |
US11238563B2 (en) | Noise processing method and apparatus | |
CN105306819A (en) | Gesture-based photographing control method and device | |
CN109785444A (en) | Recognition methods, device and the mobile terminal of real plane in image | |
CN114290338B (en) | Two-dimensional hand-eye calibration method, device, storage medium, and program product | |
CN117474988A (en) | Image acquisition method and related device based on camera | |
CN112532858A (en) | Image processing method, image acquisition method and related device | |
US10893223B2 (en) | Systems and methods for rolling shutter compensation using iterative process | |
CN108600623B (en) | Refocusing display method and terminal device | |
US11166005B2 (en) | Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters | |
US20230188691A1 (en) | Active dual pixel stereo system for depth extraction | |
CN115344113A (en) | Multi-view human motion capture method, device, system, medium and terminal | |
CN114339023B (en) | Anti-shake detection method, device and medium for camera module | |
KR102718158B1 (en) | Method, device, processor and electronic device for obtaining calibration parameters | |
CN116485912B (en) | Multi-module coordination method and device for light field camera | |
WO2023088383A1 (en) | Method and apparatus for repositioning target object, storage medium and electronic apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-03-19