CN115248440A - TOF depth camera based on dot matrix light projection - Google Patents
- Publication number: CN115248440A
- Application number: CN202110463787.5A
- Authority: CN (China)
- Prior art keywords: speckle, infrared, area, light, target
- Prior art date: 2021-04-26
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/46—Indirect determination of position data
- G01S17/48—Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
Abstract
The invention provides a TOF depth camera based on dot matrix light projection, comprising: a structured light projector for projecting structured light toward a target; an infrared camera for collecting an infrared speckle image and a plurality of infrared reference images; and a processor module for generating a speckle depth map from the phase differences of multiple frames of infrared speckle images, the speckle depth map comprising the infrared speckle image and depth data. The corresponding infrared reference image is determined from the depth data, the reference area corresponding to each speckle point in the infrared reference image is determined from the pixel coordinates of that speckle point, each sub-area of the reference area is matched against the corresponding speckle point area to determine the target sub-area with the highest correlation, the central offset between the target sub-area and the reference area is determined, and when the offset is greater than a preset threshold the speckle point area is identified as a multipath interference point. The invention can detect mirror-image multipath and remove multipath interference, making a TOF depth camera based on speckle dot-matrix projection applicable to a sweeping robot.
Description
Technical Field
The present invention relates to depth cameras, and in particular to a TOF depth camera based on dot matrix light projection.
Background
A time-of-flight (TOF) depth camera acquires a depth image of a measured space by emitting a flood light beam in a specific waveband, receiving the light reflected by objects in the measured space with a sensor, and measuring the flight time of the light in the space to calculate distance. A TOF depth camera can obtain a grayscale image and a depth image at the same time, and is widely used in 3D depth-vision fields such as gesture recognition, face recognition, 3D modeling, motion-sensing games, machine vision, focus assistance, security, and autonomous driving.
A conventional TOF depth camera assumes that each received light beam has been reflected only once in the target scene. In real scenes, however, there are always specular or diffuse reflective surfaces that scatter incident light in many directions, so the TOF sensor may receive a superposition of once-reflected and multiply-reflected light. This degrades the accuracy of the distance measured by the TOF depth camera, an effect known as multipath interference.
A conventional TOF depth camera generally includes a light projector and a light-receiving sensor: the light projector emits a flood light beam into the space to provide illumination, the light-receiving sensor images the reflected flood light, and a depth calculation device computes the time of flight from the phase delay between the emitted and received light, thereby obtaining distance information. Measuring depth in this way has some limitations. For example, ambient light interference can affect measurement accuracy, especially when the ambient light intensity exceeds that of the directly reflected light, so that the signal received by the light-receiving sensor is dominated by ambient light; a typical case is a mirror-like (specular) scene.
Mirror multipath is a problem often encountered in sweeping-robot scenes: the floor is a high-reflectivity tile, and the signal received by a pixel corresponding to the floor is a superposition of the direct reflection (primary path) and light reflected multiple times off other objects (secondary path). For such high-reflectivity surfaces, the intensity of the primary path can be much lower than that of the secondary path, so the measured floor depth deviates from the actual depth.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to provide a TOF depth camera based on dot matrix light projection.
According to the invention, a TOF depth camera based on dot matrix light projection is provided, comprising the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for receiving the structured light reflected by the target to generate an infrared speckle image and a plurality of infrared reference images, and the infrared reference images are generated by collecting a reference target at a plurality of different distances through the depth camera;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
Preferably, when determining the corresponding reference area of the scattered spot in the infrared reference image, the method includes the following steps:
step M1: acquiring the depth data of each speckle point, and determining from the depth data the distance from the target surface point corresponding to the speckle point to the depth camera;
step M2: determining the infrared reference image that was collected at that distance;
step M3: determining the reference area corresponding to the speckle point in the infrared reference image according to the pixel coordinates of the speckle point.
Preferably, when the speckle point is determined to be a multipath interference point, the method comprises the following steps:
step N1: calculating the correlation between the speckle point area and the sub-area at the upper left corner of the corresponding reference area;
step N2: moving the sub-area by one pixel at a time, in order from left to right and from top to bottom, and calculating its correlation with the speckle point area, until the target sub-area with the highest correlation is determined;
step N3: determining the central offset between the target sub-area and the speckle point area, and determining the speckle point to be a multipath interference point when the offset is greater than a preset threshold.
Preferably, the correlation r is calculated as

$$ r=\frac{\sum_{m,n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\sum_{m,n}\left(A_{mn}-\bar{A}\right)^{2}\sum_{m,n}\left(B_{mn}-\bar{B}\right)^{2}}} $$

where $A_{mn}$ is the amplitude of the reference area and $\bar{A}$ its average amplitude, $B_{mn}$ is the amplitude of the speckle point area and $\bar{B}$ its average amplitude, and $m$ and $n$ range over the pixel coordinates.
Preferably, the infrared reference image acquisition includes the following steps:
step S101: projecting lattice light at a distance through the structured light projector toward the reference target;
step S102: receiving the lattice light reflected by the reference target by the infrared camera to generate an infrared reference image;
step S103: and repeatedly executing the step S101 to the step S102 to acquire the infrared reference images at a plurality of different distances.
Preferably, the structured light projector comprises a light source, a light source driver and a light modulator;
the light source driver is connected with the light source and used for driving the light source to emit light;
the light modulator is used for modulating the projected light of the light source into structured light and projecting the structured light to a target.
Preferably, the infrared camera includes a lens, an optical filter and an image sensor arranged along an optical path, and the image sensor is provided with at least four receiving windows; the pulse width of the receiving window is larger or smaller than the pulse width of the structured light;
the image sensor is used for receiving light signals of at least four structured lights through at least four receiving windows; the at least four receiving windows are sequentially arranged in time sequence, and then each speckle depth map is generated according to the optical signals from each receiving window.
Preferably, the structured light is a lattice light; the lattice light is distributed in the following preset shape: linear, triangular, quadrilateral, circular, hexagonal, pentagonal, randomly arranged, spatially encoded and quasi-lattice arrangements.
Preferably, the size of the speckle point area may be set to a 5 × 5 pixel region containing one speckle point;
the size of the reference area may be set to an 8 × 8 pixel region, and the threshold is set to two pixels.
The invention provides a TOF depth camera based on lattice light projection, which comprises the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for collecting infrared speckle images;
a memory for storing a plurality of infrared reference images generated by the depth camera by acquiring a reference target in advance at a plurality of different distances;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the offset of each scattered spot in the collected infrared speckle image is determined through the infrared reference image, and when the offset is larger than a preset threshold value, the speckle point area is determined as a multipath interference point, so that the multipath interference point in the infrared speckle image is removed to generate a target speckle image, and thus, the mirror image multipath can be detected, the multipath interference is removed, and the application of the TOF depth camera based on speckle dot matrix projection to the sweeping robot is realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort. Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a block diagram of a TOF depth camera based on lattice light projection in an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating steps for determining a reference area corresponding to a speckle point in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps for determining scattered points as multi-path interference points according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps for acquiring an infrared reference image according to an embodiment of the present invention;
FIG. 5 is a block diagram of a structured light projector according to an embodiment of the present invention;
FIG. 6 is a block diagram of an infrared camera according to an embodiment of the present invention;
FIGS. 7 (a), (b), and (c) are schematic diagrams of non-periodic arrangements of lattice light in embodiments of the present invention;
FIG. 8 is a block diagram of a TOF depth camera based on lattice light projection in a variation of the present disclosure.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The invention provides a TOF depth camera based on lattice light projection, and aims to solve the problems in the prior art.
The following describes, with specific embodiments, the technical solutions of the present invention and how they solve the above technical problems. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of a TOF depth camera based on lattice light projection in an embodiment of the present invention, and as shown in fig. 1, the TOF depth camera based on lattice light projection provided by the present invention includes the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for receiving the structured light reflected by the target to generate an infrared speckle image and a plurality of infrared reference images, and the infrared reference images are generated by collecting a reference target at a plurality of different distances through the depth camera;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
In the embodiment of the present invention, the infrared camera is an infrared detector, and the dot matrix light reflected by the target person is received by the infrared detector.
The infrared speckle images are acquired by a depth camera at a distance of 30-80 cm from the target person. The depth camera employs a TOF camera, and a light projector of the TOF camera may project a lattice of light toward the target.
In the embodiment of the invention, the multipath interference points can be directly deleted to generate the target speckle image.
In the embodiment of the invention, the offset of each speckle point in the collected infrared speckle image is determined by means of the infrared reference image; when the offset is greater than the preset threshold, the speckle point area is identified as a multipath interference point, and the multipath interference points are then removed from the infrared speckle image to generate the target speckle image. In this way mirror-image multipath can be detected and multipath interference removed, so that a TOF depth camera based on speckle dot-matrix projection becomes applicable to a sweeping robot.
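As a minimal illustration (not part of the claimed implementation), the removal of flagged points could be sketched in Python as follows; representing deletion as blanking the 5 × 5 pixel area of each flagged speckle point, as well as the function and variable names, are assumptions made for the example:

```python
import numpy as np

def remove_multipath_points(ir_speckle_image, speckle_points, multipath_flags):
    """Produce the target speckle image by blanking the 5x5 areas of the
    speckle points flagged as multipath interference points (the embodiment
    also allows deleting the points directly; blanking is used here for
    illustration, and image-border handling is omitted)."""
    target = ir_speckle_image.copy()
    for (x, y), is_multipath in zip(speckle_points, multipath_flags):
        if is_multipath:
            target[y - 2:y + 3, x - 2:x + 3] = 0
    return target
```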
Fig. 2 is a flowchart of a step of determining a reference area corresponding to a speckle point in the embodiment of the present invention, and as shown in fig. 2, when determining the reference area corresponding to the speckle point in the infrared reference image, the method includes the following steps:
step M1: acquiring the depth data of each speckle point, and determining from the depth data the distance from the target surface point corresponding to the speckle point to the depth camera;
step M2: determining the infrared reference image that was collected at that distance;
step M3: determining the reference area corresponding to the speckle point in the infrared reference image according to the pixel coordinates of the speckle point.
In the embodiment of the invention, each scattered spot is extracted from the infrared speckle image, and the distance is determined according to the depth data of the scattered spot, so that the corresponding infrared reference image is determined.
The reference area at the corresponding position in the infrared reference image is then determined according to the pixel coordinates of each speckle point.
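For illustration only, steps M1 to M3 could be sketched in Python as below; the set of calibration distances, the nearest-distance lookup and the function names are assumptions for the example, while the 8 × 8 region size is taken from the preferred embodiment:

```python
import numpy as np

# Hypothetical calibration distances (mm) at which reference images were captured.
REFERENCE_DISTANCES_MM = np.array([300, 400, 500, 600, 700, 800])

def select_reference_image(spot_depth_mm, reference_images):
    """Steps M1-M2: pick the pre-captured infrared reference image whose
    calibration distance is closest to the measured depth of the speckle point."""
    idx = int(np.argmin(np.abs(REFERENCE_DISTANCES_MM - spot_depth_mm)))
    return reference_images[idx]

def reference_region(reference_image, spot_xy, size=8):
    """Step M3: cut the reference area (8x8 pixels in the embodiment) centred
    on the speckle point's pixel coordinates."""
    x, y = spot_xy
    half = size // 2
    return reference_image[y - half:y + half, x - half:x + half]
```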
Fig. 3 is a flowchart of a step of determining a scattered spot as a multipath interference point in the embodiment of the present invention, and as shown in fig. 3, when determining that the scattered spot is a multipath interference point, the method includes the following steps:
step N1: calculating the correlation between the speckle point area and the sub-area at the upper left corner of the corresponding reference area;
step N2: moving the sub-area by one pixel at a time, in order from left to right and from top to bottom, and calculating its correlation with the speckle point area, until the target sub-area with the highest correlation is determined;
step N3: determining the central offset between the target sub-area and the speckle point area, and determining the speckle point to be a multipath interference point when the offset is greater than a preset threshold.
In the embodiment of the present invention, the size of the speckle region may be set to be 5 × 5 pixel regions, where the pixel region includes a speckle; the size of the reference region may be set to 8 × 8 pixel regions; the threshold may be defined as two pixels, that is, when the central offset is greater than or equal to two pixels, the scattered spot is determined as a multipath interference point.
In the embodiment of the invention, the speckle point region is slid over the reference region pixel by pixel, in order from left to right and from top to bottom.
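A sketch of this sliding-window search and of the threshold test in step N3 is given below; measuring the offset from the centre of the reference area, and how that convention relates to the 5 × 5 and 8 × 8 sizes and the two-pixel threshold of the embodiment, are assumptions of the example rather than statements of the patented method (corr_fn is the correlation defined by the formula that follows):

```python
import numpy as np

def center_offset_of_best_match(reference_region, speckle_region, corr_fn):
    """Steps N1-N2: slide a window the size of the speckle point area over the
    reference area one pixel at a time, left to right and top to bottom, and
    return how far the centre of the best-matching sub-area lies from the
    centre of the reference area, in pixels (x, y)."""
    rh, rw = reference_region.shape
    sh, sw = speckle_region.shape
    best_r, best_pos = -np.inf, (0, 0)
    for dy in range(rh - sh + 1):
        for dx in range(rw - sw + 1):
            sub = reference_region[dy:dy + sh, dx:dx + sw]
            r = corr_fn(sub, speckle_region)
            if r > best_r:
                best_r, best_pos = r, (dx, dy)
    sub_cx = best_pos[0] + (sw - 1) / 2
    sub_cy = best_pos[1] + (sh - 1) / 2
    ref_cx, ref_cy = (rw - 1) / 2, (rh - 1) / 2
    return abs(sub_cx - ref_cx), abs(sub_cy - ref_cy)

def is_multipath_point(offset_xy, threshold_px=2):
    """Step N3: flag the speckle point as a multipath interference point when
    the central offset reaches the preset threshold (two pixels here)."""
    return max(offset_xy) >= threshold_px
```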
In the embodiment of the present invention, the correlation r is calculated as

$$ r=\frac{\sum_{m,n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\sum_{m,n}\left(A_{mn}-\bar{A}\right)^{2}\sum_{m,n}\left(B_{mn}-\bar{B}\right)^{2}}} $$

where $A_{mn}$ is the amplitude of the reference area and $\bar{A}$ its average amplitude, $B_{mn}$ is the amplitude of the speckle point area and $\bar{B}$ its average amplitude, and $m$ and $n$ range over the pixel coordinates.
In the embodiment of the present invention, the amplitude may also be expressed by any one of a gray value, a pixel value, illuminance, luminous flux, and radiation power.
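Under the assumption that r is the zero-mean normalized cross-correlation given above, it could be computed as in the following sketch; any of the quantities listed (amplitude, gray value, illuminance, and so on) can be passed in as the pixel values:

```python
import numpy as np

def correlation(A, B):
    """Zero-mean normalized cross-correlation between a reference sub-area A
    and a speckle point area B of the same size."""
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    dA = A - A.mean()
    dB = B - B.mean()
    denom = np.sqrt((dA ** 2).sum() * (dB ** 2).sum())
    if denom == 0.0:
        return 0.0  # a flat region carries no correlation information
    return float((dA * dB).sum() / denom)
```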
Fig. 4 is a flowchart of steps of acquiring an infrared reference image according to an embodiment of the present invention, and as shown in fig. 4, the infrared reference image acquisition includes the following steps:
step S101: projecting lattice light at a distance through the structured light projector toward the reference target;
step S102: receiving the lattice light reflected by the reference target by the infrared camera to generate an infrared reference image;
step S103: and repeatedly executing the step S101 to the step S102 to acquire the infrared reference images at a plurality of different distances.
In the embodiment of the invention, the infrared reference images at a plurality of different distances can be collected in advance and then stored in the memory for the depth camera to fetch. The reference target can be a checkerboard or a plane plate material.
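The calibration loop of steps S101 to S103 could be sketched as follows; the projector and camera interfaces, the way the reference target is positioned and the distance list are hypothetical stand-ins, not part of the described device:

```python
def capture_reference_images(projector, camera, distances_mm, move_target_to):
    """Capture one infrared reference image of a flat reference target (e.g. a
    checkerboard or plane board) at each calibration distance and key the
    images by distance for later lookup by the processor module."""
    references = {}
    for d in distances_mm:
        move_target_to(d)                           # place the reference target at distance d
        projector.project_lattice()                 # step S101: project the dot-matrix light
        references[d] = camera.capture_ir_frame()   # step S102: record the reference frame
    return references                               # step S103: repeat S101-S102 per distance
```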
FIG. 5 is a block diagram of a structured light projector according to an embodiment of the present invention, as shown in FIG. 5, the structured light projector including a light source, a light source driver, and a light modulator;
the light source driver is connected with the light source and used for driving the light source to emit light;
the light modulator is used for modulating the projected light of the light source into structured light and projecting the structured light to a target.
In an embodiment of the present invention, the light modulator employs a diffractive optical element (DOE), such as a diffraction grating, or a spatial light modulator (SLM).
Fig. 6 is a schematic block diagram of an infrared camera according to an embodiment of the present invention, and as shown in fig. 6, the infrared camera includes a lens, an optical filter, and an image sensor disposed along a light path, and the image sensor is provided with at least four receiving windows; the pulse width of the receiving window is larger or smaller than the pulse width of the structured light;
the image sensor is used for receiving light signals of at least four structured lights through at least four receiving windows; the at least four receiving windows are sequentially arranged in time sequence, and then each speckle depth map is generated according to the optical signals from each receiving window.
The image sensor comprises a plurality of photodetectors distributed in an array;
the lens is an optical imaging lens and is used for enabling direction vectors of the collimated light beams which penetrate through the lens and enter the light detector array to be in one-to-one correspondence with the light detectors;
the light detector is used for receiving the collimated light beam reflected by the target object.
In the embodiment of the present invention, in order to filter background noise, a narrow-band filter is usually installed in the optical imaging lens, so that the photodetector array only receives incident collimated light beams of a preset wavelength. The preset wavelength may be the wavelength of the incident collimated light beam, such as 950 nm, or may lie within 50 nm above or below that wavelength. The photodetector array may be arranged periodically or aperiodically, and, depending on the required number of discrete dot-matrix beams, may be a combination of several single-point photodetectors or a sensor chip integrating multiple photodetectors. To further optimize the sensitivity of the photodetectors, the illumination spot of a discrete beam on the target may correspond to one or more photodetectors; when several photodetectors correspond to the same illumination spot, their signals can be connected by a circuit so as to combine them into a photodetector with a larger detection area.
In the embodiment of the present invention, the light detector may adopt a CMOS light sensor, a CCD light sensor or a SPAD light sensor.
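The description states only that at least four receiving windows, arranged sequentially in time, are used to form each speckle depth map. One conventional way to turn four such samples into depth is the 0°/90°/180°/270° indirect-TOF demodulation sketched below, which is given purely as an assumed example and is not asserted to be the scheme used by the invention:

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light, m/s

def four_phase_depth(q0, q90, q180, q270, mod_freq_hz):
    """Estimate distance from four samples of the reflected dot-matrix light
    taken in receiving windows offset by 0, 90, 180 and 270 degrees of the
    modulation period (standard 4-phase indirect-TOF demodulation)."""
    phase = np.arctan2(q270 - q90, q0 - q180)
    phase = np.mod(phase, 2.0 * np.pi)             # wrap phase into [0, 2*pi)
    return C_LIGHT * phase / (4.0 * np.pi * mod_freq_hz)
```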
In an embodiment of the present invention, the structured light is a lattice light; the lattice light is distributed in the following preset shape: linear, triangular, quadrilateral, circular, hexagonal, pentagonal, randomly arranged, spatially encoded and quasi-lattice arrangements.
Fig. 7 (a), (b) and (c) are schematic diagrams of aperiodic arrangements of the lattice light in embodiments of the invention. Fig. 7 (a) shows a spatially coded arrangement: within an otherwise periodic arrangement some beams are omitted, so that the arrangement positions encode spatial information; the coding actually adopted is not limited to the example in Fig. 7 (a). Fig. 7 (b) shows a random arrangement, in which the collimated beams are distributed randomly so that the similarity of the arrangement pattern at different positions is small or close to zero. Fig. 7 (c) shows a quasi-lattice arrangement, in which the collimated beams are aperiodic at close range but periodic over longer distances. Because the implementation is constrained by the optical system, the actual arrangement of the collimated beams in cross-section may exhibit distortion such as stretching or twisting, and the energy distribution of each collimated beam in cross-section may be circular, annular or elliptical. Such arrangements facilitate uniform sampling of non-deterministic targets and optimize the quality of the final 3D depth map.
Fig. 8 is a schematic block diagram of a TOF depth camera based on lattice light projection according to a modification of the present invention, and as shown in fig. 8, the TOF depth camera based on lattice light projection according to the present invention includes the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for receiving the structured light reflected by the target to generate an infrared speckle image;
a memory for storing a plurality of infrared reference images generated by the depth camera by acquiring a reference target in advance at a plurality of different distances;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
In the modification of the invention, the infrared reference images at a plurality of different distances can be collected in advance and then stored in the memory for the processor module to fetch for use.
In the embodiment of the invention, the offset of each speckle point in the collected infrared speckle image is determined by means of the infrared reference image; when the offset is greater than a preset threshold, the speckle point area is identified as a multipath interference point, and the multipath interference points are then removed from the infrared speckle image to generate the target speckle image. In this way mirror-image multipath can be detected and multipath interference removed, so that a TOF depth camera based on speckle dot-matrix projection becomes applicable to a sweeping robot.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.
Claims (10)
1. A TOF depth camera based on lattice light projection, comprising the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for receiving the structured light reflected by the target to generate an infrared speckle image and a plurality of infrared reference images, and the infrared reference images are generated by collecting a reference target at a plurality of different distances through the depth camera;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
2. The lattice light projection based TOF depth camera of claim 1, wherein determining the reference area corresponding to the speckle point in the infrared reference image comprises the following steps:
step M1: acquiring the depth data of each speckle point, and determining from the depth data the distance from the target surface point corresponding to the speckle point to the depth camera;
step M2: determining the infrared reference image that was collected at that distance;
step M3: determining the reference area corresponding to the speckle point in the infrared reference image according to the pixel coordinates of the speckle point.
3. The lattice light projection based TOF depth camera of claim 1, wherein determining the speckle point to be a multipath interference point comprises the following steps:
step N1: calculating the correlation between the speckle point area and the sub-area at the upper left corner of the corresponding reference area;
step N2: moving the sub-area by one pixel at a time, in order from left to right and from top to bottom, and calculating its correlation with the speckle point area, until the target sub-area with the highest correlation is determined;
step N3: determining the central offset between the target sub-area and the speckle point area, and determining the speckle point to be a multipath interference point when the offset is greater than a preset threshold.
4. The lattice light projection based TOF depth camera according to claim 3, wherein the correlation r is calculated as

$$ r=\frac{\sum_{m,n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\sum_{m,n}\left(A_{mn}-\bar{A}\right)^{2}\sum_{m,n}\left(B_{mn}-\bar{B}\right)^{2}}} $$

where $A_{mn}$ and $\bar{A}$ are the amplitude and average amplitude of the reference area, $B_{mn}$ and $\bar{B}$ are the amplitude and average amplitude of the speckle point area, and $m$, $n$ range over the pixel coordinates.
5. The lattice light projection based TOF depth camera of claim 1, wherein said infrared reference image acquisition comprises the steps of:
step S101: projecting lattice light at a distance through the structured light projector toward the reference target;
step S102: receiving the dot matrix light reflected by the reference target by the infrared camera to generate an infrared reference image;
step S103: and repeatedly executing the step S101 to the step S102 to acquire the infrared reference images at a plurality of different distances.
6. The lattice light projection based TOF depth camera of claim 1 wherein the structured light projector comprises a light source, a light source driver, and a light modulator;
the light source driver is connected with the light source and used for driving the light source to emit light;
the light modulator is used for modulating the projected light of the light source into structured light and projecting the structured light to a target.
7. The dot matrix light projection based TOF depth camera according to claim 1, wherein the infrared camera comprises a lens, a filter and an image sensor arranged along an optical path, the image sensor being provided with at least four of the receiving windows; the pulse width of the receiving window is larger or smaller than the pulse width of the structured light;
the image sensor is used for receiving light signals of at least four structured lights through at least four receiving windows; the at least four receiving windows are sequentially arranged in time sequence, and then each speckle depth map is generated according to the optical signals from each receiving window.
8. The lattice light projection based TOF depth camera of claim 1, wherein the structured light is lattice light; the lattice light is distributed in the following preset shape: linear, triangular, quadrilateral, circular, hexagonal, pentagonal, randomly arranged, spatially encoded and quasi-lattice arrangements.
9. The dot matrix light projection based TOF depth camera of claim 1, wherein the size of the speckle point area can be set to 5 x 5 pixel area, the speckle point area comprising a scattered dot;
the size of the reference region may be set to 8 × 8 pixel regions; the threshold is set to two pixels.
10. A TOF depth camera based on lattice light projection, comprising the following modules:
a structured light projector for projecting structured light toward a target;
the infrared camera is used for receiving the structured light reflected by the target to generate an infrared speckle image;
a memory for storing a plurality of infrared reference images generated by the depth camera by acquiring a reference target in advance at a plurality of different distances;
the processor module is used for generating a speckle depth map of the target according to the phase differences of multiple frames of infrared speckle images, wherein the speckle depth map comprises the infrared speckle image and depth data for each speckle point in the infrared speckle image; determining the corresponding infrared reference image according to the depth data of each speckle point; determining, according to the pixel coordinates of the speckle point, the reference area corresponding to that speckle point in the infrared reference image; matching each sub-area of the reference area against the corresponding speckle point area to determine the target sub-area with the highest correlation; determining the central offset between the target sub-area and the reference area; determining the speckle point area to be a multipath interference point when the offset is greater than a preset threshold; and removing the multipath interference points from the infrared speckle image to generate the target speckle image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110463787.5A CN115248440A (en) | 2021-04-26 | 2021-04-26 | TOF depth camera based on dot matrix light projection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115248440A true CN115248440A (en) | 2022-10-28 |
Family
ID=83696455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110463787.5A Pending CN115248440A (en) | 2021-04-26 | 2021-04-26 | TOF depth camera based on dot matrix light projection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115248440A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115953322A (en) * | 2023-01-15 | 2023-04-11 | 山东产研卫星信息技术产业研究院有限公司 | Stain removing method for satellite remote sensing image |
CN115953322B (en) * | 2023-01-15 | 2023-07-14 | 山东产研卫星信息技术产业研究院有限公司 | Stain removing method for satellite remote sensing image |
CN116067305A (en) * | 2023-02-09 | 2023-05-05 | 深圳市安思疆科技有限公司 | Structured light measurement system and measurement method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |