WO2022264894A1 - Object detection device, object detection method, and program - Google Patents
Object detection device, object detection method, and program
- Publication number
- WO2022264894A1 (PCT/JP2022/023092)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
Definitions
- The present disclosure relates to an object detection device, an object detection method, and a program.
- Patent Document 1 discloses a method and apparatus for restoring point cloud data using a constructed model.
- Patent Document 2 discloses a method and system for detecting environmental information of a vehicle.
- JP 2019-149149 A; Japanese Patent Publication No. 2019-533133
- The present disclosure provides an object detection device, an object detection method, and a program that improve the ability to detect objects with low reflectance.
- An object detection device includes: a distance acquisition unit that acquires a distance image; a first luminance acquisition unit that acquires a first luminance image corresponding to the same imaging region as the distance image; a first clustering unit that generates, in the distance image, a group that is a set of pixels within a certain range that can be regarded as the same object, and determines the group to be a cluster if the number of pixels included in the group is equal to or greater than a first threshold; a second clustering unit that, if the number of pixels included in the group is less than the first threshold, determines the group to be a cluster when the luminance of the pixel group in the first luminance image corresponding to the group is equal to or greater than a second threshold; and a 3D object detection unit that detects an object in the distance image based on the clusters and generates 3D object information indicating the detected object.
- An object detection method acquires a first luminance image and a distance image corresponding to the same imaging region, and generates, in the distance image, a group that is a set of pixels within a certain range that can be regarded as the same object. If the number of pixels included in the group is equal to or greater than a first threshold, a first clustering process determines the group to be a cluster constituting an object; if the number of pixels included in the group is less than the first threshold, a second clustering process determines the group to be a cluster when the luminance of the pixel group in the first luminance image corresponding to the group is equal to or greater than a second threshold. An object in the distance image is then detected based on the clusters, and 3D object information indicating the detected object is generated.
- A program according to one aspect of the present disclosure causes a computer to execute the object detection method described above.
- The object detection device, object detection method, and program of the present disclosure can enhance the ability to detect objects with low reflectance.
- FIG. 1 is a block diagram showing a configuration example of an object detection device according to Embodiment 1.
- FIG. 2 is a diagram showing an arrangement example of pixels of the image sensor according to Embodiment 1.
- FIG. 3 is a flow chart showing specific examples of the first clustering process and the second clustering process according to the first embodiment.
- FIG. 4 is a flow chart showing a modification of the first clustering process and the second clustering process according to the first embodiment.
- FIG. 5 is a block diagram showing a configuration example of an object detection device according to Embodiment 2.
- FIG. 6 is a block diagram showing a detailed configuration example of the information processing system according to the second embodiment.
- FIG. 7 is a flow chart showing an operation example of the object detection device according to the second embodiment.
- FIG. 1 is a block diagram showing a configuration example of an object detection device 100 according to Embodiment 1.
- The object detection device 100 shown in FIG. 1 is mounted on, for example, a moving body; it captures images of monitoring areas set at the front, sides, and rear of the moving body, and detects objects included in the captured two-dimensional and three-dimensional images. To this end, the object detection device 100 includes an image sensor 3, a light emitting unit 4, a signal processing unit 5, and an information processing system 1.
- Hereinafter, a two-dimensional image may be abbreviated as a 2D image, and a three-dimensional image as a 3D image.
- the image sensor 3 is a solid-state imaging device having a plurality of pixels arranged in a matrix and generates pixel signals under the control of the signal processing section 5 .
- An example of the pixel array of the image sensor 3 is shown in FIG. In FIG. 2, the image sensor 3 has first pixels 31 and second pixels 32 .
- the first pixel 31 having sensitivity to visible light is labeled with the character "W", meaning "white” in "black and white”.
- An optical filter that suppresses infrared light may be arranged in the first pixel 31 .
- the second pixels 32 sensitive to infrared light are labeled with "IR" in the sense of "infrared”.
- An optical filter that suppresses visible light may be arranged in the second pixel 32 .
- the pixel array of the image sensor 3 is not limited to that shown in FIG.
- the first pixels 31 and the second pixels 32 may be arranged alternately in the row direction.
- the first pixels 31 and the second pixels 32 may be alternately arranged in the row direction and the column direction.
- Although the number of rows of the first pixels 31 and the number of rows of the second pixels 32 are the same in FIG. 2, they may be different.
- The image sensor 3 may instead include R pixels sensitive to red light, G pixels sensitive to green light, B pixels sensitive to blue light, and IR pixels sensitive to infrared light.
- R pixels, G pixels, B pixels and IR pixels may be arranged in a square.
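To illustrate, separating one mosaic frame into its BW and IR components might look like the sketch below. The row-striped layout is an assumption based on the description of FIG. 2, and `split_mosaic` is a hypothetical helper, not part of the patent.

```python
import numpy as np

def split_mosaic(raw):
    """Separate a W/IR mosaic frame into BW and IR sub-images.

    Assumes a row-alternating layout (even rows of first pixels "W",
    odd rows of second pixels "IR"); the actual sensor layout and
    readout may differ.
    """
    bw = raw[0::2, :]   # rows of first pixels 31 (visible light)
    ir = raw[1::2, :]   # rows of second pixels 32 (infrared)
    return bw, ir
```

With other layouts (e.g. checkerboard or RGB-IR squares), the same idea applies with different index masks.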
- the light emitting section 4 emits pulsed light, which is infrared light, according to the timing signal from the signal processing section 5 .
- the light emitting unit 4 outputs light in a wavelength range to which the second pixels 32 of the image sensor 3 are sensitive, that is, infrared light.
- As the light emitting unit 4, an element such as a light emitting diode (LED) or a laser diode, which has a relatively high response speed and can flash at high speed, is used.
- the signal processing unit 5 controls the image sensor 3 to generate a luminance image and a distance image. Specifically, the signal processing unit 5 generates a BW luminance image D1 and an IR luminance image D2 as luminance images, and a distance image D3 as a distance image.
- BW is an abbreviation for black and white.
- IR is an abbreviation for infrared.
- the signal processing unit 5 uses pixel signals obtained from the first pixels 31 to generate the BW luminance image D1.
- the signal processing unit 5 generates an IR luminance image using pixel signals obtained from the second pixels 32.
- The signal processing unit 5 also controls the image sensor 3 to perform TOF (Time Of Flight) measurement, and generates the distance image D3 from the pixel signals of the second pixels 32.
- the distance image D3 is a collection of pixels representing distance values from the image sensor 3 to the object. That is, each pixel forming the distance image D3 indicates the distance value from the image sensor 3 to the object that reflected the pulsed light of the light emitting section 4 .
- The signal processing unit 5 causes the light emitting unit 4 to output intensity-modulated light (hereinafter also referred to as "pulsed light") to the monitoring area. The signal processing unit 5 then measures time using the phase difference between the phase of the intensity change at the time of light reception by the image sensor 3 and the phase of the intensity change at the time of light projection from the light emitting unit 4. If the frequency of the intensity change of the intensity-modulated light is constant, the phase difference can be converted into the distance to the object by a relatively simple calculation.
- As in Equation 2, where f [Hz] is the frequency of the modulation signal that modulates the intensity of the light, φ [rad] is the phase difference, and c [m/s] is the speed of light, the distance L to the object can be obtained as L = c·φ / (4πf).
- To obtain the phase difference φ, it is sufficient to obtain the received light intensity at a plurality of different phases of the modulation signal for each second pixel 32 of the image sensor 3. In practice, the amount of received light in each phase interval having a predetermined phase width (time width) is detected, and the received-light output corresponding to this amount is used to calculate the phase difference φ. If the phase intervals are set at 90-degree intervals, four phase intervals with equal phase widths are obtained periodically for one period of the modulation signal. Denoting the amounts of received light for the four phase intervals as C0 to C3, the phase difference φ is expressed by Equation 3: φ = tan⁻¹{(C3 − C1)/(C0 − C2)}.
- Note that the phase difference φ changes depending on which phases of the modulation signal the received light amounts C0 to C3 correspond to; the absolute value of the phase difference φ may be used.
- The signal processing unit 5 is provided because it is necessary to project the intensity-modulated light from the light emitting unit 4 and to detect, in the image sensor 3, the amount of light received in each specific phase interval.
- the signal processing unit 5 drives the light emitting unit 4 by applying a modulation signal to the light emitting unit 4 so that intensity-modulated light as described above is projected.
- the image sensor 3 obtains received light outputs corresponding to the received light amounts C0 to C3 for each of the four phase sections, and the received light outputs (electrical signals) of the image sensor 3 are input to the signal processing unit 5 .
- the signal processing unit 5 uses the received light output to calculate the distance to the object. At this time, the signal processing unit 5 supplies the image sensor 3 with a readout signal generated based on the reference signal synchronized with the modulated signal to read out the received light output.
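The four-phase distance calculation above can be sketched as follows. The phase assignment of C0 to C3 (0°/90°/180°/270°) and the atan2-based formulation are assumptions, since Equations 2 and 3 are only referenced in the text.

```python
import math

C_LIGHT = 3.0e8  # speed of light [m/s]

def tof_distance(c0, c1, c2, c3, f_mod):
    """Estimate distance from four phase-interval received-light amounts.

    Assumes C0..C3 are sampled at 0/90/180/270 degrees of the modulation
    signal of frequency f_mod [Hz]; the exact phase assignment used by
    the patent may differ.
    """
    # Equation 3 (sketch): phase difference between emitted and received
    # intensity modulation, wrapped to [0, 2*pi).
    phi = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)
    # Equation 2 (sketch): L = c * phi / (4 * pi * f)
    return C_LIGHT * phi / (4 * math.pi * f_mod)
```

For example, at a 10 MHz modulation frequency a phase difference of π/2 corresponds to 3.75 m.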
- The information processing system 1 in FIG. 1 detects objects in the luminance images and the distance image generated by the signal processing unit 5. To this end, the information processing system 1 includes a BW luminance acquisition unit 11, an IR luminance acquisition unit 12, a distance acquisition unit 13, a coordinate conversion unit 15, a first clustering unit 101, a second clustering unit 102, a separation unit 16, a three-dimensional object detection unit 17, and a fusion unit 18.
- the BW luminance acquisition unit 11 is a specific example of one of the first luminance acquisition unit described above and the second luminance acquisition unit described later.
- the IR luminance acquisition section 12 is a specific example of the other of the first luminance acquisition section described above and the second luminance acquisition section described later.
- the information processing system 1 may be configured by a computer system having one or more processors and one or more memories.
- This computer system may be an SoC (System on a Chip), a server, or a cloud computing system.
- the processor implements the functions of the information processing system 1 by executing programs recorded in the memory.
- The program may be recorded in the memory in advance, provided on a non-transitory recording medium such as a memory card, or provided through a telecommunication line.
- the program is a program for causing one or more processors to function as the information processing system 1 .
- the BW brightness acquisition unit 11 acquires the BW brightness image D1 from the signal processing unit 5.
- the IR brightness acquisition unit 12 acquires the IR brightness image D2 from the signal processing unit 5.
- the distance acquisition unit 13 acquires the distance image D3 from the signal processing unit 5.
- the coordinate transformation unit 15 performs coordinate transformation processing on the distance image D3 into an X, Y, Z orthogonal coordinate system. Specifically, the coordinate conversion unit 15 generates point cloud data including points having X, Y, and Z coordinate values based on the distance image D3. Point cloud data is also called a point cloud.
- the distance image D3 after the coordinate conversion output from the coordinate conversion unit 15, that is, the point cloud data will be referred to as point cloud data d3.
- the coordinate transformation section 15 outputs the distance image D3 and the point cloud data d3 to the separation section 16 .
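The conversion performed by the coordinate conversion unit 15 could be sketched as below, assuming a pinhole camera model; the intrinsic parameters fx, fy, cx, cy and the function name are illustrative assumptions, since the patent does not specify the conversion.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a distance (depth) image to an (N, 3) point cloud.

    Minimal sketch assuming a pinhole camera with focal lengths fx, fy
    and principal point (cx, cy); pixels with no distance value (z <= 0)
    are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```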
- the separation unit 16 separates the object and the surrounding area located around the object.
- the "surrounding area" is, for example, the road surface, the ground surface, the floor surface, etc. that are excluded from detection targets when detecting the presence or absence of an object.
- the separation unit 16 separates an area other than the road surface and the like including the object from the peripheral area such as the road surface.
- the separation unit 16 separates the object and the surrounding area based on the point cloud data d3 generated by the coordinate conversion unit 15 .
- the separation unit 16 first extracts components corresponding to the peripheral region from the point cloud data d3 and the distance image D3. Then, the separation unit 16 outputs the component corresponding to the object in the distance image D3 to the first clustering unit 101 by removing the extracted peripheral region from the distance image D3. Further, the separation unit 16 outputs the component corresponding to the object in the point cloud data d3 to the three-dimensional object detection unit 17 by removing the extracted peripheral region from the point cloud data d3.
- the peripheral area is all areas other than the object in the distance image D3, and includes not only the area near the object but also the area far from the object.
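A minimal sketch of the separation step follows, assuming a flat road surface identified by a simple height threshold; the patent does not disclose the actual criterion used by the separation unit 16, so the axis choice and threshold are placeholders.

```python
import numpy as np

def separate_ground(points, ground_axis=1, height_thresh=0.1):
    """Split a point cloud into peripheral (road-surface) and object points.

    Points whose height above the lowest point is below height_thresh are
    treated as the peripheral region (road surface); the rest are treated
    as object candidates.
    """
    height = points[:, ground_axis] - points[:, ground_axis].min()
    ground_mask = height < height_thresh
    return points[ground_mask], points[~ground_mask]
```

Real implementations often fit a plane (e.g. with RANSAC) instead of thresholding a single axis.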
- The first clustering unit 101 performs a first clustering process on the distance image D3 input from the separation unit 16. Specifically, the first clustering unit 101 generates, in the distance image D3, a group that is a set of pixels within a certain range that can be regarded as the same object, and determines the group to be a cluster if the number of pixels included in the group is equal to or greater than the first threshold.
- The second clustering unit 102 performs a second clustering process on the distance image D3 input from the separation unit 16 and the IR luminance image D2 input from the IR luminance acquisition unit 12. Specifically, when the number of pixels included in a group generated by the first clustering unit 101 is less than the first threshold, the second clustering unit 102 examines the pixel group of the IR luminance image D2 corresponding to that group, and determines the group to be a cluster when the luminance of that pixel group is equal to or greater than the second threshold. This allows part or all of an object with low reflectance that was not detected as a cluster in the first clustering process to be detected as a cluster in the second clustering process.
- the brightness of the pixel group used in the second clustering process may be the average brightness of the pixels included in the pixel group, or may be the maximum brightness of the pixels included in the pixel group. In other words, for an object having a low reflectance, clusters may be detected based on the average brightness, or based on the maximum value.
- the brightness of the pixel group used in the second clustering process may include both the average brightness and the maximum brightness of the pixel group.
- the second threshold includes an average threshold and a maximum threshold. Second clustering section 102 may determine the group as a cluster when the average luminance is equal to or higher than the average threshold and the maximum luminance is equal to or higher than the maximum threshold. In this way, for objects with low reflectance, cluster detection capability can be enhanced since both the average luminance and the maximum luminance are referenced.
- The first threshold and the second threshold can be predetermined experimentally, statistically, or through simulation.
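The dual-threshold variant of the second clustering test (average threshold plus maximum threshold) could be sketched as follows; the threshold values and function name are placeholders.

```python
import numpy as np

def second_clustering_check(ir_luminance, group_mask,
                            avg_thresh=50.0, max_thresh=120.0):
    """Decide whether a small group qualifies as a cluster via luminance.

    group_mask is a boolean mask over the IR luminance image D2 marking
    the pixel group; the group is accepted when both the average and the
    maximum luminance clear their respective thresholds, as in the
    variant where the second threshold comprises an average threshold
    and a maximum threshold.
    """
    values = ir_luminance[group_mask]
    return values.mean() >= avg_thresh and values.max() >= max_thresh
```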
- The three-dimensional object detection unit 17 detects an object from the point cloud data d3 input from the separation unit 16, based on the clusters determined by the first clustering unit 101 and the second clustering unit 102, and generates 3D object information indicating the detected object.
- The three-dimensional object detection unit 17 receives, from the separation unit 16, the point cloud data d3 from which the peripheral region has been removed.
- The three-dimensional object detection unit 17 detects, for example, one cluster or two or more contiguous clusters as an object, and outputs the result as 3D object information.
- the fusion unit 18 fuses or combines 2D object information, which is the detection result of the 2D object detection unit 20, and 3D object information, which is the detection result of the 3D object detection unit 17.
- the two-dimensional object detection unit 20 detects an object from the two-dimensional composite image output from the first synthesis unit 21, and generates 2D object information indicating the detected object.
- the composite image is either an image obtained by combining the BW luminance image D1 and the IR luminance image D2 by weighting, the BW luminance image D1, or the IR luminance image D2.
- the object detection by the two-dimensional object detection unit 20 may use AI technology (that is, artificial intelligence technology), for example. A "type” and an "attribute" may be determined for the detected object.
- The "type" of the object includes whether it is a person, whether it is a moving body (a person, a car, a bicycle, etc.) or a fixed object, and whether it is a roadside tree, a traffic light, a guardrail, or the like.
- the "attribute” of an object also includes the size, color, movement (change), etc. of the object. Furthermore, if the object is a person, the “attribute” of the object may include the sex, height, body type, age group, etc., and if the object is a moving body, the moving direction, moving speed, etc. may be included in the "attribute" of the object.
- The first synthesizing unit 21 synthesizes the BW luminance image D1 from the BW luminance acquisition unit 11 and the IR luminance image D2 from the IR luminance acquisition unit 12. Since both are two-dimensional images, a composite image is generated by synthesizing them. "Synthesis" as used in the present disclosure includes weighted synthesis. For example, if the weight coefficients of the BW luminance image D1 and the IR luminance image D2 are "1:0", the first synthesizing unit 21 outputs the BW luminance image D1 as-is as the composite image; if they are "0:1", the IR luminance image D2 is output as-is as the composite image.
- the first synthesizer 21 has a function as a selector that selectively outputs the BW luminance image D1 and the IR luminance image D2.
- a synthesized image output from the first synthesis unit 21 is input to the two-dimensional object detection unit 20 . Therefore, the first synthesizing unit 21 may be appropriately controlled so that the output of the first synthesizing unit 21 becomes a synthetic image suitable for the operation of the two-dimensional object detecting unit 20 .
- The composite image output from the first synthesizing unit 21 may be synthesized with the weighting factors changed appropriately according to ambient light conditions such as daytime/nighttime or the weather (rain, fog).
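The weighted synthesis could be sketched as follows; with weights 1:0 the BW image passes through unchanged and with 0:1 the IR image does, matching the selector behavior described for the first synthesizing unit 21. Default weights are placeholders.

```python
import numpy as np

def synthesize(bw, ir, w_bw=0.5, w_ir=0.5):
    """Weighted synthesis of the BW and IR luminance images.

    Returns w_bw * D1 + w_ir * D2 as a float image; callers would
    typically clip and requantize for display.
    """
    return w_bw * bw.astype(np.float64) + w_ir * ir.astype(np.float64)
```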
- FIG. 3 is a flow chart showing specific examples of the first clustering process and the second clustering process according to the first embodiment.
- Steps S101 to S103 and S107 in the figure roughly correspond to the first clustering process by the first clustering unit 101, and steps S104, S107, and S108 roughly correspond to the second clustering process by the second clustering unit 102.
- the first clustering unit 101 generates a group, which is a set of pixels within a certain range that can be regarded as the same object, in the distance image D3 input from the separation unit 16 (S101). Further, the first clustering unit 101 performs loop 1 processing (S103 to S109) for repeating the processing for each generated group.
- In loop 1, the first clustering unit 101 determines whether or not the number of pixels included in the group is equal to or greater than the first threshold (S103). If it is (yes in S103), the group is determined to be a cluster (S107). Here, "determined to be a cluster" means that the set of pixels forming the group is detected as a cluster corresponding to part or all of an object. If the first clustering unit 101 determines that the number of pixels is not equal to or greater than the first threshold (no in S103), the group is not determined to be a cluster and becomes a processing target of the second clustering unit 102.
- The second clustering unit 102 performs the following processing for groups that were not determined to be clusters by the first clustering unit 101. That is, the second clustering unit 102 extracts the pixel group in the IR luminance image D2 that corresponds to the group not determined to be a cluster, and determines whether or not the IR luminance of the extracted pixel group is equal to or greater than the second threshold (S104). When the second clustering unit 102 determines that the luminance of the pixel group is equal to or greater than the second threshold (yes in S104), it determines the group to be a cluster (S107). On the other hand, when it determines that the luminance of the pixel group is less than the second threshold (no in S104), it determines that the group is not a cluster (S108).
- This second clustering process allows part or all of an object with low reflectance that was not detected as a cluster in the first clustering process to be detected as a cluster. In this way, the second clustering unit 102 can enhance the ability to detect clusters.
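The overall two-stage decision of FIG. 3 might be sketched as follows; the group representation (lists of pixel coordinates) and threshold values are illustrative assumptions.

```python
import numpy as np

def cluster_groups(groups, ir_image, min_pixels=20, lum_thresh=80.0):
    """Two-stage clustering over candidate groups.

    Each group is a list of (row, col) pixel coordinates in the distance
    image. A group with at least min_pixels pixels (first threshold) is
    accepted directly; a smaller group is accepted only if its mean IR
    luminance reaches lum_thresh (second threshold).
    """
    clusters = []
    for group in groups:
        if len(group) >= min_pixels:          # first clustering process
            clusters.append(group)
        else:                                  # second clustering process
            lum = np.mean([ir_image[r, c] for r, c in group])
            if lum >= lum_thresh:
                clusters.append(group)
    return clusters
```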
- FIG. 4 is a flow chart showing a modification of the first clustering process and the second clustering process according to the first embodiment. The figure differs from the flowchart of FIG. 3 in that steps S105 and S106 are added. In the following, the description will focus on the points of difference, avoiding duplication of the description of the same points.
- In this modification, when the second clustering unit 102 determines that the luminance of the pixel group is equal to or greater than the second threshold (yes in S104), it tentatively determines the group to be a cluster (S105). Next, the fusion unit 18 determines whether or not the result of projecting, into the 2D image, the object generated in the 3D object information based on the group (that is, the tentatively determined cluster) overlaps the object indicated by the 2D object information detected by the two-dimensional object detection unit 20 (S106). When judging that they overlap (yes in S106), the fusion unit 18 determines the tentatively determined group to be a cluster (S107).
- When the fusion unit 18 determines that they do not overlap (no in S106), it determines that the group tentatively determined as a cluster is not a cluster (S108). That is, it determines that the object generated based on the group does not exist, and deletes the object from the 3D object information.
- the second clustering unit 102 can further enhance the cluster detection capability and accuracy of cluster detection for objects having low reflectance.
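The overlap judgment in step S106 could be sketched with an intersection-over-union (IoU) test; the patent does not specify the overlap criterion, so the IoU formulation and threshold are assumptions.

```python
def boxes_overlap(box_a, box_b, iou_thresh=0.1):
    """Check whether a projected 3D-object box overlaps a 2D detection.

    Boxes are (x1, y1, x2, y2) in image coordinates; box_a would be the
    projection of the tentatively determined cluster's object into the
    2D image and box_b a box from the 2D object information.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union >= iou_thresh if union > 0 else False
```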
- the first clustering unit 101, the second clustering unit 102, and the fusion unit 18 target the distance image D3 for the clustering process for determining clusters, but the point cloud data d3 may be the target.
- As described above, the object detection device 100 includes the distance acquisition unit 13 that acquires a distance image, the first luminance acquisition unit that acquires a first luminance image corresponding to the same imaging region as the distance image, the first clustering unit 101 that generates, in the distance image, a group that is a set of pixels within a certain range that can be regarded as the same object and determines the group to be a cluster if the number of pixels included in the group is equal to or greater than a first threshold, the second clustering unit 102 that determines the group to be a cluster if the number of pixels is less than the first threshold and the luminance of the corresponding pixel group in the first luminance image is equal to or greater than a second threshold, and the three-dimensional object detection unit 17 that detects an object in the distance image based on the clusters and generates 3D object information.
- the brightness of the pixel group in the first brightness image may be the average brightness of the pixels included in the pixel group.
- the brightness of the pixel group in the first brightness image may be the maximum brightness of the pixels included in the pixel group.
- second clustering section 102 may determine the group as a cluster when the average luminance is equal to or higher than the average threshold and the maximum luminance is equal to or higher than the maximum threshold.
- The object detection device 100 may further include a two-dimensional object detection unit 20 that detects an object included in a 2D image corresponding to the imaging region and generates 2D object information indicating the detected object, and a fusion unit 18 that fuses the 3D object information and the 2D object information.
- the ability to detect objects can be made more accurate through fusion.
- The fusion unit 18 may determine that a cluster determined by the second clustering unit 102 is not a cluster, and that the corresponding object is not a 3D object, if the object corresponding to that cluster does not overlap any object indicated by the 2D object information.
- the 2D image may be the first luminance image.
- In this case, it is possible to use an image sensor mainly composed of pixels sensitive to infrared light.
- The object detection device 100 may further include a second luminance acquisition unit that acquires a second luminance image for light of a wavelength different from that of the first luminance image, and the 2D image may be either the second luminance image or a third luminance image obtained by synthesizing the first luminance image and the second luminance image.
- In this case, it is possible to use an image sensor mainly including pixels sensitive to infrared light, or an image sensor including both pixels sensitive to infrared light and pixels sensitive to visible light.
- the object detection device 100 may include the light emitting unit 4 that emits infrared light, the image sensor 3 that receives the reflected infrared light, and the signal processing unit 5 that generates the first luminance image and the distance image using the light emitting unit 4 and the image sensor 3.
- the first brightness acquisition section and the distance acquisition section may acquire the first brightness image and the range image from the signal processing section 5.
- the object detection device 100 may include a light emitting unit 4 that emits infrared light, an image sensor 3 that has first pixels sensitive to infrared light and second pixels sensitive to visible light, and a signal processing unit 5 that, using the light emitting unit 4 and the image sensor 3, generates the first luminance image and the distance image from the pixel values of the first pixels and generates the second luminance image from the pixel values of the second pixels.
- the first luminance acquisition section, the second luminance acquisition section, and the distance acquisition section may acquire the first luminance image, the second luminance image, and the distance image from the signal processing section 5.
- in this case, it is possible to use the image sensor 3 including first pixels sensitive to infrared light and second pixels sensitive to visible light.
- an object detection method acquires a first luminance image and a distance image corresponding to the same imaging area and generates a group, which is a set of pixels within a certain range in the distance image that can be regarded as the same object. If the number of pixels included in the group is equal to or greater than a first threshold, a first clustering process is performed to determine the group as a cluster constituting an object; if the number of pixels included in the group is less than the first threshold and the luminance of the pixel group in the first luminance image corresponding to the group is equal to or greater than a second threshold, a second clustering process is performed to determine the group as a cluster. An object in the distance image is then detected based on the clusters, and 3D object information indicating the detected object is generated.
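The two-stage clustering just described can be sketched in a few lines. This is a minimal illustration, not the publication's implementation: the 4-connectivity, the depth tolerance, and all threshold values are assumptions, and the luminance test uses the average-luminance variant.

```python
import numpy as np
from collections import deque

def cluster_groups(depth, luma, depth_tol=0.3, min_pixels=50, luma_thresh=128):
    """Two-stage clustering sketch. Groups are 4-connected pixels whose
    range values differ by at most depth_tol (assumed grouping criterion).
    Stage 1 keeps groups with >= min_pixels pixels; stage 2 keeps smaller
    groups whose mean luminance reaches luma_thresh."""
    h, w = depth.shape
    visited = np.zeros((h, w), dtype=bool)
    clusters = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx] or np.isnan(depth[sy, sx]):
                continue
            # flood-fill one group of mutually close-range pixels
            group = []
            q = deque([(sy, sx)])
            visited[sy, sx] = True
            while q:
                y, x = q.popleft()
                group.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                            and not np.isnan(depth[ny, nx])
                            and abs(depth[ny, nx] - depth[y, x]) <= depth_tol):
                        visited[ny, nx] = True
                        q.append((ny, nx))
            if len(group) >= min_pixels:           # first clustering process
                clusters.append(group)
            else:                                  # second clustering process
                if np.mean([luma[p] for p in group]) >= luma_thresh:
                    clusters.append(group)
    return clusters
```

Swapping `np.mean` for `np.max`, or checking both against separate thresholds, yields the other luminance variants described above.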
- the brightness of the pixel group in the first brightness image may be the average brightness of the pixels included in the pixel group.
- the brightness of the pixel group in the first brightness image may be the maximum brightness of the pixels included in the pixel group.
- the luminance of the pixel group in the first luminance image may include both the average luminance and the maximum luminance of the pixel group, and the second threshold may include an average threshold and a maximum threshold.
- an object included in a 2D image corresponding to the imaging area may be detected, 2D object information indicating the detected object may be generated, and the 3D object information and the 2D object information may be merged.
- the ability to detect objects can be made more accurate through fusion.
- in the object detection method, if an object included in the 3D object information that corresponds to a cluster determined in the second clustering process overlaps an object indicated in the 2D object information, the object is determined to be a 3D object; if the object corresponding to the cluster determined in the second clustering process does not overlap an object indicated in the 2D object information, the cluster determined in the second clustering process is determined not to be a cluster and the object is determined not to be a 3D object.
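The overlap rule above can be expressed as a small filter over the 3D detections. The dictionary layout, the `from_second_stage` flag, and the axis-aligned box representation are assumptions for illustration, not structures defined by the publication:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def fuse(objects_3d, boxes_2d):
    """Keep stage-1 (pixel-count) 3D objects unconditionally; keep a
    stage-2 (luminance-rescued) object only if it overlaps some 2D
    detection, otherwise discard it as neither cluster nor 3D object."""
    kept = []
    for obj in objects_3d:
        if obj["from_second_stage"]:
            if any(boxes_overlap(obj["box"], b) for b in boxes_2d):
                kept.append(obj)   # confirmed by the 2D detector
        else:
            kept.append(obj)
    return kept
```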
- the 2D image may be the first luminance image.
- in this case, it is possible to use an image sensor mainly composed of pixels sensitive to infrared light.
- the object detection method may acquire a second brightness image for light of a wavelength different from that of the first brightness image, and the 2D image may be either the second brightness image or a third brightness image obtained by synthesizing the first brightness image and the second brightness image.
- in this case, it is possible to use an image sensor mainly including pixels sensitive to infrared light, or an image sensor that combines pixels sensitive to infrared light and pixels sensitive to visible light.
- a computer program according to one aspect of Embodiment 1 is a program that causes a computer to execute the object detection method described above.
- the moving object here includes, for example, vehicles such as automobiles, agricultural machinery, and two-wheeled vehicles, ships, and flying objects such as drones.
- FIG. 5 is a block diagram showing a configuration example of the object detection device 100 according to the second embodiment.
- The object detection device 100 shown in the figure includes a sensor system 10 and a control system 2.
- the sensor system 10 includes an image sensor 3, a light emitting section 4, a signal processing section 5, and an information processing system 1a.
- the image sensor 3, the light emitting unit 4 and the signal processing unit 5 may be the same as those in the first embodiment.
- the information processing system 1a has the same functions as the information processing system 1 of Embodiment 1, and furthermore, has an additional function for in-vehicle use.
- the control system 2 is an information presentation unit that appropriately presents information to the driver of the moving body, for example by displaying, on a display device, information for supporting the driving of the moving body according to the information processing result from the information processing system 1a. Note that the control system 2 may also assist the driving (steering) of the moving body by controlling the steering or braking of the moving body according to the information processing result from the information processing system 1.
- FIG. 6 is a block diagram showing a detailed configuration example of the information processing system 1a according to the second embodiment.
- Compared to the information processing system 1 shown in FIG. 1, the information processing system 1a shown in FIG. 6 additionally includes a noise processing unit 214, a tracking unit 219, a second synthesizing unit 222, a white line detection unit 223, a white line bird's-eye view unit 224, a free space detection unit 225, a parking frame detection unit 226, and an output unit 227.
- the description will focus on the points of difference, avoiding duplication of the description of the same points.
- Odometry information D4, confidence information D5, and reference information D6 are input to the information processing system 1a. That is, the BW luminance image D1, the IR luminance image D2, the distance image D3, the odometry information D4, the confidence information D5, and the reference information D6 are input as input data to the information processing system 1a.
- the odometry information D4 includes information detectable by sensors mounted on the moving body, such as the tilt angle of the moving body, its traveling direction, moving speed, and acceleration, the amount of depression of the accelerator pedal (accelerator opening), the amount of depression of the brake pedal, or the steering angle. Furthermore, the odometry information D4 includes information based on the current position of the moving body detectable using GPS (Global Positioning System) or the like, such as the roadway width, the presence or absence of sidewalks, and the slope or curvature of curves.
- Confidence information D5 is information about data reliability. As an example, the confidence information D5 is used to determine whether the distance image D3 corresponds to a pseudo-range image affected by interference, multipath, or the like. Similarly, the confidence information D5 is used to determine whether the BW luminance image D1 or the IR luminance image D2 corresponds to pseudo luminance information.
- the reference information D6 is information for changing the weighting factors used to synthesize the BW luminance image D1 and the IR luminance image D2. That is, the reference information D6 is information about conditions that affect the appearance of white lines, such as daytime/nighttime or weather (rain, fog), and includes, for example, information about the illuminance and/or humidity around the object.
- the noise processing unit 214 corrects the distance image D3 using the distance image D3 and one or more pieces of information selected from the group consisting of the BW luminance image D1 and the IR luminance image D2.
- the one or more pieces of information selected from the group consisting of the BW luminance image D1 and the IR luminance image D2 are the BW luminance image D1, the IR luminance image D2, or luminance information obtained by synthesizing the BW luminance image D1 and the IR luminance image D2 (hereinafter also referred to as a "composite image").
- the noise processing section 214 is connected to the distance acquisition section 13.
- the noise processor 214 is connected to the IR luminance acquisition section 12 via the first synthesis section 21.
- the BW luminance image D1 or the IR luminance image D2 is not directly input to the noise processing unit 214 but is indirectly input via the first synthesizing unit 21 .
- the distance image D3 by itself has a relatively low SN ratio, that is, a relatively high proportion of noise, so the noise processing unit 214 reduces the noise of the distance image D3.
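One plausible form of such luminance-assisted correction, under the assumption that a weak infrared return implies an unreliable time-of-flight distance, is to invalidate range pixels whose luminance falls below a floor. The threshold value is hypothetical, and the publication's actual noise processing is not specified at this level of detail:

```python
import numpy as np

def denoise_range(depth, luma, luma_floor=20.0):
    """Invalidate (set to NaN) range pixels whose luminance is below
    luma_floor, treating low-signal returns as unreliable distances."""
    out = depth.copy()
    out[luma < luma_floor] = np.nan
    return out
```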
- the tracking unit 219 tracks objects existing within the monitoring area.
- the tracking unit 219 tracks the object by comparing the 3D object information fused by the fusion unit 18 between multiple frames of the output of the image sensor 3, for example. As a result, even when the object moves within the monitoring area, the tracking unit 219 can recognize the object before movement and the object after movement as the same object.
- the tracking result of the tracking unit 219 is output to the output unit 227 as target object information D11. Odometry information D4 is also input to the tracking unit 219.
- the second synthesizing unit 222 synthesizes the BW luminance image D1 and the IR luminance image D2 from the BW luminance acquisition unit 11 and the IR luminance acquisition unit 12. Like the first synthesizing unit 21, the second synthesizing unit 222 also functions as a selector that selectively outputs the BW luminance image D1 or the IR luminance image D2. The image synthesized by the second synthesizing unit 222 from the BW luminance image D1 and the IR luminance image D2 is input to the white line detection unit 223. Therefore, the second synthesizing unit 222 is controlled appropriately so that its output is a synthesized image suitable for the operation of the white line detection unit 223. Further, the second synthesizing unit 222 receives the reference information D6 and performs synthesis while appropriately changing the weighting factors according to conditions affecting white lines, such as daytime/nighttime or weather (rain, fog).
- the output of the first synthesis section 21 is input to the two-dimensional object detection section 20, and the output of the second synthesis section 222 is input to the white line detection section 223.
- the first synthesizing unit 21 and the second synthesizing unit 222 may use different weighting factors during synthesis.
- both the first synthesizing unit 21 and the second synthesizing unit 222 function as a "synthesizing unit” that synthesizes the BW luminance image D1 and the IR luminance image D2.
- the synthesizing units (the first synthesizing unit 21 and the second synthesizing unit 222) have a function of synthesizing the BW luminance image D1 and the IR luminance image D2 so as to correct the information regarding the positions of the first pixels 31 and the second pixels 32.
- the synthesizing units (the first synthesizing unit 21 and the second synthesizing unit 222) change weight coefficients for synthesizing the BW luminance image D1 and the IR luminance image D2 according to the input reference information D6.
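A weighted blend driven by the reference information might look as follows. The linear illuminance-to-weight mapping and the 1000 lux scale are assumptions for illustration, not values from the publication:

```python
import numpy as np

def synthesize(bw, ir, illuminance_lux):
    """Blend BW and IR luminance images with a weight driven by ambient
    illuminance (reference information D6): favour the visible-light
    image in daylight and the infrared image at night."""
    w = np.clip(illuminance_lux / 1000.0, 0.0, 1.0)  # 0 = night, 1 = daylight
    return w * bw + (1.0 - w) * ir
```

The same function can serve as a selector by driving the weight to exactly 0 or 1.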
- the white line detection unit 223 detects areas that are candidates for white lines drawn on the road surface.
- the white line detection unit 223 detects white line candidate areas based on the composite image of the BW luminance image D1 and the IR luminance image D2 input from the second synthesis unit 222.
- Confidence information D5 is also input to the white line detection unit 223. White line detection is realized, for example, by performing edge extraction with a filter on the combined image of the BW luminance image D1 and the IR luminance image D2, thereby detecting regions where pixel values (brightness) change suddenly.
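The edge-extraction step can be sketched with a simple horizontal gradient threshold. The actual filter and threshold are not specified in the text, so both are assumptions:

```python
import numpy as np

def white_line_candidates(img, grad_thresh=40.0):
    """Mark pixels where brightness changes suddenly along each row
    (a crude edge filter standing in for the unspecified one)."""
    grad = np.abs(np.diff(img.astype(float), axis=1))
    mask = np.zeros_like(img, dtype=bool)
    mask[:, 1:] = grad >= grad_thresh  # edge flagged at the right pixel
    return mask
```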
- the target of detection by the white line detection unit 223 is not limited to white lines; it may be, for example, a yellow line, or a picture or pattern drawn on the road surface.
- the white line bird's-eye view section 224 is provided after the white line detection section 223.
- the white line bird's-eye view unit 224 performs coordinate conversion on the synthesized image of the BW luminance image D1 and the IR luminance image D2 so that at least the white line candidate area detected by the white line detection unit 223 and its surroundings become a bird's-eye view image viewed from directly above.
- the free space detection unit 225 detects a free space within the white line, that is, a vacant space, based on the distance image D3.
- the distance image D3, separated into the object and the peripheral area located around the object, is input from the separation unit 16 to the free space detection unit 225.
- the free space detection unit 225 uses the distance image D3 input from the separation unit 16 to detect the free space within the white line.
- the detection result of free space detection section 225 is output to output section 227 as free space information D13.
- the free space detector 225 also receives odometry information D4.
- the parking frame detection unit 226 detects vacant parking frames within the white lines, that is, parking frames where no other vehicle is parked. In general, parking lots of commercial facilities, hospitals, parks, stadiums, halls, public transportation facilities, and the like are provided with a plurality of parking frames, and the driver parks the own vehicle (moving body) in a vacant frame. The parking frame detection unit 226 makes it possible to automatically search for an empty parking frame in such a case.
- the detection result of the free space detection unit 225 and the output of the white line bird's-eye view unit 224 (the white line candidate area after coordinate conversion) are input to the parking frame detection unit 226.
- the parking frame detection unit 226 performs pairing between the detection result of the free space detection unit 225 and the output of the white line bird's-eye view unit 224 (white line candidate area after coordinate conversion), and determines an empty parking frame. For example, the parking frame detection unit 226 determines, as an empty parking frame, a parking frame that positionally overlaps with a free space among parking spaces that are large enough to park a moving object within the white line. The detection result of the parking frame detection unit 226 is output to the output unit 227 as the vacant parking frame information D14.
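The pairing described above reduces to an overlap test between parking frames and free-space regions in bird's-eye coordinates. The box representation and the overlap criterion are assumptions for illustration:

```python
def vacant_frames(frames, free_spaces):
    """Pair detected parking frames with the free-space result: a frame
    that positionally overlaps a free space is reported as vacant.
    Boxes are (x0, y0, x1, y1) in bird's-eye coordinates."""
    def overlap(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
    return [f for f in frames if any(overlap(f, s) for s in free_spaces)]
```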
- the output unit 227 outputs information processing results regarding the state of white lines within the angle of view of the image sensor 3 based on the BW luminance image D1, the IR luminance image D2, and the distance image D3. That is, the information processing system 1a according to the present embodiment executes various information processing regarding the state of the white lines based on the BW luminance image D1, the IR luminance image D2, and the distance image D3 obtained from the image sensor 3, and the results are output from the output unit 227.
- the output unit 227 outputs the target information D11, the road surface information D12, the free space information D13, and the vacant parking frame information D14 obtained as described above. The output unit 227 outputs these pieces of information to the control system 2.
- the information processing result includes at least one of information regarding the presence or absence of an object on the white line around the moving object, information regarding the position of the object existing on the white line within the white line, and information regarding the attribute of the object.
- the target object information D11 includes all of the following: information regarding the presence or absence of an object on the white line around the moving object, information regarding the position within the white line of an object existing on the white line, and information regarding the attributes of the object.
- the "attribute” referred to in the present disclosure includes, for example, the type of object, ie, whether it is a person or not, whether it is a moving object (a person, a car, a bicycle, etc.) or a fixed object, or whether it is a street tree, a traffic light, a guardrail, or the like.
- the "attribute” of an object also includes the size, color, movement (change), etc. of the object.
- if the object is a person, the "attribute" of the object includes sex, height, body type, age group, etc.; if the object is a moving body, the moving direction, moving speed, etc. are also included in the "attribute" of the object.
- the information output by the output unit 227 changes as appropriate according to the requirements of the output destination. For example, when the outputs of the output unit 227 are aggregated from a plurality of moving bodies into the cloud (cloud computing) or the like, the output unit 227 may output meta information.
- the fusion unit 18 may output the feedback signal Si1 to the sensor system 10 including the image sensor 3.
- the image sensor 3 outputs an electrical signal in which one or more parameters selected from the group consisting of exposure time and frame rate are changed by the feedback signal Si1. That is, the feedback signal Si1 output from the fusion section 18 is fed back to the signal processing section 5 as shown in FIG.
- the feedback signal Si1 includes the detection result of the three-dimensional object detection section 17, which is the output of the fusion section 18.
- the image sensor 3 changes the exposure time and/or frame rate according to this feedback signal Si1.
- FIG. 7 is a flow chart showing an operation example of the object detection device according to the second embodiment.
- the information processing system 1a has a plurality of operation modes including at least a parking space detection mode and an object detection mode. These operation modes can be individually enabled and disabled. For example, if the parking space detection mode is enabled and all other operation modes are disabled, the information processing system 1a operates only in the parking space detection mode.
- the parking space detection mode is an operation mode for detecting an empty parking space.
- the object detection mode is an operation mode for detecting an object within the monitoring area.
- the information processing system 1a executes BW brightness acquisition processing (S1) for acquiring a BW brightness image D1, IR brightness acquisition processing (S2) for acquiring an IR brightness image D2, and distance acquisition processing (S3) for acquiring a distance image D3.
- Information processing system 1a executes BW luminance acquisition processing, IR luminance acquisition processing, and distance acquisition processing (S1 to S3) in BW luminance acquisition unit 11, IR luminance acquisition unit 12, and distance acquisition unit 13 as needed.
- the information processing system 1a performs noise processing that corrects the distance image D3 using the distance image D3 and one or more pieces of information selected from the group consisting of the BW luminance image D1 and the IR luminance image D2, thereby reducing noise in the distance image D3 (S4).
- the information processing system 1a performs a separation process for separating the distance image D3 after the coordinate transformation into the object and the surrounding area located around the object in the separating unit 16 (S5).
- the information processing system 1a determines whether the object detection mode is enabled (S6). If the object detection mode is enabled (S6: Yes), the information processing system 1a executes a series of processes (S6a, S7 to S11) for detecting an object. That is, the information processing system 1a executes the first clustering process and the second clustering process in the first clustering unit 101 and the second clustering unit 102 (S6a), executes a three-dimensional object detection process in which the three-dimensional object detection unit 17 detects an object (S7), and executes a two-dimensional object detection process in which the two-dimensional object detection unit 20 detects an object (S8).
- step S6a may be the same as the flowchart in FIG. 3 or FIG.
- the fusion unit 18 executes fusion processing (S9), in which the detection result (two-dimensional detection result) of the two-dimensional object detection unit 20 is used to correct the detection result (three-dimensional detection result) of the three-dimensional object detection unit 17.
- in step S9, it is determined whether the corresponding three-dimensional object detection result and two-dimensional object detection result overlap, in accordance with FIG.
- the information processing system 1a determines the presence or absence of an object based on the result of the fusion processing (S10). If an object exists (S10: Yes), the information processing system 1a outputs the target information D11 from the output unit 227 (S11) and then determines whether the parking space detection mode is enabled (S12). If no object exists (S10: No), the information processing system 1a proceeds to process S12 without outputting the target object information D11.
- if the parking space detection mode is enabled (S12: Yes), the information processing system 1a executes a series of processes (S13 to S16) for detecting an empty parking space. That is, the white line detection unit 223 detects a white line candidate area (S13), and the free space detection unit 225 detects a free space (S14). Based on these results, the information processing system 1a uses the parking frame detection unit 226 to determine whether there is an empty parking frame within the monitoring area (S15).
- if there is a vacant parking frame (S15: Yes), the information processing system 1a outputs the vacant parking frame information D14 from the output unit 227 (S16) and ends the series of processes. If there is no vacant parking frame (S15: No), the information processing system 1a ends the series of processes without outputting the vacant parking frame information D14.
- if the object detection mode is disabled (S6: No), the information processing system 1a skips the series of processes (S7 to S11) for detecting an object and proceeds to process S12. If the parking space detection mode is disabled (S12: No), the information processing system 1a skips the series of processes (S13 to S16) for detecting an empty parking space and ends the processing.
- the information processing system 1a repeatedly executes the series of processes S1 to S16 as described above.
- the flowchart of FIG. 7 is merely an example of the overall operation of the information processing system 1a, and processes may be omitted or added as appropriate, and the order of the processes may be changed as appropriate.
- the order of the processes S1 to S3 may be changed, and after acquiring the IR brightness image D2 and the distance image D3 (S2, S3), the BW brightness image D1 may be acquired (S1).
- the object detection device 100 includes the control system 2 as an information presentation unit that presents, based on the 3D object information, information for supporting the movement of the moving body on which the object detection device is mounted.
- this configuration suits the object detection device 100 mounted on a moving body.
- the image sensor 3 may be configured with two solid-state imaging devices instead of one solid-state imaging device, or may be configured with three solid-state imaging devices. However, it is necessary that the pixels of the BW luminance image D1, the IR luminance image D2, and the distance image D3 can be associated with each other.
- each component may be configured with dedicated hardware or implemented by executing a software program suitable for each component.
- Each component may be realized by reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory by a program execution unit such as a CPU or processor.
- although the object detection device 100, the object detection method, and the program according to one or more aspects have been described above based on the embodiments, the present disclosure is not limited to these embodiments. Forms obtained by applying various modifications conceivable by a person skilled in the art to the embodiments, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects, as long as they do not deviate from the spirit of the present disclosure.
- the present disclosure can be used for the object detection device 100 that detects objects in luminance images and range images.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Measurement Of Optical Distance (AREA)
- Geophysics And Detection Of Objects (AREA)
Abstract
Description
[1.1 Configuration]
First, the configuration of the object detection device 100 according to the present embodiment will be described. FIG. 1 is a block diagram showing a configuration example of the object detection device 100 according to Embodiment 1. The object detection device 100 shown in the figure is mounted on, for example, a moving body; it images monitoring areas set in front of, beside, or behind the moving body and detects objects included in the captured two-dimensional and three-dimensional images. To this end, the object detection device 100 includes the image sensor 3, the light emitting unit 4, the signal processing unit 5, and the information processing system 1. Hereinafter, a two-dimensional image may be abbreviated as a 2D image and a three-dimensional image as a 3D image.
First, the operation of the object detection device 100 according to the present embodiment will be described. Here, specific examples of the first clustering process and the second clustering process in the operation of the object detection device 100 are described. FIG. 3 is a flowchart showing a specific example of the first clustering process and the second clustering process according to Embodiment 1.
Next, a modification of the clustering processes in the present embodiment will be described. Here, a modification of the first clustering process and the second clustering process in the operation of the object detection device 100 is described. FIG. 4 is a flowchart showing the modification of the first clustering process and the second clustering process according to Embodiment 1. It differs from the flowchart of FIG. 3 in that steps S105 and S106 are added. The following description focuses on the differences, avoiding duplicate description of the common points.
In the present embodiment, a configuration example in which the object detection device 100 is mounted on a moving body will be described. The moving body here includes, for example, vehicles such as automobiles, agricultural machinery, and two-wheeled vehicles, as well as ships and flying objects such as drones.
FIG. 5 is a block diagram showing a configuration example of the object detection device 100 according to Embodiment 2. The object detection device 100 shown in the figure includes the sensor system 10 and the control system 2.
Next, the operation of the object detection device 100 according to the present embodiment will be described. FIG. 7 is a flowchart showing an operation example of the object detection device according to Embodiment 2.
2 Control system
3 Image sensor
4 Light emitting unit
5 Signal processing unit
10 Sensor system
11 BW luminance acquisition unit
12 IR luminance acquisition unit
13 Distance acquisition unit
15 Coordinate conversion unit
16 Separation unit
17 Three-dimensional object detection unit
18 Fusion unit
20 Two-dimensional object detection unit
21 First synthesizing unit
214 Noise processing unit
219 Tracking unit
222 Second synthesizing unit
223 White line detection unit
224 White line bird's-eye view unit
225 Free space detection unit
226 Parking frame detection unit
227 Output unit
31 First pixel
32 Second pixel
100 Object detection device
101 First clustering unit
102 Second clustering unit
D1 BW luminance image
D2 IR luminance image
d3 Point cloud data
D3 Distance image
D4 Odometry information
D5 Confidence information
D6 Reference information
D11 Target information
D12 Road surface information
D13 Free space information
D14 Parking frame information
Si1 Feedback signal
Claims (20)
- An object detection device comprising: a distance acquisition unit that acquires a distance image; a first luminance acquisition unit that acquires a first luminance image corresponding to the same imaging area as the distance image; a first clustering unit that generates, in the distance image, a group that is a set of pixels within a certain range that can be regarded as the same object, and determines the group as a cluster when the number of pixels included in the group is equal to or greater than a first threshold; a second clustering unit that, when the number of pixels included in the group is less than the first threshold, determines the group as a cluster when the luminance of a pixel group in the first luminance image corresponding to the group is equal to or greater than a second threshold; and a 3D object detection unit that detects an object in the distance image based on the clusters and generates 3D object information (3D being short for three-dimensional) indicating the detected object.
- The object detection device according to claim 1, wherein the luminance of the pixel group in the first luminance image is an average luminance of the pixels included in the pixel group.
- The object detection device according to claim 1, wherein the luminance of the pixel group in the first luminance image is a maximum luminance of the pixels included in the pixel group.
- The object detection device according to claim 1, wherein the luminance of the pixel group in the first luminance image includes both an average luminance and a maximum luminance of the pixel group, the second threshold includes an average threshold and a maximum threshold, and the second clustering unit determines the group as a cluster when the average luminance is equal to or greater than the average threshold and the maximum luminance is equal to or greater than the maximum threshold.
- The object detection device according to claim 1, further comprising: a 2D object detection unit that detects an object included in a 2D image (2D being short for two-dimensional) corresponding to the imaging area and generates 2D object information indicating the detected object; and a fusion unit that fuses the 3D object information and the 2D object information.
- The object detection device according to claim 5, wherein the fusion unit further determines that, among the objects included in the 3D object information, an object corresponding to a cluster determined by the second clustering unit is a 3D object when the object overlaps an object indicated in the 2D object information, and, when the object corresponding to the cluster determined by the second clustering unit does not overlap an object indicated in the 2D object information, determines that the cluster determined by the second clustering unit is not a cluster and that the object is not a 3D object.
- The object detection device according to claim 5 or 6, wherein the 2D image is the first luminance image.
- The object detection device according to claim 5 or 6, further comprising a second luminance acquisition unit that acquires a second luminance image for light of a wavelength different from that of the first luminance image, wherein the 2D image is either the second luminance image or a third luminance image obtained by synthesizing the first luminance image and the second luminance image.
- The object detection device according to claim 1, further comprising: a light emitting unit that emits infrared light; an image sensor that receives reflected light of the infrared light; and a signal processing unit that generates the first luminance image and the distance image using the light emitting unit and the image sensor, wherein the first luminance acquisition unit and the distance acquisition unit acquire the first luminance image and the distance image from the signal processing unit.
- The object detection device according to claim 8, further comprising: a light emitting unit that emits infrared light; an image sensor having first pixels sensitive to infrared light and second pixels sensitive to visible light; and a signal processing unit that, using the light emitting unit and the image sensor, generates the first luminance image and the distance image from pixel values of the first pixels and generates the second luminance image from pixel values of the second pixels, wherein the first luminance acquisition unit, the second luminance acquisition unit, and the distance acquisition unit acquire the first luminance image, the second luminance image, and the distance image from the signal processing unit.
- The object detection device according to claim 1, further comprising an information presentation unit that presents, based on the 3D object information, information for supporting movement of a moving body on which the object detection device is mounted.
- An object detection method comprising: acquiring a first luminance image and a distance image corresponding to the same imaging area; generating a group that is a set of pixels within a certain range in the distance image that can be regarded as the same object; performing a first clustering process of determining the group as a cluster constituting an object when the number of pixels included in the group is equal to or greater than a first threshold; performing a second clustering process of determining the group as a cluster when the number of pixels included in the group is less than the first threshold and the luminance of a pixel group in the first luminance image corresponding to the group is equal to or greater than a second threshold; and detecting an object in the distance image based on the clusters and generating 3D object information indicating the detected object.
- The object detection method according to claim 12, wherein the luminance of the pixel group in the first luminance image is an average luminance of the pixels included in the pixel group.
- The object detection method according to claim 12, wherein the luminance of the pixel group in the first luminance image is a maximum luminance of the pixels included in the pixel group.
- The object detection method according to claim 12, wherein the luminance of the pixel group in the first luminance image includes both an average luminance and a maximum luminance of the pixel group, the second threshold includes an average threshold and a maximum threshold, and the group is determined as a cluster when the average luminance is equal to or greater than the average threshold and the maximum luminance is equal to or greater than the maximum threshold.
- The object detection method according to claim 12, further comprising: detecting an object included in a 2D image corresponding to the imaging area; generating 2D object information indicating the detected object; and fusing the 3D object information and the 2D object information.
- The object detection method according to claim 16, wherein, when an object included in the 3D object information that corresponds to a cluster determined in the second clustering process overlaps an object indicated in the 2D object information, the object is determined to be a 3D object, and, when the object corresponding to the cluster determined in the second clustering process does not overlap an object indicated in the 2D object information, the cluster determined in the second clustering process is determined not to be a cluster and the object is determined not to be a 3D object.
- The object detection method according to claim 16 or 17, wherein the 2D image is the first luminance image.
- The object detection method according to claim 16 or 17, further comprising acquiring a second luminance image for light of a wavelength different from that of the first luminance image, wherein the 2D image is either the second luminance image or a third luminance image obtained by synthesizing the first luminance image and the second luminance image.
- A program that causes a computer to execute the object detection method according to claim 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22824886.0A EP4357814A4 (en) | 2021-06-17 | 2022-06-08 | OBJECT DETECTION DEVICE AND METHOD, AND PROGRAM |
JP2022539302A JP7138265B1 (ja) | 2021-06-17 | 2022-06-08 | 物体検知装置、物体検知方法およびプログラム |
CN202280041885.2A CN117480409A (zh) | 2021-06-17 | 2022-06-08 | 物体检测装置、物体检测方法及程序 |
US18/538,638 US20240112478A1 (en) | 2021-06-17 | 2023-12-13 | Object detecting device, object detecting method, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-100600 | 2021-06-17 | ||
JP2021100600 | 2021-06-17 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/538,638 Continuation US20240112478A1 (en) | 2021-06-17 | 2023-12-13 | Object detecting device, object detecting method, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022264894A1 true WO2022264894A1 (ja) | 2022-12-22 |
Family
ID=84527445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/023092 WO2022264894A1 (ja) | 2021-06-17 | 2022-06-08 | 物体検知装置、物体検知方法およびプログラム |
Country Status (2)
Country | Link |
---|---|
TW (1) | TW202305740A (ja) |
WO (1) | WO2022264894A1 (ja) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6312987A (ja) * | 1986-07-04 | 1988-01-20 | Hitachi Ltd | Method for detecting a moving object |
JP2016088183A (ja) * | 2014-10-31 | 2016-05-23 | IHI Corporation | Obstacle detection system and railway vehicle |
CN106296721A (zh) * | 2015-05-14 | 2017-01-04 | Ricoh Co., Ltd. | Stereo-vision-based object aggregation detection method and device |
JP2017016584A (ja) * | 2015-07-06 | 2017-01-19 | Ricoh Co., Ltd. | Object detection device, object detection method, and program |
WO2017145600A1 (ja) * | 2016-02-23 | 2017-08-31 | Ricoh Co., Ltd. | Image processing device, imaging device, mobile device control system, image processing method, and program |
JP2019149149A (ja) | 2017-12-29 | 2019-09-05 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for recovering point cloud data |
JP2019533133A (ja) | 2017-08-25 | 2019-11-14 | Beijing Didi Infinity Technology and Development Co., Ltd. | Method and system for detecting environmental information of a vehicle |
WO2020129522A1 (ja) * | 2018-12-18 | 2020-06-25 | Hitachi Automotive Systems, Ltd. | Image processing device |
2022
- 2022-06-08 TW TW111121263A patent/TW202305740A/zh unknown
- 2022-06-08 WO PCT/JP2022/023092 patent/WO2022264894A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
TW202305740A (zh) | 2023-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10445928B2 (en) | Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types | |
US20220120910A1 (en) | Information processing system, sensor system, information processing method, and non-transitory computer readable storage medium | |
JP4970516B2 (ja) | Surroundings confirmation support device
CN104512411B (zh) | Vehicle control system and image sensor
JP5714940B2 (ja) | Mobile object position measuring device
US20160252905A1 (en) | Real-time active emergency vehicle detection
CN103874931B (zh) | Method and device for determining the position of an object in the surroundings of a vehicle
JP6358552B2 (ja) | Image recognition device and image recognition method
JP2014006885A (ja) | Step recognition device, step recognition method, and step recognition program
CN115104138A (zh) | Multi-modal, multi-technique vehicle signal detection
CN110293973B (zh) | Driving support system
JP6278790B2 (ja) | Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
WO2020116205A1 (ja) | Information processing device, information processing method, and program
WO2022153896A1 (ja) | Imaging device, image processing method, and image processing program
US11120292B2 (en) | Distance estimation device, distance estimation method, and distance estimation computer program
JP7138265B1 (ja) | Object detection device, object detection method, and program
WO2022264894A1 (ja) | Object detection device, object detection method, and program
JP2014016981A (ja) | Moving surface recognition device, moving surface recognition method, and moving surface recognition program
US20210354634A1 (en) | Electronic device for vehicle and method of operating electronic device for vehicle
CN116892949A (zh) | Ground object detection device, ground object detection method, and computer program for ground object detection
KR101340014B1 (ko) | Apparatus and method for providing location information
US11392134B1 (en) | System for tuning parameters of a thermal sensor based on a region of interest
JP4506299B2 (ja) | Vehicle periphery monitoring device
US20230286548A1 (en) | Electronic instrument, movable apparatus, distance calculation method, and storage medium | |
US20230062562A1 (en) | Sensing system and distance measuring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2022539302 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22824886 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280041885.2 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022824886 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022824886 Country of ref document: EP Effective date: 20240117 |