
WO2015001747A1 - Travel road surface indication detection device and travel road surface indication detection method - Google Patents


Info

Publication number
WO2015001747A1
WO2015001747A1 (PCT/JP2014/003301)
Authority
WO
WIPO (PCT)
Prior art keywords
range
detection
bird
overlapping
image
Prior art date
Application number
PCT/JP2014/003301
Other languages
French (fr)
Japanese (ja)
Inventor
丙辰 王
展彦 井上
宗昭 松本
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー (DENSO Corporation)
Publication of WO2015001747A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Definitions

  • the present disclosure relates to a traveling road marking detection apparatus that detects a road marking object by performing image processing on a captured image captured by a camera that captures the periphery of a vehicle.
  • when a plurality of cameras capture the periphery of the vehicle, overlapping portions arise in the captured images and in the bird's-eye images converted from them.
  • since the same road surface is photographed in the overlapping portions, performing the image processing for detecting the sign object on all of the overlapping portions may increase the load on the processing apparatus.
  • the present disclosure has been made in view of the above points, and an object of the present disclosure is to provide a traveling road marking detection device and a traveling road marking detection method that can reduce the detection load of road marking objects.
  • the traveling road surface marking detection device includes a generation unit and a marking object detection unit.
  • the generation unit generates bird's-eye images viewed from a predetermined virtual viewpoint, based on a plurality of captured images taken by a plurality of cameras that capture the periphery of the vehicle and whose imaging ranges at least partly do not overlap one another.
  • the sign object detection means detects a sign object on the road surface from the bird's-eye view image generated by the generation means.
  • for each overlapping portion, where the shooting ranges of two or more bird's-eye images overlap one another, the sign object detection means detects sign objects within the assigned range assigned to that overlapping portion.
  • since the sign object is detected within the allocated range of each overlapping portion of the plurality of bird's-eye images, the detection process need not be performed on the portions outside the allocated range, and the detection load of the image processing can be reduced.
  • the traveling road surface marking detection device includes a generation unit, a range setting unit, and a marking object detection unit.
  • the generating means generates a first bird's-eye image and a second bird's-eye image viewed from a predetermined virtual viewpoint, based on a first captured image taken by a first camera having a first shooting range and a second captured image taken by a second camera having a second shooting range that partially overlaps the first shooting range.
  • the first bird's-eye image has a first overlapping portion in which the overlapping shooting range, where the first and second shooting ranges partially overlap, is captured, and the second bird's-eye image has a second overlapping portion in which the same overlapping shooting range is captured.
  • based on the resolution of the first captured image and the resolution of the second captured image, the range setting means sets, in each of the first overlapping portion and the second overlapping portion, a detection target range and a detection unnecessary range that is the remainder of the overlapping portion.
  • the sign detection means detects a road surface sign from the portion of the first bird's-eye image other than the first overlapping portion and from the detection target range of the first overlapping portion, and likewise detects a road surface sign from the portion of the second bird's-eye image other than the second overlapping portion and from the detection target range of the second overlapping portion.
  • the range setting means sets the detection target range of the first overlapping portion to the same range as the detection unnecessary range of the second overlapping portion, and sets the detection unnecessary range of the first overlapping portion to the same range as the detection target range of the second overlapping portion.
  • the resolution of the portion corresponding to the detection target range of the first overlapping portion in the first captured image is larger than the resolution of the portion corresponding to the detection unnecessary range of the second overlapping portion in the second captured image.
  • the resolution of the portion corresponding to the detection target range of the second overlapping portion in the second captured image is larger than the resolution of the portion corresponding to the detection unnecessary range of the first overlapping portion in the first captured image.
  • the same functions and effects as the traveling road marking detection device according to the first aspect of the present disclosure can be achieved.
  • according to the third aspect of the present disclosure, a traveling road marking detection method is provided for a traveling road marking detection device that detects road marking objects by performing image processing on a plurality of captured images taken by a plurality of cameras that capture the periphery of the vehicle.
  • the method generates a plurality of bird's-eye images viewed from a predetermined virtual viewpoint based on the plurality of captured images, detects sign objects from the generated bird's-eye images, and, for each overlapping portion where the shooting ranges of two or more bird's-eye images overlap one another, detects sign objects from the assigned range assigned to that overlapping portion.
  • the same operations and effects as the traveling road marking detection device according to the first aspect of the present disclosure can be achieved.
  • FIG. 1 is a block diagram illustrating a configuration of a sign recognition system according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart for explaining the processing procedure of the lane departure warning process.
  • FIG. 3A is a diagram illustrating an example of an image obtained by converting a captured image around the vehicle into a bird's-eye view image
  • FIG. 3B is a diagram of the captured images around the vehicle corresponding to the bird's-eye images of FIG. 3A
  • FIG. 4A is an enlarged view in which a part of an overlapping part in one bird's-eye image is enlarged
  • FIG. 4B is an enlarged view in which a part of the overlapping part in another bird's-eye image is enlarged.
  • FIG. 4C is a diagram of the bird's-eye image shown in FIG. 4A
  • FIG. 4D is a diagram of the bird's-eye image shown in FIG. 4B
  • FIG. 5A and FIG. 5B are diagrams for explaining an example of an allocation range setting method.
  • FIG. 6A and FIG. 6B are diagrams for explaining the handling of inappropriate parts in setting the allocation range.
  • FIG. 7A and FIG. 7B are diagrams for explaining an example of an allocation range setting method.
  • the sign object recognition system 1 is mounted on and used in a vehicle such as an automobile. As shown in FIG. 1, it includes a plurality of in-vehicle cameras 3, a display means 5, and a traveling road surface marking detection device 7.
  • the sign recognition system 1 detects a white line such as a roadway center line or a roadway outer line, and displays a warning when the traveling vehicle is predicted to deviate from its lane.
  • in the present embodiment, a configuration targeting a white line is illustrated as an example of a road surface marking object, but the system can also be configured to detect lane markings other than white lines, roadway edges, and the like.
  • the in-vehicle camera 3 is a photographing device for photographing the periphery of the vehicle, and for example, a known Charge-Coupled Device (CCD) image sensor, Complementary Metal-Oxide-Semiconductor (CMOS) image sensor, or the like can be used.
  • the in-vehicle camera 3 is disposed on the front, rear, left, and right sides of the vehicle, and photographs the front diagonally lower, the rear diagonally downward, the left diagonally downward, and the right diagonally downward of the vehicle, respectively.
  • Each in-vehicle camera 3 captures the periphery of the vehicle at a predetermined time interval (1/15 s as an example), and outputs the captured image to the traveling road marking detection device 7.
  • each in-vehicle camera 3 is arranged so that part of its imaging range overlaps the imaging range of another in-vehicle camera 3, while the remaining part overlaps no other camera's imaging range.
  • the display means 5 is a liquid crystal display that displays an image, and displays the image according to a signal input from the traveling road marking detection device 7. In this embodiment, a warning screen is displayed when it is predicted that the traveling vehicle will deviate from the traveling lane.
  • the traveling road surface marking detection device 7 acquires a photographed image photographed by the in-vehicle camera 3, performs image processing to detect a white line, and predicts whether or not the vehicle deviates from the traveling lane.
  • the traveling road surface marking detection device 7 is a computer system including a CPU (not shown), a ROM that stores programs executed by the CPU, a RAM used as a work area when the CPU executes a program, and an NVRAM, that is, nonvolatile memory such as flash memory or EEPROM whose data can be electrically rewritten, and it performs predetermined processing by executing the programs.
  • This traveling road surface marking detection device 7 inputs traveling information 9 from an ECU and various sensors mounted on the vehicle.
  • the travel information 9 includes information such as a shift range, vehicle speed, steering, or yaw rate.
  • the traveling road surface marking detection device 7 functions as a vehicle motion calculation unit 11, a camera-mounted parameter correction unit 12, a viewpoint conversion unit 13, a resolution distribution calculation unit 14, an image range selection processing unit 15, an image detection processing unit 16, and a sign recognition result integration determination unit 17.
  • the camera-mounted parameter correction unit 12 stores, as the camera mounting parameters 12a indicating the mounting position of each in-vehicle camera 3, information that can specify the shooting range of that camera. Specifically, it stores the position (three-dimensional coordinates) and the shooting direction (pitch, roll, and yaw angles) of each in-vehicle camera 3 relative to the vehicle.
  • the acquired photographed image is converted into a bird's-eye view image. This process is realized by the viewpoint conversion unit 13 described above.
  • the viewpoint conversion unit 13 acquires captured images from each of the in-vehicle cameras 3, and generates a plurality of bird's-eye images viewed from a predetermined virtual viewpoint based on the acquired captured images.
  • the virtual viewpoint is directly above the vehicle, and a bird's-eye view image in which the road surface is viewed vertically is generated using a predetermined conversion formula corresponding to the camera mounting parameter 12a indicating the camera mounting position.
  • a method for generating a bird's-eye view image is known, for example, a technique described in JP-A-10-211849 can be used.
  • the viewpoint conversion unit 13 is an example of a generation unit in the present disclosure.
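The patent cites a known method for bird's-eye conversion (e.g. JP-A-10-211849) rather than specifying one. As a minimal sketch of the idea only, assuming a simple pinhole camera pitched down toward the road, a bird's-eye grid can be built by projecting each ground cell into the source image and sampling there; all function names and parameters below are illustrative, not from the patent.

```python
import math

def ground_to_image_row(d, cam_height, pitch, focal_px, cy):
    """Vertical pixel coordinate of a ground point at horizontal distance
    d (m) ahead of a pinhole camera mounted at cam_height (m), with its
    optical axis pitched down by `pitch` radians."""
    ray_down = math.atan2(cam_height, d)     # ray angle below the horizontal
    rel = ray_down - pitch                   # angle below the optical axis
    return cy + focal_px * math.tan(rel)

def make_birdseye(src_sample, x_range, d_range, step, cam_height, pitch,
                  focal_px, cx, cy):
    """Build a bird's-eye grid by projecting each ground cell (x lateral,
    d forward, in metres) into the source image and sampling it through
    the callback src_sample(u, v) (nearest-neighbour, no interpolation)."""
    rows = []
    d = d_range[1]                           # farthest row first
    while d >= d_range[0]:
        row = []
        x = x_range[0]
        while x <= x_range[1]:
            u = cx + focal_px * x / d        # lateral pinhole projection
            v = ground_to_image_row(d, cam_height, pitch, focal_px, cy)
            row.append(src_sample(u, v))
            x += step
        rows.append(row)
        d -= step
    return rows
```

As expected for such a geometry, nearer ground points project lower in the source image, which is why the near road is sampled more densely than the far road.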
  • the bird's-eye images 21, 22, 23, and 24 shown in FIG. 3A show the front, rear, left, and right sides of the vehicle, respectively, and are based on images taken by the respective in-vehicle cameras 3 at the same timing. The same road surface around the vehicle is photographed in each overlapping portion where the shooting ranges of the bird's-eye images overlap.
  • the bird's-eye view images in FIG. 3 (a) are merely arranged according to the shooting direction centered on the vehicle, and are not synthesized according to the actual position.
  • the image detection processing unit 16 executes three processes: (a) fixed noise detection, (b) edge suitability value calculation, and (c) white line detection. Since the above (a) and (b) are executed in S3, these processes will be described here.
  • the image detection processing unit 16 detects fixed noise generated continuously in the same part such as lens dirt of the in-vehicle camera 3 based on the photographed image.
  • a method for detecting fixed noise such as dirt is known, but as an example, the method described in Japanese Patent Application Laid-Open No. 2003-259358 can be used. This fixed noise detection may be performed based on a bird's-eye view image.
  • the image detection processing unit 16 is an example of a noise detection unit in the present disclosure.
  • the image detection processing unit 16 detects edges from each bird's-eye image and calculates, for the detected edges, a parameter indicating how easily a white line can be detected (hereinafter also simply referred to as the edge suitability value). This parameter is obtained from the edge strength, the number of edges, and the edge angle.
  • the edge strength is the contrast (brightness difference) between the white line and the road surface.
  • the greater the contrast the greater the edge suitability value.
  • the number of edges is the number of detected edges, and the edge suitability value increases as this number increases.
  • the edge angle is the direction in which the edges are arranged, and the edge suitability value increases as the edges are arranged at an appropriate angle in consideration of the traveling direction of the vehicle and the direction of the white line detected in the past.
  • a portion with a high edge suitability value is a portion suitable for white line detection, having few conditions unfavorable to it, such as overexposure, excessive darkness, or the detection of many inappropriate edges.
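The patent states only that the edge suitability value grows with edge strength (contrast), the number of edges, and the angular agreement with the expected white-line direction. One hypothetical way to combine the three terms is sketched below; the weights and the functional form are assumptions, not from the patent.

```python
import math

def edge_suitability(edge_strengths, edge_angles, expected_angle,
                     w_strength=1.0, w_count=1.0, w_angle=1.0):
    """Hypothetical edge suitability value: grows with the mean edge
    contrast, with the number of detected edges, and with how closely
    the edge directions match the expected white-line angle (radians)."""
    if not edge_strengths:
        return 0.0
    strength_term = sum(edge_strengths) / len(edge_strengths)
    count_term = math.log1p(len(edge_strengths))   # diminishing returns
    # 1.0 when every edge is exactly aligned with the expected direction.
    angle_term = (sum(math.cos(a - expected_angle) ** 2 for a in edge_angles)
                  / len(edge_angles))
    return (w_strength * strength_term
            + w_count * count_term
            + w_angle * angle_term)
```

With this sketch, a patch of strong, numerous, well-aligned edges scores higher than one with the same contrast but edges perpendicular to the expected line direction.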
  • the camera-mounted parameter correction unit 12 estimates current position information and current shooting direction information of the in-vehicle camera 3 based on the shot image.
  • the estimated camera mounting parameter is compared with the stored camera mounting parameter 12a. If the estimated camera mounting parameter is different by a predetermined threshold or more, the newly estimated parameter is updated as the camera mounting parameter 12a.
  • the camera-mounted parameter correction unit 12 is an example of a position detection unit in the present disclosure.
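The threshold-based update of the camera mounting parameters 12a can be sketched as follows. The component-wise absolute-difference test is an assumption; the patent says only that the parameters are updated when the estimate differs by a predetermined threshold or more.

```python
def maybe_update_params(stored, estimated, threshold):
    """Return the new camera mounting parameters: adopt the freshly
    estimated values only when at least one component differs from the
    stored value by `threshold` or more; otherwise keep the stored ones."""
    if any(abs(s - e) >= threshold for s, e in zip(stored, estimated)):
        return list(estimated)
    return list(stored)
```

Keeping the stored values for sub-threshold differences avoids churning the downstream allocation ranges on every small estimation jitter.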
  • the resolution distribution calculation unit 14 calculates the resolution distribution using the camera mounting parameters 12a.
  • the resolution distribution is information indicating the original resolution of each pixel of the bird's-eye view image.
  • the original resolution here is a parameter indicating the actual distance on the road surface covered by one pixel, for the portion of the captured image before conversion by the viewpoint conversion unit 13 (hereinafter also simply referred to as the original captured image) that corresponds to one pixel of the converted bird's-eye image.
  • the above parameter is merely one example of a parameter that can be used as the original resolution.
  • the number of pixels of a captured image per unit length (for example, 1 cm) of the road surface can be used as the original resolution.
  • the original resolution may be calculated for each set of pixels, not for each pixel. The resolution distribution may be calculated only in the overlapping portion described later.
  • the road surface relatively close to the vehicle-mounted camera 3 has a shorter road surface distance per pixel than the road surface relatively far from the vehicle-mounted camera 3.
  • the road surface relatively closer to the in-vehicle camera 3 is enlarged to a relatively small degree, and the road surface farther away is enlarged to a relatively large degree; the smaller the magnification, the finer the image.
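Under the same illustrative pinhole assumption as before, the original resolution, expressed here as metres of road per source pixel so that a smaller value means a finer image, can be sketched as the difference in ground distance between adjacent image rows. All parameter names are assumptions for illustration.

```python
import math

def ground_distance(v, cam_height, pitch, focal_px, cy):
    """Ground distance (m) imaged at pixel row v by a pinhole camera
    pitched down by `pitch` radians; rows at or above the horizon
    return infinity (no ground intersection)."""
    ray_down = pitch + math.atan2(v - cy, focal_px)
    if ray_down <= 0.0:
        return float('inf')
    return cam_height / math.tan(ray_down)

def original_resolution(v, cam_height, pitch, focal_px, cy):
    """Road-surface length (m) covered by one source pixel at row v,
    approximated as the ground-distance difference between adjacent
    rows: small near the camera (fine), large far away (coarse)."""
    return abs(ground_distance(v, cam_height, pitch, focal_px, cy)
               - ground_distance(v + 1, cam_height, pitch, focal_px, cy))
```

Rows lower in the image (closer road) cover less road per pixel than rows near the horizon, which is exactly the gradient the resolution distribution records.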
  • FIG. 4 (a) and 4 (b) show enlarged images obtained by enlarging portions 22a and 23a where the same photographing range is shown in the bird's-eye images 22 and 23 of FIG. 3 (a).
  • the portion 23a of the bird's-eye image 23 has a relatively small original resolution, so the degree of enlargement is large, and the boundary between the white line 31 and the surrounding road surface portion 33 is ambiguous.
  • the portion 22a of the bird's-eye image 22 has a relatively high original resolution, so the degree of enlargement is small, and the boundary between the white line 31 and the road surface portion 33 is relatively clear.
  • edges are detected by image processing; when the boundary between the white line and the road surface is clear, for example when there is a large difference in brightness between them, the edge can be detected with high accuracy. Therefore, even when the same road surface is captured, edges can be detected better by using the image with the higher original resolution.
  • the resolution distribution calculation unit 14 described above is an example of calculation means in the present disclosure.
  • an image processing range for each bird's-eye view image corresponding to each vehicle-mounted camera 3 is determined.
  • the image processing range of a bird's-eye image is the combination of the portion whose shooting range does not overlap any other bird's-eye image and the portion, within each overlapping portion, that the image range selection processing unit 15 has set as the allocation range.
  • the image range selection processing unit 15 sets an allocation range for each overlapping portion of the two bird's-eye images.
  • the bird's-eye images 41 and 42 are obtained by converting captured images captured by different on-vehicle cameras 3 into bird's-eye images. Portions overlapping each other in the bird's-eye images 41 and 42 are referred to as overlapping portions 41a and 42a, respectively. Since these are the same size, the pixel arrangement is the same, and the pixels at the same position correspond to the same shooting range.
  • the original resolution of each pixel of the overlapping portions 41 a and 42 a is obtained by the resolution distribution calculation unit 14.
  • within the overlapping portion 41a, the portion whose corresponding pixels have a larger original resolution than those of the overlapping portion 42a is set as the allocation range 41b.
  • a portion having an original resolution larger than the overlapping portion 41a in the overlapping portion 42a is set as the allocation range 42b.
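The mutual assignment described above can be sketched as a per-cell comparison of two original-resolution grids, here expressed as source pixels per unit road length so that a larger value means a finer image (the tie-breaking rule is an assumption, not from the patent):

```python
def set_allocation_ranges(res_a, res_b):
    """Assign each cell of the overlapping portion to exactly one of the
    two bird's-eye images. res_a and res_b are same-shape grids of
    original resolution (e.g. source pixels per cm of road, larger =
    finer); True in mask_a means image A detects there. Ties go to A."""
    mask_a, mask_b = [], []
    for row_a, row_b in zip(res_a, res_b):
        ma = [ra >= rb for ra, rb in zip(row_a, row_b)]
        mask_a.append(ma)
        mask_b.append([not t for t in ma])
    return mask_a, mask_b
```

Because the two masks are complementary cell by cell, no road-surface cell is processed twice, which is the source of the load reduction claimed above.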
  • the bird's-eye image 41 is an example of the first bird's-eye image in the present disclosure
  • the bird's-eye image 42 is an example of the second bird's-eye image in the present disclosure
  • the overlapping part 41a of the bird's-eye image 41 is an example of a first overlapping part in the present disclosure
  • the overlapping part 42a of the bird's-eye image 42 is an example of a second overlapping part in the present disclosure.
  • the allocation range 41b is an example of a detection target range in the present disclosure
  • the remaining portion is an example of a detection unnecessary range in the present disclosure.
  • the allocation range 42b is an example of a detection target range in the present disclosure
  • the remaining portion is an example of a detection unnecessary range in the present disclosure.
  • the detection target range 41b of the first overlapping portion 41a is set to indicate the same range as the detection unnecessary range of the second overlapping portion 42a, and the detection unnecessary range of the first overlapping portion 41a is set to indicate the same range as the detection target range 42b of the second overlapping portion 42a.
  • the actual road surface distance per pixel of the portion of the first captured image corresponding to the detection target range 41b of the first overlapping portion 41a is shorter than that of the portion of the second captured image corresponding to the detection unnecessary range of the second overlapping portion 42a.
  • the actual road surface distance per pixel of the portion of the second captured image corresponding to the detection target range 42b of the second overlapping portion 42a is shorter than that of the portion of the first captured image corresponding to the detection unnecessary range of the first overlapping portion 41a.
  • the portion where fixed noise was detected in S3 and the portion whose edge suitability value calculated in S3 is smaller than a predetermined threshold (hereinafter, either is simply referred to as an inappropriate portion) are excluded from the allocation range. That is, the allocation range is set from portions where no fixed noise is detected and the edge suitability value is equal to or greater than the predetermined threshold.
  • when there is an inappropriate portion 41c in the overlapping portion 41a, the inappropriate portion 41c is not set as part of the allocation range 41b; instead, the corresponding portion of the overlapping portion 42a is set as part of the allocation range 42b regardless of its original resolution.
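The handling of inappropriate portions can be folded into the same per-cell comparison. The sketch below assigns a cell flagged as inappropriate in one overlapping portion to the other portion regardless of resolution; the both-flagged case (neither detects) is an assumption the patent does not specify.

```python
def set_allocation_with_exclusions(res_a, res_b, bad_a, bad_b):
    """Like the plain resolution comparison, but bad_a/bad_b mark cells
    with fixed noise or a low edge suitability value: such a cell is
    assigned to the other portion; if flagged in both, neither detects."""
    mask_a, mask_b = [], []
    for i in range(len(res_a)):
        ma, mb = [], []
        for j in range(len(res_a[i])):
            if bad_a[i][j] and bad_b[i][j]:
                ma.append(False); mb.append(False)
            elif bad_a[i][j]:
                ma.append(False); mb.append(True)
            elif bad_b[i][j]:
                ma.append(True); mb.append(False)
            else:
                take_a = res_a[i][j] >= res_b[i][j]
                ma.append(take_a); mb.append(not take_a)
        mask_a.append(ma); mask_b.append(mb)
    return mask_a, mask_b
```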
  • an allocation range may be set based on the original resolution.
  • the allocation range need not be set strictly from the pixel-by-pixel comparison of the original resolutions of the overlapping portions; as shown in FIGS. 7A and 7B, it may instead be set using a boundary line 43 that is determined based on the comparison of the original resolutions.
  • the boundary line 43 is set so that a portion (pixel) having a higher original resolution than the other overlapping portion in one overlapping portion is included in the allocation range of the one overlapping portion.
  • the portion having the original resolution higher than the other overlapping portion in one overlapping portion occupies the main portion of the allocation range in one overlapping portion.
  • it may be set such that a portion having an original resolution higher than that of the other overlapping portion in one overlapping portion is included in the allocation range of the one overlapping portion by a predetermined ratio or more.
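The FIG. 7 variant replaces the per-pixel assignment with a single boundary line. One illustrative way to choose such a boundary, assuming a vertical line and maximizing agreement with the per-cell resolution comparison (both assumptions, not from the patent):

```python
def best_boundary_column(prefer_a):
    """Pick one boundary column so cells left of it go to portion A and
    cells at or right of it go to portion B, maximizing agreement with
    the per-cell comparison (prefer_a[i][j] True when A is finer there).
    This approximates the pixel-wise assignment with a straight line."""
    h, w = len(prefer_a), len(prefer_a[0])
    best_k, best_score = 0, -1
    for k in range(w + 1):
        score = sum(1 for i in range(h) for j in range(w)
                    if (j < k) == prefer_a[i][j])
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

A straight boundary keeps most of the higher-resolution cells on their preferred side while being far cheaper to store and apply than a full per-pixel mask.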
  • the image range selection processing unit 15 described above is an example of a range setting unit in the present disclosure.
  • with the overlapping portion 41a as a reference, a pixel or set of pixels in the overlapping portion 41a whose original resolution is higher than that of the overlapping portion 42a is an example of the first region in the present disclosure, and a pixel or set of pixels in the overlapping portion 42a whose original resolution is lower than that of the overlapping portion 41a is an example of the second region in the present disclosure.
  • the image detection processing unit 16 extracts an edge from each bird's-eye view image and performs white line detection. Since a specific method is publicly known, description thereof is omitted. In each overlapping portion where the shooting ranges in each bird's-eye view image overlap each other, white line detection is performed in the portion set as the allocation range.
  • the vehicle motion calculation unit 11 calculates vehicle movement parameters (traveling direction and speed) based on the travel information 9. Using this calculated movement parameter, an edge movement destination based on the vehicle over time is estimated.
  • the image detection processing unit 16 is an example of a sign detection unit in the present disclosure.
  • the sign recognition result integration determination unit 17 determines the position of the white line based on the vehicle based on the white line information detected in each bird's-eye view image by the image detection processing unit 16. Specifically, the likelihood of the white line detected from each bird's-eye view image, the degree of coincidence and difference with the white line detected from other bird's-eye images, and the information of the white line calculated in the past in the same manner as in S7 are considered. Then, the current position of the white line is determined. As the likelihood of the white line, for example, an edge suitability value can be used.
  • the white line is detected within the allocation range of each overlapping portion where a plurality of bird's-eye images overlap. Since the allocation ranges of the overlapping portions do not overlap one another, redundant white line detection over the same imaging range is suppressed, and the detection load of the image processing is reduced.
  • since the allocation range is set based on the original resolution, that is, the resolution of the original captured image of each overlapping portion, the white line is detected from whichever of the plurality of overlapping portions has the smaller degree of enlargement and the less blurred detection target, so that a decrease in white line detection accuracy can be suppressed.
  • even when the fixture that attaches the in-vehicle camera 3 to the vehicle deteriorates with age, or the road surface angle changes as the vehicle travels, the allocation range can be changed appropriately, and a decrease in white line detection accuracy can be suppressed.
  • the white line is detected by excluding the part having fixed noise such as dirt in the overlapping part or the part having a low edge suitability value from the allocation range, it is possible to suppress the decrease in the white line detection accuracy.
  • the setting method of the above allocation range is not particularly limited.
  • the allocation range may be set so as to include a first region that occupies part or all of an overlapping portion, where the original resolution, that is, the resolution of the portion of the captured image corresponding to the first region, is larger than the original resolution of a second region in another bird's-eye image whose shooting range overlaps that of the first region.
  • the road surface captured by a camera attached to the vehicle appears larger, that is, is captured at a higher resolution, the closer it is to the camera. Therefore, when a captured image is converted into a bird's-eye image, the farther a road surface portion is from the camera, the lower its resolution and the greater its degree of enlargement. Because greatly enlarged portions are blurred, the accuracy of sign object detection may decrease there.
  • since the sign object is detected based on the bird's-eye image with the smaller degree of enlargement and the less blurred detection target, a decrease in sign object detection accuracy can be suppressed.
  • in the above embodiment, the configuration in which the portion where fixed noise is detected and the portion with a low edge suitability value are both excluded from the allocation range is exemplified; only one of these may be considered, or the allocation range may be set based on the original resolution alone without considering either.
  • the image range selection processing unit 15 may set the allocation range based on the edge suitability value without considering the original resolution. For example, it is conceivable to set, as the allocation range, a range including a portion having a larger edge suitability value compared to other overlapping portions showing the same imaging range among the overlapping portions.
  • in the above embodiment, the camera mounting parameters 12a are updated as needed and an appropriate allocation range is calculated based on them; alternatively, the allocation range may be fixed and unchanging.
  • the bird's-eye images may be synthesized into a single composite image according to their actual positions.
  • in the above embodiment, the configuration in which the allocation ranges of the overlapping portions are set so as not to overlap is exemplified, but a configuration in which they partly overlap may also be employed.
  • In addition to the traveling road marking detection device and traveling road marking detection method described above, the present disclosure can be realized in various forms, such as a system including the traveling road marking detection device as a component, a program for causing a computer to execute the traveling road marking detection method, and a recording medium on which the program is recorded.
  • Each means is denoted by, for example, S1. Each means can be divided into a plurality of sub-means, and a plurality of means can be combined into a single means. Each means configured in this way can also be referred to as a module or a means.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A travel road surface indication detection device includes a generation means (13, S2) and an indicator detection means (16, S7). The generation means generates a bird's-eye image as seen from a prescribed virtual viewpoint, on the basis of multiple photographic images which have been photographed by multiple cameras (3) that photograph the periphery of a vehicle, and at least a portion of the photographic areas of which do not overlap. The indicator detection means detects road surface indicators from the bird's-eye view image generated by the generation means. For each overlapping portion (41a, 42a), which is a portion where the photographic areas in two or more bird's-eye images overlap each other, the indicator detection means detects indicators in an assigned area which has been assigned to the respective overlapping portion.

Description

Travel road marking detection device and travel road marking detection method

Cross-reference to related applications
 This disclosure is based on Japanese Patent Application No. 2013-138970 filed on July 2, 2013, the contents of which are incorporated herein by reference.
 The present disclosure relates to a traveling road marking detection device that detects markings on a road surface by performing image processing on images captured by cameras that photograph the periphery of a vehicle.
 Conventionally, systems have been proposed that use a camera mounted on a vehicle to detect road surface markings such as lane markings (for example, white lines) and road studs. In order to calculate the positional relationship between the vehicle and a marking, images captured by a plurality of cameras are sometimes converted into bird's-eye images (see Patent Document 1).
 In order to suppress missed detections of markings around the vehicle, it is preferable to photograph the periphery of the vehicle with a plurality of cameras without gaps. It is therefore conceivable to arrange the cameras so that their imaging ranges partially overlap, leaving no gaps.
 When the cameras are arranged in this way, overlapping portions also arise in the captured images and in the bird's-eye images converted from them. The same road surface appears in the overlapping portions, but if image processing for detecting markings is performed on all of the overlapping portions, the load on the processing device may become large.
Patent Document 1: JP 2010-245802 A
 The present disclosure has been made in view of the above points, and its object is to provide a traveling road marking detection device and a traveling road marking detection method capable of reducing the load of detecting road surface markings.
 A traveling road marking detection device according to a first aspect of the present disclosure includes a generation means and a marking detection means. The generation means generates bird's-eye images viewed from a predetermined virtual viewpoint based on a plurality of captured images, at least parts of whose imaging ranges do not overlap, captured by a plurality of cameras that photograph the periphery of the vehicle. The marking detection means detects road surface markings from the bird's-eye images generated by the generation means.
 For each overlapping portion, that is, a portion where the imaging ranges of two or more bird's-eye images overlap each other, the marking detection means detects markings within the allocation range assigned to that overlapping portion.
 According to the above traveling road marking detection device, markings are detected only within the allocation range in the overlapping portions of the plurality of bird's-eye images; detection processing need not be performed on the remaining parts of the overlapping portions, so the detection load of the image processing can be reduced.
 A traveling road marking detection device according to a second aspect of the present disclosure includes a generation means, a range setting means, and a marking detection means.
 The generation means generates a first bird's-eye image and a second bird's-eye image, each viewed from a predetermined virtual viewpoint, based on a first captured image taken by a first camera having a first imaging range and a second captured image taken by a second camera having a second imaging range that partially overlaps the first imaging range. The first bird's-eye image has a first overlapping portion in which the overlapping imaging range shared by the first and second imaging ranges is captured, and the second bird's-eye image has a second overlapping portion in which the same overlapping imaging range is captured.
 The range setting means sets, in each of the first overlapping portion and the second overlapping portion, a detection target range and a detection-unnecessary range, which is the remainder of the overlapping portion, based on the resolution of the first captured image and the resolution of the second captured image.
 The marking detection means detects road surface markings, in the first captured image, from the remaining portion excluding the first overlapping portion and from the detection target range of the first overlapping portion, and, in the second captured image, from the remaining portion excluding the second overlapping portion and from the detection target range of the second overlapping portion.
 The range setting means sets the detection target range of the first overlapping portion so as to indicate the same range as the detection-unnecessary range of the second overlapping portion, and sets the detection-unnecessary range of the first overlapping portion so as to indicate the same range as the detection target range of the second overlapping portion.
 The resolution of the portion of the first captured image corresponding to the detection target range of the first overlapping portion is larger than the resolution of the portion of the second captured image corresponding to the detection-unnecessary range of the second overlapping portion.
 The resolution of the portion of the second captured image corresponding to the detection target range of the second overlapping portion is larger than the resolution of the portion of the first captured image corresponding to the detection-unnecessary range of the first overlapping portion.
 According to the above traveling road marking detection device, the same operations and effects as those of the traveling road marking detection device according to the first aspect of the present disclosure can be achieved.
 A traveling road marking detection method according to a third aspect of the present disclosure is used in a traveling road marking detection device that detects road surface markings by performing image processing on a plurality of captured images, at least parts of whose imaging ranges do not overlap, captured by a plurality of cameras that photograph the periphery of a vehicle. The method generates a plurality of bird's-eye images viewed from a predetermined virtual viewpoint based on the respective captured images, detects markings from the generated bird's-eye images, and, for each overlapping portion where the imaging ranges of two or more bird's-eye images overlap each other, detects markings from the allocation range assigned to that overlapping portion.
 By using the above traveling road marking detection method, the same operations and effects as those of the traveling road marking detection device according to the first aspect of the present disclosure can be achieved.
 The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. In the drawings:

FIG. 1 is a block diagram illustrating the configuration of a marking recognition system according to an embodiment of the present disclosure;
FIG. 2 is a flowchart explaining the procedure of lane departure warning processing;
FIG. 3(a) is a diagram showing an example of images obtained by converting captured images around the vehicle into bird's-eye images, and FIG. 3(b) is a line drawing of the images shown in FIG. 3(a);
FIG. 4(a) is an enlarged view of part of an overlapping portion in one bird's-eye image, FIG. 4(b) is an enlarged view of part of the overlapping portion in another bird's-eye image, FIG. 4(c) is a line drawing of the bird's-eye image shown in FIG. 4(a), and FIG. 4(d) is a line drawing of the bird's-eye image shown in FIG. 4(b);
FIG. 5(a) and FIG. 5(b) are diagrams explaining an example of a method of setting allocation ranges;
FIG. 6(a) and FIG. 6(b) are diagrams explaining the handling of unsuitable portions when setting allocation ranges; and
FIG. 7(a) and FIG. 7(b) are diagrams explaining an example of a method of setting allocation ranges.
 Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings.
 [Example]

 (1) Configuration of the marking recognition system 1

 The marking recognition system 1 is a system mounted and used on a vehicle such as an automobile. As shown in FIG. 1, it includes a plurality of in-vehicle cameras 3, a display means 5, and a traveling road marking detection device 7.
 The marking recognition system 1 detects white lines such as the roadway center line and the roadway edge line, and displays a warning when the traveling vehicle is predicted to deviate from its lane. In this example, a configuration targeting white lines is described as an example of road surface markings, but the system can also be configured to detect lane markings other than white lines, road studs, and the like.
 The in-vehicle camera 3 is an imaging device that photographs the periphery of the vehicle; for example, a known Charge-Coupled Device (CCD) image sensor or Complementary Metal-Oxide-Semiconductor (CMOS) image sensor can be used. The in-vehicle cameras 3 are arranged at the front, rear, left, and right of the vehicle, and photograph diagonally downward toward the front, rear, left, and right, respectively. Each in-vehicle camera 3 photographs the periphery of the vehicle at predetermined time intervals (1/15 s as an example) and outputs the captured images to the traveling road marking detection device 7.
 Each in-vehicle camera 3 is arranged so that a part of its imaging range overlaps the imaging range of another in-vehicle camera 3, while the remainder does not overlap the imaging range of any other in-vehicle camera 3.
 The display means 5 is a liquid crystal display that displays images according to signals input from the traveling road marking detection device 7. In this example, a warning screen is displayed when the traveling vehicle is predicted to deviate from its lane.
 The traveling road marking detection device 7 acquires the images captured by the in-vehicle cameras 3, performs image processing to detect white lines, and predicts whether the vehicle will deviate from its lane.
 The traveling road marking detection device 7 is configured as a computer system including a CPU (not shown), a ROM storing programs executed by the CPU, a RAM used as a work area when the CPU executes the programs, and an NVRAM, such as a flash memory or EEPROM, as a nonvolatile memory in which data can be electrically rewritten; predetermined processing is executed by running the programs.
 The traveling road marking detection device 7 receives travel information 9 from an ECU and various sensors mounted on the vehicle. The travel information 9 includes information such as the shift range, vehicle speed, steering angle, and yaw rate.
 The traveling road marking detection device 7 also functions as a vehicle motion calculation unit 11, a camera mounting parameter correction unit 12, a viewpoint conversion unit 13, a resolution distribution calculation unit 14, an image range selection processing unit 15, an image detection processing unit 16, and a marking recognition result integration determination unit 17.
 The camera mounting parameter correction unit 12 stores, as camera mounting parameters 12a indicating the mounting positions of the in-vehicle cameras 3, information from which the imaging range of each in-vehicle camera 3 can be determined. Specifically, it stores the position (three-dimensional coordinates) and the imaging direction (pitch, roll, and yaw angles) of each in-vehicle camera 3 with respect to the vehicle.
 (2) Processing by the traveling road marking detection device 7

 The lane departure warning processing executed by the traveling road marking detection device 7 will be described based on the flowchart shown in FIG. 2. As an example, this processing is started when the vehicle starts traveling (when the vehicle speed exceeds a predetermined threshold) and is repeatedly executed at predetermined time intervals.
 In S1, the images captured by the respective in-vehicle cameras 3 are acquired.
 In S2, the acquired captured images are converted into bird's-eye images. This processing is realized by the viewpoint conversion unit 13 described above.
 The viewpoint conversion unit 13 acquires a captured image from each in-vehicle camera 3 and, based on each acquired image, generates a plurality of bird's-eye images viewed from a predetermined virtual viewpoint. In this example, the virtual viewpoint is directly above the vehicle, and a bird's-eye image looking vertically down at the road surface is generated using a predetermined conversion formula according to the camera mounting parameters 12a indicating the camera mounting positions. Methods of generating bird's-eye images are known; for example, the technique described in JP H10-211849 A can be used. The viewpoint conversion unit 13 is an example of the generation means in the present disclosure.
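As an illustration of the viewpoint conversion described above, the following is a minimal sketch of generating a bird's-eye image from camera mounting parameters, assuming a pinhole camera over a flat road surface. The frame conventions, parameter names, and the nearest-neighbour lookup are illustrative assumptions, not the conversion formula actually used by the viewpoint conversion unit 13.

```python
import numpy as np

def ground_to_image(X, Y, cam_height, pitch_deg, fx, fy, cx, cy):
    """Project a road-surface point (X, Y, 0) in the vehicle frame into
    the image of a forward-looking camera pitched down by pitch_deg.
    Vehicle frame: X forward, Y left, Z up; camera at (0, 0, cam_height)."""
    th = np.deg2rad(pitch_deg)
    # Camera basis vectors expressed in the vehicle frame:
    z_cam = np.array([np.cos(th), 0.0, -np.sin(th)])   # optical axis
    x_cam = np.array([0.0, -1.0, 0.0])                 # image "right"
    y_cam = np.cross(z_cam, x_cam)                     # image "down"
    p = np.array([X, Y, 0.0]) - np.array([0.0, 0.0, cam_height])
    xc, yc, zc = x_cam @ p, y_cam @ p, z_cam @ p
    return fx * xc / zc + cx, fy * yc / zc + cy

def make_birds_eye(img, grid_x, grid_y, **cam):
    """Fill a bird's-eye view by looking up, for every ground cell,
    the source pixel it projects to (nearest neighbour)."""
    h, w = img.shape[:2]
    out = np.zeros((len(grid_x), len(grid_y)), dtype=img.dtype)
    for i, X in enumerate(grid_x):
        for j, Y in enumerate(grid_y):
            u, v = ground_to_image(X, Y, **cam)
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < w and 0 <= vi < h:
                out[i, j] = img[vi, ui]
    return out
```

Because each bird's-eye cell corresponds to a fixed ground position, the same lookup geometry also determines where the imaging ranges of neighbouring cameras overlap.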
 The bird's-eye images 21, 22, 23, and 24 shown in FIG. 3(a) show the front, rear, left, and right of the vehicle, respectively, and are based on images captured by the respective in-vehicle cameras 3 at the same timing. The same road surface around the vehicle appears in the overlapping portions, where the imaging ranges of the bird's-eye images overlap each other.
 For confirmation, the bird's-eye images in FIG. 3(a) are merely arranged according to the shooting directions around the vehicle; they are not composited according to their actual positions.
 In the subsequent S3, fixed noise such as dirt and the edge suitability values are detected. This processing is realized by the image detection processing unit 16 described above.
 The image detection processing unit 16 executes three processes: (a) fixed noise detection, (b) edge suitability value calculation, and (c) white line detection. Since (a) and (b) are executed in S3, these processes are described here.
 (a) Fixed noise detection

 The image detection processing unit 16 detects, based on the captured images, fixed noise that occurs continuously in the same portion, such as dirt on the lens of an in-vehicle camera 3. Methods for detecting fixed noise such as dirt are known; as an example, the technique described in JP 2003-259358 A can be used. This fixed noise detection may be performed based on the bird's-eye images. The image detection processing unit 16 is an example of the noise detection means in the present disclosure.
 (b) Edge suitability value calculation

 The image detection processing unit 16 detects edges from each bird's-eye image and calculates, for the detected edges, a parameter indicating how easily a white line can be detected (hereinafter also simply referred to as the edge suitability value). This parameter is obtained from the edge strength, the number of edges, and the edge angle.
 The edge strength is the contrast (brightness difference) between the white line and the road surface; the larger the contrast, the larger the edge suitability value. The edge count is the number of detected edges; the larger this number, the larger the edge suitability value. The edge angle is the direction in which the edges are aligned; taking into account the traveling direction of the vehicle and the directions of white lines detected in the past, the edge suitability value becomes larger the more closely the edges align with the appropriate angle.
 In other words, a portion with a high edge suitability value is a portion suitable for white line detection, with few of the problems that make detection difficult, such as blown-out highlights, excessive darkness, or many spurious edges.
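The edge suitability value described above can be sketched as follows. The weighting of edge strength, edge count, and edge angle, the thresholds, and the use of simple image gradients are all illustrative assumptions rather than the actual calculation of the image detection processing unit 16.

```python
import numpy as np

def edge_suitability(gray, expected_angle_deg, strength_thresh=30.0):
    """Toy edge-suitability score combining edge strength, edge count,
    and agreement of edge orientation with the expected line direction."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis gradients
    mag = np.hypot(gx, gy)
    edges = mag > strength_thresh
    if not edges.any():
        return 0.0
    ang = np.rad2deg(np.arctan2(gy, gx))[edges]
    # Orientation agreement: fraction of edges within 20 deg of the
    # expected direction (angles compared modulo 180).
    diff = np.abs((ang - expected_angle_deg + 90) % 180 - 90)
    angle_score = np.mean(diff < 20)
    strength_score = np.mean(mag[edges]) / 255.0   # contrast term
    count_score = min(edges.sum() / edges.size * 10, 1.0)
    return strength_score * count_score * angle_score
```

A uniform patch scores zero, while a high-contrast stripe aligned with the expected direction scores well, mirroring the three factors named above.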
 In the subsequent S4, the accuracy of the camera mounting parameters 12a is checked, and any error is corrected. This processing is realized by the camera mounting parameter correction unit 12 described above.
 The camera mounting parameter correction unit 12 estimates the current position and imaging direction of each in-vehicle camera 3 based on the captured images. The estimated camera mounting parameters are compared with the stored camera mounting parameters 12a, and if they differ by a predetermined threshold or more, the newly estimated parameters are stored as the updated camera mounting parameters 12a.
 The camera mounting parameter correction unit 12 is an example of the position detection means in the present disclosure.
 In the subsequent S5, the resolution distribution is calculated based on the camera mounting positions. This processing is realized by the resolution distribution calculation unit 14 described above.
 The resolution distribution calculation unit 14 calculates the resolution distribution using the camera mounting parameters 12a. The resolution distribution is information indicating the original resolution of each pixel of a bird's-eye image. The original resolution here is a parameter indicating the actual road-surface distance per pixel of the portion of the captured image before conversion by the viewpoint conversion unit 13 (hereinafter also simply referred to as the original captured image) that corresponds to one pixel of the converted bird's-eye image.
 The above parameter is merely one example of a parameter usable as the original resolution; for example, the number of pixels of the captured image per unit length of road surface (for example, 1 cm) can also be used. The original resolution may also be calculated for each fixed set of pixels rather than for each pixel, and the resolution distribution may be calculated only in the overlapping portions described later.
 In a captured image, road surface that is relatively close to the in-vehicle camera 3 has a shorter road-surface distance per pixel than road surface that is relatively far from it. When converted into a bird's-eye image, road surface closer to the in-vehicle camera 3 is enlarged by a relatively small factor, and road surface farther away is enlarged by a relatively large factor; the smaller the enlargement, the finer the image.
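The relationship above (nearer road surface, shorter road-surface distance per pixel) can be illustrated with a simplified side-view pinhole model; the formula v = f·h/d for the image row of a ground point at distance d, and all parameter values, are assumptions for illustration only.

```python
def ground_metres_per_pixel(dist, cam_height=1.0, f=500.0, step=0.01):
    """Simplified side-view model of the 'original resolution': for a
    camera at cam_height (m) with horizontal optical axis and focal
    length f (pixels), a ground point `dist` metres ahead images at row
    offset v = f * cam_height / dist below the principal point. The
    ground length covered by one pixel is the probe step divided by the
    pixel displacement it causes."""
    v_near = f * cam_height / dist
    v_far = f * cam_height / (dist + step)
    return step / (v_near - v_far)
```

Evaluating this at increasing distances shows the metres-per-pixel value growing, which is exactly why far portions of the bird's-eye image are enlarged more and appear blurred.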
 FIGS. 4(a) and 4(b) show enlarged images of the portions 22a and 23a, in which the same imaging range appears, of the bird's-eye images 22 and 23 of FIG. 3(a). The portion 23a of the bird's-eye image 23 has a relatively small original resolution, so the degree of enlargement is large and the boundary between the white line 31 and the surrounding road surface portion 33 is blurred. In contrast, the portion 22a of the bird's-eye image 22 has a relatively large original resolution, so the degree of enlargement is small and the boundary between the white line 31 and the road surface portion 33 is comparatively clear.
 When detecting white lines, edges are detected by image processing; an edge can be detected with high accuracy when the boundary between the white line and the road surface is clear, for example when the brightness difference between them is large. Therefore, even for images of the same road surface, edges can be detected better by using the image with the larger original resolution.
 The resolution distribution calculation unit 14 described above is an example of the calculation means in the present disclosure.
 In the subsequent S6, the image processing range is determined for each bird's-eye image corresponding to each in-vehicle camera 3. The image processing range of a bird's-eye image is the combination of the portion whose imaging range does not overlap any other bird's-eye image and the portion of each overlapping portion that is set as the allocation range by the image range selection processing unit 15.
 The image range selection processing unit 15 sets an allocation range for each of the overlapping portions of the two bird's-eye images.
 A specific method of setting the allocation ranges is described with reference to FIGS. 5(a) and 5(b). The bird's-eye images 41 and 42 are obtained by converting images captured by different in-vehicle cameras 3. The portions of the bird's-eye images 41 and 42 that overlap each other are referred to as overlapping portions 41a and 42a, respectively. Since these are the same size, their pixel arrangements are identical, and pixels at the same position correspond to the same imaging range.
 The original resolution of each pixel of the overlapping portions 41a and 42a has been obtained by the resolution distribution calculation unit 14. The part of the overlapping portion 41a whose corresponding pixels have a larger original resolution than those of the overlapping portion 42a is set as the allocation range 41b. Conversely, the part of the overlapping portion 42a whose original resolution is larger than that of the overlapping portion 41a is set as the allocation range 42b.
 The bird's-eye image 41 is an example of the first bird's-eye image in the present disclosure, and the bird's-eye image 42 is an example of the second bird's-eye image. The overlapping portion 41a of the bird's-eye image 41 is an example of the first overlapping portion, and the overlapping portion 42a of the bird's-eye image 42 is an example of the second overlapping portion. In the first overlapping portion 41a, the allocation range 41b is an example of the detection target range in the present disclosure, and the remaining part is an example of the detection-unnecessary range. In the second overlapping portion 42a, the allocation range 42b is an example of the detection target range, and the remaining part is an example of the detection-unnecessary range.
 As shown in FIGS. 5(a) and 5(b), the detection target range 41b of the first overlapping portion 41a is set to indicate the same range as the detection-unnecessary range of the second overlapping portion 42a, and the detection-unnecessary range of the first overlapping portion 41a is set to indicate the same range as the detection target range 42b of the second overlapping portion 42a.
 The actual road-surface distance per pixel in the part of the first captured image corresponding to the detection target range 41b of the first overlapping portion 41a is shorter than the actual road-surface distance per pixel in the part of the second captured image corresponding to the detection-unnecessary range of the second overlapping portion 42a. Similarly, the actual road-surface distance per pixel in the part of the second captured image corresponding to the detection target range 42b of the second overlapping portion 42a is shorter than that in the part of the first captured image corresponding to the detection-unnecessary range of the first overlapping portion 41a.
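 The per-pixel assignment by resolution comparison described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function name and the use of NumPy boolean masks are assumptions, and `res1`/`res2` stand in for the original-resolution maps that the resolution distribution calculation unit 14 would supply for the common overlap region.

```python
import numpy as np

def split_overlap_by_resolution(res1, res2):
    # res1, res2: same-shape maps giving, for each pixel of the common
    # overlap, the original resolution of the corresponding pixel in the
    # first and second captured image (higher = finer road-surface detail).
    range1 = res1 > res2   # allocation range of image 1: pixels it resolves better
    range2 = ~range1       # remainder (including ties) goes to image 2
    return range1, range2

# Toy 2x2 overlap: image 1 is sharper in the top row, image 2 in the bottom.
res1 = np.array([[3.0, 2.0], [1.5, 1.0]])
res2 = np.array([[1.0, 1.0], [2.0, 2.0]])
range1, range2 = split_overlap_by_resolution(res1, res2)
```

 The two masks are complementary by construction, so no pixel is processed twice, matching the non-overlapping allocation ranges of FIG. 5.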
 However, parts where fixed noise was detected in S3 and parts whose edge suitability value calculated in S3 is below a predetermined threshold (hereinafter, either kind of part is simply called an unsuitable part) are excluded from the allocation ranges. That is, the allocation ranges are set only from parts where no fixed noise has been detected and whose edge suitability value is at or above the predetermined threshold.
 As shown in FIGS. 6(a) and 6(b), when an unsuitable part 41c exists in the overlapping portion 41a, that part is not set as part of the allocation range 41b; instead, the corresponding part of the overlapping portion 42a is set as part of the allocation range 42b regardless of its original resolution.
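 The exclusion of unsuitable parts can be layered onto the same mask computation. Again a hedged sketch: the argument names `noise1`, `edge1`, and `edge_thresh` are illustrative stand-ins for the outputs of S3, not identifiers from the disclosure.

```python
import numpy as np

def split_with_unsuitable_parts(res1, res2, noise1, edge1, edge_thresh):
    # noise1: True where fixed noise was detected in image 1 (S3).
    # edge1: edge suitability values of image 1 (S3).
    # Unsuitable pixels of image 1 are dropped from its allocation range
    # and handed to image 2 regardless of resolution (part 41c in FIG. 6).
    suitable1 = (~noise1) & (edge1 >= edge_thresh)
    range1 = (res1 > res2) & suitable1
    range2 = ~range1
    return range1, range2
```

 A symmetric pass with the roles of the two images swapped would apply the same rule to unsuitable parts of image 2.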
 When the original resolution is calculated for each set of plural pixels rather than per pixel, the allocation ranges may be set based on those per-set original resolutions.
 Alternatively, instead of setting the allocation ranges strictly from the comparison of the original resolutions of the overlapping portions, the allocation ranges may be delimited by a boundary line 43 determined from that comparison, as shown in FIGS. 7(a) and 7(b). The boundary line 43 is set so that the part (pixels) of one overlapping portion whose original resolution is higher than that of the other overlapping portion falls within the allocation range of that one overlapping portion.
 When the allocation ranges are set as in FIG. 7, the part of one overlapping portion whose original resolution is higher than that of the other overlapping portion is preferably set so as to occupy the main part of that overlapping portion's allocation range. It may also be set so that such a part accounts for at least a predetermined proportion of that allocation range.
 When parts of the overlapping portions 41a and 42a have the same original resolution, such parts may be included in the allocation range of either of the overlapping portions 41a and 42a.
 The image range selection processing unit 15 described above is an example of the range setting means in the present disclosure. A pixel or set of pixels in the overlapping portion 41a whose original resolution is higher than that of the overlapping portion 42a is an example of the first region in the present disclosure, taking the overlapping portion 41a as the reference, and the pixel or set of pixels in the overlapping portion 42a whose original resolution is lower than that of the overlapping portion 41a is an example of the second region.
 In the subsequent S7, white line detection is performed for each bird's-eye image within the image processing range determined in S6. This is the processing (c) realized by the image detection processing unit 16 described above.
 (c) White line detection
 The image detection processing unit 16 extracts edges from each bird's-eye image and detects white lines. Since concrete methods for this are publicly known, their description is omitted. In each overlapping portion, where the imaging ranges of the bird's-eye images overlap one another, white line detection is performed only within the part set as the allocation range.
 When detecting white lines, edges detected from previously captured images are also used. The vehicle motion calculation unit 11 calculates the vehicle's motion parameters (heading direction and speed) based on the travel information 9. Using these calculated motion parameters, the positions to which previously detected edges have moved over time, relative to the vehicle, are estimated.
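 How previously detected edges can be carried forward using the motion parameters is sketched below under a simple planar-motion assumption (speed and yaw rate treated as constant over the interval). The model and all names are illustrative, not taken from the disclosure.

```python
import math

def predict_edge_position(x, y, speed, yaw_rate, dt):
    # (x, y): edge point in the vehicle frame [m], x forward, y left.
    # speed [m/s] and yaw_rate [rad/s] stand for the motion parameters
    # computed by the vehicle motion calculation unit 11.
    dtheta = yaw_rate * dt          # heading change over the interval [rad]
    dx = speed * dt                 # distance travelled along the heading [m]
    xs, ys = x - dx, y              # translate into the new vehicle origin
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (xs * c - ys * s, xs * s + ys * c)   # rotate into the new heading
```

 Matching such predicted positions against the edges found in the current frame lets detections persist across frames.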
 The image detection processing unit 16 described above is an example of the marking detection means in the present disclosure.
 In the subsequent S8, the results of the white line detection performed in S7 are integrated to determine the positions of the white lines relative to the vehicle. This processing is realized by the marking recognition result integration determination unit 17 described above.
 Based on the white line information detected in each bird's-eye image by the image detection processing unit 16, the marking recognition result integration determination unit 17 determines the positions of the white lines relative to the vehicle. Specifically, it considers the likelihood of the white line detected from each bird's-eye image, the degree of agreement or disagreement with white lines detected from the other bird's-eye images, and, as in S7, white line information calculated in the past, and from these determines the current white line positions. The edge suitability value, for example, can be used as the white line likelihood.
 In the subsequent S9, based on the white line positions determined in S8 and the vehicle motion parameters calculated by the vehicle motion calculation unit 11, a lane departure determination is made as to whether the vehicle will depart from its lane (cross and pass a white line) within a predetermined time; when it is determined that the vehicle will depart from the lane, a warning screen for alerting the driver to the danger is shown on the display means 5. Like S8, this processing is realized by the marking recognition result integration determination unit 17 described above.
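 A minimal form of this departure test is a time-to-line-crossing check: divide the lateral distance to the white line by the lateral speed toward it, and warn when the result falls within a horizon. This is a hedged sketch; the two-second horizon and the function signatures are assumptions for illustration, not the disclosed criterion.

```python
def time_to_lane_departure(lateral_offset, lateral_speed):
    # lateral_offset: distance from the vehicle to the white line [m].
    # lateral_speed: speed toward the line [m/s]; <= 0 means moving away.
    if lateral_speed <= 0:
        return None                      # no departure predicted
    return lateral_offset / lateral_speed

def should_warn(lateral_offset, lateral_speed, horizon=2.0):
    # Warn when the crossing is predicted within `horizon` seconds.
    t = time_to_lane_departure(lateral_offset, lateral_speed)
    return t is not None and t <= horizon
```

 In practice the white line position comes from S8 and the lateral speed from the motion parameters of the vehicle motion calculation unit 11.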
 (3) Effects
 In the marking recognition system 1 of this embodiment, white lines are detected within the allocation ranges in the overlapping portions shared by plural bird's-eye images. Since the allocation ranges of the different overlapping portions do not overlap one another, redundant white line detection over the same imaging range is suppressed, and the detection load of the image processing is reduced.
 Furthermore, because the allocation ranges are set based on the original resolution, that is, the resolution of the overlapping portion in the original captured image, white lines can be detected from whichever of the overlapping portions has the smaller degree of magnification and thus the less blurred detection target, which suppresses degradation of white line detection accuracy.
 In addition, since the mounting positions of the on-board cameras 3 are detected and the original resolutions of the overlapping portions are calculated from the detected mounting positions before the allocation ranges are set, the allocation ranges can be adapted appropriately to aging of the fixtures that fasten the on-board cameras 3 to the vehicle and to changes in road surface angle as the vehicle travels, again suppressing degradation of white line detection accuracy.
 Moreover, because parts of the overlapping portions with fixed noise such as dirt, and parts with low edge suitability values, are excluded from the allocation ranges before white line detection, degradation of white line detection accuracy is suppressed for this reason as well.
 The method of setting the allocation ranges is not particularly limited. For example, an allocation range may be set so as to include a first region occupying part or all of an overlapping portion, where the original resolution (the resolution of the part of the captured image corresponding to the first region) is higher than the original resolution of a second region of another bird's-eye image whose imaging range overlaps the first region.
 The road surface captured by a camera mounted on a vehicle is imaged larger, that is, at higher resolution, the closer it is to the camera. Consequently, when a captured image is converted into a bird's-eye image, the farther a part of the road surface is from the camera and the lower its resolution, the greater the degree to which that part of the image is magnified. Heavily magnified parts become blurred, so the accuracy of marking detection may be lowered there.
 However, by setting the allocation ranges based on the original resolution, that is, the resolution of the overlapping portion in the original captured image as described above, markings can be detected from the bird's-eye image with the smaller degree of magnification, in which the detection target is not blurred, and degradation of marking detection accuracy can be suppressed.
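 The distance dependence described here can be made concrete with a pinhole-camera sketch: the ground footprint of one image row grows as the viewing ray approaches the horizon, which is exactly the road surface that the bird's-eye conversion must magnify most. The geometry below is a common approximation, not the disclosure's calculation, and all parameter names are illustrative.

```python
import math

def ground_distance_per_pixel(height, pitch, f_pixels, v):
    # height: camera height above the road [m]; pitch: downward tilt [rad];
    # f_pixels: focal length in pixels; v: image-row offset below the
    # principal point (smaller v = row closer to the horizon).
    theta = pitch + math.atan2(v, f_pixels)   # depression angle of the ray
    if theta <= 0:
        return float('inf')                   # ray at/above horizon: no hit
    x = height / math.tan(theta)              # ground distance where ray lands
    dtheta = 1.0 / f_pixels                   # ~angle change per row step
    x_next = height / math.tan(theta + dtheta)
    return abs(x - x_next)                    # metres of road per pixel row
```

 Rows near the bottom of the image (large `v`) cover a few millimetres of road each, while rows near the horizon cover far more, which is why the far side of an overlap is the blurrier one after bird's-eye conversion.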
 [Modifications]
 Although an embodiment of the present disclosure has been described above, the present disclosure is in no way limited to it and can of course take various forms within the technical scope of the present disclosure.
 For example, the above embodiment illustrates a configuration in which, when the allocation ranges are set based on the original resolutions of the overlapping portions, parts where fixed noise was detected and parts with low edge suitability values are excluded from the allocation ranges; however, only one of these criteria may be considered, or neither may be considered and the allocation ranges may be set based solely on the original resolutions.
 The image range selection processing unit 15 may also set the allocation ranges based on the edge suitability values without considering the original resolutions. For example, a range including the part of an overlapping portion whose edge suitability value is higher than that of the other overlapping portion covering the same imaging range could be set as the allocation range.
 Also, although the above embodiment illustrates a configuration in which the camera mounting parameters 12a are updated as needed and appropriate allocation ranges are calculated from them, the allocation ranges may instead be fixed and unchanging.
 Also, although the above embodiment illustrates detecting white lines in each bird's-eye image separately, the bird's-eye images may instead be combined into a single image before white line detection, and the white lines detected from the combined bird's-eye image. When combining them into a single image, the allocation ranges are preferably used for the overlapping portions.
 Also, although the above embodiment illustrates allocation ranges set so as not to overlap across the plural overlapping portions, a configuration in which they partially overlap is also possible.
 Besides the travel road surface marking detection device and travel road surface marking detection method described above, the present disclosure can be realized in various forms, such as a system including the travel road surface marking detection device as a component, a program for causing a computer to execute the travel road surface marking detection method, and a recording medium on which that program is recorded.
 A flowchart described in this disclosure, or the processing of such a flowchart, consists of multiple sections (also referred to as steps), each expressed, for example, as S1. Each section can be divided into multiple subsections, while multiple sections can also be combined into a single section. Each section configured in this way can be referred to as a module or means.
 While the present disclosure has been described with reference to embodiments, it is understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including only one element, or more or fewer elements, also fall within the scope and spirit of the present disclosure.

Claims (10)

  1.  A travel road surface marking detection device comprising:
     generating means (13, S2) for generating bird's-eye images viewed from a predetermined virtual viewpoint based on a plurality of captured images, at least parts of whose imaging ranges do not overlap, captured by a plurality of cameras (3) that image the surroundings of a vehicle; and
     marking detection means (16, S7) for detecting road surface markings from the bird's-eye images generated by the generating means,
     wherein, for each overlapping portion (41a, 42a) where the imaging ranges of two or more of the bird's-eye images overlap one another, the marking detection means detects the markings within an allocation range assigned to that overlapping portion.
  2.  The travel road surface marking detection device according to claim 1, wherein the allocation range is set so as to include a first region occupying part or all of the overlapping portion, an original resolution of the first region, being the resolution of the part of the captured image corresponding to the first region, being higher than the original resolution of a second region of another of the bird's-eye images whose imaging range overlaps the first region.
  3.  The travel road surface marking detection device according to claim 2, further comprising range setting means (15, S6) for setting the allocation ranges based on the original resolutions of the respective overlapping portions.
  4.  The travel road surface marking detection device according to claim 3, further comprising:
     position detection means (12, S4) for detecting mounting positions of the cameras; and
     calculation means (14, S5) for calculating the original resolutions of the overlapping portions based on the mounting positions detected by the position detection means,
     wherein the range setting means sets the allocation ranges of the overlapping portions based on the original resolutions calculated by the calculation means.
  5.  The travel road surface marking detection device according to claim 3 or 4, wherein the marking detection means detects edges from the bird's-eye images and detects the markings based on the edges, and
     the range setting means sets the allocation range of an overlapping portion from those parts of the overlapping portion in which a parameter indicating the degree of ease of detecting the markings from the edges detected by the marking detection means is at or above a predetermined threshold.
  6.  The travel road surface marking detection device according to claim 1, wherein the marking detection means detects edges from the bird's-eye images and detects the markings based on the edges, and
     the device further comprises range setting means (15) for setting, as the allocation range, a range including a part of an overlapping portion in which a parameter indicating the degree of ease of detecting the markings from the edges detected by the marking detection means is larger than in the corresponding overlapping portion of the other bird's-eye image.
  7.  The travel road surface marking detection device according to any one of claims 3 to 6, further comprising noise detection means (16, S3) for detecting fixed noise, which is noise occurring continuously in the same part of a captured image,
     wherein the range setting means sets the allocation range from those parts of the overlapping portion in which no fixed noise has been detected by the noise detection means.
  8.  A travel road surface marking detection device comprising:
     generating means (13, S2) for generating a first bird's-eye image and a second bird's-eye image viewed from a predetermined virtual viewpoint based on a first captured image taken by a first camera (3) having a first imaging range and a second captured image taken by a second camera (3) having a second imaging range partially overlapping the first imaging range, the first bird's-eye image having a first overlapping portion in which an overlap imaging range shared by the first imaging range and the second imaging range is imaged, and the second bird's-eye image having a second overlapping portion in which the overlap imaging range is imaged;
     range setting means (15, S6) for setting, in each of the first overlapping portion and the second overlapping portion, a detection target range and a detection-unnecessary range, being the remainder other than the detection target range, based on the resolution of the first captured image and the resolution of the second captured image; and
     marking detection means (16, S7) for detecting road surface markings, in the first captured image, from the remaining part excluding the first overlapping portion and from the detection target range of the first overlapping portion, and, in the second captured image, from the remaining part excluding the second overlapping portion and from the detection target range of the second overlapping portion.
  9.  The travel road surface marking detection device according to claim 8, wherein the range setting means sets the detection target range of the first overlapping portion so as to cover the same area as the detection-unnecessary range of the second overlapping portion, and sets the detection-unnecessary range of the first overlapping portion so as to cover the same area as the detection target range of the second overlapping portion,
     the resolution of the part of the first captured image corresponding to the detection target range of the first overlapping portion is higher than the resolution of the part of the second captured image corresponding to the detection-unnecessary range of the second overlapping portion, and
     the resolution of the part of the second captured image corresponding to the detection target range of the second overlapping portion is higher than the resolution of the part of the first captured image corresponding to the detection-unnecessary range of the first overlapping portion.
  10.  A travel road surface marking detection method used in a travel road surface marking detection device that detects road surface markings by image processing of a plurality of captured images, at least parts of whose imaging ranges do not overlap, captured by a plurality of cameras that image the surroundings of a vehicle, the method comprising:
     generating a plurality of bird's-eye images viewed from a predetermined virtual viewpoint based on the respective captured images (S2); and
     for each overlapping portion where the imaging ranges of two or more of the generated bird's-eye images overlap one another, detecting the markings from an allocation range assigned to that overlapping portion (S7).
PCT/JP2014/003301 2013-07-02 2014-06-19 Travel road surface indication detection device and travel road surface indication detection method WO2015001747A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-138970 2013-07-02
JP2013138970A JP6032141B2 (en) 2013-07-02 2013-07-02 Travel road marking detection device and travel road marking detection method

Publications (1)

Publication Number Publication Date
WO2015001747A1 true WO2015001747A1 (en) 2015-01-08

Family

ID=52143350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/003301 WO2015001747A1 (en) 2013-07-02 2014-06-19 Travel road surface indication detection device and travel road surface indication detection method

Country Status (2)

Country Link
JP (1) JP6032141B2 (en)
WO (1) WO2015001747A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509942A (en) * 2018-05-28 2018-09-07 清华大学 Verifying attachment and its method of inspection
US12142143B1 (en) 2020-07-30 2024-11-12 Terry Tedford Vehicle alert system and method of use

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
KR102375411B1 (en) * 2015-05-11 2022-03-18 삼성전자주식회사 Method and apparatus for providing around view of vehicle
JP6594039B2 (en) * 2015-05-20 2019-10-23 株式会社東芝 Image processing apparatus, method, and program

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2010245802A (en) * 2009-04-06 2010-10-28 Sanyo Electric Co Ltd Control support device
JP2011077772A (en) * 2009-09-30 2011-04-14 Hitachi Automotive Systems Ltd Apparatus for assisting checking of surroundings
JP2011166351A (en) * 2010-02-08 2011-08-25 Fujitsu Ltd Video compositing device and video compositing program


Also Published As

Publication number Publication date
JP6032141B2 (en) 2016-11-24
JP2015011665A (en) 2015-01-19

Similar Documents

Publication Publication Date Title
US10970568B2 (en) Vehicular vision system with object detection
EP2546602B1 (en) Stereo camera apparatus
JP2020501423A (en) Camera means and method for performing context-dependent acquisition of a surrounding area of a vehicle
US10099617B2 (en) Driving assistance device and driving assistance method
JP2010016805A (en) Image processing apparatus, driving support system, and image processing method
JP6944328B2 (en) Vehicle peripheral monitoring device and peripheral monitoring method
JP3847547B2 (en) Vehicle periphery monitoring support device
JP5937832B2 (en) In-vehicle camera exposure control system
JP2017069852A (en) Information display device and information display method
JP2012166705A (en) Foreign matter attachment determining system for on-vehicle camera lens
CN110378836B (en) Method, system and equipment for acquiring 3D information of object
JP6375633B2 (en) Vehicle periphery image display device and vehicle periphery image display method
JP2016149613A (en) Camera parameter adjustment device
JP2012252501A (en) Traveling path recognition device and traveling path recognition program
WO2015001747A1 (en) Travel road surface indication detection device and travel road surface indication detection method
JP6407596B2 (en) Image processing apparatus and driving support system
JP6327115B2 (en) Vehicle periphery image display device and vehicle periphery image display method
JP2011033594A (en) Distance calculation device for vehicle
JP5477394B2 (en) Vehicle periphery monitoring device
JP2006072757A (en) Object detection system
JP2008042759A (en) Image processing apparatus
JP4598011B2 (en) Vehicle display device
JP4040620B2 (en) Vehicle periphery monitoring device
JP4584277B2 (en) Display device
CN113170057A (en) Image pickup unit control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14819655

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14819655

Country of ref document: EP

Kind code of ref document: A1