
CN111717201B - Vehicle system, control method for vehicle system, and storage medium - Google Patents

Vehicle system, control method for vehicle system, and storage medium

Info

Publication number
CN111717201B
Authority
CN
China
Prior art keywords
vehicle
overhead view
data
images
view data
Prior art date
Legal status
Active
Application number
CN202010193773.1A
Other languages
Chinese (zh)
Other versions
CN111717201A (en)
Inventor
安井裕司
土屋成光
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd filed Critical Honda Motor Co Ltd
Publication of CN111717201A
Application granted
Publication of CN111717201B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0956Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a vehicle system, a control method of the vehicle system, and a storage medium that can generate a travel path for a vehicle with higher accuracy. The vehicle system includes: an image acquisition device that acquires an image of a traveling direction of a vehicle; a recognition unit that recognizes the surrounding environment of the vehicle; and a driving control unit that performs driving control of the vehicle based on speed control and steering control according to a recognition result of the recognition unit, wherein the recognition unit includes an image processing unit that generates overhead view data based on the image, and the driving control unit generates a track along which the vehicle is to travel in the future based on the overhead view data and causes the vehicle to travel along the generated track.

Description

Vehicle system, control method for vehicle system, and storage medium
Technical Field
The invention relates to a vehicle system, a control method of the vehicle system, and a storage medium.
Background
In recent years, research on techniques for automatically controlling a vehicle so that it travels autonomously has progressed, and some of these techniques have reached practical use. Among the technologies being developed for practical use is a technique for generating a travel path along which a vehicle travels to a destination. In this technique, the travel path is generated based on an image captured by a camera mounted on the vehicle, and the vehicle travels in the depth direction of the image. In a two-dimensional image obtained by capturing a three-dimensional space with a camera, one pixel in the far (depth) direction represents a longer real-space distance than one pixel in the near direction, so the distance resolution decreases toward the far side of the image.
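For illustration only (not part of the disclosed embodiments), the following minimal sketch assumes a pinhole camera at an assumed height over a flat road and shows how the ground distance covered by a single pixel row grows rapidly toward the horizon, which is the loss of distance resolution described above. All numeric values are assumptions.

```python
# Minimal sketch (assumed pinhole camera over a flat road) showing why one pixel
# covers more ground the farther ahead it looks. Numbers are illustrative only.
CAM_HEIGHT_M = 1.2   # assumed camera mounting height [m]
FOCAL_PX = 1000.0    # assumed focal length in pixels

def ground_distance(rows_below_horizon: int) -> float:
    """Ground distance Z for an image row 'rows_below_horizon' pixels below the horizon:
    Z = f * H / v (flat-road, zero-pitch approximation)."""
    return FOCAL_PX * CAM_HEIGHT_M / rows_below_horizon

if __name__ == "__main__":
    for v in (300, 100, 30, 10):            # rows below the horizon: near -> far
        z_here = ground_distance(v)
        z_next = ground_distance(v - 1)     # one pixel closer to the horizon
        print(f"row {v:>3}: Z = {z_here:6.1f} m, one pixel spans {z_next - z_here:6.2f} m")
```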
An image conversion method and a vehicle surroundings monitoring apparatus that create an overhead view from an image captured by a monocular camera and use the overhead view for monitoring the surroundings of a vehicle have been disclosed (see, for example, Japanese Patent Application Laid-Open No. 2002-034035).
However, in the technique of Japanese Patent Application Laid-Open No. 2002-034035, the overhead view is created from an image captured while the vehicle is stopped. Using an overhead view to generate the travel path that a traveling vehicle should follow next has therefore not been sufficiently studied.
The present invention has been made in view of the above problem, and an object thereof is to provide a vehicle system, a control method of the vehicle system, and a storage medium capable of generating a travel path for a vehicle with higher accuracy.
Disclosure of Invention
Problems to be solved by the invention
The vehicle system, the control method of the vehicle system, and the storage medium of the present invention adopt the following configurations.
(1): a vehicle system according to an aspect of the present invention includes: an image acquisition device that acquires an image of a traveling direction of a vehicle; an identification unit that identifies a surrounding environment of the vehicle; and a driving control unit that performs driving control of the vehicle based on speed control and steering control based on a recognition result of the recognition unit, wherein the recognition unit includes an image processing unit that generates overhead view data based on the image, and the driving control unit generates a track for causing the vehicle to travel in the future based on the overhead view data and causes the vehicle to travel in the generated track.
(2): in the aspect of (1) above, the image processing unit extracts edge points based on differences between the respective data included in the overhead view data and surrounding data, and the identifying unit identifies lanes included in the overhead view data based on the edge points extracted by the image processing unit.
(3): in the aspect of (1) above, the image processing unit extracts edge points based on a difference between data included in the image and surrounding data, and specifies lanes in the overhead view data using the extracted edge points.
(4): in addition to any one of the above (1) to (3), the image acquisition device acquires a plurality of the images in time series, and the image processing unit generates one piece of the overhead view data using the plurality of the images acquired in time series by the image acquisition device.
(5): in the aspect (4), the image processing unit may generate data of a remote area in the overhead view data by using data of a remote area in the plurality of images acquired in time series by the image acquisition device.
(6): in addition to any one of the above (1) to (5), the image processing unit specifies a shielded area in which a road surface included in the overhead view data is shielded by an object, supplements the specified shielded area with pixel data around the shielded area in the overhead view data, generates a track for causing the vehicle to travel in the future using the supplemented overhead view, and causes the vehicle to travel with the generated track.
(7): in the aspect of (6) above, the image processing unit supplements data representing the road surface area of the overhead view data using map information stored separately at the time of supplementing the shielded area.
(8): a control method of a vehicle system according to an aspect of the present invention causes a computer of the vehicle system to perform: acquiring an image of a traveling direction of the vehicle from an image acquisition device; identifying a surrounding environment of the vehicle; performing driving control of the vehicle based on speed control and steering control based on the recognition result; generating overhead view data based on the image; and generating a track for causing the vehicle to travel in the future based on the overhead view data, and causing the vehicle to travel with the generated track.
(9): a storage medium according to an aspect of the present invention stores a program that causes a computer of a vehicle system to perform: acquiring an image of a traveling direction of the vehicle from an image acquisition device; identifying a surrounding environment of the vehicle; performing driving control of the vehicle based on speed control and steering control based on the recognition result; generating overhead view data based on the image; and generating a track for causing the vehicle to travel in the future based on the overhead view data, and causing the vehicle to travel with the generated track.
Effects of the invention
According to the aspects (1) to (8) described above, the travel path for the vehicle travel can be generated with higher accuracy.
Drawings
Fig. 1 is a structural diagram of a vehicle system of a first embodiment.
Fig. 2 is a functional configuration diagram of the first control unit and the second control unit.
Fig. 3 is a flowchart showing an example of a series of processes executed by the image processing unit and the action plan generating unit according to the first embodiment.
Fig. 4 is a diagram schematically showing an example of a series of processes performed by the image processing unit and the action plan generating unit according to the first embodiment.
Fig. 5 is a diagram schematically showing an example of processing of supplementing a road surface area corresponding to a mask area included in overhead view data by the image processing unit according to the first embodiment.
Fig. 6 is a flowchart showing an example of a series of processes executed by the image processing unit and the action plan generating unit according to the second embodiment.
Fig. 7 is a diagram schematically showing an example of a series of processes performed by the image processing unit and the action plan generating unit according to the second embodiment.
Fig. 8 is a diagram showing an example of a hardware configuration of the automatic driving control device according to the embodiment.
Detailed Description
Embodiments of a vehicle system, a control method of the vehicle system, and a storage medium according to the present invention are described below with reference to the accompanying drawings. The description assumes a country or region where vehicles keep to the left; where vehicles keep to the right, left and right in the description may be read in reverse.
< first embodiment >
[ integral Structure ]
Fig. 1 is a structural diagram of a vehicle system 1 of a first embodiment. The vehicle on which the vehicle system 1 is mounted is, for example, a two-wheeled, three-wheeled, four-wheeled or the like vehicle, and the driving source thereof is an internal combustion engine such as a diesel engine or a gasoline engine, an electric motor, or a combination thereof. The motor operates using generated power generated by a generator connected to the internal combustion engine or discharge power of the secondary battery or the fuel cell.
The vehicle system 1 includes, for example, a camera 10, a radar device 12, a detector 14, an object recognition device 16, a communication device 20, an HMI (Human Machine Interface) 30, a vehicle sensor 40, a navigation device 50, an MPU (Map Positioning Unit) 60, a driving operation element 80, an automatic driving control device (automated driving control device) 100, a running driving force output device 200, a brake device 210, and a steering device 220. These devices and apparatuses are connected to each other via a multiplex communication line such as a CAN (Controller Area Network) communication line, a serial communication line, a wireless communication network, or the like. The configuration shown in fig. 1 is merely an example; a part of the configuration may be omitted, and other configurations may be added.
The camera 10 is, for example, a digital camera using a solid-state imaging device such as CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor). The camera 10 is mounted on an arbitrary portion of a vehicle (hereinafter referred to as the host vehicle M) on which the vehicle system 1 is mounted. When photographing the front, the camera 10 is mounted on the upper part of the front windshield, the rear view mirror of the vehicle interior, or the like. The camera 10, for example, periodically and repeatedly photographs the surroundings of the vehicle M. The camera 10 may also be a stereoscopic camera. The camera 10 is an example of an "image acquisition device".
The radar device 12 emits radio waves such as millimeter waves to the periphery of the host vehicle M, and detects at least the position (distance and azimuth) of the object by detecting the radio waves (reflected waves) reflected by the object. The radar device 12 is mounted on an arbitrary portion of the host vehicle M. The radar device 12 may also detect the position and velocity of an object by the FM-CW (Frequency Modulated Continuous Wave) method.
The detector 14 is, for example, a LIDAR (Light Detection and Ranging) sensor. The detector 14 irradiates light around the host vehicle M and measures the scattered light. The detector 14 detects the distance to an object based on the time from light emission to light reception. The irradiated light is, for example, pulsed laser light. The detector 14 is mounted on an arbitrary portion of the host vehicle M.
The object recognition device 16 performs a sensor fusion process on the detection results detected by some or all of the camera 10, the radar device 12, and the detector 14, to recognize the position, the type, the speed, and the like of the object. The object recognition device 16 outputs the recognition result to the automatic driving control device 100. The object recognition device 16 may directly output the detection results of the camera 10, the radar device 12, and the detector 14 to the automatic driving control device 100. The object recognition device 16 may also be omitted from the vehicle system 1.
The communication device 20 communicates with other vehicles existing around the host vehicle M using, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), DSRC (Dedicated Short Range Communication), or the like, or communicates with various server devices via a wireless base station.
The HMI 30 presents various information to an occupant of the host vehicle M and accepts input operations by the occupant. The HMI 30 includes various display devices, speakers, buzzers, a touch panel, switches, keys, and the like.
The vehicle sensor 40 includes a vehicle speed sensor that detects the speed of the host vehicle M, an acceleration sensor that detects acceleration, a yaw rate sensor that detects the angular velocity about the vertical axis, an azimuth sensor that detects the direction of the host vehicle M, and the like.
The navigation device 50 includes, for example, a GNSS (Global Navigation Satellite System) receiver 51, a navigation HMI 52, and a route determination unit 53. The navigation device 50 holds first map information 54 in a storage device such as an HDD (Hard Disk Drive) or a flash memory. The GNSS receiver 51 determines the position of the host vehicle M based on signals received from GNSS satellites. The position of the host vehicle M may be determined or supplemented by an INS (Inertial Navigation System) using the output of the vehicle sensor 40. The navigation HMI 52 includes a display device, speakers, a touch panel, keys, and the like. The navigation HMI 52 may be partially or entirely shared with the HMI 30 described above. The route determination unit 53 determines, with reference to the first map information 54, a route (hereinafter referred to as the on-map route) from the position of the host vehicle M specified by the GNSS receiver 51 (or any input position) to a destination input by the occupant using the navigation HMI 52. The first map information 54 is, for example, information in which road shapes are represented by links representing roads and nodes connected by the links. The first map information 54 may include the curvature of roads, POI (Point Of Interest) information, and the like. The on-map route is output to the MPU 60. The navigation device 50 may perform route guidance using the navigation HMI 52 based on the on-map route. The navigation device 50 may be realized by the functions of a terminal device such as a smartphone or a tablet terminal held by the occupant. The navigation device 50 may transmit the current position and the destination to a navigation server via the communication device 20 and acquire a route equivalent to the on-map route from the navigation server.
The MPU 60 includes, for example, a recommended lane determination unit 61, and holds second map information 62 in a storage device such as an HDD or a flash memory. The recommended lane determination unit 61 divides the on-map route supplied from the navigation device 50 into a plurality of sections (for example, every 100 [m] in the vehicle traveling direction), and determines a recommended lane for each section with reference to the second map information 62. The recommended lane determination unit 61 determines which lane from the left to travel in. When a branching point exists on the on-map route, the recommended lane determination unit 61 determines the recommended lane so that the host vehicle M can travel on a reasonable route for proceeding to the branch destination.
The second map information 62 is map information of higher accuracy than the first map information 54. The second map information 62 includes, for example, information of the center of a lane, information of the boundary of a lane, and the like. The second map information 62 may include road information, traffic restriction information, residence information (residence, zip code), facility information, telephone number information, and the like. The second map information 62 may be updated at any time by the communication device 20 communicating with other devices.
The driving operation element 80 includes, for example, an accelerator pedal, a brake pedal, a shift lever, a steering wheel, a variant steering wheel, a joystick, and other operation elements. A sensor that detects the amount of operation or the presence or absence of operation is attached to the driving operation element 80, and the detection result is output to the automatic driving control device 100, or to some or all of the running driving force output device 200, the brake device 210, and the steering device 220.
The automatic driving control device 100 includes, for example, a first control unit 120 and a second control unit 160. The first control unit 120 and the second control unit 160 are each realized by a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (including circuitry) such as an LSI (Large Scale Integration) circuit, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit), or by cooperation of software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the automatic driving control device 100, or may be stored in a removable storage medium (non-transitory storage medium) such as a DVD or a CD-ROM and installed into the HDD or flash memory of the automatic driving control device 100 when the storage medium is mounted in a drive device.
Fig. 2 is a functional configuration diagram of the first control unit 120 and the second control unit 160. The first control unit 120 includes, for example, a recognition unit 130 and an action plan generation unit 140. The first control unit 120 realizes, for example, a function based on AI (Artificial Intelligence) and a function based on a predetermined model in parallel. For example, the function of "recognizing an intersection" may be realized by executing recognition of the intersection by deep learning or the like and recognition based on predetermined conditions (the presence of signals, road signs, or the like that allow pattern matching) in parallel, scoring both, and evaluating them comprehensively. This ensures the reliability of automated driving. The action plan generation unit 140 and the second control unit 160 together are an example of a "driving control unit".
The recognition unit 130 recognizes the state of the position, speed, acceleration, and the like of the object located in the vicinity of the host vehicle M based on the information input from the camera 10, the radar device 12, and the detector 14 via the object recognition device 16. The position of the object is identified as a position on absolute coordinates with the representative point (center of gravity, drive shaft center, etc.) of the host vehicle M as an origin, for example, and is used for control. The position of the object may be represented by a representative point such as the center of gravity or the corners of the object, or may be represented by a represented area. The "state" of the object may also include acceleration, jerk, or "behavior state" of the object (e.g., whether a lane change is being made, or is to be made).
The identifying unit 130 identifies, for example, a lane (driving lane) in which the host vehicle M is driving. For example, the identifying unit 130 compares the pattern of the road dividing line (for example, the arrangement of the solid line and the broken line) obtained from the second map information 62 with the pattern of the road dividing line around the host vehicle M identified from the image captured by the camera 10, thereby identifying the driving lane. The identifying unit 130 is not limited to identifying the road dividing line, and may identify a travel path boundary (road boundary) including a road dividing line, a road shoulder, a curb, a center isolation belt, a guardrail, and the like, thereby identifying a travel lane. In this identification, the position of the host vehicle M acquired from the navigation device 50 and the processing result of the INS processing may be added. The identification unit 130 identifies a temporary stop line, an obstacle, a red light, a toll station, and other road phenomena.
When recognizing the driving lane, the recognition unit 130 recognizes the position and posture of the host vehicle M with respect to the driving lane. The identification unit 130 may identify, for example, a deviation of the reference point of the host vehicle M from the center of the lane and an angle formed by the traveling direction of the host vehicle M with respect to a line connecting the centers of the lanes, as the relative position and posture of the host vehicle M with respect to the traveling lane. Instead of this, the identification unit 130 may identify the position of the reference point of the host vehicle M with respect to any side end portion (road dividing line or road boundary) of the travel lane, or the like, as the relative position of the host vehicle M with respect to the travel lane.
The recognition unit 130 includes an image processing unit 132. The image processing unit 132 acquires an image captured by the camera 10, and generates overhead view data indicating a road surface area captured by the image in a virtual manner from directly above based on the acquired image. The overhead view data includes a first edge point indicating the boundary of the road surface area, and a second edge point indicating the position of a white line drawn on the road surface, such as a road dividing line and a temporary stop line. The second edge point differs greatly in the value of the data (e.g., brightness difference) from the first edge point. Hereinafter, the first edge point and the second edge point are referred to as edge points. The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140.
The image processing unit 132 acquires a plurality of time-series images captured continuously by the camera 10 at predetermined intervals, and generates one piece of overhead view data using the acquired plurality of images. The image processing unit 132 generates data of the far area in the single piece of overhead view data using the data of the far area ahead of the host vehicle M captured in each of the time-series images. At this time, the image processing unit 132 uses the data of the near area in front of the host vehicle M in each image for alignment when generating the data of the far area. Because the far areas captured in the aligned images are slightly shifted from one another, the image processing unit 132 uses the plurality of images to fill in (supplement) data at positions in the far area that could not be obtained from a single image, thereby generating one piece of overhead view data.
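For illustration only, the following minimal sketch (not the embodiment's exact algorithm) assumes a pre-calibrated image-to-overhead homography for each frame that already encodes the alignment of the near area in front of the vehicle; it accumulates pixels from several time-series frames into one overhead grid, filling cells that a single frame could not provide. The function name and grid size are assumptions.

```python
# Minimal sketch: warp each frame to a top-down grid with an assumed homography H_i,
# then fill overhead cells that are still blank from the later frames.
import cv2
import numpy as np

def accumulate_overhead(frames, homographies, size=(400, 600)):
    """frames: list of BGR images; homographies: list of 3x3 image->overhead matrices."""
    w, h = size
    overhead = np.zeros((h, w, 3), dtype=np.uint8)
    filled = np.zeros((h, w), dtype=bool)          # which overhead cells already have data
    for img, H in zip(frames, homographies):
        warped = cv2.warpPerspective(img, H, (w, h))
        valid = cv2.warpPerspective(np.full(img.shape[:2], 255, np.uint8), H, (w, h)) > 0
        take = valid & ~filled                      # only fill cells that are still blank
        overhead[take] = warped[take]
        filled |= take
    return overhead, filled                         # 'filled == False' marks remaining blanks
```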
The image processing unit 132 identifies a shielded area (so-called shielded area) in which a road surface area included in the generated overhead view data is shielded by an object such as a building, and supplements the data of the road surface area shielded by the shielded area in the overhead view data with map information including the shielded area. The image processing unit 132 uses the second map information 62 as map information. An example of the processing for generating the overhead view data in the image processing unit 132 will be described later.
The action plan generation unit 140 generates a target track along which the host vehicle M will automatically travel in the future so that the host vehicle M, in principle, travels in the recommended lane determined by the recommended lane determination unit 61 and can respond to the surrounding situation of the host vehicle M. The action plan generation unit 140 generates the target track based on the overhead view data generated by the image processing unit 132. The target track includes, for example, a speed element. For example, the target track is expressed as a sequence of points (track points) that the host vehicle M should reach. A track point is a point that the host vehicle M should reach at every predetermined travel distance (for example, on the order of several [m]) along the road; separately from this, a target speed and a target acceleration are generated for every predetermined sampling time (for example, on the order of a few tenths of a [sec]) as part of the target track. A track point may also be a position that the host vehicle M should reach at each sampling time; in this case, the information on the target speed and the target acceleration is expressed by the intervals between the track points.
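For illustration only, the following minimal sketch (assumed sampling time and speeds) shows how track points placed at a fixed sampling interval encode the speed profile through their spacing, as described above.

```python
# Minimal sketch: track-point spacing s_{k+1} - s_k = v_k * DT encodes the target speed.
DT = 0.1  # assumed sampling time [s]

def track_points_from_speed_profile(start_s: float, speeds_mps: list[float]) -> list[float]:
    """Return arc-length positions s_k of track points along the path."""
    points = [start_s]
    for v in speeds_mps:
        points.append(round(points[-1] + v * DT, 2))
    return points

# Example: decelerating from 10 m/s to 6 m/s -> the track points get closer together.
print(track_points_from_speed_profile(0.0, [10.0, 9.0, 8.0, 7.0, 6.0]))
# [0.0, 1.0, 1.9, 2.7, 3.4, 4.0]
```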
The action plan generation unit 140 may set an event of automatic driving when generating the target trajectory. In the event of automatic driving, there are a constant speed travel event, a low speed follow-up travel event, a lane change event, a branching event, a merging event, a takeover event, and the like. The action plan generation unit 140 generates a target track corresponding to the started event.
The second control unit 160 controls the running driving force output device 200, the brake device 210, and the steering device 220 so that the host vehicle M passes along the target track generated by the action plan generation unit 140 at the scheduled times.
Returning to fig. 2, the second control unit 160 includes, for example, an acquisition unit 162, a speed control unit 164, and a steering control unit 166. The acquisition unit 162 acquires information of the target track (track point) generated by the action plan generation unit 140, and causes a memory (not shown) to store the information. The speed control unit 164 controls the traveling driving force output device 200 or the brake device 210 based on a speed element attached to the target track stored in the memory. The steering control unit 166 controls the steering device 220 according to the curved state of the target track stored in the memory. The processing by the speed control unit 164 and the steering control unit 166 is realized by a combination of feedforward control and feedback control, for example. As an example, the steering control unit 166 combines a feedforward control according to the curvature of the road ahead of the host vehicle M with a feedback control based on the deviation from the target track.
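For illustration only, the following minimal sketch (wheelbase and gains are assumptions) shows one way the feedforward and feedback components described above could be combined for the steering command: feedforward from the curvature of the road ahead plus feedback on the deviation from the target track.

```python
# Minimal sketch of feedforward + feedback steering, under assumed bicycle-model geometry.
import math

WHEELBASE_M = 2.7   # assumed wheelbase [m]
K_LAT = 0.8         # assumed feedback gain on lateral error [rad/m]
K_YAW = 0.5         # assumed feedback gain on heading error [rad/rad]

def steering_command(curvature_ahead: float, lateral_error_m: float, heading_error_rad: float) -> float:
    feedforward = math.atan(WHEELBASE_M * curvature_ahead)      # steer angle that tracks the curve
    feedback = K_LAT * lateral_error_m + K_YAW * heading_error_rad
    return feedforward + feedback                               # steering angle [rad]
```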
The running driving force output device 200 outputs a running driving force (torque) for running the vehicle to the driving wheels. The running driving force output device 200 includes, for example, a combination of an internal combustion engine, an electric motor, a transmission, and the like, and ECU (Electronic Control Unit) for controlling these. The ECU controls the above configuration in accordance with information input from the second control portion 160 or information input from the driving operation element 80.
The brake device 210 includes, for example, a caliper, a hydraulic cylinder that transmits hydraulic pressure to the caliper, an electric motor that generates hydraulic pressure in the hydraulic cylinder, and a brake ECU. The brake ECU controls the electric motor in accordance with information input from the second control portion 160 or information input from the driving operation member 80 so that a braking torque corresponding to a braking operation is output to each wheel. The brake device 210 may be provided with a mechanism for transmitting the hydraulic pressure generated by the operation of the brake pedal included in the drive operation element 80 to the hydraulic cylinder via the master cylinder. The brake device 210 is not limited to the above-described configuration, and may be an electronically controlled hydraulic brake device that transmits the hydraulic pressure of the master cylinder to the hydraulic cylinders by controlling the actuators in accordance with information input from the second control unit 160.
The steering device 220 includes, for example, a steering ECU and an electric motor. The electric motor applies a force to the rack-and-pinion mechanism to change the direction of the steered wheel, for example. The steering ECU drives the electric motor in accordance with information input from the second control unit 160 or information input from the driving operation element 80, and changes the direction of the steered wheels.
[ processing for generating overhead view data and target orbit ]
The following describes processing performed by the image processing unit 132 and the action plan generation unit 140 to generate a target track for future travel of the vehicle M. In the following description, the image processing unit 132 generates overhead view data based on 1 image captured by the camera 10, and the action plan generating unit 140 generates a target orbit based on the overhead view data. Fig. 3 is a flowchart showing an example of the flow of a series of processes executed by the image processing unit 132 and the action plan generation unit 140 according to the first embodiment.
The image processing unit 132 acquires an image captured by the camera 10 (step S100). Next, the image processing unit 132 converts the acquired image into a virtual overhead view (step S110).
Then, the image processing unit 132 determines whether or not there is an occlusion region in the transformed overhead view (step S120). When it is determined in step S120 that the blocking area does not exist in the overhead view, the image processing unit 132 advances the process to step S150.
On the other hand, when it is determined in step S120 that the shielded area exists in the overhead view, the image processing unit 132 determines the shielded area included in the overhead view that shields the road surface area (step S130). Thereafter, the image processing unit 132 supplements the data of the road surface area corresponding to the blocked area using the second map information 62 including the specified blocked area (step S140).
Next, the image processing unit 132 extracts edge points included in the overhead view (step S150). Then, the image processing unit 132 identifies the road surface region and the lane within the road surface region included in the overhead view data based on the extracted edge points (step S160).
Next, the image processing unit 132 generates overhead view data including the identified road surface region and the lanes within the road surface region (step S170). The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140.
Then, the action plan generation unit 140 generates a target orbit based on the overhead view data output from the image processing unit 132 (step S180). Thus, the second control unit 160 controls the travel of the host vehicle M to sequentially pass through the target track generated by the action plan generation unit 140.
[ Generation example of bird's-eye view data and target orbit ]
Fig. 4 is a diagram schematically showing an example of a series of processes performed by the image processing unit 132 and the action plan generation unit 140 according to the first embodiment. Fig. 4 shows an example of each process when the image processing unit 132 generates the overhead view data and an example of the target trajectory generated by the action plan generating unit 140. In the following description, the image processing unit 132 does not supplement the occlusion region.
When an image as shown in fig. 4 (a) is acquired from the camera 10, the image processing unit 132 converts the acquired image into a virtual overhead view as shown in fig. 4 (b). At this time, the image processing unit 132 converts the image acquired from the camera 10 into one overhead view by disposing the data of each pixel constituting the acquired image at a corresponding position in the virtual overhead view.
In a distant region (for example, an upper region in fig. 4 b) in the converted overhead view, there is a data blank position N where no pixel, that is, data is blank (NULL), is arranged. Then, the image processing unit 132 converts the data of the pixels of the distant region (for example, the upper region in fig. 4 (a)) included in the plurality of images acquired from the camera 10 in time series into one overhead view. In this way, the image processing unit 132 converts the plurality of images of the time series acquired from the camera 10 into a virtual one overhead view as shown in fig. 4 (c).
It is also conceivable that, even when a plurality of images acquired in time series from the camera 10 are used, data is not arranged at every pixel position constituting the overhead view and data blank positions N remain. In this case, as shown in fig. 4 (b), the image processing unit 132 may supplement the data at a data blank position N based on its surrounding data, that is, the data of the other pixels located around the data blank position N. For example, the image processing unit 132 may take the arithmetic mean of the values of the four pixels located around the data blank position N (for example, above, below, to the left, and to the right) and use it as the value of the pixel at the data blank position N. At this time, for a data blank position N that is considered to lie on a feature orthogonal to the traveling direction of the host vehicle M, such as a temporary stop line, the image processing unit 132 may use a weighted average in which the values of the pixels to the left and right are given a larger weight. For a data blank position N that is considered to lie on a feature parallel to the traveling direction of the host vehicle M, such as a boundary of the road surface area or a road dividing line, the image processing unit 132 may use a weighted average in which the values of the pixels above and below are given a larger weight. In this way, the image processing unit 132 can supplement the value of the pixel at the data blank position N more appropriately.
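For illustration only, the following minimal sketch implements the neighbour-averaging just described; the specific weight values are assumptions.

```python
# Minimal sketch: fill a blank overhead-view pixel from its four neighbours, biasing
# left/right neighbours for features running across the travel direction (e.g. a stop
# line) and up/down neighbours for features running along it (e.g. a lane line).
import numpy as np

def fill_blank(view: np.ndarray, r: int, c: int, orientation: str = "plain") -> float:
    """view: 2-D array of pixel values; (r, c): the blank position to fill."""
    up, down = view[r - 1, c], view[r + 1, c]
    left, right = view[r, c - 1], view[r, c + 1]
    if orientation == "across":    # e.g. temporary stop line: weight left/right more
        w = (1.0, 1.0, 3.0, 3.0)
    elif orientation == "along":   # e.g. road edge / dividing line: weight up/down more
        w = (3.0, 3.0, 1.0, 1.0)
    else:                          # plain arithmetic mean of the four neighbours
        w = (1.0, 1.0, 1.0, 1.0)
    vals = np.array([up, down, left, right], dtype=float)
    return float(np.average(vals, weights=w))
```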
In this way, the image processing unit 132 converts the image acquired from the camera 10 as shown in fig. 4 (a) into one overhead view in which the data at the data blank positions N has been supplemented, as shown in fig. 4 (c). In fig. 4 (b), the data blank position N is not drawn at the scale of a single pixel, but since the image processing unit 132 performs the conversion into the overhead view pixel by pixel, one data blank position N actually corresponds to the range of one pixel.
Then, the image processing unit 132 extracts the edge points included in the overhead view shown in fig. 4 (c) and generates overhead view data as shown in fig. 4 (d). At this time, the image processing unit 132 extracts the pixels of the edge points based on the difference between the data of each pixel included in the overhead view and the data of the other pixels located around that pixel. More specifically, for a given pixel, the luminance value of that pixel is compared with the luminance values of the surrounding pixels; if the difference in luminance is greater than a predetermined threshold value, the pixel is determined to be an edge-point pixel, and if the difference is equal to or less than the threshold value, it is determined not to be an edge-point pixel. In other words, the image processing unit 132 extracts the high-frequency components of the data of the pixels included in the overhead view as edge points. The image processing unit 132 generates overhead view data including the edge points E as shown in fig. 4 (d).
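For illustration only, the following minimal sketch implements the luminance-difference test just described; the threshold value is an assumption.

```python
# Minimal sketch: mark a pixel as an edge point if its luminance differs from any of
# its four neighbours by more than a threshold (i.e. extract high-frequency components).
import numpy as np

def extract_edge_points(gray: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """gray: 2-D luminance array. Returns a boolean mask of edge-point pixels."""
    g = gray.astype(float)
    edges = np.zeros_like(g, dtype=bool)
    diffs = [np.abs(g[1:-1, 1:-1] - g[:-2, 1:-1]),   # up
             np.abs(g[1:-1, 1:-1] - g[2:, 1:-1]),    # down
             np.abs(g[1:-1, 1:-1] - g[1:-1, :-2]),   # left
             np.abs(g[1:-1, 1:-1] - g[1:-1, 2:])]    # right
    edges[1:-1, 1:-1] = np.maximum.reduce(diffs) > threshold
    return edges
```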
The image processing unit 132 outputs the generated overhead view data to the action plan generation unit 140. The action plan generation unit 140 then generates a target track as shown in fig. 4 (e) based on the overhead view data output from the image processing unit 132. Fig. 4 (e) shows an example in which the target track is generated by arranging the track points K that the host vehicle M should reach in sequence so that the host vehicle M turns right at the next intersection.
[ supplementary example of occlusion region ]
Fig. 5 is a diagram schematically showing an example of processing for supplementing a road surface region corresponding to a shielded region (shielded region) included in overhead view data by the image processing unit 132 according to the first embodiment. Fig. 5 shows an example of a case where a blocking area exists at each corner portion of an intersection where the own vehicle M enters next. In the following description, it is assumed that the values of the pixels at the data blank position N in the overhead view converted by the image processing unit 132 have been already supplemented.
When the image obtained from the camera 10 is converted into the overhead view having the shielding regions O1 to O4 as shown in fig. 5 (b), the image processing unit 132 determines the shielding region included in the overhead view and shielding the road surface region. The image processing unit 132 supplements the data of each road surface area shielded by the shielded areas O1 to O4 with the second map information 62 including information corresponding to each of the identified shielded areas O1 to O4. More specifically, the image processing unit 132 obtains the position of the lane, the boundary of the lane, and the temporary stop line in the shielded area by applying the information on the center of the lane, the boundary of the lane, the temporary stop line, and the like included in the second map information 62 to the corresponding road surface area other than the shielded area, and supplements the obtained position information to the data of the road surface area. Fig. 5 (C) shows an example of the case where the data of the supplementary areas C1 to C4 are supplemented using the second map information 62.
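For illustration only, the following minimal sketch shows one way map-derived lane geometry could be drawn into the occluded cells of the overhead view; the coordinate transform, map schema, and function names are assumptions and not the embodiment's actual interface.

```python
# Minimal sketch: paint lane-boundary polylines taken from high-precision map
# information into the overhead view, but only where the occlusion mask indicates
# that the camera could not see the road surface.
import cv2
import numpy as np

def supplement_occlusion(overhead: np.ndarray, occluded: np.ndarray,
                         lane_boundaries_px: list[np.ndarray]) -> np.ndarray:
    """occluded: boolean mask; lane_boundaries_px: polylines already in overhead pixel coords."""
    drawn = np.zeros(overhead.shape[:2], dtype=np.uint8)
    for poly in lane_boundaries_px:
        cv2.polylines(drawn, [poly.astype(np.int32)], isClosed=False, color=255, thickness=2)
    out = overhead.copy()
    out[(drawn > 0) & occluded] = 255     # map-derived boundaries only inside occlusions
    return out
```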
The image processing unit 132 may select the occlusion region to be supplemented. For example, the image processing unit 132 may not supplement the shielding region that shields the road surface region that is not the traveling path of the host vehicle M traveling in the future.
In this way, the image processing unit 132 converts the image acquired from the camera 10 as shown in fig. 5 (a) into one overhead view in which no occlusion region exists as shown in fig. 5 (d). In fig. 5 (b) and 5 (C), the ranges of the blocked areas O1 to O4 and the complementary areas C1 to C4 are not shown as ranges corresponding to the range of 1 pixel, but the image processing unit 132 performs processing for each blocked area for each pixel, so the ranges of the blocked areas and the complementary areas are also ranges determined for each pixel.
As described above, according to the vehicle system 1 of the first embodiment, the image processing unit 132 uses the plurality of time-series images acquired from the camera 10 and the second map information 62 to generate overhead view data that more clearly indicates the far region in the depth direction, the boundaries of the road surface area within shielded areas, road dividing lines, temporary stop lines, and the like included in the overhead view obtained by converting the images, and outputs the overhead view data to the action plan generation unit 140. The action plan generation unit 140 can therefore generate a target track that is more accurate in regions far in the depth direction than a target track generated directly from the image captured by the camera 10. That is, according to the vehicle system 1 of the first embodiment, the travel path along which the host vehicle M travels can be generated with higher accuracy.
< second embodiment >
Hereinafter, a second embodiment will be described. In the second embodiment, the order of processing for generating overhead view data is different from that of the first embodiment. More specifically, in the second embodiment, the road surface area and the lane within the road surface area identified from the transformed overhead view in the first embodiment are identified from the image acquired from the camera 10.
Fig. 6 is a flowchart showing an example of a series of processing flows executed by the image processing unit 132 and the action plan generation unit 140 according to the second embodiment. In the following description, the image processing unit 132 generates overhead view data based on 1 image captured by the camera 10, and the action plan generating unit 140 generates a target orbit based on the overhead view data.
The image processing unit 132 acquires an image captured by the camera 10 (step S200). Next, the image processing unit 132 extracts edge points included in the acquired image (step S210). Then, the image processing unit 132 identifies the road surface region and the lane within the road surface region included in the overhead view data based on the extracted edge points (step S220).
Next, the image processing unit 132 generates overhead view data indicating the identified road surface region and the lanes in the road surface region as an overhead view (step S230). Then, the image processing unit 132 determines whether or not there is a blocking area in the road surface area indicated by the generated overhead view data (step S240). When it is determined in step S240 that there is no occlusion region in the road surface region indicated by the overhead view data, the image processing unit 132 advances the process to step S270. That is, the image processing unit 132 outputs the generated overhead view data to the action plan generation unit 140.
On the other hand, when it is determined in step S240 that the shielding region exists in the road surface region indicated by the overhead view data, the image processing unit 132 determines the shielding region that shields the road surface region in the overhead view data (step S250). Then, the image processing unit 132 supplements the data of the covered road surface area with the second map information 62 including the specified covered area (step S260). The image processing unit 132 outputs overhead view data, which is obtained by supplementing the road surface area data, to the action plan generating unit 140.
Then, the action plan generation unit 140 generates a target orbit based on the overhead view data output from the image processing unit 132 (step S270). Thus, the second control unit 160 controls the travel of the host vehicle M to sequentially pass through the target track generated by the action plan generation unit 140.
[ Generation example of bird's-eye view data and target orbit ]
Fig. 7 is a diagram schematically showing an example of a series of processes performed by the image processing unit 132 and the action plan generation unit 140 according to the second embodiment. Fig. 7 shows an example of each process when the image processing unit 132 generates overhead view data and an example of the target track generated by the action plan generation unit 140. The supplementation of occlusion regions by the image processing unit 132 of the second embodiment is the same as in the first embodiment and can be understood in the same way; its description is therefore omitted below.
When acquiring an image as shown in fig. 7 (a) from the camera 10, the image processing unit 132 extracts a pixel of an edge point EP as shown in fig. 7 (b) based on a difference between data of each pixel included in the acquired image and data of other pixels located in the periphery of the pixel, that is, data of the periphery of the pixel.
In the second embodiment, too, the image processing unit 132 acquires a plurality of images from the camera 10 in time series. Therefore, the image processing unit 132 extracts pixels of the edge point EP from each of the acquired plurality of images. The processing of extracting the pixels of the edge point EP here may be performed by replacing the overhead view in the processing of extracting the pixels of the edge point E by the image processing unit 132 of the first embodiment with the image acquired from the camera 10.
Then, the image processing unit 132 generates overhead view data indicating the positions of the extracted edge points EP in one virtual overhead view as shown in fig. 7 (c). At this time, the image processing unit 132 generates overhead view data shown in the overhead view by arranging the positions of the edge points EP extracted from the respective images at corresponding positions in the virtual overhead view. Thus, in the overhead view data, the edge point EP is also arranged in a distant area (for example, an upper area in fig. 7 (c)). In this way, the image processing unit 132 generates overhead view data as shown in fig. 7 (d) in which the edge points EP extracted from the plurality of images of the time series acquired from the camera 10 are arranged.
It is also conceivable that, even when overhead view data is generated by extracting the edge points EP from the plurality of images acquired in time series from the camera 10, edge points EP representing the road surface area and all the lanes within it are not necessarily arranged everywhere in the overhead view data, and positions where no edge point EP is arranged remain, that is, the same state as the data blank positions N in the first embodiment. In this case, as shown in fig. 7 (c), the image processing unit 132 may supplement the edge point EP at a data blank position N based on its surrounding data, that is, the other edge points EP located around the data blank position N. In the second embodiment, however, an edge point EP only indicates a position such as a boundary of the road surface area, a road dividing line, or a temporary stop line. Therefore, instead of taking the arithmetic mean of the values of the four surrounding pixels as in the first embodiment, the data blank position N may simply be regarded as an edge point EP, and supplemented as such, if any of the positions above, below, to the left, or to the right of it is an edge point EP.
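For illustration only, the following minimal sketch implements the simpler fill rule just described for the binary edge-point map of the second embodiment.

```python
# Minimal sketch: a blank cell in the edge-point overhead map is treated as an edge
# point if any of its four neighbours is one (no value averaging is needed).
import numpy as np

def fill_blank_edge_points(edge_map: np.ndarray, blank: np.ndarray) -> np.ndarray:
    """edge_map: boolean edge-point map; blank: boolean mask of cells with no data."""
    padded = np.pad(edge_map, 1, mode="constant", constant_values=False)
    neighbour_is_edge = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                         padded[1:-1, :-2] | padded[1:-1, 2:])
    out = edge_map.copy()
    out[blank & neighbour_is_edge] = True
    return out
```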
In this way, the image processing unit 132 extracts an edge point EP from the image acquired from the camera 10 as shown in fig. 7 (a), and supplements the edge point EP at the data blank position N, thereby generating overhead view data shown in one overhead view as shown in fig. 7 (d). In fig. 7 (c), the range of the data space position N is not shown as a range corresponding to the range of 1 pixel, that is, the range of 1 edge point EP, but the image processing unit 132 generates overhead view data shown in the overhead view for each edge point EP, so the range of 1 data space position N is also a range corresponding to the range of 1 edge point EP, that is, the range of 1 pixel.
The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140. Thus, the action plan generation unit 140 generates a target trajectory as shown in fig. 7 (e) based on the overhead view data output from the image processing unit 132. Fig. 7 (e) also shows an example of a case where the target track is generated by arranging the track points K to be sequentially reached by the own vehicle M in order to turn the own vehicle M right at the next intersection.
As described above, according to the vehicle system 1 of the second embodiment, the image processing unit 132 uses the plurality of time-series images acquired from the camera 10 and the second map information 62 to generate overhead view data that more clearly indicates the far region in the depth direction, the boundaries of the road surface area within shielded areas, road dividing lines, temporary stop lines, and the like included in the overhead view obtained by converting the images, and outputs the overhead view data to the action plan generation unit 140. The action plan generation unit 140 can therefore generate a target track that is more accurate in regions far in the depth direction than a target track generated directly from the image captured by the camera 10. That is, according to the vehicle system 1 of the second embodiment, the travel path along which the host vehicle M travels can be generated with higher accuracy.
[ hardware Structure ]
Fig. 8 is a diagram showing an example of a hardware configuration of the automatic driving control device 100 according to the embodiment. As shown in the figure, the automatic driving control device 100 is configured such that a communication controller 100-1, a CPU 100-2, a RAM (Random Access Memory) 100-3 used as a working memory, a ROM (Read Only Memory) 100-4 storing a boot program and the like, a storage device 100-5 such as a flash memory or an HDD (Hard Disk Drive), a drive device 100-6, and the like are connected to each other via an internal bus or a dedicated communication line. The communication controller 100-1 communicates with components other than the automatic driving control device 100. A program 100-5a executed by the CPU 100-2 is stored in the storage device 100-5. The program is loaded into the RAM 100-3 by a DMA (Direct Memory Access) controller (not shown) or the like and executed by the CPU 100-2. In this way, some or all of the first control unit 120 and the second control unit 160, and more specifically the image processing unit 132 and the action plan generation unit 140, are realized.
The embodiment described above can be expressed as follows.
A vehicle system is provided with:
a storage device storing a program; and
a hardware processor is provided with a processor that,
the hardware processor executes a program stored in the storage device to perform the following processing:
Acquiring an image of a traveling direction of the vehicle from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control of the vehicle based on speed control and steering control based on the recognition result;
generating overhead view data based on the image; and
a track for causing the vehicle to travel in the future is generated based on the overhead view data, and the vehicle is caused to travel with the generated track.
The specific embodiments of the present invention have been described above using the embodiments, but the present invention is not limited to such embodiments, and various modifications and substitutions can be made without departing from the scope of the present invention.

Claims (7)

1. A vehicle system, wherein,
the vehicle system includes:
an image acquisition device that acquires a plurality of images of a traveling direction of a vehicle in time series;
an identification unit that identifies a surrounding environment of the vehicle; and
a driving control unit that performs driving control of the vehicle based on speed control and steering control based on a recognition result of the recognition unit,
the identification unit includes an image processing unit that uses the plurality of images to generate one piece of overhead view data in which a road surface area captured in the images is represented as virtually viewed from directly above,
The driving control unit generates a track for causing the vehicle to travel in the future based on the overhead view data, and causes the vehicle to travel with the generated track,
the image processing unit uses data of a vicinity area in front of the vehicle in the plurality of images for alignment, and arranges data of each pixel constituting the plurality of images at a corresponding position in a virtual overhead view, thereby generating data of a distant area in the overhead view data based on data of a pixel of a distant area in the plurality of images.
2. The vehicle system of claim 1, wherein,
the image processing section extracts edge points based on differences between the respective data included in the overhead view data and peripheral data,
the recognition unit recognizes a lane included in the overhead view data based on the edge points extracted by the image processing unit.
3. The vehicle system of claim 1, wherein,
the image processing unit extracts edge points based on a difference between data included in the image and surrounding data, and specifies lanes in the overhead view data using the extracted edge points.
4. The vehicle system of claim 1, wherein,
the image processing unit identifies a masking region where the road surface included in the overhead view data is masked by an object,
supplementing the determined mask area with pixel data around the mask area in the overhead view data,
generating a track for the vehicle to travel in the future using the supplemented overhead view, and causing the vehicle to travel with the generated track.
5. The vehicle system of claim 4, wherein,
the image processing unit supplements data representing the road surface area of the overhead view data using map information stored separately when the mask area is supplemented.
6. A control method of a vehicle system, wherein,
the control method of the vehicle system causes a computer of the vehicle system to perform the following processing:
acquiring a plurality of images of a traveling direction of the vehicle in time series from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control of the vehicle by speed control and steering control based on the recognition result;
generating, using the plurality of images, a single piece of overhead view data represented as a map in which the road surface area captured in the images is virtually observed from directly above;
generating a track along which the vehicle is to travel in the future based on the overhead view data, and causing the vehicle to travel along the generated track; and
when generating the overhead view data, data of a near area in front of the vehicle in the plurality of images is used for alignment, and the data of each pixel constituting the plurality of images is arranged at a corresponding position in a virtual overhead view, whereby the data of a distant area in the overhead view data is generated based on the data of pixels of the distant area in the plurality of images acquired in time series by the image acquisition device.
7. A storage medium storing a program, wherein,
the program causes a computer of a vehicle system to perform the following processing:
acquiring a plurality of images of a traveling direction of the vehicle in time series from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control of the vehicle by speed control and steering control based on the recognition result;
generating, using the plurality of images, a single piece of overhead view data represented as a map in which the road surface area captured in the images is virtually observed from directly above;
generating a track along which the vehicle is to travel in the future based on the overhead view data, and causing the vehicle to travel along the generated track; and
when generating the overhead view data, data of a near area in front of the vehicle in the plurality of images is used for alignment, and the data of each pixel constituting the plurality of images is arranged at a corresponding position in a virtual overhead view, whereby the data of a distant area in the overhead view data is generated based on the data of pixels of the distant area in the plurality of images acquired in time series by the image acquisition device.
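As an informal illustration of the lane recognition and mask-region supplementation recited in claims 2 to 5, the following Python sketch extracts edge points from differences between pixels of the overhead view and their neighbours, fits crude lane boundaries from those points, and supplements a road-surface region hidden by an object using the surrounding pixel data. OpenCV and NumPy are assumed, all names and thresholds are hypothetical, and simple inpainting merely stands in for the claimed supplementation; the variant of claim 5, which draws on separately stored map information, is not shown.

# Illustrative sketch (editor-added), not the claimed implementation.
# Operates on a single overhead view image produced as sketched earlier.
import cv2
import numpy as np

def extract_edge_points(overhead_gray, grad_thresh=40):
    # Edge points are taken where a pixel differs strongly from its
    # horizontal neighbours; lane markings appear as bright, roughly
    # vertical stripes in the overhead view.
    grad_x = cv2.Sobel(overhead_gray, cv2.CV_32F, 1, 0, ksize=3)
    ys, xs = np.where(np.abs(grad_x) > grad_thresh)
    return np.stack([xs, ys], axis=1)          # (N, 2) array of (x, y)

def fit_lane_boundaries(edge_points, view_width, n_groups=2):
    # Crude grouping by horizontal position, then a straight-line fit
    # x = a*y + b per group as the lane-boundary estimate.
    boundaries = []
    bins = np.linspace(0, view_width, n_groups + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        pts = edge_points[(edge_points[:, 0] >= lo) & (edge_points[:, 0] < hi)]
        if len(pts) >= 2:
            a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
            boundaries.append((a, b))
    return boundaries

def supplement_masked_road(overhead_bgr, occlusion_mask):
    # Fill the region hidden by an object from the surrounding road-surface
    # pixels; occlusion_mask is an 8-bit single-channel mask of that region.
    return cv2.inpaint(overhead_bgr, occlusion_mask, 5, cv2.INPAINT_TELEA)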
CN202010193773.1A 2019-03-20 2020-03-18 Vehicle system, control method for vehicle system, and storage medium Active CN111717201B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019052471A JP7160730B2 (en) 2019-03-20 2019-03-20 VEHICLE SYSTEM, VEHICLE SYSTEM CONTROL METHOD, AND PROGRAM
JP2019-052471 2019-03-20

Publications (2)

Publication Number Publication Date
CN111717201A CN111717201A (en) 2020-09-29
CN111717201B true CN111717201B (en) 2024-04-02

Family

ID=72559202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193773.1A Active CN111717201B (en) 2019-03-20 2020-03-18 Vehicle system, control method for vehicle system, and storage medium

Country Status (2)

Country Link
JP (1) JP7160730B2 (en)
CN (1) CN111717201B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011030140A (en) * 2009-07-29 2011-02-10 Hitachi Automotive Systems Ltd External world recognition device
CN102470832A (en) * 2009-07-15 2012-05-23 日产自动车株式会社 Vehicle-driving support system and vehicle-driving support method
CN103299617A (en) * 2011-01-11 2013-09-11 爱信精机株式会社 Image generating device
CN104660977A (en) * 2013-11-15 2015-05-27 铃木株式会社 Bird's eye view image generating device
CN104859542A (en) * 2015-05-26 2015-08-26 寅家电子科技(上海)有限公司 Vehicle monitoring system and vehicle monitoring processing method
CN105416394A (en) * 2014-09-12 2016-03-23 爱信精机株式会社 Control device and control method for vehicle
CN107274719A (en) * 2016-03-31 2017-10-20 株式会社斯巴鲁 Display device
WO2018179275A1 (en) * 2017-03-30 2018-10-04 本田技研工業株式会社 Vehicle control system, vehicle control method, and vehicle control program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5634046B2 (en) * 2009-09-25 2014-12-03 クラリオン株式会社 Sensor controller, navigation device, and sensor control method
JP2012195793A (en) * 2011-03-17 2012-10-11 Clarion Co Ltd Vehicle periphery monitoring device
WO2018230530A1 (en) * 2017-06-16 2018-12-20 本田技研工業株式会社 Vehicle control system, vehicle control method, and program
DE112017007964T5 (en) * 2017-08-25 2020-06-10 Honda Motor Co., Ltd. Display control device, display control method and program

Also Published As

Publication number Publication date
CN111717201A (en) 2020-09-29
JP7160730B2 (en) 2022-10-25
JP2020154708A (en) 2020-09-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant