CN101082502A - Parking assist method and parking assist apparatus - Google Patents
Parking assist method and parking assist apparatus
- Publication number
- CN101082502A (Application CN200710097353A)
- Authority
- CN
- China
- Prior art keywords
- mentioned
- vehicle
- image
- line
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Traffic Control Systems (AREA)
Abstract
In a control device of a parking assist system that outputs to a display a composite image (52) of the vehicle's surroundings, formed as a bird's-eye view from image data, turning guide lines (Rv) based on the steering angle of the vehicle and straight-ahead guide lines (St), along which the vehicle travels straight after turning in accordance with the turning guide lines (Rv), are output to the display together with the composite image (52).
Description
Technical field
The present invention relates to a parking assist method and a parking assist apparatus.
Background technology
Conventionally, in-vehicle devices that display images captured by an onboard camera on a display have been known. Such a device receives a video signal from an onboard camera mounted at the rear end of the vehicle, and outputs a peripheral image based on that signal, together with guide lines, to a display arranged near the driver's seat.
In addition, Patent Document 1 describes an image processing apparatus that, during a parking operation, accumulates image data input from an onboard camera, performs image processing using the accumulated image data, and displays a bird's-eye view image looking down on the area around the vehicle.
Furthermore, Patent Document 2 proposes an apparatus that not only displays a bird's-eye view image, but also superimposes on it an expected trajectory corresponding to the steering angle at that time and a target trajectory for reaching the parking frame.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2002-373327
Patent Document 2: Japanese Unexamined Patent Application Publication No. 2004-114879
However, when an expected trajectory 71 is superimposed together with an own-vehicle image 72 on a peripheral image 70 as shown in Fig. 22, it does serve as a target for steering-wheel operation, but the driver can only see the deviation between the parking target area 73 and the course taken when reversing (or advancing) at the current steering angle, and cannot tell what concrete steering-wheel operation is needed. That is, since the information on the driving operation required for parking is insufficient, the driver must repeatedly fine-tune the steering angle. On the other hand, an apparatus that displays a target trajectory to the parking frame, as in Patent Document 2, requires the user to manually designate the target parking space in order to calculate the target trajectory from the current position to the target position. Therefore, when the parking space is not entirely contained within the camera's field of view, the parking space cannot be designated and the target trajectory cannot be calculated.
Summary of the invention
The present invention has been made in view of the above problems, and its object is to provide a parking assist method and a parking assist apparatus that assist the steering-wheel operation during parking.
To solve the above problems, a first aspect of the present invention is a parking assist method in which a peripheral image based on image data obtained from an imaging device provided on a vehicle is output to a display unit, characterized in that a straight-ahead guide marker indicating the position at which straight-ahead travel toward the parking target area should begin is output to the display unit together with the peripheral image.
A second aspect of the present invention is a parking assist apparatus installed in a vehicle, characterized by comprising: an image data obtaining unit that obtains image data from an imaging device provided on the vehicle; an output control unit that outputs a peripheral image based on the image data to a display unit; and a first marker drawing unit that draws, together with the peripheral image, a straight-ahead guide marker indicating the position at which straight-ahead travel toward the parking target area should begin.
A third aspect of the present invention is the parking assist apparatus according to the second aspect, characterized by further comprising a second marker drawing unit that draws, together with the peripheral image, a predicted trajectory marker corresponding to the steering angle of the vehicle.
A fourth aspect of the present invention is the parking assist apparatus according to the second or third aspect, characterized by further comprising a line detecting unit that detects a line dividing the parking target area, wherein the first marker drawing unit calculates a tangent point between a straight line representing the line, or a straight line parallel to it, and an expected course based on the steering angle of the vehicle, and draws the position of the tangent point as the straight-ahead start position.
A fifth aspect of the present invention is the parking assist apparatus according to the fourth aspect, characterized in that the first marker drawing unit displays a straight-ahead course parallel to the line, with the tangent point between the expected course and the straight line representing the line, or the straight line parallel to it, as its base end.
A sixth aspect of the present invention is the parking assist apparatus according to the second or third aspect, characterized by further comprising a line detecting unit for detecting a line dividing the parking target area, wherein the first marker drawing unit calculates a tangent point between a straight line representing the line, or a straight line parallel to it, and an expected course based on the steering angle of the vehicle, calculates from the tangent point the vehicle position at which the vehicle becomes parallel to the line, and draws at that vehicle position an expected-position marker representing the outline of the vehicle.
A seventh aspect of the present invention is the parking assist apparatus according to any one of the fourth to sixth aspects, characterized in that the second marker drawing unit draws, as the predicted trajectory marker, the portion of the expected course from the rear of the vehicle to the tangent point between the expected course and the straight line representing the line, or the straight line parallel to it.
An eighth aspect of the present invention is the parking assist apparatus according to any one of the second to seventh aspects, characterized by further comprising an own-vehicle position drawing unit that displays, on the peripheral image, a current position marker indicating the current position of the vehicle.
A ninth aspect of the present invention is the parking assist apparatus according to any one of the second to eighth aspects, characterized by further comprising: an image data storage unit that stores the image data obtained from the imaging device as recorded image data; and an image combining unit that combines the recorded image data with the latest image data to generate composite data showing both the current blind spot and the current imaging region of the imaging device.
A tenth aspect of the present invention is the parking assist apparatus according to any one of the second to ninth aspects, characterized by further comprising an image processing unit that performs image processing on the image data to generate bird's-eye view data looking down on the surroundings of the vehicle, wherein the output control unit displays a bird's-eye view image based on the bird's-eye view data on the display unit, and the first marker drawing unit draws the straight-ahead guide marker together with the bird's-eye view image.
According to the first aspect, since the straight-ahead guide marker is displayed together with the peripheral image, the driver need only turn to the position indicated by the straight-ahead guide marker and thereafter drive straight ahead in accordance with it. The driver's operation can therefore be assisted in parking maneuvers that demand relatively difficult driving operations.
According to the second aspect, the straight-ahead guide marker, by which the vehicle can enter the parking target area through straight-ahead travel, is displayed on the display unit together with the peripheral image. The driver therefore need only turn the vehicle to the straight-ahead guide marker, then straighten the steering wheel and reverse straight from there. The steering-wheel operation can thus be assisted in parking maneuvers that demand driving skill.
According to the third aspect, since the predicted trajectory marker of the vehicle corresponding to the current steering angle is also drawn together with the peripheral image, the trajectory of the vehicle toward the parking target area can be grasped more easily.
According to the fourth aspect, since the straight-ahead guide marker is drawn at the position of the tangent point between the expected course and the straight line representing the line, or its parallel, the straight-ahead start position corresponding to the current steering angle can be indicated. The driver therefore need only turn the vehicle to the straight-ahead guide marker, then straighten the steering wheel and drive straight. Alternatively, if the straight-ahead start position is judged inappropriate, the position of the straight-ahead guide marker can be changed by turning the steering wheel.
According to the fifth aspect, since a straight-ahead course parallel to the line is drawn from the position at which the vehicle becomes parallel to the line, the deviation between the straight-ahead course and, for example, a white line in the peripheral image can be grasped.
According to the sixth aspect, an expected-position marker showing the vehicle at the moment it has become parallel to the line is drawn. That is, since the position at which straight-ahead travel begins is represented by the outline of the vehicle, the parking start position is intuitively easy to understand.
According to the seventh aspect, the second marker drawing unit draws, as the predicted trajectory marker, the expected course from the rear of the vehicle to the tangent point with the straight line representing the line, or its parallel. Since the predicted trajectory corresponding to the steering angle up to the straight-ahead start position is drawn, the course of the vehicle can be shown in an easily understandable manner.
According to the eighth aspect, since the current position marker is shown in the peripheral image, the relative position of the vehicle and the parking target area shown in the peripheral image can be confirmed easily. In addition, each guide line drawn in the peripheral image becomes intuitively easy to understand.
According to the ninth aspect, the current blind spot of the imaging device can be shown using the latest image data together with the recorded image data captured and accumulated in the past. The parking target area can therefore be confirmed over a wider range.
According to the tenth aspect, since the straight-ahead guide marker is displayed together with the bird's-eye view image, the relative position of the parking target area and the straight-ahead guide marker can be understood easily.
Description of drawings
Fig. 1 is a block diagram of the parking assist system of the present embodiment.
Fig. 2 is an explanatory diagram of the vehicle image.
Fig. 3 is an explanatory diagram of the imaging range of the camera.
Fig. 4(a) is a schematic diagram of image data, and Fig. 4(b) of bird's-eye view data.
Fig. 5 is an explanatory diagram of the process of writing bird's-eye view data.
Fig. 6(a) illustrates white lines on the road surface, Fig. 6(b) the calculation of the guide lines, and Fig. 6(c) the drawn guide lines.
Fig. 7 is an explanatory diagram of the processing steps of the present embodiment.
Fig. 8 is an explanatory diagram of the processing steps of the present embodiment.
Fig. 9 is an explanatory diagram of the processing steps of the present embodiment.
Fig. 10 is an explanatory diagram of the processing steps of the present embodiment.
Fig. 11 is an explanatory diagram of the processing steps of the present embodiment.
Fig. 12 is an explanatory diagram of the composite image.
Fig. 13 is an explanatory diagram of a parking assist image.
Fig. 14 is an explanatory diagram of a parking assist image.
Fig. 15 is an explanatory diagram of the drawing process of the straight-ahead guide frame.
Fig. 16 is an explanatory diagram of the parking assist image of the second embodiment.
Fig. 17 illustrates other examples of parking assistance: (a) guide lines with a third line segment drawn, (b) guide lines with the second line segment omitted, and (c) assistance by voice output.
Fig. 18 is an explanatory diagram of the drawing process of guide lines of another example.
Fig. 19 is an explanatory diagram of guide lines of another example.
Fig. 20(a) shows guide lines with the expected axle position omitted, and Fig. 20(b) guide lines with a modified straight-ahead guide frame.
Fig. 21 is an explanatory diagram of guide lines of another example.
Fig. 22 is an explanatory diagram of conventional guide lines.
In the figures: 1 — parking assist system; 2 — control device as parking assist apparatus; 8 — display as display unit; 14 — image data input section as image data obtaining unit; 15 — image processor as image processing unit, output control unit, first marker drawing unit, second marker drawing unit, line detecting unit, own-vehicle position drawing unit, and image combining unit; 25 — camera as imaging device; 30 — vehicle image as current position marker; 50, 68 — recorded image as peripheral image and bird's-eye view image; 51, 67 — current image as peripheral image and bird's-eye view image; 52, 66 — composite image as peripheral image and bird's-eye view image; 100 — white line as the line; C — vehicle; F — straight-ahead guide frame as straight-ahead guide marker and expected-position marker; F1 — expected axle position as straight-ahead start position; G — image data; G1 — bird's-eye view data as recorded image data; G3 — composite data; P1, P2 — tangent points; Rv — turning guide line as predicted trajectory marker; St — straight-ahead guide line as straight-ahead guide marker; St1 — first line segment as straight-ahead start position; St2 — second line segment as straight-ahead course; Tr — expected course; Z — imaging area.
Embodiment
(the 1st embodiment)
Hereinafter, one embodiment of the parking assist apparatus of the present invention will be described with reference to Figs. 1 to 14. Fig. 1 is a block diagram explaining the configuration of the parking assist system 1.
As shown in Fig. 1, the parking assist system 1 has a control device 2 serving as the parking assist apparatus. The control device 2 has a control section 3, a main memory 4, and a ROM 5. The control section 3 has a CPU and the like (not shown), and performs the main control of various processes in accordance with programs stored in the ROM 5, such as a parking assist program. The main memory 4 temporarily stores the calculation results of the control section 3, and holds various variables, flags, and the like used for parking assistance.
The ROM 5 stores vehicle image data 5a. The vehicle image data 5a are data for outputting an image of the vehicle C (see Fig. 3) in which the parking assist system 1 is installed to the display 8 serving as the display unit. When the vehicle image data 5a are output to the display 8, a vehicle image 30 serving as the current position marker is displayed as shown in Fig. 2.
The control device 2 also has a speech processor 11. The speech processor 11 has a memory (not shown) storing voice files, a digital/analog converter, and the like, and uses the voice files to output guidance voice or warnings from a speaker 12 provided in the parking assist system 1.
The control device 2 further has a vehicle-side interface section (hereinafter, vehicle-side I/F section 13). Through this vehicle-side I/F section 13, the control section 3 receives vehicle speed pulses VP from a vehicle speed sensor 20 provided on the vehicle C, and counts the number of pulses. The control section 3 also receives a bearing detection signal GRP from a gyroscope 21 through the vehicle-side I/F section 13, and keeps the current bearing stored as a variable in the main memory 4, updating it as it changes.
The control section 3 also receives a shift signal SPP from a neutral start switch 22 of the vehicle C through the vehicle-side I/F section 13, and updates the shift position variable stored in the main memory 4. Furthermore, the control section 3 receives a steering sensor signal STP from a steering angle sensor 23 through the vehicle-side I/F section 13, and updates the current steering angle of the vehicle C stored in the main memory 4 based on this steering sensor signal STP.
When a shift signal SPP indicating reverse is input, the control section 3 sets the vehicle position at that time as the reference position. Then, based on the vehicle speed pulses VP and the steering sensor signal STP, it calculates the relative coordinates and the relative steering angle with respect to the reference position.
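The patent does not give the relative-coordinate calculation in closed form; a minimal dead-reckoning sketch under a bicycle-model assumption might look as follows. The wheelbase value and all names are illustrative, not from the patent.

```python
import math

def update_pose(x, y, heading, delta_d, steer_rad, wheelbase=2.7):
    """Advance the vehicle pose by one distance increment delta_d (m)
    using a simple bicycle model; steer_rad is the road-wheel angle."""
    if abs(steer_rad) < 1e-9:            # straight-line motion
        x += delta_d * math.cos(heading)
        y += delta_d * math.sin(heading)
    else:                                 # arc of radius wheelbase / tan(steer)
        r = wheelbase / math.tan(steer_rad)
        dtheta = delta_d / r
        x += r * (math.sin(heading + dtheta) - math.sin(heading))
        y -= r * (math.cos(heading + dtheta) - math.cos(heading))
        heading += dtheta
    return x, y, heading
```

Summing one such update per counted vehicle speed pulse yields the relative coordinates and relative heading from the reference position.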
The control device 2 also has an image data input section 14 serving as the image data obtaining unit. Under the control of the control section 3, the image data input section 14 drives and controls a rear-monitoring camera (hereinafter simply camera 25), serving as the imaging device provided on the vehicle C, and obtains image data G successively.
As shown in Fig. 3, the camera 25 is mounted at the rear end of the vehicle C, for example on the back door, with its optical axis directed downward. The camera 25 is a digital camera that captures color images, and has an optical mechanism composed of a wide-angle lens, mirrors, and the like, and a CCD image sensor (none shown). The camera 25 has a field of view of, for example, about 140 degrees, and images a region extending several meters behind the vehicle, including the rear end of the vehicle C, as the imaging area Z. Under the control of the control section 3, the image data input section 14 obtains image data G that have been analog/digital-converted by the camera 25, and temporarily stores them in a video memory 17 provided in the control device 2 as the image data storage unit. Alternatively, the camera 25 may output a video signal to the image data input section 14, which then performs the analog/digital conversion on the video signal to generate the image data G.
As shown in Fig. 1, the control device 2 also has the image processor 15, serving as the image processing unit, output control unit, first marker drawing unit, second marker drawing unit, line detecting unit, own-vehicle position drawing unit, and image combining unit. Each time the vehicle C has reversed by the image recording distance D1 (100 mm in the present embodiment) from the reference position, the image processor 15 receives image data G, shown schematically in Fig. 4(a), through the image data input section 14. By applying a known geometric transformation to each set of image data G, it generates bird's-eye view data G1, serving as recorded image data, as shown in Fig. 4(b). Although the viewpoint of the camera 25 is obliquely above the road surface, the bird's-eye view data G1 are converted into image data as if the road surface were observed from vertically above.
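The "known geometric transformation" from the oblique camera view to a top-down view can, under a flat-road assumption, be modeled as a planar homography. A sketch of the coordinate mapping (the matrix values used in the test are placeholders, not calibration data from the patent):

```python
import numpy as np

def warp_points(H, pts):
    """Map an (N, 2) array of pixel coordinates through the 3x3
    homography H relating the camera image plane to the ground plane
    seen from above."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide
```

In practice H would be derived from the camera's mounting height, tilt, and lens parameters; applying the same mapping to every pixel of G yields the bird's-eye view data G1.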
The image processor 15 then receives from the control section 3 coordinate data representing the relative coordinates from the reference position, and steering angle data representing the relative steering angle with the steering angle at the reference position as the datum. As shown in Fig. 5, it writes the bird's-eye view data G1 into the region of storage area A of the video memory 17 indicated by the solid line, corresponding to these coordinate data and steering angle data. After the bird's-eye view data G1 of the reference position have been written, when bird's-eye view data G1 of a position further back from the reference position are written, they are added to the region indicated by the dashed line, according to the relative coordinates and relative steering angle with respect to the reference position. In regions where sets of bird's-eye view data G1 overlap, the pixel values of the more recently written data are selected. Thus, the image based on earlier-written bird's-eye view data G1 and the image based on later-written data form a continuous image. In this way, each time the vehicle C reverses by the image recording distance D1, the bird's-eye view data G1 captured at each shooting point are accumulated in the video memory 17.
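The overlap rule described above — newer bird's-eye data overwrite older pixels where patches intersect — can be sketched as follows. The buffer layout and the zero-as-invalid convention are assumptions for illustration only.

```python
import numpy as np

def paste_patch(canvas, patch, top, left, invalid=0):
    """Write a bird's-eye patch into the accumulation buffer at
    (top, left). Pixels marked `invalid` in the new patch leave the
    canvas unchanged; valid newer pixels overwrite earlier data."""
    h, w = patch.shape
    region = canvas[top:top + h, left:left + w]
    mask = patch != invalid
    region[mask] = patch[mask]
    return canvas
```

Repeated calls, one per recording point, build up the continuous accumulated image in storage area A.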
When at least a predetermined number of sets of bird's-eye view data G1 have been accumulated in the video memory 17, the image processor 15 reads out the sets of bird's-eye view data G1 of a predetermined region. It also obtains from the camera 25 the latest image data G reflecting the current surroundings of the vehicle (hereinafter, current image data G2). After converting the current image data G2 into a bird's-eye view, it combines the bird's-eye view data G1 with the converted current image data G2, and displays the resulting composite data G3 on the display 8.
The image processor 15 also draws guide lines on the image obtained by combining the bird's-eye view data G1 and the current image data G2. Specifically, the image processor 15 first performs detection processing for the white lines dividing the parking target area. For this processing, the present embodiment uses known edge extraction. For example, the composite data G3 of a color image are converted to gray-scale data, and a threshold such as the average or median brightness is set. When the brightness gradient between pixels is less than the threshold, no edge is detected; when it is greater than or equal to the threshold, an edge is detected. Alternatively, the brightness of each pixel of the bird's-eye view data G1 converted to gray-scale data is compared with the pixel values of its adjacent pixels, and pixels with large differences are detected as edges. For example, as shown in Fig. 6(a), when the white lines 100 serving as the lines behind the vehicle are captured by the camera 25 and the sets of bird's-eye view data G1 are generated, edge detection on the composite data G3 built from these bird's-eye view data extracts the white lines 100 as edges EG, as shown in Fig. 6(b).
Since discontinuous edges EG are sometimes detected as a result of edge extraction, the image processor 15 performs, for example, a Hough transform on the edge extraction result, and thereby recognizes the edges EG as straight lines (line segments) in the image coordinate system. The image processor 15 takes the straight line (line segment) obtained in this way as the white line approximation Sg.
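The two steps above — thresholded brightness differences, then reduction of the edge pixels to a straight line — can be sketched as follows. A least-squares fit stands in here for the Hough transform named in the text, and is adequate only when a single line dominates the region; the function name and threshold are assumptions.

```python
import numpy as np

def white_line_approximation(gray, thresh):
    """Detect edge pixels where the brightness difference between
    horizontally adjacent pixels meets the threshold, then fit the
    straight line y = a*x + b through them by least squares."""
    diff = np.abs(np.diff(gray.astype(int), axis=1))
    ys, xs = np.nonzero(diff >= thresh)
    if len(xs) < 2:
        return None                      # no line found
    a, b = np.polyfit(xs, ys, 1)
    return a, b
```

A real Hough transform would additionally separate the pair of white lines bounding the parking space; this sketch only shows the edge test and the line-approximation idea.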
Based on the steering sensor signal STP, the image processor 15 obtains a pair of expected courses Tr of the vehicle C, corresponding to the vehicle width, as formulas in the image coordinate system (Xi, Yj). It then obtains the tangent point between the expected course Tr and the white line approximation Sg. When the expected course Tr is not tangent to the white line approximation Sg, as in Fig. 6(b), it calculates a parallel line Sg1 that is parallel to the white line approximation Sg and tangent to the expected course Tr.
After calculating the coordinates of the tangent points P1 and P2 between the white line approximation Sg or the parallel line Sg1 and the expected courses Tr, the image processor 15 obtains, as shown in Fig. 6(c), a first line segment St1 joining the tangent points P1 and P2, having approximately the same width as the vehicle width, which serves as the straight-ahead start position. The first line segment St1 indicates the position of the rear axle at the moment the vehicle C becomes parallel to the white line approximation Sg when reversing from the current position along the expected course Tr. That is, if the vehicle C turns from the current position to the first line segment St1 along the expected course Tr while maintaining the current steering angle, the rear axle of the vehicle C coincides with the first line segment St1. Therefore, if the vehicle C reverses straight after its rear axle has coincided with the first line segment St1, the vehicle C can be parked parallel to the white lines 100.
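The tangent-point construction can be illustrated in coordinates: at a fixed steering angle the expected course is an arc of a circle, and its tangent point with a white line (or a parallel of it) is the point on the circle where the tangent direction matches the line. A geometric sketch with assumed names, not the patent's own formulas:

```python
import math

def tangent_point(center, radius, line_p, line_q):
    """Return the point on the expected-course circle (center, radius)
    whose tangent is parallel to the line through line_p and line_q,
    choosing the candidate on the side facing the line."""
    dx, dy = line_q[0] - line_p[0], line_q[1] - line_p[1]
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm            # unit normal of the line
    cands = [(center[0] + radius * nx, center[1] + radius * ny),
             (center[0] - radius * nx, center[1] - radius * ny)]
    def dist(p):                              # distance from p to the line
        return abs((p[0] - line_p[0]) * nx + (p[1] - line_p[1]) * ny)
    return min(cands, key=dist)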
After obtaining the first line segment St1 in this way, the image processor 15 obtains second line segments St2, serving as the straight-ahead course: lines parallel to the white line approximation Sg or the parallel line Sg1, extending from the tangent points P1 and P2 with approximately the same length as the vehicle length. The second line segments St2 make it possible to grasp the relative distance between the vehicle body and the white lines 100, and thus to know the lateral clearance on both sides of the vehicle C after it has turned at the current steering angle and then driven straight. The first line segment St1 and the second line segments St2 constitute a straight-ahead guide line St, serving as the straight-ahead guide marker, which provides information about the straight-ahead travel after the turn. The image processor 15 also clips the expected course Tr to obtain a turning guide line Rv, serving as the predicted trajectory marker, whose length extends from the rear end of the vehicle to the first line segment St1. The turning guide line Rv and the straight-ahead guide line St are superimposed on the image based on the composite data G3.
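Given the two tangent points, assembling the markers is straightforward; a sketch under assumed coordinate conventions, with segment lengths as the text states them (St1 spans roughly the vehicle width between P1 and P2, each St2 runs roughly one vehicle length along the white-line direction):

```python
import math

def straight_guide(p1, p2, line_dir, vehicle_len):
    """Build the straight-ahead guide line St: St1 joins the tangent
    points P1 and P2; from each tangent point a segment St2 of one
    vehicle length extends along the white-line direction."""
    norm = math.hypot(*line_dir)
    ux, uy = line_dir[0] / norm, line_dir[1] / norm
    st2 = [(p, (p[0] + vehicle_len * ux, p[1] + vehicle_len * uy))
           for p in (p1, p2)]
    return {"st1": (p1, p2), "st2": st2}
```

The turning guide line Rv would be the arc of the expected course clipped between the vehicle's rear end and St1; its drawing is a rendering detail and is omitted here.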
Next, the processing steps of the present embodiment will be described with reference to Figs. 7 to 11. As shown in Fig. 7, the control section 3 first waits for input of a start trigger in accordance with the parking assist program stored in the ROM 5 (step S1). In the present embodiment, the start trigger is an input signal based on the turning on of the ignition switch (not shown). When the start trigger has been input, under the control of the control section 3, a system start management process (step S2), a data accumulation process (step S3), a combining process (step S4), and a drawing process (step S5) are performed. The control section 3 then determines whether an end trigger has been input (step S6), and if not, returns to step S2. In the present embodiment, the end trigger is an input signal based on the ignition switch being turned off or the parking assist system 1 being shut down.
The system start management process S2 will now be described with reference to Fig. 8. First, the control section 3 receives the shift signal SPP through the vehicle-side I/F section 13 and updates the shift position variable stored in the main memory 4 (step S2-1). The control section 3 then determines whether the shift position is reverse (step S2-2). When it determines that the shift position is reverse (YES in step S2-2), it sets the current position of the vehicle C as the reference position.
The control section 3 also determines whether a system start flag stored in the main memory 4 is OFF (step S2-3). The system start flag indicates whether the parking assist mode has been started. When the control section 3 determines that the system start flag is ON (NO in step S2-3), it proceeds to the data accumulation process described below (step S3).
When the shift position has just become reverse, the control section 3 determines that the system start flag is OFF (YES in step S2-3). In this case, the control section 3 updates the system start flag to ON (step S2-4). It then initializes the backward movement distance ΔDM, a variable stored in the main memory 4, to 0 (step S2-5). In addition, under the control of the control section 3, the image processor 15 receives image data G from the camera 25 through the image data input section 14 (step S2-6), converts the image data G into a bird's-eye view as from Fig. 4(a) to Fig. 4(b) (step S2-7), and generates bird's-eye view data G1.
Through the control section 3, the image processor 15 stores the bird's-eye view data G1 in the data region of the video memory 17 corresponding to the reference position (step S2-8). Once the bird's-eye view data G1 of the reference position have been stored in this way, the process proceeds to the data accumulation process described below.
The data accumulation process will be described with reference to Fig. 9. First, the control section 3 receives the vehicle signals consisting of the vehicle speed pulses VP and the steering sensor signal STP (step S3-1). As described above, the control section 3 counts the vehicle speed pulses VP input as the vehicle C reverses. Based on this pulse count, it updates the movement amount Δd stored in the main memory 4 (step S3-2). It then updates the backward movement distance ΔDM by adding the movement amount Δd to the ΔDM initialized in step S2-5 (step S3-3). Once ΔDM has been updated, the movement amount Δd is reset. The control section 3 then determines whether ΔDM is greater than or equal to the image recording distance D1 (step S3-4). In the present embodiment, the image recording distance D1 is set to 100 mm.
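The 100 mm recording trigger of steps S3-2 to S3-4 amounts to a small accumulator; a sketch (units in metres, names assumed):

```python
def should_record(delta_dm, delta_d, d1=0.1):
    """Add the latest movement amount to the backward travel distance
    and report whether it has reached the image recording distance D1
    (100 mm in the embodiment); the distance resets after a recording."""
    delta_dm += delta_d
    if delta_dm >= d1:
        return True, 0.0
    return False, delta_dm
```

Each True result corresponds to one set of bird's-eye view data G1 being written to the video memory 17.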
When the control section 3 determines that the backward movement distance ΔDM is less than the image recording distance D1 (NO in step S3-4), it proceeds to step S6 (see Fig. 7) and determines whether an end trigger has been input. If no end trigger has been input (NO in step S6), the system start management process is repeated (step S2).
On the other hand, when it is determined that the backward movement distance ΔDM is greater than or equal to the image recording distance D1 (YES in step S3-4), the image processor 15 receives image data G for recording (step S3-5). It then converts these image data G into a bird's-eye view, generating bird's-eye view data G1 as shown in Fig. 4(b) (step S3-6). It also receives the coordinate data and steering angle data recorded together with the image data G, and writes the bird's-eye view data G1 into the position of the storage area of the video memory 17 corresponding to these coordinate data and steering angle data (step S3-7). Once the bird's-eye view data G1 have been written to the video memory 17, the data count stored in the main memory 4 is incremented and the backward movement distance ΔDM is initialized to 0 (step S3-8).
The control section 3 then determines, based on the data count stored in the main memory 4, whether a predetermined number of sets of bird's-eye view data G1 have been stored in the video memory 17 (step S3-9). The predetermined number is set, for example, to 10. When the control section 3 determines that the predetermined number of sets of bird's-eye view data G1 have been stored (YES in step S3-9), it sets a displayable flag stored in the main memory 4 to ON (step S3-10), and proceeds to the combining process described below (step S4). The displayable flag indicates whether the composite data G3 using the sets of bird's-eye view data G1 can be generated. When it determines that the predetermined number of sets of bird's-eye view data G1 have not been stored (NO in step S3-9), it proceeds to step S6, determines whether an end trigger has been input, and returns to step S2 if not.
The synthesis process will now be described with reference to Fig. 10. First, the image processor 15 extracts, from the storage area A of the image memory 17 into which the bird's-eye view data G1 have been written, a specified region based on the current coordinates and current steering angle of the vehicle C (step S4-1). In the present embodiment, the pixel values of the bird's-eye view data G1 in the region corresponding to the rear of the current vehicle C and its surroundings are read from the storage area A of the image memory 17.
Once the individual bird's-eye view data G1 have been extracted from the image memory 17, the extracted bird's-eye view data G1 are rotated so as to match the current steering angle (step S4-2).
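Aligning each stored frame with the current heading (step S4-2) is a 2-D rotation of the frame's coordinates about the vehicle origin. A minimal sketch (the angle convention, counterclockwise-positive, is an assumption):

```python
import math

def rotate_point(x, y, angle_rad):
    """Rotate a point about the origin; used here to align a stored
    bird's-eye frame with the vehicle's current heading."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees counterclockwise yields roughly (0, 1).
px, py = rotate_point(1.0, 0.0, math.pi / 2)
```

Applied to a whole frame, the same rotation matrix is used for every pixel coordinate before the frame is placed into the display area.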
Further, the image processor 15 receives current image data G2 from the camera 25 (step S4-3) and generates synthesized data G3 using the individual bird's-eye view data G1 and the current image data G2 (step S4-4). More specifically, as shown in Fig. 12, the rotated bird's-eye view data G1 are arranged in the upper part of the display area Zd of the display 8. The current image data G2 is then subjected to bird's-eye conversion and arranged in the lower part of the display area Zd of the display 8. Thus, a recorded image 50 — a peripheral image based on the individual bird's-eye view data G1 — is displayed in the upper part of the display area Zd. The recorded image 50 shows the rear of the current vehicle C and its surroundings, including the blind spot area of the current camera 25. In the lower part of the display area Zd, a current image 51 — a peripheral image based on the current image data G2 — is displayed. The current image 51 is an image of the imaging area Z of the current camera 25, looking down from above at the road surface contained in the imaging area Z.
Next, the image processor 15 performs the drawing process shown in Fig. 11. First, the image processor 15 performs white line detection processing on the synthesized data G3 as described above (step S5-1). The image processor 15 converts the synthesized data G3 into grayscale data and extracts edges based on the brightness of each pixel. The image processor 15 then applies a Hough transform or the like to the edge extraction result to calculate white line approximation lines Sg as shown in (b) of Fig. 6 (step S5-2). At this time, it is sufficient to detect edges EG only at the front end of each white line 100. If one pair of white line approximation lines Sg cannot be calculated, for example because no edges EG are detected, only the estimated course Tr is drawn on the current image 51 in the present embodiment.
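The edge-plus-Hough pipeline of steps S5-1 and S5-2 can be illustrated with a toy accumulator over edge points. A production system would use a library such as OpenCV (`cv2.Canny`, `cv2.HoughLines`); the minimal voting scheme below only shows the principle, with all parameter values chosen for illustration:

```python
import math

def hough_strongest_line(edge_points, n_theta=180, rho_res=1.0):
    """Vote each edge point into (rho, theta) space and return the
    strongest line in the form rho = x*cos(theta) + y*sin(theta)."""
    votes = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_res), t)            # quantize into a bin
            votes[key] = votes.get(key, 0) + 1
    (rho_idx, t_idx), _ = max(votes.items(), key=lambda kv: kv[1])
    return rho_idx * rho_res, math.pi * t_idx / n_theta

# A vertical "white line" at x = 50: all points vote for theta = 0, rho = 50.
edge_pts = [(50, y) for y in range(0, 100, 5)]
rho, theta = hough_strongest_line(edge_pts)
```

Fitting only the near ends of the lane markings, as the text suggests, keeps the vote count small and the detected line local to the parking slot entrance.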
Further, the control unit 3 receives the steering sensor signal STP through the vehicle-side I/F unit 13 (step S5-3) and calculates the estimated course Tr (step S5-4).
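The patent does not detail how Tr is derived from the steering signal; a common approach is a kinematic bicycle model, in which the rear axle follows a circle of radius R = L / tan(δ) for wheelbase L and steering angle δ. A sketch under that assumption (the parameter values are illustrative):

```python
import math

def turning_radius(wheelbase_m, steer_angle_rad):
    """Rear-axle turning radius from a kinematic bicycle model.
    Returns math.inf for straight driving (zero steering angle)."""
    if abs(steer_angle_rad) < 1e-9:
        return math.inf
    return wheelbase_m / math.tan(steer_angle_rad)

def course_points(radius_m, arc_len_m, n=20):
    """Sample points along the circular estimated course Tr."""
    pts = []
    for i in range(n + 1):
        phi = (arc_len_m / radius_m) * i / n
        pts.append((radius_m * math.sin(phi), radius_m * (1 - math.cos(phi))))
    return pts

R = turning_radius(2.7, math.radians(20))  # assumed 2.7 m wheelbase, 20 deg steer
path = course_points(R, 5.0)               # 5 m of reversing arc
```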
Then, as shown in (b) of Fig. 6, the tangent points P1, P2 of the estimated course Tr and the white line approximation lines Sg are calculated (step S5-5). As described above, when the estimated course Tr is not tangent to a white line approximation line Sg, a parallel line Sg1 that is parallel to the white line approximation line Sg and tangent to the estimated course Tr is calculated, and the tangent points P1, P2 of the estimated course Tr and the parallel line Sg1 are obtained.
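Geometrically, step S5-5 finds where a circle (the arc of Tr) touches a line parallel to the white line: for a circle of radius R centered at C, the tangent point with a tangent line of unit normal n̂ lies at C − R·n̂, and shifting a non-tangent line along its normal to distance R from the center yields the parallel line Sg1 of the text. A sketch (function and variable names are illustrative):

```python
def tangent_point(center, radius, line_point, line_dir):
    """Tangent point of the circle (center, radius) with the line through
    line_point along unit direction line_dir, after shifting the line
    parallel to itself so that it is exactly tangent (the Sg1 step)."""
    cx, cy = center
    # Unit normal of the line, oriented from the line toward the center.
    nx, ny = -line_dir[1], line_dir[0]
    side = (cx - line_point[0]) * nx + (cy - line_point[1]) * ny
    if side < 0:
        nx, ny = -nx, -ny
    # Step from the center back toward the line by exactly R.
    return (cx - radius * nx, cy - radius * ny)

# Circle of radius 2 centered at (0, 2); the x-axis is tangent at the origin.
p = tangent_point((0.0, 2.0), 2.0, (5.0, 0.0), (1.0, 0.0))
```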
Once the tangent points P1, P2 have been calculated, turning guide lines Rv ending at the tangent points P1, P2 are calculated (step S5-6). At this time, as shown in (c) of Fig. 6, the image processor 15 cuts off the portion of the estimated course Tr extending from the rear end of the vehicle to the tangent points P1, P2, and uses these line segments as the turning guide lines Rv.
The image processor 15 also calculates a straight driving guide line St (step S5-7). That is, as described above, the tangent points P1, P2 are connected to obtain a 1st line segment St1 as a straight line. In addition, with the tangent points P1, P2 as base points, a pair of 2nd line segments St2 parallel to the white line approximation lines Sg or the parallel lines Sg1 are calculated.
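The construction of step S5-7 — connect P1 and P2 to form St1, then run a 2nd segment parallel to the white line from each tangent point — is simple vector arithmetic. A sketch (the segment length is an illustrative parameter, not from the patent):

```python
def straight_guide(p1, p2, white_dir, length):
    """Return the 1st segment St1 (p1-p2) and the two 2nd segments St2,
    each starting at a tangent point and running parallel to the white
    line direction white_dir (a unit vector) for `length` units."""
    st1 = (p1, p2)
    st2 = [(p, (p[0] + length * white_dir[0],
                p[1] + length * white_dir[1])) for p in (p1, p2)]
    return st1, st2

# Tangent points 2 m apart; white lines run along the +y direction.
st1, st2 = straight_guide((0.0, 0.0), (2.0, 0.0), (0.0, 1.0), 3.0)
```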
Then, the image processor 15 outputs the synthesized data G3 stored in the VRAM 16 to the display 8 (step S5-8). Further, the vehicle image data 5a is output at a specified position, and a synthesized image 52 serving as a peripheral image, as shown in Fig. 12, is displayed on the display 8 (step S5-9).
Then, the turning guide lines Rv and the straight driving guide line St are output onto the synthesized image 52 (step S5-10). As a result, a parking assist image 55 as shown in Fig. 13 is displayed on the display 8. When the driver confirms the deviation between the straight driving guide line St and the white lines in the synthesized image 52 and judges that turning at the current steering angle is acceptable, the driver turns the vehicle C with the steering wheel held in its current state until the rear axle of the vehicle C reaches the 1st line segment St1. On the other hand, when the driver judges that the deviation between the straight driving guide line St and the white lines in the synthesized image 52 is large, or that there is a large offset in the space between each white line and each 2nd line segment St2, the driver turns the steering wheel to change the current steering angle. In this case, each of the above processes is restarted and the turning guide lines Rv and the straight driving guide line St are redrawn.
Then, while confirming the relative positions of the straight driving guide line St and the white lines, the driver turns with the steering angle held constant; when the rear axle of the vehicle C reaches the 1st line segment St1, the vehicle C is nearly parallel to the white lines. At this time, a parking assist image 55 as shown in Fig. 14 is displayed on the display 8. The turning guide lines Rv are no longer displayed, and the straight driving guide line St is superimposed on the current image 51. At this moment the driver straightens the steering wheel and drives the vehicle C straight backward, so that the vehicle can be parked at a suitable position in the target parking area.
When parking is complete, the driver operates the shift lever to change the gear from reverse to another position such as park or neutral. As a result, in step S2-2 of Fig. 8, the control unit 3 judges that the gear is not reverse (NO in step S2-2) and judges whether the system start flag is ON (step S2-9). When the parking assist mode has already ended, the system start flag is OFF (NO in step S2-9), so it is judged whether an end trigger signal has been input (step S6). On the other hand, immediately after parking is completed, the system start flag is set to ON. The control unit 3 therefore judges that the system start flag is ON (YES in step S2-9) and resets the variables stored in the main memory 4 (step S2-10). It then sets the system start flag to OFF (step S2-11). Then, when an end trigger signal has been input (YES in step S6), the process ends.
According to the 1st embodiment, the following effects can be obtained.
(1) In the 1st embodiment, when the synthesized image 52 looking down on the vehicle surroundings is displayed on the display 8, the turning guide lines Rv corresponding to the current steering angle and the straight driving guide line St indicating the position at which straight driving toward the target parking area begins are also displayed. Therefore, the driver only needs to turn the vehicle C to the straight driving start position indicated by the straight driving guide line St, then straighten the steering wheel and drive straight from that position. Thus, the steering operation of the vehicle C can be assisted in a parking operation that requires driving skill.
(2) In the 1st embodiment, the position at which the rear axle of the vehicle C becomes parallel to the white lines 100 when the current steering angle is maintained is indicated by the 1st line segment St1 as the straight driving start position. The driver can therefore determine the approximate location of the straight driving start position. In addition, if the straight driving start position is inappropriate, the driver can change the steering angle by operating the steering wheel.
(3) In the 1st embodiment, the 2nd line segments St2 parallel to the white lines 100 are drawn from the 1st line segment St1, at which the vehicle C becomes parallel to the white lines 100. The driver can therefore grasp the deviation between the 2nd line segments St2 and the white lines in the synthesized image 52, as well as the offset of the space between the 2nd line segments St2 and each white line. Thus, when the deviation between the 2nd line segments St2 and the white lines is large, the driver can use the steering wheel to change the turning course.
(4) In the 1st embodiment, since the vehicle image 30 is displayed in the synthesized image 52, the relative positions of the target parking area shown in the synthesized image 52 and the vehicle C can be confirmed easily. In addition, the guide lines Rv, St drawn on the current image 51 — that is, the turning path and the straight driving path of the vehicle C — can be understood intuitively by the driver.
(5) In the 1st embodiment, the parking assist image 55 displays the current image 51 based on the current image data G2 and the recorded image 50 based on the previously captured and accumulated bird's-eye view data G1. Since an image looking down on the vehicle surroundings is displayed, the distances between the vehicle C and the white lines on both sides of the vehicle can be grasped easily. Furthermore, since the recorded image 50 and the current image 51 are both displayed, the target parking area can be confirmed over a wider range.
(The 2nd embodiment)
Next, a 2nd embodiment embodying the present invention will be described with reference to Fig. 15 and Fig. 16. Since the 2nd embodiment differs from the 1st embodiment only in the straight driving guide line St, a detailed description of the identical parts is omitted.
In the drawing process for the turning guide lines Rv and the straight driving guide line St of the 2nd embodiment (see Fig. 11), steps S5-1 to S5-6 are performed in the same way as in the 1st embodiment. Then, in step S5-7, the image processor 15 draws not the straight driving guide line St but a straight driving guide frame F as an estimated position indicator. The straight driving guide frame F represents the outline of the vehicle at the position where the vehicle C becomes parallel to the white line approximation lines Sg.
The straight driving guide frame F will be described with reference to Fig. 15. The straight driving guide frame F consists of an estimated axle position F1, an estimated rear end position F2, an estimated front end position F3, and estimated side positions F4, F5. The estimated axle position F1 indicates the estimated position of the rear axle at the moment the vehicle C becomes parallel to the white line approximation lines Sg, when the vehicle C reverses from its current position along the estimated course Tr (see Fig. 6) without the steering wheel being turned. Here it is drawn with a solid line, but it may also be drawn with a dashed line. The estimated axle position F1 is drawn at the same position as the 1st line segment St1 of the 1st embodiment.
The estimated rear end position F2 and the estimated front end position F3 are lines drawn by estimating, respectively, the positions of the rear end and the front end of the vehicle C at the moment the vehicle C becomes parallel to the white line approximation lines Sg after reversing along the estimated course Tr (see Fig. 6). The estimated side positions F4, F5 represent the sides of the vehicle C when its rear wheels, rear end, or front end are at the estimated axle position F1, the estimated rear end position F2, or the estimated front end position F3. That is, the estimated axle position F1 is drawn in the direction orthogonal to the white line approximation lines Sg, and the estimated side positions F4, F5 are drawn parallel to the white line approximation lines Sg.
When the coordinates of the straight driving guide frame F have been calculated from the tangent points P1, P2 (step S5-7), the image processor 15 outputs the synthesized data G3 to the display 8 as in the 1st embodiment (step S5-8). The image processor 15 also outputs the vehicle image data 5a to a specified position in the display area Zd of the display 8 (step S5-9).
Once the synthesized data G3 and the vehicle image data 5a have been output, the image processor 15 outputs, as guide indicators, the straight driving guide frame F whose coordinates were calculated in step S5-7 and the turning guide lines Rv (step S5-9).
As a result, a parking assist image 65 as shown in Fig. 16 is displayed on the display 8. The parking assist image 65 displays a synthesized image 66 based on the synthesized data G3. Since the range from which the bird's-eye view data G1 are extracted is expanded in the present embodiment, the synthesized image 66 depicts a larger area around the vehicle than in the 1st embodiment. The synthesized image 66 displays a current image 67 based on the current image data G2 and a recorded image 68 based on the previously captured bird's-eye view data G1 accumulated in the image memory 17. In addition, a vehicle image 30 representing the entire vehicle is superimposed on the recorded image 68.
Furthermore, the straight driving guide frame F and the turning guide lines Rv are superimposed on the synthesized image 66. Since the straight driving guide frame F is drawn at roughly the same size as the vehicle C, the user can intuitively grasp that the straight driving guide frame F is the vehicle position after turning along the turning guide lines Rv. In addition, since it can be confirmed on screen that the straight driving guide frame F is parallel to the white lines drawn in the synthesized image 66, the driver can intuitively understand that it is sufficient to drive straight backward from the straight driving guide frame F. When the guide indicators have been drawn (step S5-10), the process proceeds to step S6 and determines whether an end trigger signal has been input.
According to the 2nd embodiment, the following effect can be obtained.
(6) In the 2nd embodiment, the straight driving guide frame F representing the vehicle C when it becomes parallel to the white lines 100 is drawn as the straight driving start position, based on the tangent points P1, P2 of the white line approximation lines Sg or the parallel lines Sg1 with the estimated course Tr. That is, since the position at which straight driving begins is drawn as a frame representing the outline of the vehicle C, the driver can intuitively understand the meaning of the straight driving guide frame F.
The above embodiments may also be modified as follows.
In the above embodiments, the bird's-eye view data G1 are written into the corresponding region of the storage area A of the image memory 17, but the bird's-eye view data G1 may instead be stored in association with the coordinate data and steering angle data. The image data G may also be stored in the image memory 17 without bird's-eye conversion.
In the above embodiments, the image recording distance D1 is set to 100 mm, but it may be set to another distance.
In the above embodiments, the direction detection signal GRP based on the gyroscope 21 may be used instead of the steering sensor signal STP for data accumulation and image processing.
The synthesized image 52 may also be generated by a synthesis method other than the above steps. For example, the image data input from the camera 25 may be left unaccumulated, and the bird's-eye view data obtained by bird's-eye conversion may be output directly to the display 8. Alternatively, even on an image that has not undergone bird's-eye conversion, the turning guide lines Rv and the straight driving guide line St may be drawn on a peripheral image of the vehicle surroundings, such as an image of the area ahead of or behind the vehicle C. Further, when the image data G is displayed on the display 8 without bird's-eye conversion, the edge extraction for calculating the turning guide lines Rv and the straight driving guide line St may also be performed directly on the image data G without bird's-eye conversion.
In the above embodiments, the displayable flag is set to ON when the specified number of bird's-eye view data G1 have been accumulated in the image memory 17, but the displayable flag may instead be set to ON when bird's-eye view data G1 have been written into the entire predetermined data region. Alternatively, the displayable flag may be set to ON, or the synthesized image 52 (recorded image 50) displayed, even when only one frame of bird's-eye view data G1 has been accumulated in the image memory 17. In this case, regions where no bird's-eye view data G1 have been accumulated may display an image indicating non-display, such as a shaded panel.
The parking assist images 55, 65 may also be screens that omit the recorded images 50, 68 and display the guide lines Rv, St and the current image 51.
In the 1st embodiment, the turning guide lines Rv and the straight driving guide line St may be guide lines other than those described above. For example, as shown in (a) of Fig. 17, a 3rd line segment St3 parallel to the 1st line segment St1 may be drawn at the ends of the 2nd line segments St2 to indicate the size of the vehicle body. Alternatively, as shown in (b) of Fig. 17, only the 1st line segment St1 of the straight driving guide line St may be drawn, omitting the 2nd line segments St2. In this case, an indicator 60 representing the straight driving start position may be displayed near the 1st line segment St1. Alternatively, as shown in (c) of Fig. 17, a voice message 61 such as "Please drive straight" may be output from the speaker 12 when, for example, the vehicle C approaches the 1st line segment St1.
In the above embodiments, the guide lines Rv, St are drawn only when one pair of white line approximation lines Sg is calculated, but they may also be drawn when only one is calculated. For example, as shown in Fig. 18, the image processor 15 obtains the tangent point P1 of one white line approximation line Sg (or a line parallel to the white line approximation line Sg) with the estimated course Tr. Then, a perpendicular V to the white line approximation line Sg (or the parallel line) passing through the tangent point P1 is obtained, and the intersection point P3 of the perpendicular V with the other estimated course Tr is obtained. A 4th line segment St4 connecting the tangent point P1 and the intersection point P3 is then obtained. The 4th line segment St4 represents the straight driving start position, like the 1st line segment St1 in the 1st embodiment, and represents the estimated axle position F1 in the 2nd embodiment. Further, with the tangent point P1 and the intersection point P3 as base points, the 2nd line segments St2 parallel to the white line approximation lines Sg, or the straight driving guide frame F, are drawn (not shown in Fig. 18).
In the 2nd embodiment, as in the parking assist image 69 shown in Fig. 19, a vehicle image 30 representing only the rear of the vehicle C may be displayed. Further, as shown in Fig. 19, the synthesized image 69a need not contain the entire straight driving guide frame F, as long as it contains at least a part of it. In this case, it can still be judged whether the straight driving guide frame F is parallel to the white lines in the synthesized image 69a, so the meaning of the straight driving guide frame F can be understood easily.
In the 2nd embodiment, as shown in (a) of Fig. 20, the straight driving guide frame F may be formed as a four-sided frame omitting the estimated axle position F1. Further, as shown in (b) of Fig. 20, the straight driving guide frame F may consist of corner positions F6-F10, which indicate the corners of the vehicle C when the vehicle C becomes parallel to the white line approximation lines Sg, and the estimated axle position F1.
In the 2nd embodiment, as shown in Fig. 21, estimated straight driving routes Ls may be drawn on the reversing direction (traveling direction) side of the straight driving guide frame F. The estimated straight driving routes Ls are drawn on the extensions of the estimated side positions F4, F5, and represent the estimated course when the vehicle reverses from the straight driving guide frame F. The estimated straight driving routes Ls correspond to the 2nd line segments St2 of the 1st embodiment. In this way, the relative distance and relative direction between the vehicle C and the white lines when the vehicle reverses from the straight driving guide frame F can be grasped.
In the above embodiments, the 1st line segment St1 and the estimated axle position F1 are drawn as the rear axle position when the vehicle C becomes parallel to the target parking area. Alternatively, the 1st line segment St1 and the estimated axle position F1 may be drawn as the estimated position of the rear end of the vehicle C. In this case, when the vehicle turns along the turning guide lines Rv and the rear end of the vehicle C coincides with the 1st line segment St1 or the estimated axle position F1, the driver turns the steering wheel and drives straight backward, so that the vehicle C can be parked in the target parking area.
In the above embodiments, the parking assist image 55 is displayed based on the image data G captured by the camera 25 installed at the rear end of the vehicle, but a camera installed on the side or front end of the vehicle may also be used. For example, if a camera installed at the front of the vehicle is used, parking assistance can be provided when the vehicle is parked front-first.
Claims (10)
1. A parking assist method for outputting a peripheral image to a display unit based on image data obtained from an imaging device provided on a vehicle, characterized in that
a straight driving guide indicator representing a position at which straight driving toward a target parking area begins is output to the display unit together with the peripheral image.
2. A parking assist apparatus installed in a vehicle, characterized by comprising:
an image data acquisition unit that obtains image data from an imaging device provided on the vehicle;
an output control unit that outputs a peripheral image based on the image data to a display unit; and
a 1st indicator drawing unit that draws, together with the peripheral image, a straight driving guide indicator representing a position at which straight driving toward a target parking area begins.
3. The parking assist apparatus according to claim 2, characterized by further comprising:
a 2nd indicator drawing unit that draws, together with the peripheral image, a predicted trajectory indicator corresponding to the steering angle of the vehicle.
4. The parking assist apparatus according to claim 2 or 3, characterized by further comprising:
a line detection unit that detects lines dividing the target parking area,
wherein the 1st indicator drawing unit calculates tangent points of a straight line representing the lines, or a straight line parallel to that straight line, with an estimated course based on the steering angle of the vehicle, and draws the positions of the tangent points as the straight driving start position.
5. The parking assist apparatus according to claim 4, characterized in that
the 1st indicator drawing unit displays straight driving courses parallel to the lines, with the tangent points of the estimated course and the straight line representing the lines, or the straight line parallel to it, as base ends.
6. The parking assist apparatus according to claim 2 or 3, characterized by further comprising:
a line detection unit that detects lines dividing the target parking area,
wherein the 1st indicator drawing unit calculates tangent points of a straight line representing the lines, or a straight line parallel to that straight line, with an estimated course based on the steering angle of the vehicle, calculates from the tangent points the vehicle position at which the vehicle becomes parallel to the lines, and draws, at that vehicle position, an estimated position indicator representing the outline of the vehicle.
7. The parking assist apparatus according to any one of claims 4 to 6, characterized in that
the 2nd indicator drawing unit draws, as the predicted trajectory indicator, the portion of the estimated course extending from the rear of the vehicle to the tangent points of the estimated course with the straight line representing the lines or the straight line parallel to it.
8. The parking assist apparatus according to any one of claims 2 to 7, characterized by further comprising:
a vehicle position drawing unit that displays, in the peripheral image, a current position indicator representing the current position of the vehicle.
9. The parking assist apparatus according to any one of claims 2 to 8, characterized by further comprising:
an image data storage unit that stores the image data obtained from the imaging device as recorded image data; and
an image synthesis unit that synthesizes the recorded image data and the latest image data to generate synthesized data showing the blind spot area and the imaging area of the current imaging device.
10. The parking assist apparatus according to any one of claims 2 to 9, characterized by further comprising:
an image processing unit that performs image processing on the image data to generate bird's-eye view data looking down on the vehicle surroundings,
wherein the output control unit displays a bird's-eye view image based on the bird's-eye view data on the display unit, and
the 1st indicator drawing unit draws the straight driving guide indicator together with the bird's-eye view image.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006148698 | 2006-05-29 | ||
JP2006148698 | 2006-05-29 | ||
JP2006313510 | 2006-11-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101082502A | 2007-12-05 |
Family
ID=38912189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200710097353 (Pending) CN101082502A (en) | Parking assist method and parking assist apparatus | 2006-05-29 | 2007-05-11 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101082502A (en) |
CN113997931A (en) * | 2020-07-13 | 2022-02-01 | 佛吉亚歌乐电子有限公司 | Bird's-eye view image generation device, bird's-eye view image generation system, and automatic parking device |
-
2007
- 2007-05-11 CN CN 200710097353 patent/CN101082502A/en active Pending
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103057473B (en) * | 2008-06-10 | 2015-04-29 | 日产自动车株式会社 | Parking assist apparatus and parking assist method |
CN102791517B (en) * | 2010-03-10 | 2015-03-25 | 丰田自动车株式会社 | Vehicle parking assist system, vehicle including the same, and vehicle parking assist method |
CN102791517A (en) * | 2010-03-10 | 2012-11-21 | 丰田自动车株式会社 | Vehicle parking assist system, vehicle including the same, and vehicle parking assist method |
CN102804763B (en) * | 2010-03-26 | 2016-05-04 | 爱信精机株式会社 | Vehicle periphery monitoring apparatus |
US9919650B2 (en) | 2010-03-26 | 2018-03-20 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US10479275B2 (en) | 2010-03-26 | 2019-11-19 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US9308863B2 (en) | 2010-03-26 | 2016-04-12 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
CN105774656B (en) * | 2010-03-26 | 2018-05-22 | 爱信精机株式会社 | Vehicle periphery monitoring apparatus |
CN102804763A (en) * | 2010-03-26 | 2012-11-28 | 爱信精机株式会社 | Vehicle periphery monitoring device |
CN105774656A (en) * | 2010-03-26 | 2016-07-20 | 爱信精机株式会社 | Vehicle Periphery Monitoring Device |
CN103171552A (en) * | 2011-12-23 | 2013-06-26 | 现代自动车株式会社 | AVM top view based parking support system |
CN103625467A (en) * | 2012-08-28 | 2014-03-12 | 怡利电子工业股份有限公司 | Back-up guiding method of back-up parking command system |
CN103778617A (en) * | 2012-10-23 | 2014-05-07 | 义晶科技股份有限公司 | Moving image processing method and moving image processing system |
CN103778617B (en) * | 2012-10-23 | 2016-08-03 | 义晶科技股份有限公司 | Moving image processing method and moving image processing system |
CN104571101A (en) * | 2013-10-17 | 2015-04-29 | 厦门英拓通讯科技有限公司 | System capable of realizing any position movement of vehicle |
CN105313777A (en) * | 2014-06-11 | 2016-02-10 | 现代摩比斯株式会社 | Parking system of vehicle |
CN105313777B (en) * | 2014-06-11 | 2018-01-23 | 现代摩比斯株式会社 | Vehicle parking system |
CN105262980B (en) * | 2014-07-10 | 2018-10-16 | 现代摩比斯株式会社 | Panoramic view monitoring image system and its working method |
CN105262980A (en) * | 2014-07-10 | 2016-01-20 | 现代摩比斯株式会社 | Around view system and operating method thereof |
CN105539585A (en) * | 2014-10-27 | 2016-05-04 | 爱信精机株式会社 | Parking auxiliary apparatus |
CN105539585B (en) * | 2014-10-27 | 2019-02-12 | 爱信精机株式会社 | Parking aid |
CN113997931A (en) * | 2020-07-13 | 2022-02-01 | 佛吉亚歌乐电子有限公司 | Bird's-eye view image generation device, bird's-eye view image generation system, and automatic parking device |
CN113997931B (en) * | 2020-07-13 | 2024-05-24 | 佛吉亚歌乐电子有限公司 | Overhead image generation device, overhead image generation system, and automatic parking device |
US12033400B2 (en) | 2020-07-13 | 2024-07-09 | Faurecia Clarion Electronics Co., Ltd. | Overhead-view image generation device, overhead-view image generation system, and automatic parking device |
CN112078519A (en) * | 2020-09-09 | 2020-12-15 | 上海仙塔智能科技有限公司 | Vehicle-mounted holographic projector control system and vehicle-mounted holographic projector |
Similar Documents
Publication | Title
---|---
CN101082502A (en) | Parking assist method and parking assist apparatus
EP1862375B1 (en) | Parking assist method and parking assist device
JP4561479B2 (en) | Parking support method and parking support device
US20200307616A1 (en) | Methods and systems for driver assistance
CN101426669B (en) | Parking assistance device and parking assistance method
JP3803021B2 (en) | Driving assistance device
CN101909972B (en) | Vehicle parking assist system and method
US7924171B2 (en) | Parking assist apparatus and method
JP4863791B2 (en) | Vehicle peripheral image generation apparatus and image switching method
JP4815993B2 (en) | Parking support method and parking support device
JP5472026B2 (en) | Parking assistance device
JP2008132881A (en) | Parking support method and parking support apparatus
CN104512337A (en) | Vehicle periphery display device
JP2008302711A (en) | Start support device
CN106379235B (en) | The implementation method and device of aid parking
JP2007237930A (en) | Driving support device
JP6917330B2 (en) | Parking support device
US20140118549A1 (en) | Automated vehicle periphery monitoring apparatus and image displaying method
JP4696691B2 (en) | Parking support method and parking support device
JP2011235677A (en) | Parking support system and navigation device
JP2010012838A (en) | Parking assisting device and parking assisting method
JP4645254B2 (en) | Vehicle periphery visual recognition device
JP6031973B2 (en) | Vehicle acceleration suppression device
JP5041165B2 (en) | Guidance control device, guidance control method, and guidance control program
JP4765362B2 (en) | Vehicle periphery visual recognition device
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C02 | Deemed withdrawal of patent application after publication (patent law 2001) |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20071205