CN109579868A - Method and device for locating objects outside a vehicle, and automobile - Google Patents
- Publication number
- CN109579868A (application number CN201811512069.7A)
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- vehicle
- target
- coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
- G01C21/3623—Destination input or retrieval using a camera or code reader, e.g. for optical or magnetic codes
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Embodiments of the present invention provide a method and device for locating objects outside a vehicle, and an automobile. The method comprises: a processing device obtains a first image of a first area captured by a first camera among multiple cameras; the processing device obtains a second image of a second area captured by a second camera among the multiple cameras, the first camera and the second camera being arranged adjacent to each other; the processing device determines, based on the first image and the second image, whether a first target is present in the intersection region of the first area and the second area; if so, the processing device calculates the position of the first target based on the first image, the second image, and the positions of the first camera and the second camera. The method can locate targets outside the vehicle within the overlapping field of view of adjacent cameras with high positioning accuracy, and can reduce manufacturing and usage costs while improving detection performance.
Description
Technical field
The present invention relates to the technical field of vehicle safety assistance, and in particular to a method and device for locating objects outside a vehicle, and an automobile.
Background art
With the rapid development of road traffic, the traffic accident rate shows an upward trend, and traffic safety has increasingly become a focus of public attention.
Although current vehicle safety assisted-driving technology and on-board perception devices can provide the driver with reference images of the external environment, so that the driver can check the reference images to learn of targets such as people, vehicles, and objects outside the vehicle body, traffic accidents caused by the driver's subjective factors still occur easily.
This is because the image processing performed while producing the reference images involves a series of complex steps that inevitably introduce some distortion into them. The people, vehicles, objects, and other reference targets shown in the reference image therefore differ from the real targets outside the vehicle; if the driver controls the vehicle based only on whether such targets appear in the reference image, misjudgments and resulting traffic accidents are likely.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a method and device for locating objects outside a vehicle, and an automobile, so as to clarify the positional relationship between targets outside the vehicle and the vehicle, and to prevent the driver from making misjudgments, and thereby causing traffic accidents, based only on the reference image.
In a first aspect, an embodiment of the present invention provides a method for locating objects outside a vehicle, applied to a vehicle equipped with a surround-view system, the surround-view system comprising a processing device and multiple cameras, the method comprising:
the processing device obtains a first image of a first area captured by a first camera among the multiple cameras;
the processing device obtains a second image of a second area captured by a second camera among the multiple cameras, wherein the first camera and the second camera are arranged adjacent to each other;
the processing device determines, based on the first image and the second image, whether a first target is present in the intersection region of the first area and the second area;
if so, the processing device calculates the position of the first target based on the first image, the second image, the position of the first camera, and the position of the second camera.
The scheme of this embodiment uses the surround-view system of a vehicle such as an automobile to locate targets outside the vehicle. The multiple cameras of the surround-view system first capture images to obtain information about the surroundings; the processing device of the surround-view system then obtains the first image and the second image captured by adjacent cameras and determines from them whether a first target is present in the overlapping field of view, or intersection region, of those cameras. The first target is a target outside the vehicle, which may be a person, a vehicle, an object, or another target. If a first target is present in the overlapping region, its position is calculated from the first image, the second image, and the positions of the two cameras that captured them. Once the position of the first target is obtained, the location information can be shown on an in-car screen, or occupants can be notified by other means, for example a voice prompt. In this way the surround-view system can accurately locate targets outside the vehicle in the intersection region of adjacent cameras, helping the user obtain their location information. Compared with merely checking a reference image to judge whether people, vehicles, or objects are present outside the vehicle, the scheme of this embodiment provides position information for targets in the overlapping fields of view of the cameras, giving the user a clear understanding of the positional relationship between those targets and the vehicle without having to estimate it, and thus avoiding traffic accidents caused by misjudging that relationship. For drivers with limited driving skill this is especially instructive and can reduce the accident rate.
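The steps described above can be sketched as a minimal flow. The injected callables `grab`, `find_target`, and `triangulate` are hypothetical stand-ins for the patent's acquisition, overlap-detection, and positioning steps; none of these names appear in the original text.

```python
# Illustrative sketch of the claimed method flow, under stated assumptions.
def locate_outside_target(cam_a, cam_b, grab, find_target, triangulate):
    img_a, img_b = grab(cam_a), grab(cam_b)    # obtain first and second images
    if not find_target(img_a, img_b):          # any target in the overlap region?
        return None
    # compute the target position from both images and both camera positions
    return triangulate(img_a, img_b, cam_a, cam_b)
```

The concrete detection and triangulation strategies are left open here, mirroring the patent's own generality.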
With reference to the first aspect, in an optional embodiment of the invention, the first image and the second image are images captured at the same moment.
Obtaining images captured at the same moment avoids the errors introduced by using images from different moments: the same target can be observed from different viewpoints at the same instant, which improves positioning accuracy.
With reference to the first aspect, in an optional embodiment of the invention, the first image and the second image are obtained as follows:
the processing device performs clock synchronization on the images separately captured by the adjacent cameras, obtaining two sets of images captured at the same moment.
In this way, images captured at the same time can be obtained promptly. This avoids the situation where the two cameras capture the same target at different times, causing the target to appear at different positions in the captured images and degrading positioning accuracy.
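The patent does not specify the synchronization mechanism. One minimal sketch, assuming each frame carries a capture timestamp, pairs frames from the two adjacent cameras whose timestamps agree within a tolerance:

```python
def pair_synchronized(frames_a, frames_b, tol=0.010):
    """Pair frames whose capture times differ by at most `tol` seconds.

    frames_a, frames_b: lists of (timestamp, frame), sorted by timestamp.
    Returns a list of (frame_a, frame_b) pairs.
    """
    pairs = []
    i = j = 0
    while i < len(frames_a) and j < len(frames_b):
        ta, fa = frames_a[i]
        tb, fb = frames_b[j]
        if abs(ta - tb) <= tol:
            pairs.append((fa, fb))   # close enough: treat as the same moment
            i += 1
            j += 1
        elif ta < tb:
            i += 1                   # camera A frame too old, advance A
        else:
            j += 1                   # camera B frame too old, advance B
    return pairs
```

Frames that find no partner within the tolerance are simply dropped, which matches the idea of keeping only "two sets of images captured at the same moment".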
With reference to the first aspect, in an optional embodiment of the invention, the processing device calculating the position of the first target based on the first image, the second image, the position of the first camera, and the position of the second camera comprises:
the processing device obtains matched image feature points of the first image and the second image, the matched image feature points identifying the first target;
a triangle is constructed from the matched image feature point, the optical center of the first camera, and the optical center of the second camera;
using the triangle and the actual distance between the optical centers, the position of the matched image feature point in a reference coordinate system is calculated, wherein the reference coordinate system is used to indicate the positions of the first target and of each camera.
The above method provides one way of computing the position of a target outside the vehicle. Matched image feature points of the two captured images are first obtained, for example through feature-point extraction and matching algorithms; a triangle is then built from a matched feature point and the optical centers of the two cameras that captured the images. This triangle serves as the model for the positioning calculation of targets outside the vehicle in the intersection region. Because the matched feature point identifying the target participates in the construction of the triangle as one of its vertices, it suffices to know the positions of some of the vertices in the reference coordinate system (for example, the optical-center positions of the cameras) to determine the position of the matched feature point in that coordinate system. That position may be expressed as coordinates, or as the distance between the point and the coordinate origin. In this way, targets outside the vehicle in the intersection region can be located.
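Under the simplifying assumption of a rectified camera pair (both image planes parallel to the optical-center baseline), the triangle formed by the matched feature point and the two optical centers reduces to the familiar disparity relation; a sketch, with illustrative parameter names:

```python
def triangulate_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a matched feature point from the triangle it forms with the
    two optical centers: Z = f * B / d for a rectified camera pair."""
    disparity = x_left_px - x_right_px   # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity
```

The patent's construction is more general (the surround-view cameras are not rectified), but the same similar-triangle geometry underlies both.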
With reference to the first aspect, in an optional embodiment of the invention, constructing the triangle from the matched image feature point, the optical center of the first camera, and the optical center of the second camera comprises:
using a coordinate transformation method, calculating a first coordinate of the matched image feature point relative to the reference coordinate system, a second coordinate of the optical center of the first camera in the reference coordinate system, and a third coordinate of the optical center of the second camera in the reference coordinate system;
constructing the triangle with the first coordinate, the second coordinate, and the third coordinate as its three vertices.
In this way a triangle associated with the target outside the vehicle is obtained, and if the origin of the reference coordinate system is added as a further vertex, multiple triangles become available. On the one hand, the computational frame corresponding to the current target outside the vehicle, comprising the reference coordinate system and the constructed triangle, can be obtained promptly while the vehicle is travelling; on the other hand, the reference coordinate system provides the reference datum for locating the target, laying the foundation for accurate subsequent calculation.
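A minimal 2D sketch of the coordinate transformation step, assuming each camera's pose in the reference frame is given as a position and a yaw angle (the patent does not fix the parameterization):

```python
import math

def to_reference_frame(point_cam, cam_position, cam_yaw):
    """Rotate a 2D point expressed in a camera's own frame by the camera's
    yaw, then translate by the camera's position in the reference frame."""
    x, y = point_cam
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    px, py = cam_position
    return (c * x - s * y + px, s * x + c * y + py)
```

Applying this transform to the feature point and to each optical center yields the first, second, and third coordinates from which the triangle's vertices are taken.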
With reference to the first aspect, in an optional embodiment of the invention, calculating the position of the matched image feature point in the reference coordinate system comprises:
calculating a target range by triangulation, the target range indicating the distance between the matched image feature point and the origin of the reference coordinate system.
In this way, the relative relationship between the target outside the vehicle and the origin of the reference coordinate system is made explicit; on that basis, further obtaining the relative distance between the target and the vehicle body becomes much easier.
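One concrete form of such triangle ranging, assuming the bearing of the target as seen from each optical center is known, applies the law of sines to the camera-camera-target triangle. This exact formulation is an assumption; the patent only names triangulation in general.

```python
import math

def triangulate_range(baseline, angle_a, angle_b):
    """Distance from camera A to the target, given the optical-center
    baseline and the interior angles (radians) at cameras A and B of the
    camera-camera-target triangle, by the law of sines."""
    gamma = math.pi - angle_a - angle_b   # interior angle at the target vertex
    if gamma <= 0:
        raise ValueError("angles do not form a triangle")
    return baseline * math.sin(angle_b) / math.sin(gamma)
```

With the cameras' positions known in the reference coordinate system, the computed range fixes the target's distance from any chosen origin.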
With reference to the first aspect, in an optional embodiment of the invention, the method further comprises:
obtaining the minimum distance between the first target and the vehicle body structure according to preset structural parameters of the vehicle and the position of the first target in the reference coordinate system.
In this way, the relationship between the body structure and the reference coordinate system can be combined with the target's position in that coordinate system to obtain the minimum distance between the target outside the vehicle and the body structure. Knowing this minimum distance, the user can consciously avoid the target and prevent a traffic accident; for R&D and design users, knowing it also facilitates path planning.
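A sketch of this minimum-distance step, modelling the body structure as an axis-aligned rectangle centered on the reference-frame origin; the structural parameters used here (half-length and half-width) are illustrative assumptions:

```python
def min_distance_to_body(target, half_length, half_width):
    """Minimum distance from a located target point to a rectangular
    vehicle footprint centered at the reference-frame origin."""
    x, y = target
    dx = max(abs(x) - half_length, 0.0)   # overhang beyond the front/rear edge
    dy = max(abs(y) - half_width, 0.0)    # overhang beyond the side edge
    return (dx * dx + dy * dy) ** 0.5     # 0.0 if the point touches the body
```

A real body outline would use the vehicle's actual structural parameters rather than a rectangle, but the clamp-then-norm pattern is the same.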
With reference to the first aspect, in an optional embodiment of the invention, the vehicle further includes a display device, and the method further comprises:
the display device displaying the location information of the first target together with the coordinate information of the reference coordinate system associated with the vehicle.
In this way, the location of the target outside the vehicle is displayed explicitly, together with the reference coordinate system information. Having seen this information, the user knows the target's position precisely without having to judge it. The user learns the target's location directly from the display window rather than from personal driving experience, which lowers the demands placed on the driver and prevents traffic accidents caused by errors of subjective judgment.
With reference to the first aspect, in an optional embodiment of the invention, after the location information of the first target is obtained, the method further comprises: identifying the first target.
In this way, the concrete form of the target outside the vehicle can be recognized and a prompt given, distinguishing whether the target is, for example, a person, an object, or a vehicle. For instance, image feature points can be extracted from the first image and the second image and compared for a match against a preset database to identify what the target specifically is. The database may store feature information of a large number of objects in advance.
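A toy sketch of the comparison against such a preset database, assuming each entry has been reduced to a fixed-length descriptor and matched by nearest Euclidean distance; the patent leaves the matching scheme open, so the threshold and descriptor form are illustrative:

```python
def identify(descriptor, database, threshold=0.25):
    """Return the label of the database entry whose reference descriptor is
    closest to the query descriptor, or None if nothing is close enough."""
    best_label, best_dist = None, float("inf")
    for label, ref in database.items():
        d = sum((a - b) ** 2 for a, b in zip(descriptor, ref)) ** 0.5
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```

Returning None when no entry is close enough corresponds to the failure mode the background section describes: a target absent from the database cannot be identified.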
In a second aspect, an embodiment of the present invention further provides a device for locating objects outside a vehicle, the device comprising:
a first image acquisition module, configured to obtain a first image of a first area captured by a first camera among multiple cameras;
a second image acquisition module, configured to obtain a second image of a second area captured by a second camera among the multiple cameras, wherein the first camera and the second camera are arranged adjacent to each other;
a judgment module, configured to determine, based on the first image and the second image, whether a first target is present in the intersection region of the first area and the second area;
a computing module, configured to calculate the position of the first target based on the first image, the second image, the position of the first camera, and the position of the second camera.
In a third aspect, an embodiment of the present invention further provides an on-board device, comprising:
a memory;
a processor;
wherein the memory stores a program supporting the processor in executing the method provided in the first aspect, and the processor is configured to execute the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides an automobile equipped with a display screen and a surround-view system, the surround-view system comprising a processing device and multiple cameras;
the processing device calculates the positions of targets outside the vehicle from the images captured by the multiple cameras;
the display screen displays the images captured by the multiple cameras and also displays the location information of targets outside the vehicle in the intersection regions, wherein an intersection region denotes the overlapping field of view of any two adjacent cameras among the multiple cameras.
In this way, the user can check directly on the display screen whether obstacle targets are present outside the vehicle and, if so, further view on the screen their location information in the overlapping fields of view of adjacent cameras, instead of having to judge from experience after spotting an obstacle target on the screen.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without creative effort.
Fig. 1 is a block diagram of the on-board device provided by an embodiment of the present invention.
Fig. 2 is a flow chart of the method for locating objects outside a vehicle provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of the layout of the cameras of the surround-view system in one example provided by an embodiment of the present invention.
Fig. 4 is a schematic diagram of the camera layout in another example provided by an embodiment of the present invention.
Fig. 5 is a functional block diagram of the device for locating objects outside a vehicle provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings here, may be arranged and designed in many different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the scope of the claimed invention but merely represents selected embodiments; all other embodiments obtained by those skilled in the art on the basis of these embodiments without creative effort fall within the protection scope of the present invention.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings, so once an item is defined in one drawing it need not be further defined and explained in subsequent drawings. In the description of the present invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Because the reference images that existing on-board perception devices provide to the user may be distorted, and because these devices serve a single function, offering only a reference image of the scene outside the vehicle, a driver who relies too heavily on a distorted reference image, or who misjudges objects outside the vehicle after viewing it (for example, by misestimating their distance), can easily cause a traffic accident. The applicant therefore proposes, after study, a method for locating objects outside a vehicle that takes existing on-board visual perception equipment as its hardware basis and performs positioning from images.
The applicant found through study that a monocular camera first performs target recognition through image matching (various vehicle models, pedestrians, objects, and so on) and then estimates, from the size of the target object in the image, the distance between the target object and the user or between the target object and the vehicle body. Accurate recognition is the first step of accurate distance estimation. Achieving it requires building and continually maintaining a huge database of sample features, and ensuring that the database contains complete feature data for every target to be recognized. For example, on certain special road sections where large animals specifically must be detected, a database of large animals must be built in advance; in some other regions where unconventional vehicles appear, the feature data of those vehicles must first be added to the database. If feature data for a target to be recognized is missing, the system cannot recognize that vehicle, object, or obstacle, and therefore cannot accurately estimate the distance between such targets and the vehicle or between them and the user's own position.
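The monocular estimate described above is the pinhole relation: for an object of known real-world height, distance scales inversely with its height in pixels. A minimal sketch (all values illustrative):

```python
def monocular_distance(focal_px, real_height_m, pixel_height):
    """Pinhole-model distance estimate: an object of real height H metres
    that projects to h pixels at focal length f pixels lies at f * H / h."""
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    return focal_px * real_height_m / pixel_height
```

This makes the dependence on recognition explicit: a wrong object class, hence a wrong assumed real height, corrupts the distance estimate directly.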
Continuing the study, the applicant found that joint ranging using the on-board front-view camera together with another camera at the front of the vehicle (for example, a camera at the front-cover position and a camera at the windshield) improves robustness, but the measurable positions are limited: only obstacle targets in front of the cameras can be measured, and other positions, especially the corner areas of the vehicle (which are often blind spots), are difficult to range. Target positioning using only the on-board front-view system is therefore restricted in measurement position.
If, instead, binocular cameras are mounted directly on the vehicle to obtain depth images for ranging, the installation cost is high; and if multiple directions are to be measured, several additional sets of binocular cameras may be needed, which is complicated and expensive.
In view of the above problems, the applicant, after long consideration, proposes the following embodiments to solve them. The embodiments of the present application are explained in detail below with reference to the drawings. The embodiments of the present invention may also be implemented or applied through other specific embodiments, and the details in this specification may be modified and changed in various ways from different viewpoints without departing from the spirit of this application. In the absence of conflict, the following embodiments and the features within them may be combined with one another.
For ease of understanding, some terms appearing in the embodiments are explained below.
Surround-view system: also known as a panoramic surround-view system, a driver-assistance system usually consisting of multiple cameras arranged around the vehicle body so as to cover the entire field of view around the vehicle. Depending on the type of camera and actual needs, the number of cameras may be 4 or 8. The surround-view system can process the multiple video streams captured by the cameras into a single 360° top view of the vehicle surroundings and display it on an in-car screen, whose content the driver can check. Since a driver-assistance system may include several assistance subsystems (for example, a front-view system and a rear-view system), note that the principle of the surround-view system differs from that of the front-view system, used only for forward assistance, and from that of the rear-view system, used only for reversing assistance: the surround-view system is generally used to provide the user with a panoramic reference image.
However, the current surround-view system merely provides a panoramic reference image and performs no further processing on the reference targets in it. After checking the reference image, the user can only estimate the distance between the displayed targets and himself, perhaps from the size of a target within the whole picture, perhaps by other means; whichever estimation method is used, it depends on the user's subjective judgment, and the error is large.
First embodiment
This embodiment provides an on-board device 100. As shown in Fig. 1, the on-board device 100 includes a device 110 for locating objects outside a vehicle, a memory 120, a storage controller 130, a processor 140, a peripheral interface 150, and an acquisition unit 160. The memory 120, storage controller 130, processor 140, peripheral interface 150, and acquisition unit 160 are electrically connected to one another, directly or indirectly, to enable data transmission and interaction; for example, these elements may be electrically connected through one or more communication buses or signal lines. The device 110 includes at least one software functional module that may be stored in the memory 120 in the form of software or firmware, or built into the operating system (OS) of the on-board device 100. The processor 140 executes the executable modules stored in the memory, such as the software functional modules or computer programs included in the device 110.
Optionally, the memory 120 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). The memory 120 stores a program, and the processor 140 executes the program after receiving an execution instruction; the method performed by the on-board device 100, defined by the processes disclosed in any embodiment of the present invention, may be applied in, or implemented by, the processor 140.
Optionally, the processor 140 may be an integrated circuit chip with signal processing capability. The processor 140 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The processor 140 may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on.
The peripheral interface 150 couples various input/output devices (such as the acquisition unit 160) to the processor 140 and the memory 120. In some embodiments, the peripheral interface 150, the processor 140, and the storage controller 130 may be implemented in a single chip; in other examples, each may be implemented by an independent chip.
In this embodiment, the acquisition unit 160 is used to capture image data. The acquisition unit 160 includes multiple surround-view cameras mounted at different positions on the automobile; the fields of view of any two adjacent surround-view cameras overlap, and in the embodiments of the present invention this overlapping field of view is also called the intersection region. The image captured by each camera of the acquisition unit 160 can be used to form the panorama.
In practical applications, the acquisition unit 160 can serve as the on-board visual perception equipment, with the surround-view cameras installed at different positions on the automobile body; for example, the surround-view cameras may be mounted at different positions in the same horizontal plane to capture images.
Optionally, the on-board device 100 may further include a display unit, which can provide the user with an interactive interface (such as a user interface) or display image data to the user. For example, the display unit may show the images captured by each camera of the acquisition unit 160, show the panorama, and show the location information of targets outside the vehicle; the location information may include the minimum distance between a target and the vehicle body, and the target's coordinates in the reference coordinate system. The display unit may be a liquid crystal display or a touch display.
Those skilled in the art will appreciate that the structure shown in Fig. 1 is only illustrative and does not limit the structure of the on-board device 100. For example, the on-board device 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
Second embodiment
This embodiment provides a method for locating objects outside a vehicle. The method can be applied to a vehicle equipped with a surround-view system comprising a processing device and multiple cameras; any of the cameras may be a wide-angle camera (or fisheye camera). These cameras have short focal lengths and large fields of view; in some examples, the field of view of each camera in the surround-view system may be equal to or greater than 180°. The cameras may be mounted at different positions on the automobile body, and the images captured by the multiple cameras can form a panoramic display. The method of this embodiment performs positioning calculations using the surround-view system, locating targets outside the vehicle in the overlapping fields of view of adjacent cameras without the user having to estimate.
Referring to Fig. 2, a flow chart of the method for locating objects outside a vehicle provided by an embodiment of the present invention, the process shown in Fig. 2 is described in detail below.
Step S210: the processing device obtains a first image of a first area captured by a first camera among the multiple cameras, where the first area may be the region covered by the field of view of the first camera.
Step S220: the processing device obtains a second image of a second area captured by a second camera among the multiple cameras, where the second area may be the region covered by the field of view of the second camera. The first camera and the second camera are arranged adjacent to each other, so the first area and the second area can form an overlapping field of view, or intersection region.
The multiple surround-view cameras in the surround-view system may be mounted at different positions on the vehicle body so as to capture images from multiple viewing angles. For the mounting positions of the cameras, refer to Fig. 3, which illustrates the positions of only four cameras. It will be appreciated that more cameras may be used; for example, eight wide-angle cameras may be provided to capture the images forming the panoramic display picture.
Step S230: the processing device judges, based on the first image and the second image, whether the intersection region of the first area and the second area contains a first target. The first target is an object outside the vehicle, which may be a vehicle, a pedestrian, an animal, or another obstacle. There are many ways to make this judgment; in one embodiment, it suffices to extract matching image feature points from the first image and the second image: if matching feature points can be extracted, an object outside the vehicle exists in the intersection region. During actual driving, such an object in the intersection region is a factor in the vehicle's surroundings that may affect driving safety.
Step S240: if so, the processing device calculates the position of the first target based on the first image, the second image, the position of the first camera, and the position of the second camera.
The position of the object outside the vehicle can be calculated from the matching image feature points obtained from the first and second images together with the positions of the adjacent cameras. For example, the adjacent cameras in the surround-view system may be treated as the two cameras in the binocular range-measurement principle, and the location calculation carried out according to that principle.
It should be noted that the order of steps S210 and S220 should not be construed as limiting the invention, as long as the images of the intersection region are captured by adjacent cameras in the surround-view system.
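The binocular range-measurement principle invoked here can be sketched, for the idealized rectified-stereo case, as follows; the function name and pixel-unit parameters are illustrative assumptions, not taken from the embodiment:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classical rectified-stereo relation: depth Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two optical centers (meters)
    disparity_px -- horizontal pixel shift of the same feature
                    between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: target at infinity or match error")
    return focal_px * baseline_m / disparity_px

# A feature seen 40 px apart by two cameras 0.5 m apart, f = 800 px,
# lies at 800 * 0.5 / 40 = 10 m.
```

The embodiment does not use rectified stereo directly (the wide-angle cameras are not parallel), but the same idea of a fixed baseline between two optical centers underlies the triangulation described later.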
With the above method, the surround-view system of a vehicle such as an automobile is used to locate objects outside the vehicle within the intersection regions of camera fields of view. The multiple cameras of the surround-view system first capture images to obtain information about the vehicle's surroundings; the processing device in the surround-view system then obtains the first and second images captured by adjacent cameras, and judges from them whether an object outside the vehicle is present in the overlapping field of view of the adjacent cameras. If the first target is present in the overlapping region, its position is calculated from the obtained first image, second image, and the positions of the two cameras that captured them. In this way, the surround-view system can accurately locate an object outside the vehicle in the overlapping field of view of adjacent cameras, helping the user obtain the location of objects in the intersection region. Compared with merely inspecting a reference picture to judge whether a person, vehicle, or other object is present outside the vehicle, the scheme of the embodiment of the present invention provides position information for targets in the camera-overlap region, giving the user a clearer understanding of the positional relationship between such objects and the vehicle without having to estimate it, and thereby avoiding traffic accidents caused by misjudging that relationship. For drivers with limited driving skill this guidance is especially valuable and can reduce the accident rate.
For surround-view cameras capable of forming a panorama, the fields of view of adjacent cameras overlap during actual shooting because each camera's field of view is large, so an intersection region can be determined. Without panoramic display technology, these intersection regions are often the driver's blind spots. With panoramic display technology the driver can see a reference picture of the intersection region, but that picture is obtained only after complicated image-processing steps (such as distortion correction, stitching, and de-ghosting) and therefore differs from the real scene, making it difficult to determine the position of a real target from the reference picture. This embodiment therefore obtains two sets of images captured at the same moment by two adjacent surround-view cameras, yielding images of the object in the intersection region from different viewing angles, from which that object can be accurately located.
After the position of the object outside the vehicle has been obtained, it may be displayed on an in-vehicle screen, and other means, such as a voice prompt, may also be used to notify the occupants.
In this embodiment, the first image and the second image in the above steps are captured at the same moment. Obtaining images captured at the same moment avoids the error introduced by using images from different moments: images of the same target from different viewing angles at a single instant are obtained, which improves positioning accuracy.
Although the surround-view cameras capture images of targets in front of, behind, and beside the vehicle, if two cameras capture the same target at different times, the position of the photographed object will differ between the images. To guarantee positioning accuracy, the images captured by the surround-view cameras must be clock-synchronized; only images shot at the same time permit accurate positioning calculation. Clock synchronization can be realized in many ways: for example, the same hardware controller may be used to achieve clock synchronization and obtain images captured at the same moment, or the same acquisition time may be set in software. Those skilled in the art may select a specific clock-synchronization implementation according to actual needs, as long as two sets of images captured at the same moment can be obtained.
In this embodiment, obtaining images captured at the same moment may be implemented as follows: the processing device clock-synchronizes the images respectively captured by the adjacent cameras, thereby obtaining two sets of images captured at the same moment.
Two implementations of clock synchronization are given here.
First, the signals of the multiple cameras in the surround-view system are routed to the same main control unit (or main controller), which may be the processing device; the main control unit coordinates the acquisition time of each camera. In one example, a timestamp is stamped on the image each camera shoots at a given moment; if a camera is found to have dropped a frame at some moment, the images of the remaining (adjacent) cameras at that moment are also discarded, excluded from the location calculation, and images at a common moment are re-acquired. The timestamp may be provided by the SOC (System on Chip) at the ECU (Electronic Control Unit, also known as the car computer) end.
Second, a synchronization signal is provided by an IMU (Inertial Measurement Unit). For example, the IMU issues a camera trigger signal; upon receiving it, the processing device acquires the images of the multiple cameras in the surround-view system. The principle is similar to the first clock-synchronization scheme.
Since the binocular scheme is more complicated than a monocular scheme and involves a larger amount of computation, in one example the processing chip S32V234 may be used to receive the camera signals. The S32V234 has two on-chip MIPI-CSI2 camera interfaces and, in cooperation with external image sensors, supports different synchronization modes. When the two cameras work in master-slave mode, the master camera sends a synchronization signal to the slave camera; when both cameras work in slave mode, the synchronization signal can be generated by the S32V234's internal timer and sent to both cameras simultaneously. The cameras here may be the cameras in the surround-view system. The advantage is that the S32V234 can perform pixel-level processing of the image signal, saving the cost of an external ISP (Image Signal Processor) for the binocular cameras, and its computing resources and bandwidth support real-time processing of two 1080p@30fps image signals, satisfying the quality and consistency requirements of the two image signals with high processing efficiency. It should be noted that this is merely one scheme that satisfies the synchronization requirement and should not be taken as a limitation of the present invention.
With the above method, images captured at the same time can be obtained promptly, avoiding the situation in which two cameras capture the same target at different times so that the photographed object appears at different positions in the captured images, degrading positioning accuracy.
In this embodiment, step S240 may include steps S241 to S243. After the two sets of images (the first image and the second image) captured at the same moment by two adjacent surround-view cameras are obtained, step S241 is executed.
Step S241: the processing device obtains the matching image feature points of the first image and the second image; the matching image feature points identify the first target. Image feature points are first extracted from the first and second images, and the feature points of the two images are then matched to obtain the matching image feature points. Image feature points can identify photographed objects, which include targets both inside and outside the intersection region; the matching feature points obtained after matching the two images identify the targets in the intersection region.
Since the objects captured in the two sets of images may not be exactly the same, if an object outside the vehicle is present in the intersection region, the points of that object that are identical in the two images captured at the same time by adjacent surround-view cameras can be obtained, the matching image feature points determined, and an accurate calculation then performed on the basis of the determined matching points.
To realize step S241, it can be divided into two stages: a feature-point extraction stage and a feature-point matching stage.
The feature-point extraction stage may be implemented by separately extracting the image feature points of the two sets of images (the first image and the second image). An image feature point is a point in an image that has a distinct characteristic, effectively reflects the essential features of the image, and can identify the target object in the image; such points include, but are not limited to, corner points, boundary points, and segmentation points.
Optionally, image feature points can be extracted in many ways. For example, any one or more of ORB (Oriented FAST and Rotated BRIEF, a fast feature-point extraction and description algorithm), SIFT (Scale-Invariant Feature Transform), and SURF (Speeded-Up Robust Features) may be used for feature extraction. The specific extraction method for image feature points should not be construed as a limitation of the present invention.
After the image feature points of the images captured by the surround-view cameras are extracted, the feature-point matching stage matches these points; if an object outside the vehicle exists in the intersection region, the mutually matching image feature points can be found, yielding the points of that object that are identical in the two cameras at the same time. Many specific image-matching methods exist: for example, matching based on the fast Fourier transform, region-correlation matching, or feature-based matching may be used. In this embodiment, an image-matching algorithm based on scale-invariant features may be used to determine the matching image feature points, and the Euclidean distance may be chosen for the matching calculation. It should be noted that this embodiment merely provides one way of determining the feature points of the object outside the vehicle in the intersection region, and this should not be interpreted as a limitation of the present invention.
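As one hedged illustration of the feature-point matching stage, the sketch below performs nearest-neighbor matching of descriptor vectors using the Euclidean distance mentioned above, with Lowe's ratio test to reject ambiguous matches. A real system would feed it ORB/SIFT/SURF descriptors; the function names and the ratio value are assumptions made for illustration:

```python
def euclidean(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return sum((a - b) ** 2 for a, b in zip(d1, d2)) ** 0.5

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test.

    desc_a, desc_b -- lists of descriptor vectors from the two images.
    Returns (i, j) index pairs: descriptor i of image A matched to
    descriptor j of image B. A match is kept only if the best candidate
    is clearly closer than the second best (the ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
        elif len(dists) == 1:
            matches.append((i, dists[0][1]))
    return matches
```

For binary descriptors such as ORB's, the Hamming distance would replace the Euclidean distance, but the matching structure stays the same.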
After the matching image feature points of the intersection region are obtained, step S242 is executed.
Step S242: a triangle is constructed from the matching image feature point, the optical center of the first camera, and the optical center of the second camera.
For example, the triangle may be constructed as follows: using a coordinate transformation method, calculate a first coordinate of the matching image feature point relative to a reference coordinate system, a second coordinate of the optical center of the first camera in the reference coordinate system, and a third coordinate of the optical center of the second camera in the reference coordinate system; then construct a triangle with the first coordinate, the second coordinate, and the third coordinate as its three vertices. The reference coordinate system is a relative coordinate system used to express the positions of the first target and of each camera.
That is, the coordinate of the matching image feature point relative to the reference coordinate system and the coordinates of the optical centers of the first and second cameras in that system are calculated, and a triangle is constructed with the three coordinates as its three vertices. It should be noted that "first", "second", and "third" above are used only for description and distinction and do not imply importance.
The coordinate transformation method conveniently yields the coordinates of the matching image feature point and of the two cameras' optical centers in the reference coordinate system. The reference coordinate system provides a common reference basis for the actual object outside the vehicle and the optical-center positions: with it serving as the positional reference for all objects, positioning only requires knowing the position of each object in that system. The reference coordinate system can be chosen in many ways, and those skilled in the art may select its representation according to actual needs; for example, the center of mass of the automobile may serve as the coordinate origin of the reference coordinate system, or the intersection point of the centers of the multiple surround-view cameras may be used as the origin.
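For illustration only, expressing a point observed in one camera's own frame in such a reference coordinate system amounts to a rigid transform by the camera's mounting pose. The 2-D sketch below assumes a known mounting position and yaw for the camera, neither of which is fixed by the embodiment:

```python
import math

def camera_to_reference(point_cam, cam_pos, cam_yaw):
    """Transform a 2-D point from a camera's local frame into the vehicle
    reference frame, given the camera's mounting position cam_pos
    (already expressed in the reference frame) and its yaw angle in
    radians. All names are illustrative, not taken from the patent."""
    x, y = point_cam
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    # Standard 2-D rotation followed by translation to the mount point.
    return (cam_pos[0] + c * x - s * y,
            cam_pos[1] + s * x + c * y)
```

A 3-D implementation would use a full rotation matrix from the camera's extrinsic calibration, but the idea is the same.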
In the above manner, a triangle related to the object outside the vehicle in the intersection region is obtained; if the coordinate origin of the reference coordinate system is added, multiple triangles are available. On the one hand, the computational frame corresponding to the object outside the vehicle at the current moment can be obtained promptly while the vehicle is traveling, the frame comprising the reference coordinate system and the constructed triangle. On the other hand, the reference coordinate system provides reference data for locating the object outside the vehicle and lays the foundation for the subsequent accurate calculation.
After the triangle is constructed, step S243 is executed.
Step S243: using the triangle and the actual distance between the optical centers, the position of the matching image feature point in the reference coordinate system is calculated. Triangle range-finding may be used in the calculation, for example by applying the cosine, sine, tangent, or cotangent theorems.
The position of the matching image feature point in the reference coordinate system may be expressed as a coordinate, or by its distance from the coordinate origin.
Steps S241 to S243 can be summarized as follows: after the matched image feature points are obtained, a triangle is constructed with the matching point and the actual optical-center positions of the two surround-view cameras as its three vertices, and range-finding is performed to calculate the position, in the world coordinate system, of the object outside the vehicle represented by the matching image feature points. The world coordinate system here is the reference coordinate system, a relative coordinate system used to express the positions of the surround-view cameras and of the object outside the vehicle.
Since the matching image feature points that identify the object outside the vehicle participate in the construction of the triangle as one of its vertices, knowing the positions of certain points of the triangle (for example, the optical centers of the cameras) in the reference coordinate system suffices to determine the position of the matching image feature point in that system. In this way the object outside the vehicle in the intersection region can be located. Because the actual positions of the optical centers are fixed, varying neither with the calculation method nor with the captured images, using them in the location calculation improves robustness, and the positioning accuracy is high.
Once the reference coordinate system is introduced, the body structure parameters can easily be imported. For those skilled in the art, with the reference coordinate system determined, importing the body structure parameters makes the position of every part of the car body in the reference coordinate system known.
For step S243, after the triangle formed by the three vertices in the reference coordinate system is obtained, the target range can be calculated by triangle range-finding, achieving accurate positioning. The target range is the distance between the matching image feature point and the coordinate origin of the reference coordinate system. The relative relationship between the object outside the vehicle and the coordinate origin is thus specified, on which basis it becomes much easier to further obtain the relative distance between the object and the vehicle body.
The above method focuses on determining the position of the object outside the vehicle in the reference coordinate system. This is because, once that position is determined, importing certain preset parameters (for example, parameters related to the body structure) naturally yields the specific position of the object relative to the body structure. Considering that different types of vehicles have different outlines, the embodiment of the present invention places no restriction on the vehicle.
In addition to the above effects, the method of this embodiment has the following advantages.
First, performing location calculation with the surround-view system is a breakthrough idea. Previously, a surround-view system was used only for image display: each of its cameras merely captured images, which were then only subjected to image-processing steps such as stitching and de-ghosting to form the panoramic display picture. The embodiment of the present invention instead uses the surround-view system for target location calculation, which can be completed on the basis of existing hardware without introducing new equipment or additionally mounting binocular cameras. Compared with monocular positioning, the positioning accuracy is high; compared with a front-view or rear-view system, targets in the intersection regions can be located effectively and the measurement range is wide.
Second, by positioning with images captured at the same moment, the positioning result at the current moment is independent of the images captured at the previous moment, which not only improves robustness but also improves the positioning accuracy for blind areas. A blind area here refers to the areas left uncovered when multiple surround-view cameras are not used; once they are used, the blind area becomes the overlapping field of view, i.e., the intersection region, of the cameras.
Third, since each camera of the surround-view system is mounted at a different position on the vehicle, when the same target in the intersection region is perceived, the optical centers of the adjacent surround-view cameras are at different positions and form a triangle together with the photographed target for range-finding. Considering that the actual distance between the optical centers of two adjacent surround-view cameras is fixed, the range-finding accuracy is improved compared with positioning by a single camera or by a front-view camera combined with a camera at the front of the vehicle, and the positioning accuracy for the intersection region is improved in particular.
Fourth, the whole method is easy to implement, relatively low in cost, and has good market application value.
To further illustrate the scheme of the embodiment of the present invention, its principle is explained below in conjunction with a specific layout of the surround-view cameras.
Taking Fig. 3 as an example, "A", "B", "C", and "D" denote the cameras (surround-view cameras) at different positions in the surround-view system, the shaded parts denote the overlapping fields of view, i.e., the intersection regions, of two adjacent surround-view cameras, and "O" denotes the coordinate origin of the reference coordinate system.
The principle of locating an object M outside the vehicle in the intersection region of two cameras is introduced below, taking surround-view cameras A and B as an example.
Referring to Fig. 3, in △AOB, the angle ∠ABO at optical center B is fixed by the mounting geometry and follows from the law of cosines:
cos∠ABO = (L2² + L5² − L1²) / (2 · L2 · L5).
In △ABM, the law of sines gives
L3 / sin∠ABM = L4 / sin∠MAB = L5 / sin∠AMB, with ∠AMB = 180° − ∠MAB − ∠ABM.
Combining the constructed triangles, the expression of the target range is:
L6² = L2² + L4² − 2 · L2 · L4 · cos(∠ABO + ∠ABM),
that is,
L6 = √(L2² + L4² − 2 · L2 · L4 · cos(∠ABO + ∠ABM)).
Meanwhile, the coordinate of the object M in the reference coordinate system is conveniently obtained:
(X, Y) = (L2 − L4 · cos(∠ABO + ∠ABM), L4 · sin(∠ABO + ∠ABM)).
In all of the above expressions, L1 and L2 respectively denote the distances from the optical center of surround-view camera A and the optical center of surround-view camera B to the coordinate origin O; L3 and L4 respectively denote the distances from the object M to the optical center of surround-view camera A and to the optical center of surround-view camera B; L5 denotes the actual distance between the optical centers of surround-view cameras A and B; L6 denotes the required target range, i.e., the distance between the object M and the coordinate origin O; and (X, Y) denotes the coordinate of the object M in the reference coordinate system.
L1, L2, and L5 can be determined from the specific mounting positions of the cameras; since the surround-view cameras are already installed at shooting time, L1, L2, and L5 can be understood as preset parameters. L3 and L4 are obtained when the two surround-view cameras photograph the object M. This is because, after the two cameras capture a frame, the angle between their optical axes and the angles between each optical axis and AM or BM can be determined from the camera structure and optical principles, from which ∠MAB and ∠MBA in △ABM are known; with two angles of △ABM and the side L5 included between them determined, L3 and L4 can be calculated.
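Under the geometry described above (O at the origin, optical center B at distance L2 from O, and the viewing angles obtained from the camera optics), the calculation of L3, L4, L6, and (X, Y) can be sketched as follows; the function signature and angle conventions are assumptions made for illustration:

```python
import math

def locate_target(angle_MAB, angle_ABM, L5, angle_ABO, L2):
    """Triangulate target M from two surround-view cameras A and B.

    angle_MAB, angle_ABM -- angles of triangle ABM at A and B (radians),
                            derived from each camera's optics
    L5                   -- fixed distance between the two optical centers
    angle_ABO, L2        -- geometry of optical center B relative to origin O

    Returns (L3, L4, L6, (X, Y)) following the expressions above."""
    angle_AMB = math.pi - angle_MAB - angle_ABM
    # Law of sines in triangle ABM: L5/sin(AMB) = L3/sin(ABM) = L4/sin(MAB)
    k = L5 / math.sin(angle_AMB)
    L3 = k * math.sin(angle_ABM)   # distance M -> optical center A
    L4 = k * math.sin(angle_MAB)   # distance M -> optical center B
    theta = angle_ABO + angle_ABM  # angle OBM at optical center B
    # Law of cosines in triangle OBM gives the target range L6
    L6 = math.sqrt(L2 ** 2 + L4 ** 2 - 2 * L2 * L4 * math.cos(theta))
    X = L2 - L4 * math.cos(theta)
    Y = L4 * math.sin(theta)
    return L3, L4, L6, (X, Y)
```

Note that L6 equals √(X² + Y²), so the range expression and the coordinate expression are consistent with each other.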
Of course, once the coordinate origin of the reference coordinate system, the positions of the optical centers of the two surround-view cameras in the reference coordinate system, and the positional relationship between the object outside the vehicle and the two optical centers are determined, multiple triangles are formed among these four points, so other calculation methods are available based on the above principle simply by changing the chosen triangle or the chosen angles; for example, △AOM, ∠BAM, and ∠OAB may be chosen for the location calculation. In other embodiments, the cosine formula may be replaced with other triangle range-finding formulas such as the sine, tangent, or cotangent formulas. These adaptive changes are simple replacements made under the same conception as the embodiment of the present invention and should fall within the protection scope of the present invention.
In another example, the layout of the cameras may refer to Fig. 4 (cameras may be installed at A, B, C, D, A', B', C', and D' in Fig. 4). The location calculation for Fig. 4 can still follow the computing principle of the previous embodiment. Fig. 4 merely provides another layout of the surround-view cameras; it will be appreciated that in such cases the intersection regions of the surround-view cameras may lie in front of, behind, or beside the vehicle, and those skilled in the art may select the layout of the surround-view cameras according to the actual situation, so as to locate objects outside the vehicle in different regions.
After the position, in the reference coordinate system, of the object outside the vehicle in the intersection region is determined, the above method may further include step S250.
Step S250: according to the preset structural parameters of the vehicle itself and the position of the first target in the reference coordinate system, the minimum distance between the first target and the body structure is obtained. Knowing this minimum distance, the user can consciously avoid the object outside the vehicle and prevent a traffic accident. For research-and-development users, path planning can be performed based on this minimum distance, and the planned path can be stored in the processing device for the driver to retrieve.
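A minimal sketch of the minimum-distance computation of step S250, assuming the body structure parameters are supplied as a closed polygon of outline vertices in the reference coordinate system (an assumed representation, not one fixed by the embodiment):

```python
def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all 2-D tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:                       # degenerate segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def min_distance_to_body(target, body_outline):
    """Minimum distance from the located target to the body outline,
    given as a closed polygon of reference-frame vertices."""
    n = len(body_outline)
    return min(point_segment_distance(target, body_outline[i],
                                      body_outline[(i + 1) % n])
               for i in range(n))
```

The same idea extends to 3-D or to a denser body mesh when finer structural parameters are available.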
The position data (distances, coordinates, etc.) in all of the above steps may be shown on a display device; for example, they may be displayed directly in the window showing the panoramic display picture, automatically presenting the location information of the object outside the vehicle at the current moment once it is recognized. The driver can see this location information during actual driving so as to avoid the object, without having to judge from personal driving experience; this lowers the demands on the driver and prevents traffic accidents caused by the subjective misjudgment of some drivers.
The display content of the display device may include the panoramic display picture, with the mark of the reference coordinate system and the location information of the first target shown in the panoramic display picture. The mark of the reference coordinate system may be displayed by identifying the reference coordinate system in the display window with a particular color or particular lines; for example, the coordinate origin of the reference coordinate system may be shown, and the distance between a certain object outside the vehicle and the coordinate origin may further be shown. As for when the display is performed and the specific display form of the location information, those skilled in the art may set them according to actual needs, and they should not be construed as limiting the present invention.
Optionally, after the location information of the first target is obtained, the above method may further include step S260: identifying the first target.
There are various practical methods of target identification, and the identification principle is similar to the identification process in monocular localization methods. In this embodiment, the identification process may be implemented as follows: a sample database storing the basic feature information of various objects is established in advance; after the location calculation of the object outside the vehicle, the image feature point information of that object is compared with the basic feature information of the various objects in the database to identify the specific form of the object, and a prompt is given.
For example, if it is recognized that the object outside the vehicle in an intersection region is a pedestrian, the user in the vehicle may be informed by a voice prompt: "There is a pedestrian / an XX car / an XXX obstacle at XX meters from the body; please slow down", and so on. In general, combined with the above location calculation method, the whole process can be understood as locating first and identifying afterwards (or ranging first and identifying afterwards, whereas the monocular-camera approach identifies first and ranges afterwards); this is made possible by the binocular positioning principle adopted.
In other words, during the positioning process before the identification step, there is no need to know what the measured target (the first target) specifically is; the location calculation can be completed solely from the image feature points and the optical-center positions of the cameras, without relying on a sample database for feature identification. Only when it is necessary to know specifically what the object outside the vehicle is does the sample database for feature identification come into use. This reduces the data-processing load in the positioning calculation and improves positioning efficiency. In the identification stage, although the sample database may still be relied on, the user can choose whether to identify according to actual needs, because in some scenarios the user can already learn the approximate form of the object directly from the panoramic display picture, and even image distortion in the picture will not prevent the user from judging what the object specifically is. Therefore, whether identification is performed can be freely set by the user. As one implementation, an identification function button may be provided on the display screen, and target identification is performed only when the button is triggered.
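The comparison against a pre-built sample database described above can be sketched as a nearest-prototype lookup; the database layout, distance measure, and threshold below are all illustrative assumptions:

```python
def identify(feature_vec, sample_db, threshold=1.0):
    """Compare a target's feature vector against a sample database
    {label: prototype vector}; return the closest label, or None when
    nothing in the database is near enough (unknown object)."""
    best_label, best_dist = None, float("inf")
    for label, proto in sample_db.items():
        d = sum((a - b) ** 2 for a, b in zip(feature_vec, proto)) ** 0.5
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```

A production system would store many samples per class and use richer descriptors, but the locate-first, identify-on-demand ordering described above is unchanged.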
It should be noted that using multiple corners of the vehicle as intersection regions does not exclude also mounting radar on the vehicle for ranging. For cases where radar alone has difficulty locating non-metallic targets, the image-based ranging method of the embodiment of the present invention can make up for this deficiency at relatively low cost.
In conclusion to target outside the vehicle in intersection region can accurately determine using viewing system by the above method
Position acquires for image of the target under different perspectives outside same vehicle under synchronization, utilizes target outside the vehicle and two phases
Neighbour looks around camera building triangle, is calibrated by the actual range between the optical center of two adjacent cameras, and it is fixed to complete
Position calculates.Due to actual range between each optical center in viewing system be it is fixed, increase other equipment cost no
In the case of, positioning accuracy can be obviously improved.
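The triangle-based positioning described above can be illustrated with a minimal sketch (not part of the patent disclosure). It assumes the bearing angle from each camera to the target has already been recovered from the matched image feature points, and that the baseline is the calibrated, fixed distance between the two adjacent optical centres:

```python
import math

def triangulate(baseline_m, angle_left_rad, angle_right_rad):
    """Locate a target seen by two adjacent cameras whose optical-centre
    separation (the baseline) is fixed and known from calibration.

    The two angles are the bearings from each camera towards the target,
    measured from the baseline. Returns (x, y) with the left camera at
    the origin and the baseline along the x-axis.
    """
    # Third angle of the camera-camera-target triangle
    gamma = math.pi - angle_left_rad - angle_right_rad
    # Law of sines: distance from the left camera to the target
    dist_left = baseline_m * math.sin(angle_right_rad) / math.sin(gamma)
    # Convert the bearing + distance into planar coordinates
    x = dist_left * math.cos(angle_left_rad)
    y = dist_left * math.sin(angle_left_rad)
    return x, y
```

For a 2 m baseline and symmetric 45° bearings the target lies 1 m ahead of and 1 m beside the left camera, matching the law-of-sines construction.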
Third Embodiment
This embodiment provides an out-of-vehicle target positioning apparatus 110, which includes functional modules for executing the method steps of the preceding embodiments.
Referring to Fig. 5, which is a functional-module schematic diagram of the out-of-vehicle target positioning apparatus 110 shown in Fig. 1 according to an embodiment of the present invention, the apparatus 110 includes a first image acquisition module 111, a second image acquisition module 112, a judgment module 113 and a calculation module 114.
The first image acquisition module 111 is configured to obtain a first image of a first area captured by a first camera among the multiple cameras.
The second image acquisition module 112 is configured to obtain a second image of a second area captured by a second camera among the multiple cameras, the first camera and the second camera being adjacently arranged cameras.
The judgment module 113 is configured to judge, based on the first image and the second image, whether a first target is contained in the intersection region of the first area and the second area.
The calculation module 114 is configured to calculate the position of the first target based on the first image, the second image, the position of the first camera and the position of the second camera.
The first image and the second image are images captured at the same moment. To obtain such images, the out-of-vehicle target positioning apparatus 110 further includes a synchronization module configured to clock-synchronize the images respectively captured by the adjacent cameras, thereby obtaining two groups of images captured at the same moment.
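As an illustration only (the patent does not specify a synchronization algorithm), clock synchronization of the two cameras' image streams could be approximated by pairing frames whose timestamps agree within a tolerance. The function name and the 5 ms tolerance below are assumptions:

```python
def pair_synchronized(frames_a, frames_b, tol_s=0.005):
    """Pair frames from two adjacent cameras whose timestamps agree
    within tol_s seconds, so that positioning uses two images captured
    at (approximately) the same moment.

    Each input is a list of (timestamp_s, frame) tuples sorted by time.
    """
    pairs = []
    j = 0
    for ta, fa in frames_a:
        # Skip frames from camera B that are too old to match ta
        while j < len(frames_b) and frames_b[j][0] < ta - tol_s:
            j += 1
        if j < len(frames_b) and abs(frames_b[j][0] - ta) <= tol_s:
            pairs.append((fa, frames_b[j][1]))
    return pairs
```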
The calculation module 114 may specifically include a matching sub-module, a construction sub-module and a calculation sub-module.
The matching sub-module is configured to obtain matched image feature points of the first image and the second image, the matched image feature points identifying the first target.
The construction sub-module is configured to construct a triangle from the matched image feature points, the optical centre of the first camera and the optical centre of the second camera.
The calculation sub-module is configured to calculate the position of the matched image feature points in a reference coordinate system using the triangle and the actual distance between the optical centres, where the reference coordinate system is used to indicate the position of the first target and of any camera.
The construction sub-module may specifically be configured to calculate, using a coordinate transformation method, a first coordinate of the matched image feature points relative to the reference coordinate system, a second coordinate of the optical centre of the first camera in the reference coordinate system, and a third coordinate of the optical centre of the second camera in the reference coordinate system, and to construct the triangle with the first coordinate, the second coordinate and the third coordinate as its three vertices.
The calculation sub-module may specifically be configured to calculate a target range using triangle telemetry, the target range indicating the distance between the matched image feature points and the coordinate origin of the reference coordinate system.
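For illustration (again, not part of the disclosure), the coordinate-transformation and target-range steps can be sketched as follows, assuming each camera's extrinsics — a rotation R and translation t into the vehicle reference coordinate system — are known from surround-view calibration:

```python
import numpy as np

def to_reference_frame(point_cam, R, t):
    """Transform a 3-D point from a camera's coordinate system into the
    vehicle reference frame using that camera's extrinsics (R, t),
    which are fixed once the surround-view system is calibrated."""
    return np.asarray(R) @ np.asarray(point_cam) + np.asarray(t)

def target_range(point_ref):
    """Distance between the matched feature point and the
    reference-frame coordinate origin (the 'target range')."""
    return float(np.linalg.norm(point_ref))
```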
Optionally, the calculation sub-module may further be configured to obtain the minimum distance between the first target and the vehicle body structure according to preset structural parameters of the vehicle itself and the position of the first target in the reference coordinate system.
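Purely as an illustrative sketch (the patent leaves the structural parameters unspecified), the minimum range to the body structure could be computed by modelling the body as an axis-aligned rectangle in the reference frame; the rectangle model and the function name are assumptions:

```python
def min_distance_to_body(target_xy, half_length, half_width):
    """Minimum distance from a located target to the vehicle body,
    modelling the body as an axis-aligned rectangle centred on the
    reference-frame origin. Real structural parameters would describe
    the actual body contour; this is a simplifying assumption."""
    x, y = target_xy
    # Distance outside the rectangle along each axis (0 if inside)
    dx = max(abs(x) - half_length, 0.0)
    dy = max(abs(y) - half_width, 0.0)
    return (dx * dx + dy * dy) ** 0.5
```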
Optionally, the out-of-vehicle target positioning apparatus 110 further includes a display module configured to display the position information of the first target and the coordinate information of the reference coordinate system associated with the vehicle.
For other details of the out-of-vehicle target positioning apparatus 110 described in this embodiment, please refer to the description of the method in the preceding embodiments; they are not repeated here.
With the above apparatus, an out-of-vehicle target inside an intersection region can be accurately positioned, which is convenient for the user.
In addition to the above implementations, an embodiment of the present invention further provides an automobile on which the on-board device 100 of the preceding embodiments is installed.
An embodiment of the present invention further provides an automobile on which a display screen and a surround-view system are installed, the surround-view system including a processing device and multiple cameras. The processing device may be used to perform the positioning calculation from the images captured by the cameras; the display screen is used to display the images captured by the multiple cameras and to display the position information of an out-of-vehicle target located in an intersection region, where the intersection region denotes the field-of-view overlap of any two adjacent cameras among the multiple cameras. Even though the picture displayed in the automobile may be a distorted image produced by complex image-processing steps, providing higher-precision position information such as distances and coordinates reduces the probability that the user makes a wrong judgment based only on the distorted reference picture.
An embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when the computer program runs on a computer, the computer is caused to execute the out-of-vehicle target positioning method described in the above embodiments.
In conclusion the outer object localization method of vehicle provided in an embodiment of the present invention, device and automobile, can be using looking around
It unites and target outside the vehicle in the visual field overlapping region in adjacent camera is positioned, and positioning accuracy is high, entire design is closed
Reason can reduce manufacture, use cost while improving detection performance.
From the above description of the embodiments, the functional modules in the embodiments of the present invention may be integrated together to form one independent part, may each exist individually, or two or more modules may be integrated into one independent part. If such functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, where the storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a memory, a magnetic disk or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An out-of-vehicle target positioning method, characterized in that it is applied to a vehicle that includes a surround-view system, the surround-view system including a processing device and multiple cameras, the method comprising:
the processing device obtaining a first image of a first area captured by a first camera among the multiple cameras;
the processing device obtaining a second image of a second area captured by a second camera among the multiple cameras, the first camera and the second camera being adjacently arranged cameras;
the processing device judging, based on the first image and the second image, whether a first target is contained in the intersection region of the first area and the second area; and
if so, the processing device calculating the position of the first target based on the first image, the second image, the position of the first camera and the position of the second camera.
2. The method according to claim 1, characterized in that the first image and the second image are images captured at the same moment.
3. The method according to claim 2, characterized in that the first image and the second image are obtained as follows:
the processing device clock-synchronizes the images respectively captured by the adjacent cameras to obtain two groups of images captured at the same moment.
4. The method according to claim 1, characterized in that the processing device calculating the position of the first target based on the first image, the second image, the position of the first camera and the position of the second camera comprises:
the processing device obtaining matched image feature points of the first image and the second image, the matched image feature points identifying the first target;
constructing a triangle from the matched image feature points, the optical centre of the first camera and the optical centre of the second camera; and
calculating, using the triangle and the actual distance between the optical centres, the position of the matched image feature points in a reference coordinate system, where the reference coordinate system is used to indicate the position of the first target and of any camera.
5. The method according to claim 4, characterized in that constructing the triangle from the matched image feature points, the optical centre of the first camera and the optical centre of the second camera comprises:
calculating, using a coordinate transformation method, a first coordinate of the matched image feature points relative to the reference coordinate system, a second coordinate of the optical centre of the first camera in the reference coordinate system, and a third coordinate of the optical centre of the second camera in the reference coordinate system; and
constructing the triangle with the first coordinate, the second coordinate and the third coordinate as its three vertices.
6. The method according to claim 4, characterized in that calculating the position of the matched image feature points in the reference coordinate system comprises:
calculating a target range using triangle telemetry, the target range indicating the distance between the matched image feature points and the coordinate origin of the reference coordinate system.
7. The method according to claim 4, characterized in that the method further comprises:
obtaining the minimum distance between the first target and the vehicle body structure according to preset structural parameters of the vehicle itself and the position of the first target in the reference coordinate system.
8. The method according to claim 1, characterized in that the vehicle further includes a display device, and the method further comprises:
the display device displaying the position information of the first target and the coordinate information of the reference coordinate system associated with the vehicle.
9. An out-of-vehicle target positioning apparatus, characterized in that the apparatus comprises:
a first image acquisition module configured to obtain a first image of a first area captured by a first camera among multiple cameras;
a second image acquisition module configured to obtain a second image of a second area captured by a second camera among the multiple cameras, the first camera and the second camera being adjacently arranged cameras;
a judgment module configured to judge, based on the first image and the second image, whether a first target is contained in the intersection region of the first area and the second area; and
a calculation module configured to calculate the position of the first target based on the first image, the second image, the position of the first camera and the position of the second camera.
10. An automobile, characterized in that a display screen and a surround-view system are installed on the automobile, the surround-view system including a processing device and multiple cameras, wherein:
the processing device is configured to calculate the position of an out-of-vehicle target from the images captured by the multiple cameras; and
the display screen is configured to display the images captured by the multiple cameras and to display the position information of the out-of-vehicle target located in an intersection region, where the intersection region denotes the field-of-view overlap of any two adjacent cameras among the multiple cameras.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811512069.7A CN109579868A (en) | 2018-12-11 | 2018-12-11 | The outer object localization method of vehicle, device and automobile |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109579868A true CN109579868A (en) | 2019-04-05 |
Family
ID=65929677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811512069.7A Pending CN109579868A (en) | 2018-12-11 | 2018-12-11 | The outer object localization method of vehicle, device and automobile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109579868A (en) |
2018-12-11: Application CN201811512069.7A filed (published as CN109579868A); status: Pending
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223235B (en) * | 2019-06-14 | 2023-08-08 | 南京天眼信息科技有限公司 | Fisheye monitoring image splicing method based on combination and matching of various characteristic points |
CN110223235A (en) * | 2019-06-14 | 2019-09-10 | 南京天眼信息科技有限公司 | A kind of flake monitoring image joining method based on various features point combinations matches |
CN112215048A (en) * | 2019-07-12 | 2021-01-12 | 中国移动通信有限公司研究院 | 3D target detection method and device and computer readable storage medium |
CN112215048B (en) * | 2019-07-12 | 2024-03-22 | 中国移动通信有限公司研究院 | 3D target detection method, device and computer readable storage medium |
CN113011445A (en) * | 2019-12-19 | 2021-06-22 | 斑马智行网络(香港)有限公司 | Calibration method, identification method, device and equipment |
CN113573039A (en) * | 2020-04-29 | 2021-10-29 | 思特威(上海)电子科技股份有限公司 | Target depth value obtaining method and binocular system |
CN111915779A (en) * | 2020-07-31 | 2020-11-10 | 浙江大华技术股份有限公司 | Gate control method, device, equipment and medium |
CN111915779B (en) * | 2020-07-31 | 2022-04-15 | 浙江大华技术股份有限公司 | Gate control method, device, equipment and medium |
CN112406700A (en) * | 2020-11-25 | 2021-02-26 | 深圳瑞为智能科技有限公司 | Blind area early warning system based on upper and lower binocular vision analysis range finding |
WO2022143237A1 (en) * | 2020-12-31 | 2022-07-07 | 华为技术有限公司 | Target positioning method and system, and related device |
CN112770056A (en) * | 2021-01-20 | 2021-05-07 | 维沃移动通信(杭州)有限公司 | Shooting method, shooting device and electronic equipment |
CN115014296A (en) * | 2022-07-06 | 2022-09-06 | 南方电网数字电网研究院有限公司 | Camera-based power transmission line distance measuring method and device and computer equipment |
CN115014296B (en) * | 2022-07-06 | 2024-09-24 | 南方电网数字电网研究院有限公司 | Camera-based power transmission line ranging method and device and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109579868A (en) | The outer object localization method of vehicle, device and automobile | |
CN110322702B (en) | Intelligent vehicle speed measuring method based on binocular stereo vision system | |
CN109471096B (en) | Multi-sensor target matching method and device and automobile | |
CN102313536B (en) | Method for barrier perception based on airborne binocular vision | |
CN102510734B (en) | Pupil detection device and pupil detection method | |
CN103745452B (en) | Camera external parameter assessment method and device, and camera external parameter calibration method and device | |
TWI332453B (en) | The asynchronous photography automobile-detecting apparatus and method thereof | |
WO2018128667A1 (en) | Systems and methods for lane-marker detection | |
CN113129241B (en) | Image processing method and device, computer readable medium and electronic equipment | |
CN106529495A (en) | Obstacle detection method of aircraft and device | |
TW200846218A (en) | Device and method for detecting obstacle by stereo computer vision | |
CN110517216A (en) | A kind of SLAM fusion method and its system based on polymorphic type camera | |
JP2006053890A (en) | Obstacle detection apparatus and method therefor | |
CN107636679A (en) | A kind of obstacle detection method and device | |
CN102692236A (en) | Visual milemeter method based on RGB-D camera | |
CN107980138A (en) | A kind of false-alarm obstacle detection method and device | |
CN101408422A (en) | Traffic accident on-site mapper based on binocular tridimensional all-directional vision | |
CN111768332A (en) | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device | |
CN113963254A (en) | Vehicle-mounted intelligent inspection method and system integrating target identification | |
CN109741241A (en) | Processing method, device, equipment and the storage medium of fish eye images | |
CN110109465A (en) | A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle | |
CN107688174A (en) | A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment | |
CN110998409A (en) | Augmented reality glasses, method of determining the pose of augmented reality glasses, motor vehicle adapted to use the augmented reality glasses or method | |
CN111976601A (en) | Automatic parking method, device, equipment and storage medium | |
CN106627373B (en) | A kind of image processing method and system for intelligent parking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190405 |