CN108765496A - Multi-view automobile surround-view driver assistance system and method - Google Patents
Multi-view automobile surround-view driver assistance system and method
- Publication number
- CN108765496A CN108765496A CN201810507410.3A CN201810507410A CN108765496A CN 108765496 A CN108765496 A CN 108765496A CN 201810507410 A CN201810507410 A CN 201810507410A CN 108765496 A CN108765496 A CN 108765496A
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- automobile
- coordinate system
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to a multi-view automobile surround-view driver assistance system comprising multiple cameras, an information processing module and an output module. A binocular camera and three fisheye cameras mounted on the front, rear, left and right of the vehicle body acquire images of the automobile's surroundings. The information processing module pre-processes the acquired images and performs texture mapping, brightness correction, image fusion, viewpoint transformation and similar processing; the images acquired by the binocular camera are pre-processed and matched to obtain the distance to obstacles ahead. The output module displays the resulting 360° stereoscopic surround-view image of the automobile on the in-vehicle display device and implements a danger-distance alarm.
Description
Technical Field
The invention relates to the technical field of automobile auxiliary parking, in particular to a multi-view automobile all-around auxiliary driving system.
Background
According to statistics from the National Bureau of Statistics, the number of civil automobiles in China shows a steadily increasing trend. A traditional panoramic driving assistance system uses vision sensors: four cameras mounted above the front bumper, on the left and right rear-view mirrors, and above the rear license plate display the scene around the vehicle body to the driver as a virtual top view, so that the driver can conveniently observe the distance of surrounding pedestrians and vehicles in a panoramic image, blind spots around the automobile are eliminated, and driving and parking become safer and more convenient. However, a typical panoramic driving assistance system produces a planar stitched image with a single viewing angle, which weakens the three-dimensional spatial information around the vehicle body, limits the driver's field of view, and still leaves certain safety hazards. At present, most collision-warning systems rely on lidar; lidar offers high precision and a long detection range, but it works poorly in rain and fog and remains expensive.
Disclosure of Invention
The invention aims to provide a multi-view automobile all-around auxiliary driving system which can convert acquired images around an automobile body into a complete panoramic image through processing, display the panoramic image on a vehicle-mounted display, enable a driver to visually know the conditions around the automobile body, avoid potential safety hazards caused by visual blind spots, timely make corresponding operations according to early warning of front obstacles, avoid traffic accidents caused by the visual blind spots or wrong operations, and improve driving safety.
The technical scheme of the application is as follows.
A multi-view automobile all-round vision auxiliary driving system comprises a plurality of cameras, an information processing module and an output module;
the camera acquires images around the automobile body; the camera comprises a first camera arranged on the automobile bumper, a second camera arranged on the left rearview mirror, a third camera arranged on the right rearview mirror and a fourth camera arranged on the automobile trunk;
the information processing module performs image processing: establishing the three-dimensional model required by the panoramic system and processing the images acquired by the cameras to obtain a three-dimensional surround-view image of the vehicle's surroundings; viewing the environment around the vehicle body from multiple viewpoints based on an image viewpoint transformation algorithm; and calculating the distance to a front obstacle based on the images acquired by the first camera;
the output module outputs the result of the image processing by the information processing module.
The first camera is a binocular camera, and the second camera, the third camera and the fourth camera are fisheye cameras;
the binocular camera comprises two image sensors, wherein the two image sensors are on the same baseline, the time sequences of the two image sensors are synchronous, and the shot images have overlapping areas.
The three-dimensional model is a net-shaped three-dimensional model, the bottom surface is a three-dimensional plane, the connecting surface is an arc surface, the annular surface is a cylindrical surface, and the bottom surface, the connecting surface and the annular surface are connected.
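The mesh model described above can be sketched by generating rings of vertices for the three connected surfaces; all dimensions (bottom radius, arc radius, wall height) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def bowl_vertices(r_bottom=3.0, r_arc=1.0, height=2.0, n_ring=8, n_seg=60):
    """Generate rings of vertices for a bowl-shaped surround-view model:
    flat bottom plane, quarter-arc connecting surface, cylindrical wall."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    rings = []
    # bottom plane: concentric circles at z = 0
    for r in np.linspace(0.0, r_bottom, n_ring):
        rings.append(np.stack([r * np.cos(theta), r * np.sin(theta),
                               np.zeros(n_seg)], axis=1))
    # arc transition: quarter circle lifting from the plane edge to vertical
    for phi in np.linspace(0.0, np.pi / 2.0, n_ring):
        r = r_bottom + r_arc * np.sin(phi)
        z = r_arc * (1.0 - np.cos(phi))
        rings.append(np.stack([r * np.cos(theta), r * np.sin(theta),
                               np.full(n_seg, z)], axis=1))
    # cylindrical wall: constant radius, rising z
    for z in np.linspace(r_arc, height, n_ring):
        r = r_bottom + r_arc
        rings.append(np.stack([r * np.cos(theta), r * np.sin(theta),
                               np.full(n_seg, z)], axis=1))
    return np.concatenate(rings, axis=0)

verts = bowl_vertices()
```

The arc is continuous with both the plane edge (phi = 0) and the cylinder base (phi = 90°), giving the smooth connection between the three surfaces the patent describes.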
The generation of the three-dimensional all-around image comprises the steps of calibrating a camera, establishing a texture mapping relation between a three-dimensional model vertex and image pixel points, mapping an image acquired by the camera onto the three-dimensional model according to the texture mapping relation, performing brightness correction on the mapped image, and fusing images in a splicing area.
The camera calibration comprises the steps of obtaining internal parameters and external parameters of the camera;
the internal parameters of the camera comprise a focal length, an image center and distortion coefficients; the external parameters comprise the poses of the four cameras relative to a world coordinate system established with the vehicle center as the origin, and the rotation-translation relation between the two sensors of the first camera.
The output module comprises an image display device and an alarm. The processing results comprise the stereoscopic surround-view image of the vehicle's surroundings and the distance to the obstacle ahead, which are output by the image display device, and the danger-distance warning output by the alarm.
A multi-view automobile all-round-looking auxiliary driving method comprises the following steps:
step S1, image acquisition:
collecting images around the vehicle body through a camera; the cameras comprise a first camera 1 arranged on a bumper of the automobile, a second camera 2 arranged on a left rearview mirror, a third camera 3 arranged on a right rearview mirror and a fourth camera 4 arranged on a trunk of the automobile, and the four cameras are used for respectively collecting images in four directions of the front, the left, the right and the back of the automobile;
step S2, processing the collected images in an information processing module to synthesize a target image;
establishing a mesh three-dimensional model consisting of a three-dimensional plane, an arc connecting surface and an annular cylindrical surface; calibrating the cameras to obtain their internal and external parameters; using these parameters to map the images captured by the cameras in the four directions (front, rear, left, right) of the automobile onto the mesh three-dimensional model by texture mapping, obtaining a three-dimensional panoramic image around the automobile; correcting brightness according to the brightness of the original images; fusing the images in the image fusion area to weaken the stitching seams of the panoramic image; measuring the distance from the front obstacle to the vehicle using the images captured by the first camera; and setting different observation angles and displaying images of the vehicle from different directions using viewpoint transformation matrices;
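The viewpoint transformation matrix mentioned above can be sketched as a standard look-at view matrix; the patent does not specify its construction, so this is an assumed, common form:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a 4x4 view matrix placing a virtual camera at `eye` looking
    toward `target`; changing `eye` changes the observation viewpoint
    from which the surround-view model is rendered."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)          # viewing direction
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)    # camera x-axis
    true_up = np.cross(right, fwd)           # camera y-axis
    view = np.eye(4)
    view[0, :3] = right
    view[1, :3] = true_up
    view[2, :3] = -fwd                       # camera looks down -z
    view[:3, 3] = -view[:3, :3] @ eye        # translate eye to origin
    return view

# Hypothetical viewpoint: 5 m behind and 3 m above the vehicle center.
V = look_at(eye=(0.0, -5.0, 3.0), target=(0.0, 0.0, 0.0))
```

Sweeping `eye` around the model produces the different observation angles the method describes.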
and step S3, the output module outputs the synthesized stereo panoramic image to the vehicle-mounted display equipment of the automobile, the distance of the front obstacle is obtained according to the binocular camera ranging principle, and the alarm gives out a dangerous distance alarm.
Step S2 specifically includes the following steps:
(201) Camera calibration: using a checkerboard calibration plate, the camera to be calibrated photographs the plate from different directions, and the internal parameters of the camera are solved from the geometric relation between the image coordinates of the corner points (the feature points of the checkerboard) and their coordinates in the world coordinate system; the internal parameters comprise the focal length, image center and distortion coefficients of the camera;
when solving the external parameters of the camera, a world coordinate system is established with the center of the vehicle underbody as the world origin; distortion correction is carried out using the obtained internal parameters, and the rotation and translation of the camera coordinate system relative to the world coordinate system are obtained from the projection relation between feature points in the world coordinate system and their image points;
(202) texture mapping is the process of mapping texels in texture space to pixels in screen space; the three-dimensional model is composed of points in actual world coordinates, and according to the imaging model of the camera, a one-to-one corresponding relation is established between the model vertex and pixel coordinates in the image acquired by the camera, so that the image is mapped to the surface of the mesh three-dimensional model;
(203) Brightness correction: the gains required for brightness balance are calculated from the brightness of the images acquired by the front, rear, left and right cameras and applied to the corresponding images, eliminating brightness differences between the stitched images;
(204) Image fusion: a pixel-level weighted fusion algorithm gives the synthesized image a smooth transition, yielding a high-quality fused image.
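Steps (203) and (204) can be sketched as follows, assuming gray-scale images and a linear blending ramp across the overlap region (the exact gain formula and weighting scheme are not given in the patent):

```python
import numpy as np

def brightness_gains(images):
    """Per-camera gain bringing each image's mean brightness to the
    average brightness of all images (step 203)."""
    means = np.array([img.mean() for img in images])
    return means.mean() / means

def blend_seam(img_a, img_b):
    """Pixel-level weighted fusion across an overlap region (step 204):
    the weight ramps linearly from img_a to img_b along the columns."""
    w = np.linspace(1.0, 0.0, img_a.shape[1])     # per-column weight
    return img_a * w + img_b * (1.0 - w)

a = np.full((4, 5), 100.0)   # hypothetical overlap patch from camera A
b = np.full((4, 5), 60.0)    # same patch as seen by camera B
fused = blend_seam(a, b)     # smooth ramp from 100 down to 60
```

Applying the gains before blending removes the brightness step, and the ramped weights remove the visible seam line.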
The calibration template plane is set on the plane Z = 0 of the world coordinate system, and the internal and external parameters of the camera are obtained from the camera imaging principle and the linear relation between the image captured by the camera and the object in three-dimensional space; the imaging process of the camera involves conversions between four coordinate systems.
The step (201) specifically comprises the following steps:
(201a) the calibration template plane is set on the plane Z = 0 of the world coordinate system; the world coordinate system is converted to the camera coordinate system, and the internal and external parameters of the camera are obtained from the camera's imaging model and the linear relation between the image captured by the camera and the object in three-dimensional space:
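The formula referenced as (1) appears as an image in the original publication and is missing here; in standard form, the world-to-camera transformation it describes is:

```latex
\begin{equation}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T
\tag{1}
\end{equation}
```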
where (Xw, Yw, Zw) are the world coordinates of a corner point in the calibration template, (Xc, Yc, Zc) are its coordinates in the camera coordinate system, and R and T are the rotation matrix and translation matrix of the camera extrinsics; the conversion from the world coordinate system to the camera coordinate system is completed according to the relation of formula (1);
(201b) according to the similar-triangle relation of the pinhole imaging model, the conversion from the camera coordinate system to the imaging plane coordinate system is completed:
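The corresponding formula (an image in the original) can be reconstructed from the pinhole similar-triangle relation as:

```latex
\begin{equation}
x = f \, \frac{X_c}{Z_c}, \qquad y = f \, \frac{Y_c}{Z_c}
\tag{2}
\end{equation}
```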
where f is the focal length of the camera, (x, y) are the imaging plane coordinates of the corner point, and (Xc, Yc, Zc) are the coordinates of the corner point in the camera coordinate system;
(201c) the imaging plane is sampled and quantized to obtain the pixel coordinates of the corner points:
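The sampling/quantization formula (an image in the original) takes the standard form below, writing dx and dy for the physical pixel sizes:

```latex
\begin{equation}
u = \frac{x}{d_x} + C_x, \qquad v = \frac{y}{d_y} + C_y
\tag{3}
\end{equation}
```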
where dx and dy are the sizes of the CMOS pixels in the x and y directions respectively, (Cx, Cy) is the optical center of the camera, and (x, y) are the imaging plane coordinates of the corner point; this completes the conversion from the imaging plane to the pixel plane. (u, v) are the coordinates of the corner point in the image acquired by the camera.
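Taken together, steps (201a)-(201c) chain three conversions; the sketch below uses hypothetical parameter values (identity rotation, 4 mm focal length, 2 µm pixels) purely for illustration:

```python
import numpy as np

# Hypothetical camera parameters for illustration (not values from the patent).
R = np.eye(3)                      # rotation: camera axes aligned with world
T = np.array([0.0, 0.0, 2.0])      # translation: scene 2 m in front of camera
f = 0.004                          # focal length: 4 mm
dx = dy = 2e-6                     # pixel size: 2 µm
Cx, Cy = 640.0, 360.0              # optical center in pixel coordinates

def world_to_pixel(Pw):
    # (201a) world coordinates -> camera coordinates, formula (1)
    Xc, Yc, Zc = R @ np.asarray(Pw, dtype=float) + T
    # (201b) camera coordinates -> imaging plane, pinhole similar triangles
    x, y = f * Xc / Zc, f * Yc / Zc
    # (201c) imaging plane -> pixel coordinates (sampling/quantization)
    u, v = x / dx + Cx, y / dy + Cy
    return u, v

u, v = world_to_pixel([0.1, 0.0, 0.0])   # a point 10 cm to the side
```

The same projection chain is what texture mapping in step (202) inverts: each model vertex is projected to (u, v) to find the pixel whose color it receives.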
The method for calculating the distance to the front obstacle from the images acquired by the first camera specifically comprises the following steps:
(301) Distortion removal: distortion correction is performed on the images using the camera intrinsics obtained by calibration; specifically, the binocular cameras are calibrated to obtain the intrinsics of each camera, and the relative position between the two cameras (i.e., the translation vector and rotation matrix of the right camera relative to the left camera) is measured through calibration.
(302) Binocular rectification: to calculate the parallax of a target point across the left and right views, the two corresponding image points must first be matched; epipolar constraints reduce the matching search range and improve efficiency. After distortion correction the two images are aligned so that their epipolar lines lie on the same horizontal line, and matching reduces to a one-dimensional search for the corresponding point along the aligned row.
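The one-dimensional search along the aligned row can be sketched as a minimal SAD (sum of absolute differences) block match; this is an illustrative stand-in, not the exact matching algorithm of the patent:

```python
import numpy as np

def sad_disparity(left_row, right_row, x, block=3, max_disp=10):
    """Find the disparity of the pixel at column x of a rectified left row
    by a 1-D SAD search along the corresponding right row."""
    ref = left_row[x:x + block]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        if x - d < 0:
            break
        cost = np.abs(ref - right_row[x - d:x - d + block]).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified rows: the right view sees the scene shifted 4 px left.
left = np.arange(40.0) ** 2 % 17          # arbitrary texture pattern
right = np.roll(left, -4)
d = sad_disparity(left, right, x=20)      # recovers the 4 px disparity
```

Because rectification confines the search to one row, the cost loop runs over `max_disp + 1` candidates instead of a full 2-D window.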
(303) Parallax calculation and ranging: stereo matching uses the Block Matching algorithm; the difference between the abscissas at which the target point is imaged in the left and right views, i.e. the parallax, is inversely proportional to the distance from the target point to the imaging plane:
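The inverse-proportional relation (an image in the original) can be reconstructed in its standard triangulation form:

```latex
\begin{equation}
Z = \frac{f \, T_x}{d}, \qquad
X = \frac{x_{\mathrm{left}} \, Z}{f}, \qquad
Y = \frac{y \, Z}{f}
\end{equation}
```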
where (X, Y, Z) are the coordinates of the target point in a world coordinate system with the optical center of the left camera as the origin, Z is the distance from the target point to the imaging plane in the left camera coordinate system, Tx is the center distance between the left and right cameras, f is the focal length of the cameras, and d is the difference between the abscissas at which the target point is imaged in the left and right views, i.e. the parallax.
The parallax d is obtained as
d = x_left - x_right
where x_left and x_right are the abscissas of the target point on the imaging planes of the left and right cameras respectively. Initial values of f and Tx are obtained through calibration and refined through stereo calibration so that the two cameras are mathematically perfectly parallel; the distance from the target point to the camera is then obtained from the inverse-proportional relation.
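Given f and Tx, disparity converts to distance via Z = f·Tx/d; the rig parameters and the alarm threshold below are assumed illustration values, not values from the patent:

```python
def disparity_to_distance(d_pixels, f_pixels, baseline_m):
    """Distance Z from disparity d via Z = f * Tx / d (binocular ranging).
    f_pixels: focal length expressed in pixel units; baseline_m: camera
    center distance Tx in meters."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return f_pixels * baseline_m / d_pixels

# Hypothetical rig: 800 px focal length, 12 cm baseline, 32 px disparity.
Z = disparity_to_distance(32.0, 800.0, 0.12)

# Danger-distance alarm of step S3, with an assumed threshold value.
ALARM_DISTANCE_M = 2.0
alarm = Z < ALARM_DISTANCE_M
```

Note the inverse relation: halving the disparity doubles the estimated distance, so ranging precision degrades with distance.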
After the information processing module finishes processing the image, the output module displays the generated automobile all-round-looking image on the vehicle-mounted platform, real-time monitoring and early warning are carried out according to the distance measured by the binocular camera, and when the distance of the front obstacle is within a dangerous distance, the alarm gives an alarm to a driver.
Compared with the prior art, the invention has the beneficial effects that:
the invention discloses a multi-view automobile all-around vision auxiliary driving system camera, which realizes three-dimensional images of the environment around an automobile body and front anti-collision alarm, a driver can freely adjust the angle according to the requirement, timely find pedestrians or objects close to the automobile and send out dangerous distance alarm to front obstacles, so that the occurrence of traffic accidents can be reduced, the driving safety is improved, and the cost is lower;
the invention relates to a multi-view automobile all-around auxiliary driving method, which realizes the display of 360-degree scenes around an automobile by using images acquired by a plurality of cameras, effectively restores three-dimensional object scenes around the automobile, and a driver can observe the environment around an automobile body from different views according to the requirement. When the effect of looking around is realized, the anticollision early warning of the place ahead barrier is provided, the cost is lower, and the function is abundant. In the process of driving and parking of the automobile, abundant and clear blind area scenes are provided for a driver, and dangerous distance warning is timely performed, so that the driving safety of the automobile is greatly improved, and traffic accidents can be effectively reduced.
Drawings
The invention is further explained below with reference to the figures and examples;
FIG. 1 is a flow chart of a multi-viewpoint automobile all-around auxiliary driving method of the present invention;
FIG. 2 is a side view of a camera mounting position in an embodiment of a multi-viewpoint automobile all-around auxiliary driving system of the present invention;
FIG. 3 is a top view of a camera mounting position in an embodiment of a multi-viewpoint automotive all-around auxiliary driving system of the present invention;
FIG. 4 is a schematic diagram illustrating the specific steps of the present invention for synthesis of a surround view image;
FIG. 5 is a diagram illustrating the specific steps of the binocular camera ranging of the present invention;
FIG. 6 is a schematic diagram of a stitching fusion region of a panoramic image according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments, which are illustrative only and not limiting, and the scope of the present invention is not limited thereby.
To clarify the objectives, technical means, creative features, working process and usage of the invention, and to make its evaluation easy to understand, the invention is further described below with reference to the following embodiments.
As shown in fig. 1, a multi-view automobile all-round-looking auxiliary driving system comprises a plurality of cameras, an information processing module and an output module;
the camera acquires images around the automobile body; as shown in fig. 2 and 3, the cameras include a first camera mounted on the bumper of the automobile, a second camera mounted on the left rearview mirror, a third camera mounted on the right rearview mirror and a fourth camera mounted on the trunk of the automobile;
the information processing module performs image processing: establishing the three-dimensional model required by the panoramic system and processing the images acquired by the cameras to obtain a three-dimensional surround-view image of the vehicle's surroundings; viewing the environment around the vehicle body from multiple viewpoints based on an image viewpoint transformation algorithm; and calculating the distance to a front obstacle based on the images acquired by the first camera;
the output module outputs the result of the image processing by the information processing module.
The first camera is a binocular camera, and the second camera, the third camera and the fourth camera are fisheye cameras;
the binocular camera comprises two image sensors, wherein the two image sensors are on the same baseline, the time sequences of the two image sensors are synchronous, and the shot images have overlapping areas.
The three-dimensional model is a net-shaped three-dimensional model, the bottom surface is a three-dimensional plane, the connecting surface is an arc surface, the annular surface is a cylindrical surface, and the bottom surface, the connecting surface and the annular surface are connected.
The generation of the three-dimensional all-around image comprises the steps of calibrating a camera, establishing a texture mapping relation between a three-dimensional model vertex and image pixel points, mapping an image acquired by the camera onto the three-dimensional model according to the texture mapping relation, performing brightness correction on the mapped image, and fusing images in a splicing area.
The camera calibration includes obtaining internal parameters (focal length, image center, distortion coefficient) and external parameters (rotation matrix R and translation matrix T) of the camera. Step (201) in step S2 described below is a specific step of camera calibration.
The camera external reference comprises camera poses of four cameras relative to a world coordinate system established by taking the center of the vehicle as an origin and a rotational-translational relation (a rotation matrix R and a translation matrix T) between two sensors of the first camera.
The output module comprises an image display device and an alarm. The processing results comprise the stereoscopic surround-view image of the vehicle's surroundings and the distance to the obstacle ahead, which are output by the image display device, and the danger-distance warning output by the alarm.
Step S1, image acquisition:
collecting images around the vehicle body through a camera; the cameras comprise a first camera 1 arranged on a bumper of the automobile, a second camera 2 arranged on a left rearview mirror, a third camera 3 arranged on a right rearview mirror and a fourth camera 4 arranged on a trunk of the automobile, and the four cameras are used for respectively collecting images in four directions of the front, the left, the right and the back of the automobile;
step S2, processing the collected images in an information processing module to synthesize a target image;
establishing a mesh three-dimensional model consisting of a three-dimensional plane, an arc connecting surface and an annular cylindrical surface; calibrating the cameras to obtain their internal and external parameters; using these parameters to map the images captured by the cameras in the four directions (front, rear, left, right) of the automobile onto the mesh three-dimensional model by texture mapping, obtaining a three-dimensional panoramic image around the automobile; correcting brightness according to the brightness of the original images; fusing the images in the image fusion area to weaken the stitching seams of the panoramic image; measuring the distance from the front obstacle to the vehicle using the images captured by the first camera; and setting different observation angles and displaying images of the vehicle from different directions using viewpoint transformation matrices;
as shown in fig. 5, the step of calculating the distance to the front obstacle from the image acquired by the first camera specifically includes the following steps:
(301) Distortion removal: distortion correction is performed on the images using the camera intrinsics obtained by calibration; specifically, the binocular cameras are calibrated to obtain the intrinsics of each camera, and the relative position between the two cameras (i.e., the translation vector and rotation matrix of the right camera relative to the left camera) is measured through calibration.
(302) Binocular rectification: to calculate the parallax of a target point across the left and right views, the two corresponding image points must first be matched; epipolar constraints reduce the matching search range and improve efficiency. After distortion correction the two images are aligned so that their epipolar lines lie on the same horizontal line, and matching reduces to a one-dimensional search for the corresponding point along the aligned row.
(303) Parallax calculation and ranging: stereo matching uses the Block Matching algorithm; the difference between the abscissas at which the target point is imaged in the left and right views, i.e. the parallax, is inversely proportional to the distance from the target point to the imaging plane:
where (X, Y, Z) are the coordinates of the target point in a world coordinate system with the optical center of the left camera as the origin, Z is the distance from the target point to the imaging plane in the left camera coordinate system, Tx is the center distance between the left and right cameras, f is the focal length of the cameras, and d is the difference between the abscissas at which the target point is imaged in the left and right views, i.e. the parallax.
The parallax d is obtained as
d = x_left - x_right
where x_left and x_right are the abscissas of the target point on the imaging planes of the left and right cameras respectively. Initial values of f and Tx are obtained through calibration and refined through stereo calibration so that the two cameras are mathematically perfectly parallel. The algorithm obtains the parallax at sub-pixel precision, which is higher than the integer-pixel precision commonly obtained in the prior art, and the distance between the target point and the camera is then obtained from the inverse-proportional relation.
And step S3, the output module outputs the synthesized stereo panoramic image to the vehicle-mounted display equipment of the automobile, the distance of the front obstacle is obtained according to the binocular camera ranging principle, and the alarm gives out a dangerous distance alarm.
Step S2 specifically includes the following steps:
(201) Camera calibration: using a checkerboard calibration plate, the camera to be calibrated photographs the plate from different directions, and the internal parameters of the camera are solved from the geometric relation between the image coordinates of the corner points (the feature points of the checkerboard) and their coordinates in the world coordinate system; the internal parameters comprise the focal length, image center and distortion coefficients of the camera;
when solving the external parameters of the camera, a world coordinate system is established with the center of the vehicle bottom as the world origin, distortion correction is performed using the obtained internal parameters, and the rotation and translation of the camera coordinate system relative to the world coordinate system are obtained from the projection relation between the feature points in the world coordinate system and their image points in the captured image;
Referring to fig. 4, the target image is synthesized: the camera is calibrated to obtain its internal and external parameters, the images acquired by the cameras are mapped onto a three-dimensional model using the external parameters, a three-dimensional panoramic image of the automobile is generated after texture mapping, brightness correction is performed according to the brightness of the original images, image fusion is performed in the image fusion zones, and the stitching seams of the panoramic image are weakened. The calibration template plane is set on the plane Z = 0 of the world coordinate system, and the internal and external parameters of the camera are obtained from the imaging principle of the camera and the linear relation between the image captured by the camera and the object in three-dimensional space; the imaging process of the camera involves conversions between four coordinate systems.
The step (201) specifically comprises the following steps:
(201a) the calibration template plane is set on the plane Z = 0 of the world coordinate system, the world coordinate system is converted to the camera coordinate system, and the internal and external parameters of the camera are obtained from the imaging model of the camera and the linear relation between the image captured by the camera and the object in three-dimensional space:
wherein (Xw, Yw, Zw) are the world coordinates of a corner point in the calibration template, (Xc, Yc, Zc) are the coordinates of that corner point in the camera coordinate system, and R and T are respectively the rotation matrix and the translation vector among the camera external parameters; the conversion from the world coordinate system to the camera coordinate system is completed according to the relation of formula (1);
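The world-to-camera conversion of formula (1) can be sketched as follows; the rotation and translation below are hypothetical extrinsics for illustration, not calibrated values:

```python
import numpy as np

def world_to_camera(Pw, R, T):
    """Formula (1): Pc = R @ Pw + T maps a calibration-template corner
    from the world frame (vehicle-bottom centre as origin) into the
    camera frame."""
    return R @ Pw + T

# Hypothetical extrinsics: a 90-degree yaw and a 2 m offset along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([2.0, 0.0, 0.0])
print(world_to_camera(np.array([1.0, 0.0, 0.0]), R, T))  # [2. 1. 0.]
```

Each of the four cameras has its own R and T pair, which is what lets the four views be placed consistently on the shared three-dimensional model.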
(201b) according to the similar-triangle relation of the pinhole imaging principle, the conversion from the camera coordinate system to the imaging-plane coordinate system is completed:
wherein f is the focal length of the camera, (x, y) are the image-plane coordinates of the corner point, and (Xc, Yc, Zc) are the coordinates of the corner point in the camera coordinate system;
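The similar-triangles projection of step (201b) reduces to two divisions by the depth; a minimal sketch with an assumed 8 mm focal length (an illustrative value, not from the patent):

```python
def camera_to_image_plane(Xc, Yc, Zc, f):
    """Pinhole relation x = f * Xc / Zc, y = f * Yc / Zc: a point in the
    camera frame is scaled onto the imaging plane in proportion to its
    depth Zc."""
    return f * Xc / Zc, f * Yc / Zc

# A point half a metre to the side, 2 m in front of an f = 8 mm camera.
x, y = camera_to_image_plane(0.5, 0.25, 2.0, 0.008)
print(x, y)  # 0.002 0.001  (metres on the sensor)
```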
(201c) the image plane is sampled and quantized to obtain the pixel coordinates of the corner points;
wherein dx and dy are respectively the sizes of a CMOS pixel in the x and y directions, (Cx, Cy) is the optical center of the camera, and (x, y) are the image-plane coordinates of the corner point, completing the conversion from the imaging plane to the pixel plane; (u, v) are the coordinates of the corner point in the image acquired by the camera.
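Step (201c) can be sketched as follows, assuming 3 µm square pixels and a VGA principal point (both illustrative, not calibrated values):

```python
def image_plane_to_pixel(x, y, dx, dy, Cx, Cy):
    """Sampling/quantisation step: u = x / dx + Cx, v = y / dy + Cy,
    where dx and dy are the CMOS pixel sizes in the x and y directions
    and (Cx, Cy) is the optical centre expressed in pixels."""
    return x / dx + Cx, y / dy + Cy

u, v = image_plane_to_pixel(0.002, 0.001, dx=3e-6, dy=3e-6, Cx=320.0, Cy=240.0)
print(u, v)  # roughly (986.7, 573.3): the pixel hit by the projected corner point
```

Chaining (201a) to (201c) gives the full world-to-pixel mapping that texture mapping in step (202) relies on.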
(202) Texture mapping is the process of mapping texels in texture space to pixels in screen space. The three-dimensional model is composed of points with actual world coordinates; according to the imaging model of the camera, a one-to-one correspondence is established between the model vertices and pixel coordinates in the image acquired by the camera, so that the image is mapped onto the surface of the mesh three-dimensional model;
(203) brightness correction: the gains required for brightness balance are calculated from the brightness of the images acquired by the front, rear, left and right cameras and applied to the corresponding images, eliminating the brightness differences between the stitched images;
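The gain computation of step (203) can be sketched as below; equalising toward the mean luminance is one simple choice of balance target, and the patent does not fix the exact formula:

```python
def brightness_gains(means, target=None):
    """Per-camera gains that equalise mean brightness across the four
    views; `means` holds the mean luminance of the front, rear, left
    and right images, and each image is multiplied by its gain before
    stitching so the seams show no brightness step."""
    if target is None:
        target = sum(means) / len(means)
    return [target / m for m in means]

gains = brightness_gains([100.0, 120.0, 80.0, 100.0])
print(gains)  # the brighter rear view is attenuated, the darker left view boosted
```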
(204) image fusion: the synthesized image achieves a smooth transition through a pixel-level weighted fusion algorithm, yielding a high-quality fused image.
Specifically, fig. 6 is a schematic diagram of the surround-view image stitching fusion zones in an embodiment of the present invention, wherein zones 1, 3, 5, 7 and 9 are fusion zones, zones 2, 4, 6, 8 and 10 are non-fusion zones, and zone 11 is the position of the car. In a fusion zone, the world coordinates on the three-dimensional model are projected under the external parameters of the two different cameras to obtain the corresponding coordinates in each of the two original images; at each world coordinate point, the textures from the two original images are combined with weights that vary with the angle across the zone, realizing fusion of the two views in the fusion zone.
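The angle-dependent weighting in a fusion zone can be sketched per pixel; the linear weight profile below is an assumption, since the patent only requires that the weights vary with the angle across the zone:

```python
def blend_pixel(tex_a, tex_b, angle, angle_min, angle_max):
    """Pixel-level weighted fusion: camera A's weight falls linearly as
    the angle sweeps across the fusion zone, so the stitched texture
    transitions smoothly from one camera's image to the other's."""
    w = (angle_max - angle) / (angle_max - angle_min)  # 1 at A's edge, 0 at B's
    return w * tex_a + (1.0 - w) * tex_b

# Midway through a 30-60 degree fusion zone, both cameras contribute equally.
print(blend_pixel(200.0, 100.0, angle=45.0, angle_min=30.0, angle_max=60.0))  # 150.0
```

At the zone boundaries the weight reaches 1 or 0, so the fused texture matches the adjacent non-fusion zone exactly and no visible seam remains.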
After the information processing module finishes processing the images, the output module displays the generated automobile surround-view image on the vehicle-mounted platform, real-time monitoring and early warning are performed according to the distance measured by the binocular camera, and when a front obstacle comes within the dangerous distance, the alarm warns the driver.
Those skilled in the art may modify or vary the present invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to embrace them.
Claims (10)
1. A multi-view automobile all-round-looking auxiliary driving system is characterized by comprising a plurality of cameras, an information processing module and an output module;
the camera acquires images around the automobile body; the camera comprises a first camera arranged on the automobile bumper, a second camera arranged on the left rearview mirror, a third camera arranged on the right rearview mirror and a fourth camera arranged on the automobile trunk;
the information processing module performs image processing: establishing a three-dimensional model and processing the images acquired by the cameras to obtain a three-dimensional all-round image of the surroundings of the vehicle body; viewing images of the environment around the vehicle body from a plurality of viewpoints based on an image viewpoint transformation algorithm, and calculating the distance to a front obstacle based on the images acquired by the first camera;
the output module outputs the result of the image processing by the information processing module.
2. The multi-viewpoint automotive surround view aided driving system according to claim 1,
the first camera is a binocular camera, and the second camera, the third camera and the fourth camera are fisheye cameras;
the binocular camera comprises two image sensors, the two image sensors are on the same baseline, the time sequences of the two image sensors are synchronous, and the shot images have overlapping areas.
3. The multi-viewpoint automotive circular vision-aided driving system as claimed in claim 1, wherein the three-dimensional model is a mesh-like three-dimensional model, the bottom surface is a three-dimensional plane, the connecting surface is an arc surface, the annular surface is a cylindrical surface, and the bottom surface, the connecting surface and the annular surface are connected.
4. The multi-view automobile all-around driving assisting system according to claim 1, wherein the generation of the stereoscopic all-around image comprises camera calibration, establishment of a texture mapping relation between a three-dimensional model vertex and an image pixel point, mapping of an image acquired by a camera onto the three-dimensional model according to the texture mapping relation, brightness correction of the mapped image, and fusion of images in a splicing region.
5. The multi-viewpoint automobile all-around auxiliary driving system according to claim 4, wherein the camera calibration comprises obtaining internal parameters and external parameters of a camera;
the internal parameters of the camera comprise a focal length, an image center and a distortion coefficient;
the external parameters of the cameras comprise the poses of the four cameras relative to a world coordinate system established with the vehicle center as the origin, and the rotation-translation relation between the two image sensors of the first camera.
6. The system of claim 1, wherein the output module comprises an image display device and an alarm.
7. A multi-view automobile all-round-looking auxiliary driving method is characterized by comprising the following steps:
step S1, image acquisition:
collecting images around the vehicle body through a camera; the cameras comprise a first camera arranged on the automobile bumper, a second camera arranged on the left rearview mirror, a third camera arranged on the right rearview mirror and a fourth camera arranged on the automobile trunk, and the four cameras respectively collect images in four directions of the front, the left, the right and the back of the automobile;
step S2, processing the collected images in an information processing module to synthesize a target image;
establishing a mesh-shaped three-dimensional model consisting of a three-dimensional plane, an arc surface connecting surface and an annular cylindrical surface, calibrating a camera to obtain internal parameters and external parameters of the camera, mapping images shot by the camera in four directions, namely front, back, left and right directions, of the automobile to the mesh-shaped three-dimensional model by using the internal parameters and the external parameters of the camera, mapping textures to obtain a three-dimensional panoramic image around the automobile, correcting brightness according to the brightness of an original image, fusing the images in an image fusion area, and weakening a splicing seam of the panoramic image; measuring the distance from the front obstacle to the vehicle by using the image shot by the first camera, setting different observation visual angles, and displaying images of the vehicle in different directions by using the viewpoint transformation matrix;
and step S3, the output module outputs the synthesized stereoscopic panoramic image to the vehicle-mounted display device of the automobile, the distance to the front obstacle is obtained according to the binocular-camera ranging principle, and the alarm issues a dangerous-distance warning.
8. The multi-viewpoint automobile all-around auxiliary driving method according to claim 7, characterized by comprising the following steps:
step S2 specifically includes the following steps:
(201) calibrating a camera: based on the checkerboard calibration plate, the camera to be calibrated photographs the checkerboard calibration plate from different directions, and the internal parameters of the camera are solved from the geometric relation between the image coordinates of the corner points on the checkerboard calibration plate and their coordinates in the world coordinate system, wherein the internal parameters of the camera comprise the focal length, the image center and the distortion coefficients of the camera;
when solving the external reference of the camera, establishing a world coordinate system by taking the center of the vehicle bottom as a world origin, carrying out distortion correction according to the obtained internal reference of the camera, and obtaining the rotation amount and the translation amount of the camera coordinate system relative to the world coordinate system according to the projection relation between the characteristic points under the world coordinate system and image points in image imaging;
(202) texture mapping is the process of mapping texels in texture space to pixels in screen space; the three-dimensional model is composed of points in actual world coordinates, and according to the imaging model of the camera, a one-to-one corresponding relation is established between the model vertex and pixel coordinates in the image acquired by the camera, so that the image is mapped to the surface of the mesh three-dimensional model;
(203) and brightness correction: the gains required for brightness balance are calculated from the brightness of the images acquired by the front, rear, left and right cameras and applied to the corresponding images, eliminating the brightness differences between the stitched images;
(204) and image fusion: the synthesized image achieves a smooth transition through a pixel-level weighted fusion algorithm, yielding a high-quality fused image.
9. The multi-viewpoint automobile all-around auxiliary driving method according to claim 8, comprising the following steps:
the step (201) specifically comprises the following steps:
(201a) setting a calibration template plane on a plane with a world coordinate system Z being 0, converting the world coordinate system to a camera coordinate system plane, and obtaining an internal parameter and an external parameter of a camera according to a linear relation between an image shot by the camera and an object in a three-dimensional space according to an imaging model of the camera:
wherein, (Xw, Yw, Zw) is the world coordinate of the angular point in the calibration template, (Xc, Yc, Zc) is the coordinate of the angular point in the calibration template in the camera coordinate system, R and T are respectively a rotation matrix and a translation matrix in the camera external reference, and the conversion from the world coordinate system to the camera coordinate system is completed according to the relation of formula (1);
(201b) according to the similar triangular relation of the pinhole imaging principle, the conversion from a camera coordinate system to an imaging plane coordinate system is completed:
wherein, f is the focal length of the camera, (x, y) is the image plane coordinates of the angular point, and (Xc, Yc, Zc) is the coordinates of the angular point in the calibration template in the camera coordinate system;
(201c) sampling and quantizing an image plane to obtain pixel values of angular points;
wherein dx and dy are respectively the sizes of a CMOS pixel in the x and y directions, (Cx, Cy) is the optical center of the camera, and (x, y) are the image plane coordinates of the angular point, thereby completing the conversion from the imaging plane to the pixel plane; (u, v) are the coordinates of the angular point in the image acquired by the camera.
10. The multi-viewpoint automobile all-around auxiliary driving method according to claim 7, characterized by comprising the following steps:
the method for calculating the distance to the front obstacle from the images acquired by the first camera specifically comprises the following steps:
(301) and eliminating distortion: image distortion correction is performed according to the camera internal parameters obtained by camera calibration; specifically, the internal parameters of each camera are obtained by binocular camera calibration, and the relative position between the two cameras is measured by calibration;
(302) and binocular rectification: to calculate the parallax of a target point between the left and right views, the two corresponding image points of the target point on the left and right views are first matched; the epipolar constraint is used to reduce the search range of the matching and improve the matching efficiency, the two distortion-corrected images are aligned so that their epipolar lines lie on the same horizontal line, and a one-dimensional search along that line finds the corresponding point;
(303) and calculating parallax to realize ranging: a Block Matching algorithm is adopted for stereo matching, using the parallax between the abscissas of the target point imaged on the left and right views and its inverse-proportional relation with the distance from the target point to the imaging plane:
wherein (X, Y, Z) are the coordinates of the target point in a world coordinate system whose origin is the optical center of the left camera, Z is the distance between the target point and the imaging plane in the left-camera coordinate system, Tx is the center distance between the left and right cameras, f is the focal length of the cameras, and d is the difference between the abscissas of the target point as imaged on the left and right views, namely the parallax;
the parallax d is obtained by the following formula,
d = x_left - x_right
wherein x_left and x_right are respectively the abscissas of the target point on the imaging planes of the left and right cameras; the initial values of f and Tx are obtained through calibration and refined through stereo calibration so that the two cameras are, mathematically, completely parallel; and solving the parallax d gives the distance between the target point and the camera according to the inverse-proportional relation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810507410.3A CN108765496A (en) | 2018-05-24 | 2018-05-24 | A kind of multiple views automobile looks around DAS (Driver Assistant System) and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810507410.3A CN108765496A (en) | 2018-05-24 | 2018-05-24 | A kind of multiple views automobile looks around DAS (Driver Assistant System) and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108765496A true CN108765496A (en) | 2018-11-06 |
Family
ID=64006480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810507410.3A Pending CN108765496A (en) | 2018-05-24 | 2018-05-24 | A kind of multiple views automobile looks around DAS (Driver Assistant System) and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108765496A (en) |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109624854A (en) * | 2018-12-03 | 2019-04-16 | 浙江明航智能科技有限公司 | A kind of 360 ° of panoramas auxiliary visible system suitable for special vehicle |
CN109712194A (en) * | 2018-12-10 | 2019-05-03 | 深圳开阳电子股份有限公司 | Vehicle-mounted viewing system and its stereo calibration method and computer readable storage medium |
CN109741456A (en) * | 2018-12-17 | 2019-05-10 | 深圳市航盛电子股份有限公司 | 3D based on GPU concurrent operation looks around vehicle assistant drive method and system |
CN109741455A (en) * | 2018-12-10 | 2019-05-10 | 深圳开阳电子股份有限公司 | A kind of vehicle-mounted stereoscopic full views display methods, computer readable storage medium and system |
CN109801357A (en) * | 2018-12-04 | 2019-05-24 | 先临三维科技股份有限公司 | Show method and device, the storage medium, processor of three-dimensional digital model |
CN109801339A (en) * | 2018-12-29 | 2019-05-24 | 百度在线网络技术(北京)有限公司 | Image processing method, device and storage medium |
CN109819169A (en) * | 2019-02-13 | 2019-05-28 | 上海闻泰信息技术有限公司 | Panorama shooting method, device, equipment and medium |
CN109887036A (en) * | 2019-01-21 | 2019-06-14 | 广州市安晓科技有限责任公司 | A kind of automobile looks around the semi-automatic calibration system and method for panorama |
CN109903556A (en) * | 2019-03-01 | 2019-06-18 | 成都众易通科技有限公司 | A kind of vehicle blind zone on-line monitoring early warning system |
CN109949205A (en) * | 2019-01-30 | 2019-06-28 | 广东工业大学 | A kind of automatic Pilot image perception System and method for for simulating human eye |
CN110276716A (en) * | 2019-06-19 | 2019-09-24 | 北京茵沃汽车科技有限公司 | The generation method of the 180 degree correction view of vehicle front-and rear-view fish eye images |
CN110336991A (en) * | 2019-06-28 | 2019-10-15 | 深圳数位传媒科技有限公司 | A kind of environmental cues method and device based on binocular camera |
CN110414487A (en) * | 2019-08-16 | 2019-11-05 | 东软睿驰汽车技术(沈阳)有限公司 | Method and device for identifying lane line |
CN110428361A (en) * | 2019-07-25 | 2019-11-08 | 北京麒麟智能科技有限公司 | A kind of multiplex image acquisition method based on artificial intelligence |
CN110443855A (en) * | 2019-08-08 | 2019-11-12 | Oppo广东移动通信有限公司 | Multi-camera calibration, device, storage medium and electronic equipment |
CN110517216A (en) * | 2019-08-30 | 2019-11-29 | 的卢技术有限公司 | A kind of SLAM fusion method and its system based on polymorphic type camera |
CN110796102A (en) * | 2019-10-31 | 2020-02-14 | 重庆长安汽车股份有限公司 | Vehicle target sensing system and method |
CN110827358A (en) * | 2019-10-15 | 2020-02-21 | 深圳数翔科技有限公司 | Camera calibration method applied to automatic driving automobile |
CN110827361A (en) * | 2019-11-01 | 2020-02-21 | 清华大学 | Camera group calibration method and device based on global calibration frame |
CN111098785A (en) * | 2019-12-20 | 2020-05-05 | 天津市航天安通电子科技有限公司 | Driving assistance system, special vehicle and method |
CN111256693A (en) * | 2018-12-03 | 2020-06-09 | 北京初速度科技有限公司 | Pose change calculation method and vehicle-mounted terminal |
CN111277796A (en) * | 2020-01-21 | 2020-06-12 | 深圳市德赛微电子技术有限公司 | Image processing method, vehicle-mounted vision auxiliary system and storage device |
CN111539973A (en) * | 2020-04-28 | 2020-08-14 | 北京百度网讯科技有限公司 | Method and device for detecting pose of vehicle |
TWI702577B (en) * | 2019-07-10 | 2020-08-21 | 中華汽車工業股份有限公司 | A method for generating a driving assistance image utilizing in a vehicle and a system thereof |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
CN111768332A (en) * | 2019-03-26 | 2020-10-13 | 深圳市航盛电子股份有限公司 | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device |
CN111861891A (en) * | 2020-07-13 | 2020-10-30 | 一汽奔腾轿车有限公司 | Method for realizing panoramic image system picture splicing display based on checkerboard calibration |
CN111986246A (en) * | 2019-05-24 | 2020-11-24 | 北京四维图新科技股份有限公司 | Three-dimensional model reconstruction method and device based on image processing and storage medium |
CN111986248A (en) * | 2020-08-18 | 2020-11-24 | 东软睿驰汽车技术(沈阳)有限公司 | Multi-view visual perception method and device and automatic driving automobile |
CN112208438A (en) * | 2019-07-10 | 2021-01-12 | 中华汽车工业股份有限公司 | Driving auxiliary image generation method and system |
CN112241930A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted all-round looking method, device and system and vehicle |
CN112347825A (en) * | 2019-08-09 | 2021-02-09 | 杭州海康威视数字技术股份有限公司 | Method and system for adjusting vehicle body all-round model |
CN113066158A (en) * | 2019-12-16 | 2021-07-02 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted all-round looking method and device |
CN113135181A (en) * | 2021-06-22 | 2021-07-20 | 北京踏歌智行科技有限公司 | Mining area automatic driving loading and unloading point accurate parking method based on visual assistance |
CN113212427A (en) * | 2020-02-03 | 2021-08-06 | 通用汽车环球科技运作有限责任公司 | Intelligent vehicle with advanced vehicle camera system for underbody hazard and foreign object detection |
CN113313813A (en) * | 2021-05-12 | 2021-08-27 | 武汉极目智能技术有限公司 | Vehicle-mounted 3D panoramic all-around viewing system capable of actively early warning |
WO2021226772A1 (en) * | 2020-05-11 | 2021-11-18 | 上海欧菲智能车联科技有限公司 | Surround view display method and apparatus, computer device, and storage medium |
CN113724133A (en) * | 2021-08-06 | 2021-11-30 | 武汉极目智能技术有限公司 | 360-degree all-round-view splicing method for trailer connected by non-rigid bodies |
CN113727064A (en) * | 2020-05-26 | 2021-11-30 | 北京罗克维尔斯科技有限公司 | Method and device for determining field angle of camera |
CN113763534A (en) * | 2021-08-24 | 2021-12-07 | 同致电子科技(厦门)有限公司 | Point cloud mapping method based on visual look-around system |
CN114331867A (en) * | 2021-11-29 | 2022-04-12 | 惠州华阳通用智慧车载系统开发有限公司 | Panoramic system image correction method |
CN114418851A (en) * | 2022-01-18 | 2022-04-29 | 长沙慧联智能科技有限公司 | Multi-view 3D panoramic all-around viewing system and splicing method |
CN114445793A (en) * | 2021-12-20 | 2022-05-06 | 桂林电子科技大学 | Intelligent driving auxiliary system based on artificial intelligence and computer vision |
CN115661366A (en) * | 2022-12-05 | 2023-01-31 | 蔚来汽车科技(安徽)有限公司 | Method for constructing three-dimensional scene model and image processing device |
CN115830118A (en) * | 2022-12-08 | 2023-03-21 | 重庆市信息通信咨询设计院有限公司 | Crack detection method and system for cement electric pole based on binocular camera |
US20230177790A1 (en) * | 2021-12-03 | 2023-06-08 | Honda Motor Co., Ltd. | Control device, control method, and recording medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103231708A (en) * | 2013-04-12 | 2013-08-07 | 安徽工业大学 | Intelligent vehicle obstacle avoiding method based on binocular vision |
CN103810686A (en) * | 2014-02-27 | 2014-05-21 | 苏州大学 | Seamless splicing panorama assisting driving system and method |
CN104574376A (en) * | 2014-12-24 | 2015-04-29 | 重庆大学 | Anti-collision method based on joint verification of binocular vision and laser radar in congested traffic |
CN105678787A (en) * | 2016-02-03 | 2016-06-15 | 西南交通大学 | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103231708A (en) * | 2013-04-12 | 2013-08-07 | 安徽工业大学 | Intelligent vehicle obstacle avoiding method based on binocular vision |
CN103810686A (en) * | 2014-02-27 | 2014-05-21 | 苏州大学 | Seamless splicing panorama assisting driving system and method |
CN104574376A (en) * | 2014-12-24 | 2015-04-29 | 重庆大学 | Anti-collision method based on joint verification of binocular vision and laser radar in congested traffic |
CN105678787A (en) * | 2016-02-03 | 2016-06-15 | 西南交通大学 | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera |
Non-Patent Citations (2)
Title |
---|
Cai Zeping (蔡泽平): "Research on Multi-Camera Fusion Perception for Unmanned Vehicles", Wanfang Degree Theses Database *
Huang Dong (黄冬): "Research on Key Technologies of a 3D Panoramic Driving Assistance System", China Master's Theses Full-text Database (Information Science and Technology, Series II) *
Cited By (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109624854A (en) * | 2018-12-03 | 2019-04-16 | 浙江明航智能科技有限公司 | A kind of 360 ° of panoramas auxiliary visible system suitable for special vehicle |
CN111256693A (en) * | 2018-12-03 | 2020-06-09 | 北京初速度科技有限公司 | Pose change calculation method and vehicle-mounted terminal |
CN111256693B (en) * | 2018-12-03 | 2022-05-13 | 北京魔门塔科技有限公司 | Pose change calculation method and vehicle-mounted terminal |
CN109801357B (en) * | 2018-12-04 | 2023-10-31 | 先临三维科技股份有限公司 | Method and device for displaying three-dimensional digital model, storage medium and processor |
CN109801357A (en) * | 2018-12-04 | 2019-05-24 | 先临三维科技股份有限公司 | Show method and device, the storage medium, processor of three-dimensional digital model |
CN109712194A (en) * | 2018-12-10 | 2019-05-03 | 深圳开阳电子股份有限公司 | Vehicle-mounted viewing system and its stereo calibration method and computer readable storage medium |
CN109741455A (en) * | 2018-12-10 | 2019-05-10 | 深圳开阳电子股份有限公司 | A kind of vehicle-mounted stereoscopic full views display methods, computer readable storage medium and system |
CN109712194B (en) * | 2018-12-10 | 2021-09-24 | 深圳开阳电子股份有限公司 | Vehicle-mounted all-round looking system, three-dimensional calibration method thereof and computer readable storage medium |
CN109741455B (en) * | 2018-12-10 | 2022-11-29 | 深圳开阳电子股份有限公司 | Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system |
CN109741456A (en) * | 2018-12-17 | 2019-05-10 | 深圳市航盛电子股份有限公司 | 3D based on GPU concurrent operation looks around vehicle assistant drive method and system |
CN109801339A (en) * | 2018-12-29 | 2019-05-24 | 百度在线网络技术(北京)有限公司 | Image processing method, device and storage medium |
CN109887036A (en) * | 2019-01-21 | 2019-06-14 | 广州市安晓科技有限责任公司 | A kind of automobile looks around the semi-automatic calibration system and method for panorama |
CN109949205A (en) * | 2019-01-30 | 2019-06-28 | 广东工业大学 | A kind of automatic Pilot image perception System and method for for simulating human eye |
CN109819169A (en) * | 2019-02-13 | 2019-05-28 | 上海闻泰信息技术有限公司 | Panorama shooting method, device, equipment and medium |
CN109903556A (en) * | 2019-03-01 | 2019-06-18 | 成都众易通科技有限公司 | A kind of vehicle blind zone on-line monitoring early warning system |
CN111768332A (en) * | 2019-03-26 | 2020-10-13 | 深圳市航盛电子股份有限公司 | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device |
CN111768332B (en) * | 2019-03-26 | 2024-05-14 | 深圳市航盛电子股份有限公司 | Method for splicing vehicle-mounted panoramic real-time 3D panoramic images and image acquisition device |
CN111986246A (en) * | 2019-05-24 | 2020-11-24 | 北京四维图新科技股份有限公司 | Three-dimensional model reconstruction method and device based on image processing and storage medium |
CN111986246B (en) * | 2019-05-24 | 2024-04-30 | 北京四维图新科技股份有限公司 | Three-dimensional model reconstruction method, device and storage medium based on image processing |
CN110276716B (en) * | 2019-06-19 | 2023-06-20 | 北京茵沃汽车科技有限公司 | Method for generating 180-degree correction view of front and rear view fisheye images of vehicle |
CN110276716A (en) * | 2019-06-19 | 2019-09-24 | 北京茵沃汽车科技有限公司 | The generation method of the 180 degree correction view of vehicle front-and rear-view fish eye images |
CN110336991A (en) * | 2019-06-28 | 2019-10-15 | 深圳数位传媒科技有限公司 | A kind of environmental cues method and device based on binocular camera |
CN110336991B (en) * | 2019-06-28 | 2021-07-13 | 深圳数位传媒科技有限公司 | Binocular camera-based environment prompting method and device |
CN112208438A (en) * | 2019-07-10 | 2021-01-12 | 中华汽车工业股份有限公司 | Driving auxiliary image generation method and system |
TWI702577B (en) * | 2019-07-10 | 2020-08-21 | 中華汽車工業股份有限公司 | A method for generating a driving assistance image utilizing in a vehicle and a system thereof |
CN112241930A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted all-round looking method, device and system and vehicle |
CN110428361A (en) * | 2019-07-25 | 2019-11-08 | 北京麒麟智能科技有限公司 | A kind of multiplex image acquisition method based on artificial intelligence |
CN110443855A (en) * | 2019-08-08 | 2019-11-12 | Oppo广东移动通信有限公司 | Multi-camera calibration, device, storage medium and electronic equipment |
CN112347825A (en) * | 2019-08-09 | 2021-02-09 | 杭州海康威视数字技术股份有限公司 | Method and system for adjusting vehicle body all-round model |
CN112347825B (en) * | 2019-08-09 | 2023-08-22 | 杭州海康威视数字技术股份有限公司 | Adjusting method and system for vehicle body looking-around model |
CN110414487B (en) * | 2019-08-16 | 2022-05-13 | 东软睿驰汽车技术(沈阳)有限公司 | Method and device for identifying lane line |
CN110414487A (en) * | 2019-08-16 | 2019-11-05 | 东软睿驰汽车技术(沈阳)有限公司 | Method and device for identifying lane line |
CN110517216A (en) * | 2019-08-30 | 2019-11-29 | 的卢技术有限公司 | A kind of SLAM fusion method and its system based on polymorphic type camera |
CN110517216B (en) * | 2019-08-30 | 2023-09-22 | 的卢技术有限公司 | SLAM fusion method and system based on multiple types of cameras |
CN110827358B (en) * | 2019-10-15 | 2023-10-31 | 深圳数翔科技有限公司 | Camera calibration method applied to automatic driving automobile |
CN110827358A (en) * | 2019-10-15 | 2020-02-21 | 深圳数翔科技有限公司 | Camera calibration method applied to automatic driving automobile |
CN110796102A (en) * | 2019-10-31 | 2020-02-14 | 重庆长安汽车股份有限公司 | Vehicle target sensing system and method |
CN110796102B (en) * | 2019-10-31 | 2023-04-14 | 重庆长安汽车股份有限公司 | Vehicle target sensing system and method |
CN110827361A (en) * | 2019-11-01 | 2020-02-21 | 清华大学 | Camera group calibration method and device based on global calibration frame |
CN113066158A (en) * | 2019-12-16 | 2021-07-02 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted surround view method and device |
CN113066158B (en) * | 2019-12-16 | 2023-03-10 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted surround view method and device |
CN111098785A (en) * | 2019-12-20 | 2020-05-05 | 天津市航天安通电子科技有限公司 | Driving assistance system, special vehicle and method |
CN111277796A (en) * | 2020-01-21 | 2020-06-12 | 深圳市德赛微电子技术有限公司 | Image processing method, vehicle-mounted vision auxiliary system and storage device |
CN113212427A (en) * | 2020-02-03 | 2021-08-06 | 通用汽车环球科技运作有限责任公司 | Intelligent vehicle with advanced vehicle camera system for underbody hazard and foreign object detection |
CN111539973A (en) * | 2020-04-28 | 2020-08-14 | 北京百度网讯科技有限公司 | Method and device for detecting pose of vehicle |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
WO2021226772A1 (en) * | 2020-05-11 | 2021-11-18 | 上海欧菲智能车联科技有限公司 | Surround view display method and apparatus, computer device, and storage medium |
CN113727064B (en) * | 2020-05-26 | 2024-03-22 | 北京罗克维尔斯科技有限公司 | Method and device for determining camera field angle |
CN113727064A (en) * | 2020-05-26 | 2021-11-30 | 北京罗克维尔斯科技有限公司 | Method and device for determining field angle of camera |
CN111861891A (en) * | 2020-07-13 | 2020-10-30 | 一汽奔腾轿车有限公司 | Method for realizing stitched display of panoramic image system pictures based on checkerboard calibration |
CN111986248A (en) * | 2020-08-18 | 2020-11-24 | 东软睿驰汽车技术(沈阳)有限公司 | Multi-view visual perception method and device and automatic driving automobile |
CN111986248B (en) * | 2020-08-18 | 2024-02-09 | 东软睿驰汽车技术(沈阳)有限公司 | Multi-vision sensing method and device and automatic driving automobile |
CN113313813A (en) * | 2021-05-12 | 2021-08-27 | 武汉极目智能技术有限公司 | Vehicle-mounted 3D panoramic surround view system with active early warning |
CN113135181B (en) * | 2021-06-22 | 2021-08-24 | 北京踏歌智行科技有限公司 | Mining area automatic driving loading and unloading point accurate parking method based on visual assistance |
CN113135181A (en) * | 2021-06-22 | 2021-07-20 | 北京踏歌智行科技有限公司 | Mining area automatic driving loading and unloading point accurate parking method based on visual assistance |
CN113724133A (en) * | 2021-08-06 | 2021-11-30 | 武汉极目智能技术有限公司 | 360-degree surround view stitching method for a non-rigidly connected trailer |
CN113724133B (en) * | 2021-08-06 | 2024-03-05 | 武汉极目智能技术有限公司 | 360-degree surround view stitching method for a non-rigidly connected trailer |
CN113763534A (en) * | 2021-08-24 | 2021-12-07 | 同致电子科技(厦门)有限公司 | Point cloud mapping method based on a visual surround view system |
CN113763534B (en) * | 2021-08-24 | 2023-12-15 | 同致电子科技(厦门)有限公司 | Point cloud mapping method based on a visual surround view system |
CN114331867A (en) * | 2021-11-29 | 2022-04-12 | 惠州华阳通用智慧车载系统开发有限公司 | Panoramic system image correction method |
US20230177790A1 (en) * | 2021-12-03 | 2023-06-08 | Honda Motor Co., Ltd. | Control device, control method, and recording medium |
US12045945B2 (en) * | 2021-12-03 | 2024-07-23 | Honda Motor Co., Ltd. | Control device, control method, and recording medium |
CN114445793A (en) * | 2021-12-20 | 2022-05-06 | 桂林电子科技大学 | Intelligent driving auxiliary system based on artificial intelligence and computer vision |
CN114418851A (en) * | 2022-01-18 | 2022-04-29 | 长沙慧联智能科技有限公司 | Multi-view 3D panoramic surround view system and stitching method |
CN115661366A (en) * | 2022-12-05 | 2023-01-31 | 蔚来汽车科技(安徽)有限公司 | Method for constructing three-dimensional scene model and image processing device |
CN115830118A (en) * | 2022-12-08 | 2023-03-21 | 重庆市信息通信咨询设计院有限公司 | Crack detection method and system for cement electric pole based on binocular camera |
CN115830118B (en) * | 2022-12-08 | 2024-03-19 | 重庆市信息通信咨询设计院有限公司 | Crack detection method and system for cement electric pole based on binocular camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765496A (en) | Multi-view automobile surround view driver assistance system and method | |
CN109741455B (en) | Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system | |
EP2437494B1 (en) | Device for monitoring area around vehicle | |
JP4861574B2 (en) | Driving assistance device | |
JP5739584B2 (en) | 3D image synthesizing apparatus and method for visualizing vehicle periphery | |
CN103778649B (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
CN108269235A (en) | OpenGL-based vehicle-mounted multi-view surround view panorama generation method | |
CN105763854B (en) | Omnidirectional imaging system based on a monocular camera and imaging method thereof | |
CN111559314B (en) | 3D enhanced panoramic surround view system fusing depth and image information and implementation method | |
JP5455124B2 (en) | Camera posture parameter estimation device | |
JP5953824B2 (en) | Vehicle rear view support apparatus and vehicle rear view support method | |
CN112224132B (en) | Vehicle panoramic surround view obstacle early warning method | |
CN109087251B (en) | Vehicle-mounted panoramic image display method and system | |
KR20190047027A (en) | Method for providing a rear-view mirror view of the vehicle's surroundings in the vehicle | |
JP2023505891A (en) | Methods for measuring environmental topography | |
JP3301421B2 (en) | Vehicle surrounding situation presentation device | |
CN115239922A (en) | AR-HUD three-dimensional coordinate reconstruction method based on binocular camera | |
JP5083443B2 (en) | Driving support device and method, and arithmetic device | |
CN107364393A (en) | Display method and device for vehicle rear view image, storage medium and electronic device | |
WO2020177970A1 (en) | Imaging system and method | |
EP3326146B1 (en) | Rear cross traffic - quick looks | |
JP4545503B2 (en) | Image generating apparatus and method | |
CN111862210A (en) | Target object detection and positioning method and device based on panoramic camera | |
JP7074546B2 (en) | Image processing equipment and methods | |
CN114937090A (en) | Intelligent electronic front and rear view mirror system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20181106 |