Disclosure of Invention
The invention provides a vehicle detection method and a detection device based on sensor multivariate information fusion, aiming to solve the technical problem of low positioning accuracy of existing vehicle sensors.
The invention is realized by adopting the following technical scheme: a vehicle detection method based on sensor multivariate information fusion comprises the following steps:
S1: converting a camera coordinate system and a laser radar coordinate system of a vehicle into a detection coordinate system of the vehicle;
S2: preliminarily determining a camera detection area according to an image vanishing line and a camera acquisition view angle, screening the laser radar point cloud data, then detecting mutation positions in the laser radar data return values, extracting road boundary position information, projecting the road boundary position information onto an image, and determining a vehicle passing area;
S3: constraining the detection angle of the laser radar within the view angle range of the camera, determining a detection area for image-based vehicle recognition, then projecting the object distance information detected by the laser radar onto the visual image, and using it as a base point to search the region of interest for vehicle recognition on the image;
S4: after the detection area of a vehicle in the image is determined, guiding the extraction of the vehicle-tail contour features in the image according to the contour change direction of the point cloud information in the laser point cloud set, fusing texture features to identify the vehicle ahead, and then verifying the image recognition result according to the point cloud structure and spatial position of the laser radar.
The invention starts from the angle of improving detection precision and reducing time consumption, and fuses raw data features with high-level feature data, thereby fully exploiting the detection advantages of each sensor, compensating for the deficiencies of each sensor, and meeting the final target requirements. In the raw data fusion, noise reduction is the main purpose: interference noise data that affect the detection result are removed, and recognition and positioning are achieved with the smallest possible data volume and detection cost. Then, object features are extracted from the raw data and fused across the two sensors, so that complementary detection improves object recognition precision and solves the technical problem of low positioning accuracy of existing vehicle sensors.
As a further improvement of the above scheme, a point in space is defined as P_l(x_l, y_l, z_l) in the laser radar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system. The conversion relations from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are respectively:
P_p = P_l·R_l + B_l
P_p = P_c·R_c + B_c
where R_l and R_c are the rotation matrices, and B_l and B_c the translation matrices, from the laser radar coordinate system and the camera coordinate system to the detection coordinate system, respectively.
As a further improvement of the above scheme, when the laser radar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of the ground point cloud produced by a laser beam of the laser radar are:
where x_l, y_l, z_l are the coordinates produced by an arbitrary laser beam, α_l is the search angle of that beam, ρ is the detection distance of the laser radar, and ω is the emission angle of the laser radar;
after the height is fixed, the point cloud coordinate points of the laser beam of the laser radar are as follows:
in the formula, H represents the installation height of the laser radar;
rotation correction is then performed on the source data of the laser radar, and the transformed data are compared with the coordinate point obtained from the height:
where P represents the coordinate point obtained from the height.
As a further improvement of the above solution, in step S3, the data return value of each point in the polar coordinate system in the detection range of the laser radar is:
P(ρ_n, α, ω_n),  n = 1, 2, 3, …
where α is the search angle and n is the beam index of the laser radar; ρ_n is the detection distance of beam n, and ω_n is the emission angle of beam n. Rotating about the z-axis, the beam search angle of the laser radar ranges over (0°, 360°); with the camera view angle denoted M, the search angle of the laser radar is corrected to (-0.5M, 0.5M).
As a further improvement of the above scheme, the point cloud data of the laser radar are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis is used to perform a secondary segmentation of the primary segmentation result to determine the vehicle passing area.
As a further improvement of the above scheme, when an obstacle appears in the radar detection range, the mutation position is detected: the laser point cloud data are extracted, the distance data received by the laser radar are fitted with a 6th-order Daubechies wavelet function, the wavelet function accurately locates the positions of abrupt data changes, boundary feature points are extracted, and the series of feature points is fitted by least squares to obtain the vehicle passing area.
As a further improvement of the above scheme, machine learning is applied to perform vehicle recognition on the determined vehicle detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from the two sides toward the middle and from bottom to top.
As a further improvement of the above solution, when projecting the object distance information onto the visual image, the method for deriving the conversion relationship between the camera coordinate system and the image coordinate system includes the following steps:
For any point P(X_c, Y_c, Z_c) in the camera coordinate system, triangle similarity is used to calculate the projection position of P(X_c, Y_c, Z_c) in the image coordinate system:
where f is the focal length of the camera, (O-xy) is the image coordinate system, and P(x, y) is the point to which P(X_c, Y_c, Z_c) projects in that coordinate system;
define (O_uv-uv) as the pixel coordinate system; the conversion formula from the camera coordinate system to the pixel coordinate system is:
the coordinate conversion formula from the point cloud to the image is deduced from the conversion formula from the laser radar coordinate system to the camera coordinate system, as follows:
The object distance information detected by the laser radar is then projected onto the visual image according to the coordinate conversion formula from the point cloud to the image.
As a further improvement of the above solution, the space formed by the laser radar on the vehicle body is defined as U = T_(L,M,R)(x, y, z); the point cloud set formed by acting on an object needs to satisfy the following conditions:
(2) for detected objects on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces with a circular-arc transition between them; an object directly ahead presents a plane whose two ends connect to arcs.
The present invention also provides a detection device, which applies any of the above-mentioned vehicle detection methods based on sensor multivariate information fusion, comprising:
a conversion module for converting a camera coordinate system and a lidar coordinate system of a vehicle into a detection coordinate system of the vehicle;
an initial data processing module for processing initial data of the camera and the laser radar; the initial data processing module preliminarily determines a camera detection area according to an image vanishing line and a camera acquisition visual angle, then screens the laser radar point cloud data, finally detects a mutation position of a laser radar data return value, extracts road boundary position information and projects the road boundary position information onto an image, and determines a vehicle passing area;
the detection area fusion module is used for constraining the detection angle of the laser radar within the view angle range of the camera, determining the detection area for image-based vehicle recognition, projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with the object distance information as a base point;
the structural feature fusion recognition module is used for, after the detection area of a vehicle in the image is determined, guiding the extraction of the vehicle-tail contour features in the image according to the contour change direction of the point cloud information in the laser point cloud set, fusing texture features to identify the vehicle ahead, and verifying the image recognition result according to the point cloud structure and spatial position of the laser radar.
The vehicle detection method and the detection device based on sensor multivariate information fusion have the following beneficial effects:
1. The invention provides a technique that determines the detection range through the fusion of a camera and a laser radar in vehicle detection. For safe driving, much of the lidar data detected over 360 degrees is redundant; it brings extra calculation cost to the processor and affects execution efficiency. The camera's view angle is equivalent to the driver's eyes, detecting objects on the road as the vehicle travels, so the camera's view angle is used to provide a reasonable search range for the laser radar.
2. The invention provides a vehicle detection technique that exploits the sensitivity of wavelet analysis to abrupt changes in the laser radar's data return values: wavelet analysis is performed separately on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, and fitting then constrains the detection range to the vehicle-passable range.
3. In the vehicle detection technique of the invention, the vehicle recognition mode is improved and the detection efficiency is raised: the ground coordinate position of the lower-left corner of the vehicle is located using the detection result of the laser radar (if the vehicle is on the right side, the ground coordinate of the lower-right corner is located). Starting from that point, the search area is continuously adjusted and expanded and the size of the detection frame is adjusted, so that the vehicle is rapidly recognized.
4. The invention proposes that, in the vehicle detection technique, when the same laser beam acts on an object, the distances between adjacent laser points should be very close because of the continuity of the object's surface. After the vehicle is visually recognized, the laser point cloud in each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these two judgments it is determined whether multiple detection frames have recognized the same object, and the final detection frame is determined from the point cloud data in the set, reducing false detections.
5. The invention provides a vehicle detection technique in which the point cloud set formed by acting on an object must satisfy the stated conditions, so that a vehicle can be reliably judged from these conditions and missed detections are reduced; the conditions can also be used to verify whether the vehicle point cloud data recognized from the image are accurate.
6. The invention starts from the angle of improving detection precision and reducing time consumption, and fuses raw data features with high-level feature data, thereby fully exploiting the detection advantages of each sensor, compensating for the deficiencies of each sensor, and meeting the final target requirements. In the raw data fusion, noise reduction is the main purpose: interference noise data that affect the detection result are removed, and recognition and positioning are achieved with the smallest possible data volume and detection cost. Then, object features are extracted from the raw data and fused across the two sensors, so that complementary detection improves object recognition precision.
The beneficial effects of the detection device of the invention are the same as those of the vehicle detection method based on sensor multivariate information fusion, and are not repeated herein.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to fig. 1-6, the present embodiment provides a vehicle detection method based on sensor multivariate information fusion. The method first converts the coordinate systems of the camera and the laser radar into a detection coordinate system; then performs laser point cloud processing, preliminarily delimiting the laser radar detection range with the camera view angle and extracting road boundary position information by wavelet analysis; projects the extracted road boundary onto the image using the imaging principle to determine the vehicle passing area; next determines the region of interest for vehicle recognition in the passing area using the positioning capability of the laser radar; and finally verifies the image recognition result against the point cloud structure and spatial distribution of the laser radar, eliminating false detections and missed detections in the image. In this embodiment, the vehicle detection method based on sensor multivariate information fusion is realized mainly through the following steps, specifically steps S1-S4.
Step S1: converting the camera coordinate system and the laser radar coordinate system of the vehicle into the detection coordinate system of the vehicle. Referring to fig. 2, each sensor has its own coordinate system, and the detected data are likewise referenced to that sensor's coordinate system; once the camera and the laser radar are mounted, the camera coordinate system and the laser radar coordinate system must be converted into the detection coordinate system to complete the spatial synchronization of the sensors. A point in space is defined as P_l(x_l, y_l, z_l) in the laser radar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system. The laser radar and the camera are mounted at angles (θ_l, β_l, φ_l) and (θ_c, β_c, φ_c), respectively. The positional relationship between the origins of the coordinate systems can be obtained directly from the vehicle; that is, the translation matrices from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are B_l and B_c, respectively. The conversion relations from the laser radar coordinate system and the camera coordinate system to the detection coordinate system are respectively:
P_p = P_l·R_l + B_l
P_p = P_c·R_c + B_c
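For illustration, the following minimal Python sketch performs this spatial synchronization under the row-vector convention of the formulas above (P_p = P_l·R_l + B_l). The mounting angles, translation vector, and sample point are hypothetical placeholder values, not calibration data from the invention:

```python
import numpy as np

def rotation_matrix(theta, beta, phi):
    """Compose a rotation from roll/pitch/yaw mounting angles (radians)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    rz = np.array([[np.cos(phi), -np.sin(phi), 0],
                   [np.sin(phi),  np.cos(phi), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

# Hypothetical extrinsics for the lidar.
R_l = rotation_matrix(0.01, -0.02, 0.0)   # lidar -> detection rotation
B_l = np.array([0.0, 1.2, 1.8])           # lidar -> detection translation
P_l = np.array([5.0, 0.5, -1.6])          # a lidar point (x_l, y_l, z_l)

P_p = P_l @ R_l + B_l                     # P_p = P_l . R_l + B_l
```

The camera conversion P_p = P_c·R_c + B_c is computed the same way with R_c and B_c.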
step S2: referring to fig. 3, a camera detection area is initially determined according to an image vanishing line and a camera collection view angle, then the laser radar point cloud data is screened, finally, a mutation position of a laser radar data return value is detected, road boundary position information is extracted and projected onto an image, and a vehicle passing area is determined. The camera acquisition visual angle theta provides transverse constraint, and an image vanishing line formed by image vanishing points provides longitudinal constraint, so that the detection area of the camera is determined. The return values of the radar collected data have independent distance and azimuth angles, and the data are returned on a flat ground with a large enough area in a concentric circle mode. When objects such as vehicles and the like higher than the ground appear in the radar detection range, a plurality of laser radars can simultaneously scan the same object, so that some point cloud aggregation phenomena occur, and for the laser radar point cloud data, the arc point cloud basically belongs to the ground.
The emission angle of each laser beam is fixed, so after the radar is installed and fixed, the point cloud data acquired by a given laser beam on unobstructed ground should have the same distance and receiving angle, and each laser beam corresponds to a unique emission angle. In this embodiment, when the laser radar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of the ground point cloud produced by a laser beam of the laser radar are:
where x_l, y_l, z_l are the coordinates produced by an arbitrary laser beam, α_l is the search angle of that beam, ρ is the detection distance of the laser radar, and ω is the emission angle of the laser radar;
after the height is fixed, the point cloud coordinate points of the laser beam of the laser radar are as follows:
in the formula, H represents the installation height of the laser radar;
Because there is an angular deviation after the radar is installed, the radar source data of the laser radar must be rotation-corrected according to the rotation matrix obtained in the preceding analysis, and the transformed data are compared with the coordinate point obtained from the height:
where P represents the coordinate point obtained from the height.
The acquired radar data are processed with the above formulas, yielding the extracted ground laser point cloud.
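The formula images are not reproduced above; the sketch below therefore assumes the standard spherical-to-Cartesian conversion for a spinning lidar (x = ρ·cos ω·cos α, y = ρ·cos ω·sin α, z = ρ·sin ω), which matches the variable definitions given. The mounting height H, tolerance, and rotation matrix are illustrative assumptions:

```python
import numpy as np

H = 1.8      # lidar mounting height above the ground (m), assumed
EPS = 0.05   # tolerance when comparing against the height-derived point (m)

def beam_point(rho, alpha, omega):
    """Cartesian point for detection distance rho, search angle alpha,
    and emission angle omega (all angles in radians)."""
    x = rho * np.cos(omega) * np.cos(alpha)
    y = rho * np.cos(omega) * np.sin(alpha)
    z = rho * np.sin(omega)          # downward-looking beams have omega < 0
    return np.array([x, y, z])

def is_ground(point, R, alpha, omega):
    """Rotation-correct one raw radar point with matrix R and compare it
    with the coordinate point P predicted from the fixed mounting height."""
    corrected = point @ R            # row-vector convention, as above
    rho_ground = -H / np.sin(omega)  # range at which this beam meets the ground
    P = beam_point(rho_ground, alpha, omega)
    return np.linalg.norm(corrected - P) < EPS
```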
In this embodiment, the point cloud data of the laser radar are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the primary segmentation result to determine the vehicle passing area. When an obstacle appears within the radar detection range, the mutation position is detected: the laser point cloud data are extracted, the distance data received by the laser radar are fitted with a 6th-order Daubechies wavelet function, the wavelet function accurately locates the positions of abrupt data changes, boundary feature points are extracted, and the series of feature points is fitted by least squares to obtain the vehicle passing area.
The wavelet can accurately locate where a frequency component occurs and responds sensitively to data changes. When an obstacle appears in the detection range of the radar, its return value exhibits an abrupt numerical change because the object blocks the beam, so the mutation position can be detected by wavelet analysis. The db6 wavelet fits the distance data received by the radar well, and the wavelet function locates the data precisely at the positions of abrupt change. Wavelet analysis is performed separately on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, and quadratic fitting yields two continuous curves that constrain the detection range to the passable range of the vehicle.
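As one possible realization of this step, the sketch below uses the PyWavelets library to locate abrupt changes in a beam's range profile via db6 detail coefficients and fits the collected boundary points by least squares; the threshold rule and polynomial degree are tuning assumptions, not values from the invention:

```python
import numpy as np
import pywt

def boundary_indices(ranges, k=3.0):
    """Approximate sample indices where a beam's return distance jumps."""
    _, detail = pywt.dwt(ranges, 'db6')            # single-level db6 transform
    thresh = k * np.median(np.abs(detail)) + 1e-9  # robust jump threshold
    jumps = np.nonzero(np.abs(detail) > thresh)[0]
    return jumps * 2          # dwt halves the length; map back approximately

def fit_road_edge(xs, ys, degree=2):
    """Least-squares fit of the extracted boundary feature points."""
    return np.polyfit(xs, ys, degree)   # polynomial coefficients of the edge
```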
S3: referring to fig. 4, the camera collection angle of the camera is firstly used to preliminarily determine the detection range of the lidar, i.e. the detection angle of the lidar is constrained within the camera angle range, to determine the detection area for the image recognition vehicle, and then the object distance information detected by the lidar is projected onto the visual image, so as to search the region of interest identified by the vehicle on the image with the base point. The data return value of each point under the polar coordinate system in the detection range of the laser radar is as follows:
P(ρ_n, α, ω_n),  n = 1, 2, 3, …
where α is the search angle and n is the beam index of the laser radar; ρ_n is the detection distance of beam n, and ω_n is the emission angle of beam n. Rotating about the z-axis, the beam search angle of the laser radar ranges over (0°, 360°); with the camera view angle denoted M, the search angle of the laser radar can be corrected to (-0.5M, 0.5M), determining the detection area for image-based vehicle recognition.
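A minimal sketch of this angular constraint, assuming the point cloud is given in Cartesian coordinates with x forward and y to the left (the axis convention is an assumption):

```python
import numpy as np

def constrain_to_camera(points, M=90.0):
    """Keep only lidar returns whose azimuth lies inside the camera view
    angle M (degrees). points: (N, 3) array in the lidar frame."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    mask = (azimuth > -0.5 * M) & (azimuth < 0.5 * M)
    return points[mask]
```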
After the laser radar detection area and the vehicle passing area are preliminarily determined, the road edge is projected onto the image according to the camera imaging principle, and the raw data acquired by the sensors are fused to obtain the region of interest of the image. When the object distance information is projected onto the visual image, the conversion relationship between the camera coordinate system and the image coordinate system is derived as follows:
Camera imaging is based on the pinhole imaging principle: an object in the real scene is presented in the form of a picture, and its position in the picture is related to its position in the camera coordinate system. For any point P(X_c, Y_c, Z_c) in the camera coordinate system, triangle similarity under the imaging principle gives the projection position of P(X_c, Y_c, Z_c) in the image coordinate system:
where f is the focal length of the camera, (O-xy) is the image coordinate system, and P(x, y) is the point to which P(X_c, Y_c, Z_c) projects in that coordinate system.
Next, the conversion from the image coordinate system to the pixel coordinate system is derived; since both are two-dimensional coordinate systems in the same plane, this conversion only requires a translation of the coordinate system. Combining it with the conversion from the camera coordinate system to the image coordinate system, define (O_uv-uv) as the pixel coordinate system; the conversion formula from the camera coordinate system to the pixel coordinate system is:
the laser radar has an independent coordinate system, and a coordinate conversion formula from point cloud to image is derived according to a conversion formula from the laser radar coordinate system to a camera coordinate system and is as follows:
by utilizing the positioning function of the laser radar, according to a coordinate conversion formula from the point cloud to the image, the object distance information detected by the laser radar is projected to the visual image, and the area of interest identified by the vehicle on the image is determined by taking the object distance information as a base point, as shown in fig. 5.
S4: after the detection area of a vehicle in the image is determined, the extraction of the vehicle-tail contour features in the image is guided by the contour change direction of the point cloud information in the laser point cloud set; texture features are then fused to identify the vehicle ahead, and the image recognition result is verified according to the point cloud structure and spatial position of the laser radar.
In this embodiment, machine learning is applied to perform vehicle recognition on the determined vehicle detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from the two sides toward the middle and from bottom to top. Recognition is usually carried out window by window, line by line, adjusting the window size so that every possible vehicle is examined; applied to an image, recognition must proceed line by line from the top-left corner of the region of interest, left to right and top to bottom. This method can identify the vehicle ahead, but because feature analysis is computed for every pixel in the region of interest it is very time consuming, so the following improvement is proposed:
and improving a vehicle identification mode, and positioning the ground coordinate position of the lower left corner of the vehicle by using the detection result of the laser radar (if the vehicle is positioned on the right side, positioning the ground coordinate of the lower right corner). Based on the point, the search area is continuously adjusted and enlarged, the size of the detection frame is adjusted, the whole image is prevented from being identified and detected, and the vehicle is quickly identified.
When the same laser beam acts on an object, adjacent laser points should be very close. This property is used to eliminate the situation where multiple detection frames appear on the same target, solving the false detection problem. After the vehicle is visually recognized, the laser point cloud in each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these two judgments it is determined whether multiple detection frames have recognized the same object, and the final detection frame is determined from the point cloud data in the set, reducing false detections.
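One possible form of this duplicate-frame test is sketched below, assuming each point carries a beam index and taking x as the depth (forward) coordinate; the depth-jump threshold is an assumed tuning value:

```python
import numpy as np

def same_object(cloud_a, cloud_b, depth_jump=0.5):
    """Each cloud: (N, 4) array of x, y, z, beam_id for the points inside
    one detection frame. Returns True when the two frames likely cover
    the same object and should be merged into one final frame."""
    shared = np.intersect1d(cloud_a[:, 3], cloud_b[:, 3])
    if shared.size == 0:
        return False                      # no common laser beam
    gap = abs(np.median(cloud_a[:, 0]) - np.median(cloud_b[:, 0]))
    return gap < depth_jump               # no abrupt depth change between frames
```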
The space formed by the laser radar on the vehicle body is defined as U = T_(L,M,R)(x, y, z); the point cloud set formed by acting on an object needs to satisfy the following conditions:
(2) as shown in fig. 6, for detected objects on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces with a circular-arc transition between them; an object directly ahead presents a plane whose two ends connect to arcs. From these conditions a vehicle can be reliably judged and missed detections reduced; the conditions can also be used to verify whether the vehicle point cloud data recognized from the image are accurate.
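For condition (2), the sketch below estimates each face's plane normal by SVD and tests near-perpendicularity; how the cluster is split into faces and the angular tolerance are assumptions outside the patent text:

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal of an (N, 3) point set: the direction
    of least variance of the centered points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def faces_perpendicular(face1, face2, tol_deg=10.0):
    """True when the two point-cloud faces are close to perpendicular."""
    n1, n2 = plane_normal(face1), plane_normal(face2)
    angle = np.degrees(np.arccos(np.clip(abs(n1 @ n2), 0.0, 1.0)))
    return abs(angle - 90.0) < tol_deg
```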
In summary, compared with the existing vehicle collision prediction technology, the vehicle detection method based on sensor multivariate information fusion of the embodiment has the following advantages:
1. This embodiment proposes determining the detection range through the fusion of a camera and a laser radar in the vehicle detection technique. For safe driving, much of the data collected by the laser radar over 360-degree detection is redundant; it brings an extra burden to the processor and affects driving safety. The camera's view angle is equivalent to the driver's eyes, detecting objects on the road as the vehicle travels, so the camera's view angle is used to provide a reasonable search range for the laser radar.
2. In the vehicle detection technique of this embodiment, the sensitivity of wavelet analysis to abrupt changes in the laser radar's data return values is exploited: wavelet analysis is performed separately on the point cloud data returned by each laser beam, the laser point data that scan the ground edge are extracted, and fitting constrains the detection range to the vehicle-passable range.
3. In the vehicle detection technique of this embodiment, the vehicle recognition mode is improved and the detection efficiency is raised: the ground coordinate position of the lower-left corner of the vehicle is located using the detection result of the laser radar (if the vehicle is on the right side, the ground coordinate of the lower-right corner is located). Starting from that point, the search area is continuously adjusted and expanded and the size of the detection frame is adjusted, so that the vehicle is rapidly recognized.
4. This embodiment proposes that, in the vehicle detection technique, when the same laser beam acts on an object, the distances between adjacent laser points should be very close because the object's surface is continuous. After the vehicle is visually recognized, the laser point cloud in each detection frame is extracted; it is first judged whether the point cloud data in adjacent detection frames contain points produced by the same laser beam, and then whether the point cloud depth distance between different detection frames changes abruptly. From these two judgments it is determined whether multiple detection frames have recognized the same object, and the final detection frame is determined from the point cloud data in the set, reducing false detections.
5. This embodiment proposes that, in the vehicle detection technique, the point cloud set formed by acting on an object must satisfy the stated conditions, so that a vehicle can be reliably judged from these conditions and missed detections are reduced; the conditions can also be used to verify whether the vehicle point cloud data recognized from the image are accurate.
6. This embodiment starts from the viewpoint of improving detection precision and reducing time consumption, fusing raw data features with high-level feature data, fully exploiting the detection advantages of each sensor, and compensating for their deficiencies to meet the final target requirements. In the raw data fusion, noise reduction is the main purpose: interference noise data that affect the detection result are removed, and recognition and positioning are achieved with the smallest possible data volume and detection cost. Then, object features are extracted from the raw data and fused across the two sensors, so that complementary detection improves object recognition precision.
Example 2
This embodiment provides a detection device that applies the vehicle detection method based on sensor multivariate information fusion of embodiment 1, and specifically includes a conversion module, an initial data processing module, a detection area fusion module, and a structural feature fusion recognition module.
The conversion module is used for converting the camera coordinate system and the laser radar coordinate system of the vehicle into the detection coordinate system of the vehicle. The initial data processing module is used for processing the initial data of the camera and the laser radar: it first preliminarily determines the camera detection area according to the image vanishing line and the camera acquisition view angle, then screens the laser radar point cloud data, and finally detects the mutation positions of the laser radar data return values, extracts the road boundary position information, projects it onto the image, and determines the vehicle passing area.
The detection area fusion module is used for constraining the detection angle of the laser radar within the view angle range of the camera, determining the detection area for image-based vehicle recognition, projecting the object distance information detected by the laser radar onto the visual image, and searching the region of interest for vehicle recognition on the image with the object distance information as a base point. The structural feature fusion recognition module is used for, after the detection area of a vehicle in the image is determined, guiding the extraction of the vehicle-tail contour features in the image according to the contour change direction of the point cloud information in the laser point cloud set, fusing texture features to identify the vehicle ahead, and verifying the image recognition result according to the point cloud structure and spatial position of the laser radar.
Example 3
This embodiment provides a detection device that applies the vehicle detection method based on sensor multivariate information fusion of embodiment 1, and specifically includes a camera data processing module, a laser radar data processing module, a detection area fusion module, a feature analysis module, and a recognition and positioning module.
The camera data processing module preliminarily determines the detection range of the laser radar, reducing the data processing load and processing time. The laser radar data processing module extracts the ground point cloud according to the radar installation height, extracts the road boundary by wavelet analysis, and obtains the vehicle passing area. The detection area fusion module determines a reasonable search range for the laser radar using the camera's capture view angle, further segments the result, constrains the detection area of the laser radar to the vehicle passing area to the greatest extent, and projects the point cloud coordinates collected by the radar onto the image to obtain the region of interest for vehicle recognition. The feature analysis module verifies the image recognition result against the point cloud structure and spatial distribution of the laser radar, eliminating false and missed detections in the image and improving recognition precision. The recognition and positioning module identifies and positions vehicles within the detection range using the recognition capability of the camera and the positioning capability of the laser radar.
Example 4
This embodiment provides a computer terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1 are realized.
When the vehicle detection method based on sensor multivariate information fusion is applied, it can be applied in software form, for example as a program designed to run independently and installed on a computer terminal; the computer terminal can be a computer, a smartphone, a control system, or other Internet of Things equipment. The method can also be designed as an embedded program and installed on a computer terminal such as a single-chip microcontroller.
Example 5
This embodiment provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1 are realized. When the method is applied, it can take software form, for example as a program designed to run independently from a computer-readable storage medium; the computer-readable storage medium can be a USB flash drive, a medium designed in the form of a USB security key (U-shield), or a USB drive designed to launch the whole method upon an external trigger.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.