
CN114118252A - Vehicle detection method and detection device based on sensor multivariate information fusion - Google Patents

Vehicle detection method and detection device based on sensor multivariate information fusion

Info

Publication number
CN114118252A
CN114118252A
Authority
CN
China
Prior art keywords
vehicle
coordinate system
detection
lidar
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111390381.5A
Other languages
Chinese (zh)
Other versions
CN114118252B (en)
Inventor
赵林峰
姜武华
张毅航
蔡必鑫
任毅
马晓东
黄为宇
张曼玲
王天元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202111390381.5A priority Critical patent/CN114118252B/en
Publication of CN114118252A publication Critical patent/CN114118252A/en
Application granted granted Critical
Publication of CN114118252B publication Critical patent/CN114118252B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract


The invention discloses a vehicle detection method and detection device based on sensor multivariate information fusion. The method includes: converting the vehicle's camera coordinate system and lidar coordinate system into a detection coordinate system; preliminarily determining the camera detection area, screening the lidar point cloud data, and extracting road boundary position information and projecting it onto the image to determine the vehicle passing area; constraining the lidar detection angle within the camera's field of view, then projecting object distance information onto the visual image to search for the region of interest; extracting the rear-edge contour of the vehicle in the guide image, fusing texture features to recognize the vehicle ahead, and then verifying the image recognition results. The invention exploits the detection strengths of each sensor so that they compensate for one another's weaknesses, extracts object features from the raw data, fuses the object features from the two sensors, and improves object recognition accuracy through complementary detection.


Description

Vehicle detection method and detection device based on sensor multivariate information fusion
Technical Field
The invention relates to early warning methods in the technical field of automatic driving, in particular to a vehicle detection method based on sensor multivariate information fusion, and further relates to a detection device.
Background
With continuous economic improvement and the rapid development of unmanned-vehicle technology, sensor detection plays a significant role in autonomous-driving research. Accurate detection and positioning of the vehicle ahead is a difficulty every sensor technology must overcome, and it determines whether a vehicle can participate in traffic safely and reliably. Different sensors have obvious strengths and weaknesses. Visual images, like human eyes, capture and classify the various objects appearing in the field of view well; however, because image resolution is limited, their positioning accuracy cannot meet the requirements of intelligent driving. Radar-type sensors position well relative to vision, but they struggle to capture all the characteristics of an object, making it harder to judge the object's type. Single-sensor data processing algorithms increasingly cannot satisfy the demand for intelligent, safe, and comfortable driving.
Disclosure of Invention
The invention provides a vehicle detection method and a detection device based on sensor multivariate information fusion, aiming to solve the technical problem of low positioning accuracy in conventional vehicle sensors.
The invention is realized by adopting the following technical scheme: a vehicle detection method based on sensor multivariate information fusion comprises the following steps:
S1: converting the camera coordinate system and the lidar coordinate system of a vehicle into the detection coordinate system of the vehicle;
S2: preliminarily determining a camera detection area from the image vanishing line and the camera's acquisition angle of view, screening the lidar point cloud data, then detecting abrupt changes in the lidar return values, extracting road boundary position information and projecting it onto the image to determine the vehicle passing area;
S3: constraining the lidar detection angle within the camera's field of view to determine a detection area for image-based vehicle recognition, then projecting the object distance information detected by the lidar onto the visual image and using it as a base point to search the image for the region of interest for vehicle recognition;
S4: after the vehicle's detection area in the image is determined, guiding the extraction of vehicle-rear contour features in the guide image according to the contour-change direction of the point cloud information in the laser point cloud set, fusing texture features to recognize the vehicle ahead, and then verifying the image recognition result against the lidar point cloud structure and spatial position.
From the standpoint of improving detection accuracy while reducing time consumption, the invention fuses raw data features with high-level feature data, so that the detection strengths of each sensor are fully exploited and their weaknesses are mutually compensated, meeting the final target requirements. Raw-data fusion aims chiefly at noise reduction: interference noise that would affect the detection result is removed, so that recognition and positioning are achieved with the smallest possible data volume and detection cost. Object features are then extracted from the raw data, the features from the two sensors are fused, and complementary detection improves object recognition accuracy, solving the technical problem of low positioning accuracy in existing vehicle sensors.
As a further improvement of the above scheme, a point in space is defined as P_l(x_l, y_l, z_l) in the lidar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system. The conversion relations from the lidar coordinate system and the camera coordinate system to the detection coordinate system are, respectively:
P_p = P_l · R_l + B_l
P_p = P_c · R_c + B_c
where B_l and B_c are the translation matrices from the lidar coordinate system and the camera coordinate system, respectively, to the detection coordinate system.
As a further improvement of the above scheme, when the lidar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a lidar beam's ground point cloud are:
[equation rendered as an image in the original]
where x_l, y_l, z_l are the coordinates of an arbitrary laser beam, α_l is its search angle, ρ is the lidar detection distance, and ω is the lidar emission angle;
with the mounting height fixed, the point cloud coordinates of the laser beam are:
[equation rendered as an image in the original]
where H is the lidar mounting height;
the lidar source data are rotation-corrected, and the transformed data are compared with the coordinate points obtained from the height:
[equation rendered as an image in the original]
where P is the coordinate point obtained from the height.
As a further improvement of the above solution, in step S3, the data return value of each point in the polar coordinate system within the lidar detection range is:
P(ρ_n, α, ω_n), n = 1, 2, 3, ...
where α is the search angle and n is the lidar beam index; ρ_n is the detection distance of beam n and ω_n is the emission angle of beam n. Rotating about the z-axis, the lidar beam search angle ranges over (0°, 360°); with a camera field of view of M, the lidar search angle is corrected to (-0.5M, 0.5M).
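A minimal sketch of this angle constraint follows, assuming the usual vehicle convention of x forward and y left so that azimuth is atan2(y, x); the convention and the sample values are illustrative, not taken from the patent.

```python
import numpy as np

def constrain_to_camera_fov(points, fov_deg):
    """Keep only lidar points whose azimuth falls inside the camera's
    horizontal field of view (-0.5*M, 0.5*M), with M = fov_deg.
    Assumes x forward and y left in the lidar frame (an illustrative
    convention; the patent does not fix one)."""
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    half = 0.5 * fov_deg
    return points[(azimuth > -half) & (azimuth < half)]

pts = np.array([[10.0, 1.0, -1.5],   # azimuth ~ 5.7 deg  -> kept (M = 60)
                [2.0, 8.0, -1.5],    # azimuth ~ 76 deg   -> discarded
                [-5.0, 0.0, -1.5]])  # behind the vehicle -> discarded
print(constrain_to_camera_fov(pts, fov_deg=60.0))
```

Discarding out-of-view points before any further processing is what yields the reduced data volume the invention aims for.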
As a further improvement of the scheme, the lidar point cloud data are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the preliminary segmentation result to determine the vehicle-passable area.
As a further improvement of the above scheme, when an obstacle appears within the radar detection range, the abrupt-change position is detected: the laser point cloud data are extracted, the distance data received by the lidar are fitted with a 6th-order Daubechies (db6) wavelet, whose wavelet function locates the abrupt changes accurately; boundary feature points are extracted, and a least-squares fit of the series of feature points yields the vehicle passing area.
As a further improvement of the scheme, machine learning is applied to recognize vehicles in the determined detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, and the search frame is gradually enlarged, recognizing vehicles by searching from both sides toward the middle and from bottom to top.
As a further improvement of the above solution, when the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
for any point P(X_c, Y_c, Z_c) in the camera coordinate system, triangle similarity gives its projection in the image coordinate system:
[equation rendered as an image in the original]
where f is the camera focal length, (O-xy) is the image coordinate system, and p(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system;
defining (O_uv-uv) as the pixel coordinate system, the conversion from the camera coordinate system to the pixel coordinate system is:
[equation rendered as an image in the original]
from the conversion formula from the lidar coordinate system to the camera coordinate system, the coordinate conversion from point cloud to image is derived as:
[equation rendered as an image in the original]
The object distance information detected by the lidar is then projected onto the visual image according to this point-cloud-to-image conversion formula.
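Since the conversion formulas above are rendered as images in the original, the sketch below uses the standard pinhole chain they describe: lidar points are first moved into the camera frame with assumed extrinsics, then multiplied by an intrinsic matrix K and divided by depth to obtain pixel coordinates. All extrinsic and intrinsic values are illustrative.

```python
import numpy as np

def lidar_to_pixels(points_l, R_lc, t_lc, K):
    """Project Nx3 lidar-frame points to pixel coordinates via the pinhole
    model: lidar -> camera frame, then perspective division by depth.
    R_lc, t_lc (extrinsics) and K (intrinsics) are illustrative values."""
    pts_c = points_l @ R_lc.T + t_lc        # into the camera frame
    pts_c = pts_c[pts_c[:, 2] > 0]          # keep points in front of the camera
    uvw = pts_c @ K.T                       # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]         # divide by depth -> (u, v) pixels

K = np.array([[800.0,   0.0, 640.0],        # fx, cx in pixels (illustrative)
              [  0.0, 800.0, 360.0],        # fy, cy
              [  0.0,   0.0,   1.0]])
R_lc = np.eye(3)                            # identity rotation for the demo
t_lc = np.array([0.0, -0.3, 0.1])           # lidar-to-camera offset (m)
pts = np.array([[0.5, 0.0, 10.0], [-1.0, 0.2, 15.0]])
print(lidar_to_pixels(pts, R_lc, t_lc, K))
```

The projected pixel positions then serve as the base points around which the region of interest is searched.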
As a further improvement of the above solution, the space formed by the lidar around the vehicle body is defined as Γ: U_{L,M,R}(x, y, z), and the point cloud set formed on an object must satisfy the following conditions:
(1) [condition rendered as an image in the original]
(2) for detected objects on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition; an object directly ahead forms a single plane whose two ends connect to arcs.
The present invention also provides a detection device, which applies any of the above-mentioned vehicle detection methods based on sensor multivariate information fusion, comprising:
a conversion module for converting a camera coordinate system and a lidar coordinate system of a vehicle into a detection coordinate system of the vehicle;
an initial data processing module for processing the initial data of the camera and the lidar; the module preliminarily determines the camera detection area from the image vanishing line and the camera's acquisition angle of view, screens the lidar point cloud data, then detects abrupt changes in the lidar return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area;
a detection area fusion module for constraining the lidar detection angle within the camera's field of view, determining a detection area for image-based vehicle recognition, projecting the object distance information detected by the lidar onto the visual image, and using it as a base point to search the image for the region of interest for vehicle recognition;
a structural feature fusion recognition module which, after the vehicle's detection area in the image is determined, guides the extraction of vehicle-rear contour features in the guide image according to the contour-change direction of the point cloud information, fuses texture features to recognize the vehicle ahead, and verifies the image recognition result against the lidar point cloud structure and spatial position.
The vehicle detection method and the detection device based on sensor multivariate information fusion have the following beneficial effects:
1. The invention provides a technique for determining the detection range by fusing the camera and the lidar in vehicle detection. For safe driving, much of the data the lidar collects over 360 degrees is redundant; it imposes extra computation on the processor and degrades execution efficiency. The camera's view is equivalent to the driver's eyes and captures objects on the road while the vehicle travels, so the camera's field of view provides a reasonable search range for the lidar.
2. The invention provides a vehicle detection technique that exploits the sensitivity of wavelet analysis to abrupt changes in the lidar return values: wavelet analysis is applied to the point cloud data returned by each laser beam, the laser points that scan the ground edge are extracted, and a fit then constrains the detection range to the vehicle-passable area.
3. In the vehicle detection technique of the invention, the vehicle recognition scheme is improved and detection efficiency increased: the lidar detection result locates the ground coordinate of the vehicle's lower-left corner (or the lower-right corner if the vehicle is on the right). Starting from that point, the search area is continuously adjusted and expanded and the detection frame resized, so the vehicle is recognized rapidly.
4. The invention observes that when the same laser beam acts on an object, the continuity of the object's surface means adjacent laser points should be very close together. After a vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether point clouds in adjacent detection frames share points from the same laser beam, and then whether the point cloud depth between different frames changes abruptly. From these two tests it is judged whether multiple detection frames have recognized the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
5. In the vehicle detection technique of the invention, the point cloud set formed on an object must satisfy the stated conditions, so vehicles can be judged reliably against them, reducing missed detections; the conditions can also verify whether the vehicle point cloud data recognized from the image are accurate.
6. From the standpoint of improving detection accuracy while reducing time consumption, the invention fuses raw data features with high-level feature data, fully exploiting each sensor's detection strengths and compensating their weaknesses to meet the final target requirements. Raw-data fusion aims chiefly at noise reduction, removing interference noise that would affect the detection result so that recognition and positioning are achieved with the smallest possible data volume and detection cost; object features are then extracted from the raw data, the features from the two sensors are fused, and complementary detection improves object recognition accuracy.
The beneficial effects of the detection device of the invention are the same as those of the vehicle detection method based on sensor multivariate information fusion, and are not repeated herein.
Drawings
Fig. 1 is a flowchart of a vehicle detection method based on sensor multivariate information fusion in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of a camera coordinate system, a laser radar coordinate system, and a detection coordinate system in embodiment 1 of the present invention.
Fig. 3 is a schematic view of an acquisition view of a camera in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a lidar data processing area under the camera view angle constraint in embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of determining a visual interesting region from a laser radar road region in embodiment 1 of the present invention.
Fig. 6 is a schematic top view of the point cloud shape of the vehicle at different positions in embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to figs. 1-6, the present embodiment provides a vehicle detection method based on sensor multivariate information fusion. The method first converts the camera and lidar coordinate systems into a detection coordinate system; it then processes the laser point cloud, preliminarily bounding the lidar detection range with the camera's angle of view and extracting road boundary position information with wavelet analysis; the extracted road boundary is projected onto the image by the imaging principle to determine the vehicle passing area; next, the lidar's positioning capability determines the region of interest for vehicle recognition within the passing area; finally, the image recognition result is verified against the lidar point cloud structure and spatial distribution, eliminating false and missed detections in the image. In this embodiment, the method is realized mainly through the following steps, specifically steps S1-S4.
Step S1: convert the camera coordinate system and the lidar coordinate system of the vehicle into the vehicle's detection coordinate system. Referring to fig. 2, each sensor has its own coordinate system and its detected data are expressed in that coordinate system; after the camera and lidar are mounted, the camera and lidar coordinate systems must be unified into the detection coordinate system to complete the spatial synchronization of the sensors. A point in space is defined as P_l(x_l, y_l, z_l) in the lidar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system. The lidar and the camera are mounted at angles (θ_l, γ_l, φ_l) and (θ_c, γ_c, φ_c), respectively. The positional relation between the coordinate system origins can be obtained directly from the vehicle, i.e., the translation matrices from the lidar and camera coordinate systems to the detection coordinate system are B_l and B_c, respectively. The conversion relations from the lidar coordinate system and the camera coordinate system to the detection coordinate system are, respectively:
P_p = P_l · R_l + B_l
P_p = P_c · R_c + B_c
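As an illustration of step S1, the sketch below applies the two transforms in Python. The construction of the rotation matrices R_l and R_c from the mounting angles (θ, γ, φ) is not spelled out in the patent, so a Z-Y-X Euler convention is assumed; all numeric values and names are illustrative, not taken from the patent.

```python
import numpy as np

def rotation_matrix(theta, gamma, phi):
    """Rotation matrix from mounting angles (radians).
    The patent does not state the rotation order; Z-Y-X is assumed here."""
    cz, sz = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(gamma), np.sin(gamma)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def to_detection_frame(points, R, B):
    """Map Nx3 sensor-frame points into the detection frame: P_p = P · R + B."""
    return points @ R + B

# One lidar point mapped into the detection frame with illustrative extrinsics.
R_l = rotation_matrix(0.01, 0.0, 0.02)   # small lidar mounting rotation
B_l = np.array([1.2, 0.0, 1.6])          # lidar-to-detection translation (m)
P_l = np.array([[10.0, -2.0, -1.5]])     # a point in the lidar frame
print(to_detection_frame(P_l, R_l, B_l))
```

The same call with R_c and B_c maps camera-frame points, so both sensors' data land in one shared frame before any fusion.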
step S2: referring to fig. 3, a camera detection area is initially determined according to an image vanishing line and a camera collection view angle, then the laser radar point cloud data is screened, finally, a mutation position of a laser radar data return value is detected, road boundary position information is extracted and projected onto an image, and a vehicle passing area is determined. The camera acquisition visual angle theta provides transverse constraint, and an image vanishing line formed by image vanishing points provides longitudinal constraint, so that the detection area of the camera is determined. The return values of the radar collected data have independent distance and azimuth angles, and the data are returned on a flat ground with a large enough area in a concentric circle mode. When objects such as vehicles and the like higher than the ground appear in the radar detection range, a plurality of laser radars can simultaneously scan the same object, so that some point cloud aggregation phenomena occur, and for the laser radar point cloud data, the arc point cloud basically belongs to the ground.
The emission angle of each laser beam is fixed, so after the radar is mounted, the point cloud collected by a given beam on flat ground should have a consistent distance and receiving angle, each beam corresponding to a unique emission angle. In this embodiment, when the lidar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a beam's ground point cloud are:
[equation rendered as an image in the original]
where x_l, y_l, z_l are the coordinates of an arbitrary laser beam, α_l is its search angle, ρ is the lidar detection distance, and ω is the lidar emission angle;
with the mounting height fixed, the point cloud coordinates of the laser beam are:
[equation rendered as an image in the original]
where H is the lidar mounting height;
because the radar has an angular deviation after mounting, the lidar source data must be rotation-corrected using the rotation matrix obtained from the preceding analysis, and the transformed data are compared with the coordinate points obtained from the height:
[equation rendered as an image in the original]
where P is the coordinate point obtained from the height.
Processing the acquired radar data with the above formulas yields the ground laser point cloud.
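Because the screening formulas above are rendered as images in the original, the sketch below reconstructs the usual flat-ground geometry they describe: for a beam with downward emission angle ω from a sensor mounted at height H, a ground return should arrive at range H/sin(-ω), and returns that deviate from this prediction are treated as obstacles. The geometry and the 15% tolerance are assumptions, not patent text.

```python
import numpy as np

def expected_ground_range(H, omega):
    """Range at which a beam with downward emission angle omega (radians,
    negative below horizontal) meets flat ground from mounting height H.
    Standard flat-ground geometry, assumed because the patent's formula is
    rendered as an image."""
    return H / np.sin(-omega)

def ground_mask(ranges, omega, H, tol=0.15):
    """Keep returns whose measured range matches the flat-ground prediction
    within a relative tolerance (15% here, an illustrative value)."""
    rho_ground = expected_ground_range(H, omega)
    return np.abs(ranges - rho_ground) < tol * rho_ground

H = 1.8                                  # mounting height (m), illustrative
omega = np.deg2rad(-5.0)                 # one downward-looking beam
ranges = np.array([20.5, 20.7, 12.3])    # 12.3 m return: likely an obstacle
print(ground_mask(ranges, omega, H))     # -> [ True  True False]
```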
In this embodiment, the lidar point cloud data are used for road segmentation: the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the preliminary result to determine the vehicle-passable area. When an obstacle appears within the radar detection range, the abrupt-change position is detected: the laser point cloud data are extracted, the distance data received by the lidar are fitted with a 6th-order Daubechies (db6) wavelet, which locates abrupt changes accurately; boundary feature points are extracted, and a least-squares fit of the series of feature points yields the vehicle passing area.
Wavelets localize where a frequency component occurs and respond sensitively to changes in the data. When an obstacle appears within the radar's detection range, the return values jump because the object blocks the beam, so the abrupt-change positions can be detected by wavelet analysis. The db6 wavelet fits the distance data received by the radar well, and its wavelet function localizes the data mutations accurately. Wavelet analysis is applied to the point cloud data returned by each laser beam, the laser points that scan the ground edge are extracted, and a quadratic fit yields two continuous curves that constrain the detection range to the vehicle-passable area.
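A sketch of this boundary extraction for a single beam, using the PyWavelets package: a one-level db6 decomposition is taken, its detail coefficients flag the abrupt range change, and the collected boundary points are fitted with least squares. The 1.0 m threshold and the mapping of coefficient indices back to samples are assumptions; the patent gives no concrete values.

```python
import numpy as np
import pywt  # PyWavelets

def boundary_index(ranges, wavelet="db6", thresh=1.0):
    """Flag abrupt changes in one beam's range profile using the detail
    coefficients of a one-level db6 decomposition. The 1.0 m threshold is
    illustrative; the patent states no concrete value."""
    _, detail = pywt.wavedec(ranges, wavelet, level=1)
    hits = np.flatnonzero(np.abs(detail) > thresh)
    # detail coefficients are downsampled by 2; padding shifts indices a little
    return np.clip(hits * 2, 0, len(ranges) - 1)

# Synthetic profile for one beam: flat ground with a curb-like jump at sample 60.
angles = np.linspace(-30.0, 30.0, 120)      # search angles (deg)
ranges = np.full(120, 20.0)
ranges[60:] = 14.0
idx = boundary_index(ranges)
print("abrupt change near angle(s):", angles[idx])

# Pool one boundary point per beam over all beams, then least-squares fit the
# series of feature points (a quadratic here) to trace the road edge.
pts = np.column_stack([angles[idx], ranges[idx]])
if len(pts) >= 3:
    edge_coeffs = np.polyfit(pts[:, 0], pts[:, 1], 2)
```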
S3: referring to fig. 4, the camera's acquisition angle first preliminarily bounds the lidar detection range, i.e., the lidar detection angle is constrained within the camera's field of view to determine a detection area for image-based vehicle recognition; the object distance information detected by the lidar is then projected onto the visual image, and the image is searched from that base point for the region of interest for vehicle recognition. The data return value of each point in the polar coordinate system within the lidar detection range is:
P(ρ_n, α, ω_n), n = 1, 2, 3, ...
where α is the search angle and n is the lidar beam index; ρ_n is the detection distance of beam n and ω_n is the emission angle of beam n. Rotating about the z-axis, the lidar beam search angle ranges over (0°, 360°); with a camera field of view of M, the lidar search angle can be corrected to (-0.5M, 0.5M), determining the detection area for image-based vehicle recognition.
After the lidar detection area and the vehicle passing area are preliminarily determined, the road edge is projected onto the image according to the camera imaging principle, and the raw data collected by the sensors are fused to obtain the image's region of interest. When the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
Camera imaging follows the pinhole model: an object in the real scene is presented as a picture, and its position in the picture is related to its position in the camera coordinate system. For any point P(X_c, Y_c, Z_c) in the camera coordinate system, triangle similarity under the imaging principle gives its projection in the image coordinate system:
[equation rendered as an image in the original]
where f is the camera focal length, (O-xy) is the image coordinate system, and p(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system.
Next, the conversion from the image coordinate system to the pixel coordinate system is derived; since both lie in one plane, this two-dimensional conversion is only a translation of the coordinate system. Combining it with the conversion from the camera coordinate system to the image coordinate system, (O_uv-uv) is defined as the pixel coordinate system, and the conversion from the camera coordinate system to the pixel coordinate system is:
[equation rendered as an image in the original]
The lidar has an independent coordinate system; from the conversion formula from the lidar coordinate system to the camera coordinate system, the coordinate conversion from point cloud to image is derived as:
[equation rendered as an image in the original]
Using the lidar's positioning capability, the object distance information detected by the lidar is projected onto the visual image according to this point-cloud-to-image conversion formula, and the region of interest for vehicle recognition on the image is determined from that base point, as shown in fig. 5.
S4: after the vehicle's detection area in the image is determined, the extraction of vehicle-rear contour features in the guide image is guided according to the contour-change direction of the point cloud information in the laser point cloud set; texture features are then fused to recognize the vehicle ahead, and the image recognition result is verified against the lidar point cloud structure and spatial position.
In this embodiment, machine learning is applied to recognize vehicles in the determined detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, the search frame is gradually enlarged, and vehicles are recognized by searching from both sides toward the middle and from bottom to top. Recognition is usually performed window by window, adjusting the window size to catch every vehicle that may appear; applied to an image, this judgment must proceed row by row from the upper-left corner of the region of interest, left to right and top to bottom. The vehicle ahead can be recognized this way, but because feature analysis is computed for every pixel in the region of interest it is very time-consuming, so the following improvement is proposed:
The vehicle recognition scheme is improved: the lidar detection result locates the ground coordinate of the vehicle's lower-left corner (or the lower-right corner if the vehicle is on the right). Starting from that point, the search area is continuously adjusted and enlarged and the detection frame resized, avoiding recognition over the whole image and recognizing the vehicle rapidly, as sketched below.
When the same laser beam acts on an object, adjacent laser points should be very close together. This is used to eliminate the case where multiple detection frames appear on the same target and thus to resolve false detections: after a vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether point clouds in adjacent detection frames share points from the same laser beam, and then whether the point cloud depth between different frames changes abruptly. From these two tests it is judged whether multiple detection frames have recognized the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
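The two tests in this paragraph can be sketched as follows, with each detection frame's point cloud given as rows of (beam_id, x, y, z); the 0.5 m depth-jump threshold is an assumption, not a value from the patent.

```python
import numpy as np

def same_object(cloud_a, cloud_b, depth_jump=0.5):
    """Apply the two tests above to the point clouds of two detection frames,
    each an array of rows (beam_id, x, y, z). Returns True when the frames
    likely cover one vehicle. The 0.5 m threshold is illustrative."""
    beams_a = set(cloud_a[:, 0].astype(int))
    beams_b = set(cloud_b[:, 0].astype(int))
    if not beams_a & beams_b:                 # test 1: no shared laser beam
        return False
    depth_a = np.median(np.linalg.norm(cloud_a[:, 1:3], axis=1))  # planar range
    depth_b = np.median(np.linalg.norm(cloud_b[:, 1:3], axis=1))
    return abs(depth_a - depth_b) < depth_jump  # test 2: depth is continuous

a = np.array([[3, 10.0, 1.0, -1.2], [4, 10.1, 1.1, -1.0]])
b = np.array([[4, 10.2, 1.4, -1.0], [5, 10.2, 1.6, -0.8]])
print(same_object(a, b))  # True: beam 4 is shared and depths are close
```

Frames that pass both tests are merged and a single final detection frame is fitted to the pooled points.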
The space formed by the lidar around the vehicle body is defined as Γ: U_{L,M,R}(x, y, z), and the point cloud set formed on an object must satisfy the following conditions:
(1) [condition rendered as an image in the original]
(2) as shown in fig. 6, for detected objects on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition, while an object directly ahead forms a single plane whose two ends connect to arcs. Against these conditions the vehicle can be judged reliably, reducing missed detections; the conditions can also verify whether the vehicle point cloud data recognized from the image are accurate.
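A loose sketch of condition (2): viewed top-down, a straight-ahead vehicle leaves a single line of rear-face points, while an offset vehicle leaves two perpendicular legs, which a simple singular-value ratio can distinguish. The arc transition at the corner is ignored and the threshold is illustrative; condition (1) is rendered as an image in the original and is not reproduced here.

```python
import numpy as np

def rear_shape(points_xy, ratio_thresh=0.05):
    """Classify a vehicle's top-down point cloud: 'I' when the points form a
    single plane (vehicle straight ahead), 'L' when side and rear faces form
    two perpendicular surfaces (vehicle to the left or right). A PCA-based
    sketch of the patent's shape condition, with illustrative threshold."""
    pts = points_xy - points_xy.mean(axis=0)
    s = np.linalg.svd(pts, compute_uv=False)   # singular values, descending
    return "I" if s[1] / s[0] < ratio_thresh else "L"

# Rear face only (vehicle straight ahead): one line of points -> 'I'
rear = np.column_stack([np.linspace(-1.0, 1.0, 30), np.full(30, 10.0)])
# Rear face plus side face (vehicle on the left) -> 'L'
side = np.column_stack([np.full(30, -1.0), np.linspace(10.0, 14.0, 30)])
print(rear_shape(rear), rear_shape(np.vstack([rear, side])))
```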
In summary, compared with the existing vehicle collision prediction technology, the vehicle detection method based on sensor multivariate information fusion of the embodiment has the following advantages:
1. In the vehicle detection technique of this embodiment, the detection range is determined by fusing the camera and the lidar. For safe driving, much of the data the lidar collects over 360 degrees is redundant; it burdens the processor and affects driving safety. The camera's view is equivalent to the driver's eyes and captures objects on the road while the vehicle travels, so the camera's field of view provides a reasonable search range for the lidar.
2. In the vehicle detection technique of this embodiment, the sensitivity of wavelet analysis to abrupt changes in the lidar return values is exploited: wavelet analysis is applied to the point cloud data returned by each laser beam, the laser points that scan the ground edge are extracted, and a fit constrains the detection range to the vehicle-passable area.
3. In the vehicle detection technique of this embodiment, the vehicle recognition scheme is improved and detection efficiency increased: the lidar detection result locates the ground coordinate of the vehicle's lower-left corner (or the lower-right corner if the vehicle is on the right). Starting from that point, the search area is continuously adjusted and expanded and the detection frame resized, so the vehicle is recognized rapidly.
4. This embodiment observes that when the same laser beam acts on an object, the continuity of the object's surface means adjacent laser points should be very close together. After a vehicle is recognized visually, the laser point cloud inside each detection frame is extracted; it is first judged whether point clouds in adjacent detection frames share points from the same laser beam, and then whether the point cloud depth between different frames changes abruptly. From these two tests it is judged whether multiple detection frames have recognized the same object, and the final detection frame is determined from the pooled point cloud data, reducing false detections.
5. In the vehicle detection technique of this embodiment, the point cloud set formed on an object must satisfy the stated conditions, so vehicles can be judged reliably against them, reducing missed detections; the conditions can also verify whether the vehicle point cloud data recognized from the image are accurate.
6. From the standpoint of improving detection accuracy while reducing time consumption, this embodiment fuses raw data features with high-level feature data, fully exploiting each sensor's detection strengths and compensating their weaknesses to meet the final target requirements. Raw-data fusion aims chiefly at noise reduction, removing interference noise that would affect the detection result so that recognition and positioning are achieved with the smallest possible data volume and detection cost; object features are then extracted from the raw data, the features from the two sensors are fused, and complementary detection improves object recognition accuracy.
Example 2
This embodiment provides a detection device applying the vehicle detection method based on sensor multivariate information fusion of embodiment 1; it comprises a conversion module, an initial data processing module, a detection area fusion module, and a structural feature fusion recognition module.
The conversion module converts the camera and lidar coordinate systems of the vehicle into the vehicle's detection coordinate system. The initial data processing module processes the initial data of the camera and the lidar: it preliminarily determines the camera detection area from the image vanishing line and the camera's acquisition angle of view, screens the lidar point cloud data, then detects abrupt changes in the lidar return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area.
The detection area fusion module constrains the lidar detection angle within the camera's field of view, determines a detection area for image-based vehicle recognition, projects the object distance information detected by the lidar onto the visual image, and uses it as a base point to search the image for the region of interest for vehicle recognition. The structural feature fusion recognition module, after the vehicle's detection area in the image is determined, guides the extraction of vehicle-rear contour features in the guide image according to the contour-change direction of the point cloud information, fuses texture features to recognize the vehicle ahead, and verifies the image recognition result against the lidar point cloud structure and spatial position.
Example 3
This embodiment provides a detection device applying the vehicle detection method based on sensor multivariate information fusion of embodiment 1; it comprises a camera data processing module, a lidar data processing module, a detection area fusion module, a feature analysis module, and a recognition and positioning module.
The camera data processing module preliminarily determines the lidar detection range, reducing the data volume and processing time. The lidar data processing module extracts the ground point cloud according to the radar mounting height and extracts road boundaries with wavelet analysis to obtain the vehicle passing area. The detection area fusion module uses the camera's capture angle to set a reasonable search range for the lidar, further segments the result, constrains the lidar detection area to the vehicle passing area as far as possible, and projects the radar-collected point cloud coordinates onto the image to obtain the region of interest for vehicle recognition. The feature analysis module verifies the image recognition result against the lidar point cloud structure and spatial distribution, eliminating false and missed detections and improving recognition accuracy. The recognition and positioning module recognizes and positions vehicles within the detection range using the camera's recognition capability and the lidar's positioning capability.
Example 4
This embodiment provides a computer terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor executes the program to realize the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1.
When the vehicle detection method based on sensor multivariate information fusion is applied, it can be applied in software form, for example as a standalone program installed on a computer terminal, where the terminal may be a computer, a smartphone, a control system, or another Internet-of-Things device; it can also be designed as an embedded program installed on a computer terminal such as a microcontroller.
Example 5
This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the vehicle detection method based on sensor multivariate information fusion of embodiment 1. When the method is applied, it can be applied in software form, for example as a program on a computer-readable storage medium designed to run independently, where the medium may be a USB flash drive, a storage medium in the form of a USB security key, or a USB drive carrying a program that starts the whole method through an external trigger.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A vehicle detection method based on sensor multivariate information fusion, characterized in that it comprises the following steps:
S1: converting the camera coordinate system and the lidar coordinate system of a vehicle into the detection coordinate system of the vehicle;
S2: preliminarily determining a camera detection area from the image vanishing line and the camera's acquisition angle of view, then screening the lidar point cloud data, and finally detecting abrupt changes in the lidar return values, extracting road boundary position information and projecting it onto the image to determine the vehicle passing area;
S3: constraining the lidar detection angle within the camera's field of view to determine a detection area for image-based vehicle recognition, then projecting the object distance information detected by the lidar onto the visual image and using it as a base point to search the image for the region of interest for vehicle recognition;
S4: after the vehicle's detection area in the image is determined, guiding the extraction of vehicle-rear contour features in the guide image according to the contour-change direction of the point cloud information in the laser point cloud set, fusing texture features to recognize the vehicle ahead, and then verifying the image recognition result against the lidar point cloud structure and spatial position.

2. The vehicle detection method based on sensor multivariate information fusion of claim 1, characterized in that a point in space is defined as P_l(x_l, y_l, z_l) in the lidar coordinate system, P_c(x_c, y_c, z_c) in the camera coordinate system, and P_p(x_p, y_p, z_p) in the detection coordinate system; the conversion relations from the lidar coordinate system and the camera coordinate system to the detection coordinate system are, respectively:
P_p = P_l · R_l + B_l
P_p = P_c · R_c + B_c
where B_l and B_c are the translation matrices from the lidar coordinate system and the camera coordinate system, respectively, to the detection coordinate system.

3. The vehicle detection method based on sensor multivariate information fusion of claim 2, characterized in that, when the lidar point cloud data are screened, if there is no obstacle on the ground, the coordinates in the radar coordinate system of a lidar beam's ground point cloud are:
[equation rendered as an image in the original]
where x_l, y_l, z_l are the coordinates of an arbitrary laser beam, α_l is its search angle, ρ is the lidar detection distance, and ω is the lidar emission angle;
with the mounting height fixed, the point cloud coordinates of the laser beam are:
[equation rendered as an image in the original]
where H is the lidar mounting height;
the lidar source data are rotation-corrected, and the transformed data are compared with the coordinate points obtained from the height:
[equation rendered as an image in the original]
where P is the coordinate point obtained from the height.

4. The vehicle detection method based on sensor multivariate information fusion of claim 3, characterized in that, in step S3, the data return value of each point in the polar coordinate system within the lidar detection range is:
P(ρ_n, α, ω_n), n = 1, 2, 3, ...
where α is the search angle and n is the lidar beam index; ρ_n is the detection distance of beam n and ω_n is the emission angle of beam n; rotating about the z-axis, the lidar beam search angle ranges over (0°, 360°); with a camera field of view of M, the lidar search angle is corrected to (-0.5M, 0.5M).

5. The vehicle detection method based on sensor multivariate information fusion of claim 4, characterized in that the lidar point cloud data are used for road segmentation, i.e., the vehicle passing area is extracted according to the structural features of the road edges, and wavelet analysis performs a secondary segmentation of the preliminary segmentation result to determine the vehicle-passable area.

6. The vehicle detection method based on sensor multivariate information fusion of claim 5, characterized in that, when an obstacle appears within the radar detection range, the abrupt-change position is detected; the laser point cloud data are extracted, the distance data received by the lidar are fitted with a 6th-order Daubechies (db6) wavelet whose wavelet function locates the abrupt changes accurately, boundary feature points are extracted, and a least-squares fit of the series of feature points yields the vehicle passing area.

7. The vehicle detection method based on sensor multivariate information fusion of claim 6, characterized in that machine learning is applied to recognize vehicles in the determined detection area: the ground positions of vehicles on the left and right are located first, the vehicle detection area is then determined visually, and the search frame is gradually enlarged, recognizing vehicles by searching from both sides toward the middle and from bottom to top.

8. The vehicle detection method based on sensor multivariate information fusion of claim 7, characterized in that, when the object distance information is projected onto the visual image, the conversion relation between the camera coordinate system and the image coordinate system is derived as follows:
for any point P(X_c, Y_c, Z_c) in the camera coordinate system, triangle similarity gives its projection in the image coordinate system:
[equation rendered as an image in the original]
where f is the camera focal length, (O-xy) is the image coordinate system, and p(x, y) is the projection of P(X_c, Y_c, Z_c) into that coordinate system;
defining (O_uv-uv) as the pixel coordinate system, the conversion from the camera coordinate system to the pixel coordinate system is:
[equation rendered as an image in the original]
from the conversion formula from the lidar coordinate system to the camera coordinate system, the coordinate conversion from point cloud to image is derived as:
[equation rendered as an image in the original]
according to the point-cloud-to-image coordinate conversion formula, the object distance information detected by the lidar is projected onto the visual image.

9. The vehicle detection method based on sensor multivariate information fusion of claim 8, characterized in that the space formed by the lidar around the vehicle body is defined as Γ: U_{L,M,R}(x, y, z), and the point cloud set formed on an object must satisfy the following conditions:
(1) [condition rendered as an image in the original]
(2) for detected objects on the left and right sides, the spatial point cloud forms two mutually perpendicular surfaces joined by an arc transition; an object directly ahead forms a single plane whose two ends connect to arcs.

10. A detection device applying the vehicle detection method based on sensor multivariate information fusion of any one of claims 1-9, characterized in that it comprises:
a conversion module for converting the camera coordinate system and the lidar coordinate system of a vehicle into the detection coordinate system of the vehicle;
an initial data processing module for processing the initial data of the camera and the lidar; the module preliminarily determines the camera detection area from the image vanishing line and the camera's acquisition angle of view, screens the lidar point cloud data, then detects abrupt changes in the lidar return values, extracts road boundary position information, projects it onto the image, and determines the vehicle passing area;
a detection area fusion module for constraining the lidar detection angle within the camera's field of view, determining a detection area for image-based vehicle recognition, projecting the object distance information detected by the lidar onto the visual image, and using it as a base point to search the image for the region of interest for vehicle recognition;
a structural feature fusion recognition module which, after the vehicle's detection area in the image is determined, guides the extraction of vehicle-rear contour features in the guide image according to the contour-change direction of the point cloud information, fuses texture features to recognize the vehicle ahead, and verifies the image recognition result against the lidar point cloud structure and spatial position.
CN202111390381.5A 2021-11-23 2021-11-23 Vehicle detection method and device based on sensor multi-element information fusion Active CN114118252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111390381.5A CN114118252B (en) 2021-11-23 2021-11-23 Vehicle detection method and device based on sensor multi-element information fusion

Publications (2)

Publication Number Publication Date
CN114118252A (en) 2022-03-01
CN114118252B (en) 2024-10-15

Family

ID=80439455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111390381.5A Active CN114118252B (en) 2021-11-23 2021-11-23 Vehicle detection method and device based on sensor multi-element information fusion

Country Status (1)

Country Link
CN (1) CN114118252B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156727A (en) * 2014-08-26 2014-11-19 中电海康集团有限公司 Lamplight inverted image detection method based on monocular vision
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
KR101991626B1 (en) * 2018-05-25 2019-06-20 가천대학교 산학협력단 Method and system for detecting vanishing point for intelligent vehicles

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王嶺; 许颖浩; 沈笠; 周军: "Scene-analysis-based adaptive horizontal adjustment method for building images", Video Engineering (电视技术), no. 06, 17 June 2016 (2016-06-17) *
闫尧; 李春书: "Vehicle recognition method based on lidar information and monocular vision information", Journal of Hebei University of Technology (河北工业大学学报), no. 06, 15 December 2019 (2019-12-15) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115267815A (en) * 2022-06-10 2022-11-01 合肥工业大学 An optimal layout method of roadside lidar group based on point cloud modeling
CN115273547A (en) * 2022-07-26 2022-11-01 上海工物高技术产业发展有限公司 Road anti-collision early warning system
CN115690261A (en) * 2022-12-29 2023-02-03 安徽蔚来智驾科技有限公司 Parking space map building method based on multi-sensor fusion, vehicle and storage medium
CN116580098A (en) * 2023-07-12 2023-08-11 中科领航智能科技(苏州)有限公司 Cabin door position detection method for automatic leaning machine system
CN116580098B (en) * 2023-07-12 2023-09-15 中科领航智能科技(苏州)有限公司 Cabin door position detection method for automatic leaning machine system
CN117315613A (en) * 2023-11-27 2023-12-29 新石器中研(上海)科技有限公司 Noise point cloud identification and filtering method, computer equipment, medium and driving equipment
CN117809440A (en) * 2024-03-01 2024-04-02 江苏濠汉信息技术有限公司 Tree obstacle mountain fire monitoring and early warning method and system applying three-dimensional ranging
CN117809440B (en) * 2024-03-01 2024-05-10 江苏濠汉信息技术有限公司 Tree obstacle mountain fire monitoring and early warning method and system applying three-dimensional ranging
CN118731966A (en) * 2024-08-30 2024-10-01 盛视科技股份有限公司 Lane inspection method and inspection equipment using multi-camera fusion single-beam laser radar
CN118731966B (en) * 2024-08-30 2024-11-29 盛视科技股份有限公司 Lane inspection method and inspection equipment for multi-camera fused single-beam laser radar

Also Published As

Publication number Publication date
CN114118252B (en) 2024-10-15

Similar Documents

Publication Publication Date Title
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN109858460B (en) Lane line detection method based on three-dimensional laser radar
CN109100741B (en) A target detection method based on 3D lidar and image data
WO2021223368A1 (en) Target detection method based on vision, laser radar, and millimeter-wave radar
CN113156421A (en) Obstacle detection method based on information fusion of millimeter wave radar and camera
US8867792B2 (en) Environment recognition device and environment recognition method
CN104700414A (en) Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
US20050232463A1 (en) Method and apparatus for detecting a presence prior to collision
CN112464812B (en) A vehicle-based sunken obstacle detection method
WO2022151664A1 (en) 3d object detection method based on monocular camera
US10776637B2 (en) Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium
CN102629326A (en) Lane line detection method based on monocular vision
US10546383B2 (en) Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium
CN111222441B (en) Point cloud target detection and blind spot target detection method and system based on vehicle-road collaboration
US20210326612A1 (en) Vehicle detection method and device
Ponsa et al. On-board image-based vehicle detection and tracking
CN115327572A (en) A method for detecting obstacles in front of a vehicle
CN116778262B (en) Three-dimensional target detection method and system based on virtual point cloud
CN114740493A (en) Road edge detection method based on multi-line laser radar
WO2023207845A1 (en) Parking space detection method and apparatus, and electronic device and machine-readable storage medium
US11420855B2 (en) Object detection device, vehicle, and object detection process
Raguraman et al. Intelligent drivable area detection system using camera and lidar sensor for autonomous vehicle
CN118038226A (en) A road safety monitoring method based on LiDAR and thermal infrared visible light information fusion
Zhang et al. Rvdet: Feature-level fusion of radar and camera for object detection
Yoneda et al. Simultaneous state recognition for multiple traffic signals on urban road

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant